Morphology
The following sections present several important algorithms that employ mathematical morphological operations such as erosion, dilation, opening, closing, particle analysis, calculation of the background map, and others.
Contents
 1 Introduction to the basic morphological operations
 2 Background Distance map
 3 Close
 4 Delete Objects
 5 Dilate
 6 Distance Map
 7 Erode
 8 Evaluate Segmentation
 9 Fill holes
 10 Find Edges
 11 ID objects
 12 Morphological Filter
 13 Open
 14 Particle Analysis
 15 Skeletonize
 16 Skeletonize 3D pot field
 17 Ultimate erode
 18 See also
Introduction to the basic morphological operations
In this introductory section we explain several important concepts in mathematical morphology that are used in MIPAV algorithms. These concepts are as follows: translation, reflection, complement, and difference.
Let us introduce two objects A and B in Z^2 with elements a = (a_i, a_j) and b = (b_i, b_j), respectively.
The translation of A by x = (x1, x2), which is denoted as (A)x, can be defined as
Equation 1: (A)x = {c | c = a + x, for some a in A}
The reflection of B, denoted as B^, is defined as
Equation 2: B^ = {x | x = -b, for b in B}
The complement of object A is
Equation 3: A^c = {x | x not in A}
The difference of two objects or images A and B, denoted as A - B, is defined as
Equation 4: A - B = {x | x in A and x not in B} = A ∩ B^c
Figure 1 illustrates the morphological operations mentioned above.
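These set operations can be illustrated with a small sketch. This is plain Python on point sets, not MIPAV code; all names are ours:

```python
# Illustrative sketch (not MIPAV code): the four basic set operations on
# small point sets in Z^2, with each point stored as an (i, j) tuple.
A = {(0, 0), (0, 1), (1, 0)}
B = {(0, 0), (1, 1)}
U = {(i, j) for i in range(-1, 3) for j in range(-1, 3)}  # a small universe

def translate(S, x):
    """(S)_x = {s + x : s in S} (Equation 1)."""
    return {(i + x[0], j + x[1]) for (i, j) in S}

def reflect(S):
    """S^ = {-s : s in S} (Equation 2)."""
    return {(-i, -j) for (i, j) in S}

def complement(S):
    """S^c relative to the universe U (Equation 3)."""
    return U - S

def difference(S, T):
    """S - T = S intersected with T^c (Equation 4)."""
    return S - T

print(translate(A, (2, 2)))   # {(2, 2), (2, 3), (3, 2)}
print(reflect(B))             # {(0, 0), (-1, -1)}
print(difference(A, B))       # {(0, 1), (1, 0)}
```
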
Data types
All morphological operations can be applied to images of the following types:

Data type: Description
Boolean: 1 bit per pixel/voxel (1 = on, 0 = off)
Unsigned byte: 1 byte per pixel/voxel (0-255)
Unsigned short: 2 bytes per pixel/voxel (0-65535)

Background Distance map
The Background Distance map operation converts a binary image (which consists of background and foreground pixels) into an image where every foreground pixel has a value corresponding to its minimum distance from the background. The algorithm uses the Euclidean distance metric to transform the image.
For 2D images, the algorithm first determines the image resolution in the X and Y directions. Then it identifies all edge pixels. Finally, it creates a distance map, where each pixel of the map is associated with a corresponding region of the source image. Each pixel is assigned a distance value corresponding to the Euclidean distance between the center of that pixel and the nearest point of the background. The nearest point is located using the image resolution.
For two 2D points P(x1, y1) and Q(x2, y2) the "classical" Euclidean distance is computed as
Equation 5: d(P, Q) = sqrt((x2 - x1)^2 + (y2 - y1)^2)
When computing the Euclidean distance, the Background Distance map operation takes into account the image resolutions in the X and Y dimensions, so the distance is calculated as
Equation 6: d(P, Q) = sqrt((resX * (x2 - x1))^2 + (resY * (y2 - y1))^2)
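As a sketch, a resolution-aware Euclidean distance transform of this kind can be reproduced with scipy's `distance_transform_edt`, used here as a stand-in for MIPAV's own implementation; the `sampling` argument carries the per-axis resolutions:

```python
import numpy as np
from scipy import ndimage

img = np.zeros((5, 5), dtype=bool)
img[1:4, 1:4] = True                # a 3x3 foreground square

# `sampling` supplies the pixel resolutions, matching Equation 6;
# (1.0, 1.0) reduces to the classical distance of Equation 5.
dist = ndimage.distance_transform_edt(img, sampling=(1.0, 1.0))
print(dist[2, 2])                   # 2.0: center pixel is 2 pixels from background
print(dist[1, 1])                   # 1.0: an edge foreground pixel
```
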
Applying Background Distance map
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Background Distance map.
 In the Background Distance map dialog box that appears,
 Specify the destination of the Background Distance map image;
 And also specify whether you want to calculate the Background Distance map for the whole image or only for the selected VOIs.
 Press OK to run the algorithm.
When the algorithm finishes running, the Background Distance map appears in the specified frame (the default option is a new frame). See also Figure 2.
Close
The closing of an object A by kernel B is defined as the dilation of A by B, followed by the erosion of the result by B. Closing generally smooths sections of contours, fuses narrow breaks, eliminates small holes, and fills gaps in the contour. For 2D images it can be expressed using the following equation:
Equation 7: A • B = (A ⊕ B) ⊖ B
Figure 3 below illustrates the closing of an object A with a disk kernel B. Note that the resulting object was also smoothed by the circular kernel B.
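A minimal sketch of this two-step definition, with scipy standing in for MIPAV:

```python
import numpy as np
from scipy import ndimage

# Closing = dilation followed by erosion with the same kernel (Equation 7).
A = np.zeros((7, 7), dtype=bool)
A[2:5, 1:3] = True
A[2:5, 4:6] = True                  # two blocks separated by a 1-pixel gap
kernel = np.ones((3, 3), dtype=bool)

closed = ndimage.binary_closing(A, structure=kernel)
manual = ndimage.binary_erosion(ndimage.binary_dilation(A, kernel), kernel)
print(np.array_equal(closed, manual))   # True: same two-step definition
print(closed[3, 3])                     # True: the narrow gap is fused
```
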
Applying the closing
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological >Close.
 In the Close dialog box that appears,
 Specify the number of dilations (from 1 to 20);
 Specify the number of erosions (from 1 to 20);
 Select the structural element or kernel;
 Specify the destination of the result image;
 And also specify whether you want to apply the closing to the whole image or only to selected VOIs. For the dialog box options refer to Figure 4.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 5.
Number of dilations - Indicates the number of dilations to be performed.
Number of erosions - Indicates the number of erosions to be performed.
Kernel selection - Use this list box to select the structuring element that will be used for closing.
Circle diameter (mm) - This option is available only if you choose the User Sized Circle option in the Kernel selection box. Enter a value for the circle diameter here.
Process image in 2.5D - TBD.
Destination - New image: if selected, the result image appears in a new image window. Replace image: if selected, the result image replaces the current active image.
Process - Whole image: if selected, the closing is applied to the whole image. VOI region(s): if selected, the closing is applied only to selected VOI(s).
OK - Performs the closing on the selected image based on your choices in this dialog box.
Cancel - Disregards changes you made in this dialog box and closes it.
Help - Displays online help for the algorithm.

Delete Objects
The Delete Objects operation deletes objects larger than the maximum size indicated in the Delete Objects dialog box, and also objects smaller than the indicated minimum size.
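A sketch of this size-based filtering with scipy's connected-component labeling (variable names are ours, not MIPAV's):

```python
import numpy as np
from scipy import ndimage

# Label connected components, then zero out any component whose pixel
# count falls outside [min_size, max_size].
img = np.zeros((8, 8), dtype=bool)
img[0, 0] = True                    # 1-pixel object (too small)
img[2:4, 2:4] = True                # 4-pixel object (kept)
img[5:8, 5:8] = True                # 9-pixel object (too large)

labels, n = ndimage.label(img)
sizes = ndimage.sum(img, labels, range(1, n + 1))
min_size, max_size = 2, 8
keep = np.zeros(n + 1, dtype=bool)
keep[1:] = (sizes >= min_size) & (sizes <= max_size)
result = keep[labels]
print(result.sum())                 # 4: only the middle object survives
```
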
Applying the Delete algorithm
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Delete Objects.
 In the Delete Objects dialog box that appears,
 Specify the maximum size of the object;
 Specify the minimum size of the object;
 Specify the destination of the result image;
 And also specify whether you want to apply the algorithm to the whole image or only to selected VOIs.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 6.
Dilate
We define dilation as a process that consists of obtaining the reflection of B about its origin, and then shifting this reflection by x. The dilation of A by B is then the set of all x displacements such that B^ and A overlap by at least one nonzero element. This can be expressed using the following equation:
Equation 8: A ⊕ B = {x | (B^)x ∩ A ≠ ∅}
The object B is commonly referred to as a structuring element or kernel in dilation, as well as in the other morphological operations. Figure 7 (a) below shows a simple dilation by the symmetric kernel, so that B^=B. Figure 7 (b) shows a kernel designed to achieve more dilation vertically than horizontally.
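The effect of a non-symmetric kernel, analogous to Figure 7 (b), can be sketched with scipy in place of MIPAV:

```python
import numpy as np
from scipy import ndimage

# Dilation (Equation 8) with a 3x1 vertical kernel: the object grows
# vertically but not horizontally.
A = np.zeros((5, 5), dtype=bool)
A[2, 2] = True
vertical = np.ones((3, 1), dtype=bool)

D = ndimage.binary_dilation(A, structure=vertical)
print(D[1, 2], D[3, 2])             # True True  (grown vertically)
print(D[2, 1], D[2, 3])             # False False (unchanged horizontally)
```
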
Applying the dilation
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Dilate.
 In the Dilation dialog box that appears,
 Specify the number of dilations (from 1 to 20);
 Select the structural element or kernel;
 Specify the destination of the dilated image;
 And also specify whether you want to apply the dilation to the whole image or only to selected VOIs. For the dialog box options refer to Figure 8.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 9.
Number of dilations - Indicates the number of dilations to be performed.
Kernel selection - Use this list box to select the structuring element that will be used for dilation.
Circle diameter (mm) - This option is available only if you choose the User Sized Circle option in the Kernel selection box. Enter a value for the circle diameter here.
Destination - New image: if selected, the result image appears in a new image window. Replace image: if selected, the result image replaces the current active image.
Process - Whole image: if selected, the dilation is applied to the whole image. VOI region(s): if selected, the dilation is applied only to selected VOI(s).
OK - Performs the dilation on the selected image based on your choices in this dialog box.
Cancel - Disregards changes you made in this dialog box and closes it.
Help - Displays online help for the algorithm.

Distance Map
The algorithm uses the Euclidean distance metric to calculate a distance map for a selected image or image region. For 2D images,
 The algorithm determines the image resolution in the X and Y directions.
 Then it identifies all edge pixels.
 Then, it creates a distance map, where each pixel of the map is associated with a corresponding region of the source image. Each pixel is assigned a distance value corresponding to the Euclidean distance between the center of that pixel and the nearest edge point.
For more information, please refer to "Background Distance map".
Applying distance map
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Distance Map.
 Specify whether you want to apply the algorithm to the whole image or only to selected VOIs.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 9.
Erode
The erosion of A by B is the set of all points x such that B, translated by x, is contained in A. For 2D images it can be expressed using the following equation:
Equation 9: A ⊖ B = {x | (B)x ⊆ A}
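This "kernel must fit inside the object" definition can be sketched with scipy standing in for MIPAV:

```python
import numpy as np
from scipy import ndimage

# Erosion (Equation 9): a pixel survives only if the kernel, centered
# there, fits entirely inside the object.
A = np.zeros((6, 6), dtype=bool)
A[1:5, 1:5] = True                  # 4x4 square
kernel = np.ones((3, 3), dtype=bool)

E = ndimage.binary_erosion(A, structure=kernel)
print(E.sum())                      # 4: only the 2x2 core remains
```
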
Applying the erosion
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Erode.
 In the Erode dialog box that appears,
 Specify the number of erosions (from 1 to 20);
 Select the structural element or kernel;
 Specify the destination of the result image;
 And also specify whether you want to apply the erosion to the whole image or only to selected VOIs. For the dialog box options refer to Figure 11.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 12.
Number of erosions - Indicates the number of erosions to be performed.
Kernel selection - Use this list box to select the structuring element that will be used for erosion.
Circle diameter (mm) - This option is available only if you choose the User Sized Circle option in the Kernel selection box. Enter a value for the circle diameter here.
Destination - New image: if selected, the result image appears in a new image window. Replace image: if selected, the result image replaces the current active image.
Process - Whole image: if selected, the erosion is applied to the whole image. VOI region(s): if selected, the erosion is applied only to selected VOI(s).
Process image in 2.5D - TBD.
OK - Performs the erosion on the selected image based on your choices in this dialog box.
Cancel - Disregards changes you made in this dialog box and closes it.
Help - Displays online help for the algorithm.

Evaluate Segmentation
The Evaluate Segmentation operation compares the segmentation results of a test image to the segmentation results of an ideal "gold standard" (truth) image. For each evaluated segmentation pair, the false negative volume fraction, the false positive volume fraction, and the positive volume fraction are sent to the Output window. See Figure 13(e).
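A sketch of the three reported statistics on binary masks. The exact normalization (here, the true segmented volume) is our assumption for illustration, not a statement of MIPAV's formulas:

```python
import numpy as np

# truth: the gold standard mask; test: the segmentation being evaluated.
truth = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=bool)
test = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0]], dtype=bool)

true_vol = truth.sum()                                        # 3 voxels
false_negative_fraction = (truth & ~test).sum() / true_vol    # missed: 1/3
false_positive_fraction = (~truth & test).sum() / true_vol    # extra:  1/3
true_positive_fraction = (truth & test).sum() / true_vol      # found:  2/3
print(false_negative_fraction, false_positive_fraction, true_positive_fraction)
```
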
Applying evaluate segmentation
To apply the algorithm,
 Open a segmented image of interest and its gold standard segmented image.
Note that the images should be of the following types: Boolean, Unsigned byte, or Short. Before running the algorithm, convert the images using the Utilities> Conversion Tools> Convert Type menu. See also "Convert Type" . You can also use the Paint Conversion tools menu options to convert the paint to the Unsigned Byte mask image. See Figure 13. 
 Select the gold standard segmented image.
 Call Algorithms>Morphological> Evaluate Segmentation.
 The Evaluate Segmentation dialog box appears.
 In the dialog box, select the test image, and then press OK to run the algorithm.
When the algorithm finishes running, the result statistics appear in the Output window. See also Figure 13.
Fill holes
To fill holes in objects MIPAV uses the following procedure:
 It segments the image to produce a binary representation of the objects;
 Then, it computes the complement of the binary image as a mask image;
 It generates a seed image from the border of the image;
 Then, it propagates the seed into the mask;
 And finally, it complements the result of the propagation to produce the final result.
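The five steps above can be sketched as follows, with scipy's geodesic propagation (masked dilation to stability) standing in for MIPAV's propagation step:

```python
import numpy as np
from scipy import ndimage

obj = np.zeros((5, 5), dtype=bool)
obj[1:4, 1:4] = True
obj[2, 2] = False                    # step 1: binary object with a hole

mask = ~obj                          # step 2: complement as the mask image
seed = np.zeros_like(obj)
seed[0, :] = seed[-1, :] = True      # step 3: seed on the image border
seed[:, 0] = seed[:, -1] = True
seed &= mask

# step 4: propagate the seed into the mask until nothing changes
grown = ndimage.binary_dilation(seed, mask=mask, iterations=-1)
filled = ~grown                      # step 5: complement the propagation
print(filled[2, 2])                  # True: the hole is now filled
```
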
Applying fill holes
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Fill Holes.
 The Fill Objects dialog box appears. In the dialog box:
 Specify if you want to process the image in 2.5D (this option is available for 3D images only);
 Specify the destination of the result image.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the new image frame. See also Figure 14.
Find Edges
The technique finds the edges of objects in an image using combinations of the following morphological operations: dilation, erosion, and XOR. The algorithm has two options: it can find outer edges or inner edges, depending on the user's choice. The algorithm first determines the image resolution, and then uses the resolution to calculate the kernel, or structural element, that will be used for the dilation or erosion.
If the Outer Edging option was selected, the algorithm dilates the original image, and then performs an XOR of the dilated image with the original image.
If the Inner Edging option was selected, the algorithm erodes the original image, and then performs an XOR of the eroded image with the original image.
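Both options can be sketched in a few lines, with scipy standing in for MIPAV:

```python
import numpy as np
from scipy import ndimage

A = np.zeros((7, 7), dtype=bool)
A[2:5, 2:5] = True
k = np.ones((3, 3), dtype=bool)

outer = ndimage.binary_dilation(A, k) ^ A   # one-pixel band outside A
inner = A ^ ndimage.binary_erosion(A, k)    # one-pixel band inside A
print(outer[1, 2], inner[2, 2])             # True True
```
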
Applying find edges
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> Find Edges.
 The Find Edges dialog box appears. In the dialog box:
 Specify which option you would like to use: Outer Edging or Inner Edging;
 Specify the destination of the result image;
 And also specify whether you want to apply the algorithm to the whole image or only to selected VOI(s).
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the new image frame. See also Figure 15.
ID objects
The algorithm labels each object in an image with a different integer value and deletes objects that fall outside the user-defined size thresholds. It converts the image to black and white in order to prepare it for boundary tracing, and then uses morphology functions, such as open, close, and fill holes, to remove pixels that do not belong to the objects of interest.
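The labeling step can be sketched with scipy's connected-component labeling (size thresholding would then follow as in Delete Objects):

```python
import numpy as np
from scipy import ndimage

# Each connected object receives a distinct integer ID; background stays 0.
img = np.zeros((6, 6), dtype=bool)
img[0:2, 0:2] = True
img[4:6, 4:6] = True

labels, num_objects = ndimage.label(img)
print(num_objects)                   # 2
print(labels[0, 0], labels[4, 4])    # 1 2
```
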
Applying ID objects
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological> ID objects.
 The Identify Objects dialog box appears. In the dialog box:
 Specify the destination of the result image;
 Specify whether you want to apply the algorithm to the whole image or only to selected VOI(s);
 And also enter the maximum and minimum sizes for the particles you would like to exclude from the calculation.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the designated image frame and the statistics appear in the Output window. See also Figure 16.
Note: you might consider smoothing the image and removing excess noise, first, and then apply ID Objects. 
Morphological Filter
The Morphological filter can be used to correct an image for shading that was introduced during image formation. In other words, it corrects for nonuniform illumination and nonuniform camera sensitivity.
The illumination over the microscope field of view I_ill(x,y) usually interacts in a multiplicative (nonuniform) way with the biological object a(x,y) to produce the image b(x,y):
Equation 10
b(x,y) = I_ill(x,y) · a(x,y),
where the object a(x,y) represents one of various microscope imaging modalities, such as the reflectance model r(x,y), the optical density model 10^(-OD(x,y)), or the fluorescence model c(x,y). The Morphological filter works with the fluorescence model c(x,y), assuming that this model only holds for low concentrations. The camera and microscope also contribute gain and offset to the image, thus the equation for c(x,y) can be rewritten as:
Equation 11
c[m,n] = gain[m,n] · b[m,n] + offset[m,n] = gain[m,n] · I_ill[m,n] · a[m,n] + offset[m,n],
where a) the camera gain and offset can vary as a function of position and, therefore, contribute to the shading; and b) I_ill[m,n], gain[m,n], and offset[m,n] are slowly varying compared to a[m,n].
The algorithm first computes (using morphological smoothing) a smoothed version of c[m,n], where the smoothing is large compared to the size of the objects in the image. This smoothed version is the estimate of the background of the image. Second, it subtracts the smoothed version from c[m,n]. And then, it restores the desired average brightness value. This can be written as the following formula:
Equation 12
â[m,n] = c[m,n] − MorphSmooth{c[m,n]} + constant,
where the morphological smoothing filter MorphSmooth{c[m,n]} involves two basic operations: the maximum filter and the minimum filter.
 In the maximum filter, defined over a window W of J x K pixels where both J and K are of odd size (e.g., 5 x 3), the value in the output image, corresponding to the center pixel A of the input window, is the maximum brightness value found in the input window.
 In the minimum filter, defined over a similar J x K window (W), the value in the output image, corresponding to the center pixel A in the input window, is the minimum brightness value found in the input window.
Therefore, the maximum and minimum filters can be described as examples of the morphological dilation and erosion respectively.
Dilation:
Equation 13
D(A) = max_{[j,k] ∈ W} {a[m − j, n − k]} = max_W(A)
Erosion:
Equation 14
E(A) = min_{[j,k] ∈ W} {a[m − j, n − k]} = min_W(A)
Finally, the algorithm defines the morphological smoothing filter described as follows: Equation 15
MorphSmooth(A) = min(max(max(min(A)))),
where all of the filter operations are applied over the same J x K filter window W. To better understand equation 15, assume that you have two "subroutines" available, one for the minimum filter and the other one for the maximum filter, and that you apply them in a sequential fashion. See Figure 17.
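Equations 12-15 can be sketched with scipy's rank filters standing in for MIPAV's implementation; the test image, window size, and the use of the image mean as the restored "constant" are our illustrative choices:

```python
import numpy as np
from scipy import ndimage

# A slow shading ramp plus two small bright "objects".
shading = np.linspace(0.0, 50.0, 32)[None, :] * np.ones((32, 1))
c = shading.copy()
c[10, 10] += 100.0
c[20, 25] += 100.0

J = K = 5
# min then max = grey-level opening; max then min of that = closing:
# together, MorphSmooth (Equation 15) over the same J x K window W.
opening = ndimage.maximum_filter(ndimage.minimum_filter(c, size=(J, K)), size=(J, K))
background = ndimage.minimum_filter(ndimage.maximum_filter(opening, size=(J, K)), size=(J, K))
corrected = c - background + c.mean()       # Equation 12

# The shading ramp is flattened while the small objects survive.
print(corrected[10, 10] - corrected[10, 15] > 50.0)   # True
```
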
Applying Morphological filter
Before applying Morphological filter you might consider smoothing the image using Gaussian Blur and (or) reducing the image noise using one of the noise reduction algorithms available in MIPAV. 
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological >Morphological Filter.
 In the Morphological Filter dialog box that appears,
 Specify the window size in the X direction;
 Specify the window size in the Y direction;
 Specify the window size in the Z direction (for 3D images only);
 For 2.5D images, specify whether you want to process slices independently or not;
 Specify the destination of the result image;
 And also specify whether you want to apply the filter to the whole image or only to selected VOIs. For the dialog box options refer to Figure 18.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 17.
Filter size - Specify the size of the filter window W in the X, Y, and Z (for 3D images) directions. See also Equations 13 and 14.
Options - Process each slice independently: check this box to process slices independently. This option only works for 2.5D images.
Destination - New image: if selected, the result image appears in a new image window. Replace image: if selected, the result image replaces the current active image.
Process - Whole image: if selected, the algorithm is applied to the whole image. VOI region(s): if selected, the algorithm is applied only to selected VOI(s).
OK - Performs the filtering on the selected image based on your choices in this dialog box.
Cancel - Disregards changes you made in this dialog box and closes it.
Help - Displays online help for the algorithm.

Open
The opening of image A by a structuring element (a kernel) B is simply a combination of two operations: an erosion of A by B, followed by a dilation of the result by B. See also Figure 19.
Opening generally smooths the contour of an image, breaks narrow strips, and eliminates thin protrusions. The equation for the opening is as follows:
Equation 16: A ∘ B = (A ⊖ B) ⊕ B
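The removal of a thin protrusion can be sketched with scipy in place of MIPAV:

```python
import numpy as np
from scipy import ndimage

# Opening (Equation 16) = erosion then dilation: the 1-pixel protrusion
# is removed while the main body is restored.
A = np.zeros((7, 7), dtype=bool)
A[2:6, 1:5] = True                  # 4x4 body
A[3, 5] = True                      # 1-pixel protrusion
kernel = np.ones((3, 3), dtype=bool)

opened = ndimage.binary_opening(A, structure=kernel)
print(opened[3, 5], opened[3, 2])   # False True
```
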
Applying the opening
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological >Open.
 In the Open dialog box that appears,
 Specify the number of erosions (from 1 to 20);
 Specify the number of dilations (from 1 to 20);
 Select the structural element or kernel;
 Specify the destination of the result image;
 And also specify whether you want to apply the opening to the whole image or only to selected VOIs. For the dialog box options refer to Figure 20.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 21.
Number of erosions - Indicates the number of erosions to be performed.
Number of dilations - Indicates the number of dilations to be performed.
Kernel selection - Use this list box to select the structuring element that will be used for opening.
Circle diameter (mm) - This option is available only if you choose the User Sized Circle option in the Kernel selection box. Enter a value for the circle diameter here.
Process image in 2.5D - TBD.
Destination - New image: if selected, the result image appears in a new image window. Replace image: if selected, the result image replaces the current active image.
Process - Whole image: if selected, the algorithm is applied to the whole image. VOI region(s): if selected, the algorithm is applied only to selected VOI(s).
OK - Performs the opening on the selected image based on your choices in this dialog box.
Cancel - Disregards changes you made in this dialog box and closes it.
Help - Displays online help for the algorithm.

Particle Analysis
This method combines several morphological operations to produce information on the particle composition of binary images.
 It performs a number of openings and closings with the user-defined structural element. This separates the foreground elements from the background.
 Then, it applies skeletonization followed by pruning to the result.
 To separate slightly overlapping particles, the method then uses watershed segmentation, which works as follows:
 In the binary image, the black pixels are replaced with grey pixels of an intensity proportional to their distance from a white pixel (i.e., black pixels close to the edge are light grey, and those closer to the middle are nearer black);
 This forms the Euclidean distance map (EDM) for the image;
 From this, it calculates the centres of the objects, or the ultimate eroded points (UEPs), i.e., points that are equidistant from the edges. These points are then dilated until they meet another black pixel, and then a watershed line is drawn. See also "Ultimate erode".
 The result statistics appears in the Output MIPAV window. See also Figure 22.
Image Types
Particle analysis can be applied only to binary images.
Applying particle analysis
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological >Particle analysis.
 In the Particle Analysis New dialog box that appears,
 Specify the number of openings (from 1 to 20);
 For opening(s), also specify the structural element or kernel;
 Specify the number of closings (from 1 to 20);
 For closing(s), also specify the structural element or kernel;
 Specify the destination of the result image;
 And also specify whether you want to apply the analysis to the whole image or only to selected VOIs. For the dialog box options refer to Figure 23.
 Check the Show Intermediate Result Frames box, to view intermediate images.
 Press OK to run the algorithm.
When the algorithm finishes running, the particle statistics appears in the Output window and the result image appears in the specified frame. See also Figure 22.
Number of open - Indicates the number of openings to be performed.
Kernel selection - Use this list box to select the structuring element that will be used for opening.
Number of close - Indicates the number of closings to be performed.
Kernel selection - Use this list box to select the structuring element that will be used for closing.
Circle diameter (mm) for openings and closings - This option is available only if you choose the User Sized Circle option in the Kernel selection box. Enter a value for the kernel diameter here.
Destination - New image: if selected, the result image appears in a new image window. Replace image: if selected, the result image replaces the current active image.
Process - Whole image: if selected, the analysis is applied to the whole image. VOI region(s): if selected, the analysis is applied only to selected VOI(s).
Show Intermediate Result Frames - Check to view intermediate images.
OK - Performs the particle analysis on the selected image based on your choices in this dialog box.
Cancel - Disregards changes you made in this dialog box and closes it.
Help - Displays online help for the algorithm.

Skeletonize
The Skeletonize operation skeletonizes an image by means of a lookup table, which is used to repeatedly remove pixels from the edges of objects in a binary image, reducing them to single-pixel-wide skeletons. The method is based on a thinning algorithm by Zhang and Suen.
The lookup table shown below has an entry for each of the 256 possible 3x3 neighborhood configurations. An entry of `1' means delete the indicated pixel on the first pass, `2' means delete the indicated pixel on the second pass, and `3' means delete the pixel on either pass.
The lookup table


0, 0, 0, 1, 0, 0, 1, 3, 0, 0, 3, 1, 1, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 2, 0, 3, 0, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 3, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 2, 0, 0, 1, 3, 1, 0, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 3, 1, 3, 0, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 3, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 1, 0, 0, 0, 0, 2, 2, 0, 0, 2, 0, 0, 0 
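As a sketch of how such a table is addressed, the eight neighbors of a pixel can be packed into an 8-bit index into the 256-entry table. The row-major bit ordering below is our assumption for illustration; MIPAV's actual ordering may differ:

```python
def neighborhood_index(patch):
    """patch: 3x3 list of 0/1 values; the center pixel is ignored.
    Packs the 8 neighbors, row-major, into one byte (an assumed ordering)."""
    neighbors = [patch[r][c] for r in range(3) for c in range(3)
                 if not (r == 1 and c == 1)]
    index = 0
    for bit, v in enumerate(neighbors):
        index |= (v & 1) << bit
    return index

# All eight neighbors set -> index 255; none set -> index 0.
print(neighborhood_index([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 255
print(neighborhood_index([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))  # 0
```
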
Applying the algorithm
To apply the algorithm,
 Open an image of interest.
 Call Algorithms>Morphological >Skeletonize.
 In the Skeletonize dialog box that appears,
 Specify the number of pixels to remove (prune);
 Specify the destination frame for the result image;
 And also specify whether you want to apply the skeletonizing to the whole image or only to selected VOIs.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame. See also Figure 24.
Skeletonize 3D pot field
The Skeletonize 3D Pot Field algorithm uses an iterative approach that simultaneously performs a hierarchical shape decomposition and computes a corresponding set of multiresolution skeletons. A skeleton of a model is extracted from the components of its decomposition, making both processes and the qualities of their results interdependent. In particular, if the quality of the extracted skeleton does not meet some user-specified criteria, the model is decomposed into finer components and a new skeleton is extracted from these components. The process of simultaneous shape decomposition and skeletonization iterates until the quality of the skeleton becomes satisfactory.
For an input image, the algorithm computes a family of increasingly detailed curve-skeletons. The algorithm is based upon computing a repulsive force field over a discretization of the 3D object (a voxelized representation) and using topological characteristics of the resulting vector field, such as critical points and critical curves, to extract the curve-skeleton.
The algorithm performs the following steps:
 It identifies the boundary voxels of the 3D object as the source of the repulsive force field.
 It computes the repulsive force function at each object voxel, producing a 3D vector field.
 The algorithm detects the critical points of the 3D vector field and connects them using pathlines obtained by integrating over the vector field. This step produces the core skeleton.
 Then, it computes the divergence of the vector field at each voxel. Points with low divergence values are selected as seeds for new skeleton segments. Varying the divergence threshold (given as a percentage, e.g., the top 20%) creates the Level 1 hierarchy after the core skeleton.
 Finally, the algorithm computes the curvature at every boundary voxel and selects new seed points based on a user-supplied curvature threshold, given again as a percentage of the highest curvature value in the dataset (e.g., the top 30%). This adds another level of hierarchy to the core and divergence skeletons, the Level 2 skeleton hierarchy.
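The repulsive force idea can be illustrated with a toy sketch: every boundary voxel acts as a point charge, and the force at an interior voxel sums contributions that fall off as 1/d^M, where M is the field strength. This is purely illustrative; MIPAV's actual implementation differs in detail:

```python
import numpy as np

def repulsive_force(voxel, charges, M=4):
    """Sum of repulsive contributions from boundary point charges."""
    f = np.zeros(3)
    for q in charges:
        d = voxel - q                    # vector pointing away from the charge
        r = np.linalg.norm(d)
        f += d / (r ** (M + 1))          # magnitude ~ 1/r^M, direction d/r
    return f

charges = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])  # two boundary charges
center = np.array([5.0, 0.0, 0.0])
print(repulsive_force(center, charges))  # ~[0, 0, 0]: the forces cancel
```

At the midpoint the opposing forces cancel, which is exactly the kind of critical point the skeleton extraction looks for; off-center, the closer charge dominates and pushes the voxel back toward the middle.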
^{1}These images are borrowed from the Nicu D. Cornea web site: http://www.caip.rutgers.edu/~cornea/Skeletonization/
Algorithm parameters
The key idea behind the potential field approach used in the algorithm is to generate a vector field inside the image by charging the image boundary. In order to generate that field, the algorithm uses a number of user-defined parameters, such as the electrical point charge or the field strength, that should be specified in the Skeletonize 3D dialog box; see Figure 27.
Maximum value for outer pixel to be object
The algorithm places electrical point charges on the image boundary, considering each boundary voxel to be a point charge (a boundary voxel is an image voxel that has a background neighbor). In order to differentiate the background voxels from image voxels, the Maximum value for outer pixel to be object parameter is added to the Skeletonize 3D dialog box. This parameter allows a user to enter the maximum intensity value for image voxels.
Field strength and Distance of electrical charge from object boundary
The repulsive force at an interior object voxel due to a nearby point charge is defined as a force pushing the voxel away from the charge, with a strength that is inversely proportional to the distance between the voxel and the charge raised to a certain power M, which is called (in this algorithm) the field strength. The Field strength and Distance of electrical charge from object boundary parameters are defined by the user. The final repulsive force at each interior voxel is computed by summing the influences of all the point charges. The resulting vector field is also called a force field.
Note that a high field strength (M) causes, for a given interior voxel, the charges from closer points to have a greater influence than the more distant charges, creating a vector field with sharper pathlines because it follows the local boundary topology more closely. A low value for the field strength parameter produces a smoother vector field, with more rounded corners, since the vector direction at a particular point is then influenced by more point charges.
Setting the Distance of electrical charge from object boundary parameter to a very low value is not a good idea. For example, consider a very long cylinder. Setting the threshold smaller than half the length of the cylinder causes the field not to flow towards the one attracting point in the middle of the cylinder. Instead, it flows towards the center, creating a critical point at each slice along the cylinder.
Fractions of divergence points to use
The divergence of a vector field in a given region of space is a scalar quantity that characterizes the rate of flow leaving that region. A negative divergence value at a point indicates that the flow is moving mainly towards that point. The algorithm takes the points with low divergence values, which indicate a low spot or "sink"; from these points, new seed points are chosen using the threshold on the divergence value parameter. The threshold is given as a fraction of the lowest divergence value in the entire vector field. From each of these new seed points, a new fieldline is generated, which connects to the core skeleton.
By varying the Fractions of divergence points to use parameter, the user can vary the number of seed points selected, and therefore the number of new skeleton segments, generating an entire hierarchy of skeletons that is called the Level 1 skeleton. Different values can be chosen based upon the application of the curve-skeleton.
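The divergence computation and seed selection above can be sketched as follows. This is a NumPy sketch under stated assumptions, not MIPAV's code: divergence is approximated with finite differences, and the threshold is taken as the given fraction of the most negative divergence value, which is one plausible reading of the parameter.

```python
import numpy as np

def divergence_seeds(fx, fy, fz, fraction=0.5):
    # div F = dFx/dx + dFy/dy + dFz/dz, via central differences.
    div = (np.gradient(fx, axis=0)
           + np.gradient(fy, axis=1)
           + np.gradient(fz, axis=2))
    # Keep strong sinks: points whose divergence is at or below the
    # given fraction of the most negative divergence in the field.
    threshold = fraction * div.min()
    return np.argwhere(div <= threshold), div

# Synthetic sink field F = (-x, -y, -z): flow converges toward the origin,
# so the divergence is -3 everywhere.
x = np.arange(-2.0, 3.0)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
seeds, div = divergence_seeds(-X, -Y, -Z, fraction=0.5)
```

In a real force field the divergence varies from point to point, so raising `fraction` selects fewer, deeper sinks and therefore fewer new skeleton segments.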
Applying Skeletonize 3D
Preliminary steps. In order to apply the Skeletonize 3D Pot Field algorithm to an image, the image must have a sufficient number of planes composed solely of background pixels at the x, y, and z boundaries of the image. For the default value of zero for the distance of the electrical charges from the object boundary, there must be background planes at x = 0, x = xDim - 1, y = 0, y = yDim - 1, z = 0, and z = zDim - 1. Call the Utilities > Add Image Margins, Utilities > Pad, or Utilities > Insert Slice menus to run a tool that helps you create a version of the image with a sufficient number of padding pixels.
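The padding requirement amounts to surrounding the volume with background planes on every face, which the sketch below does with NumPy (a stand-in for the MIPAV utilities named above, not their actual code):

```python
import numpy as np

def pad_with_background(volume, margin=1, background=0):
    # Surround the volume with `margin` planes of background voxels on every
    # face, so the planes at x = 0, x = xDim - 1, y = 0, y = yDim - 1,
    # z = 0, and z = zDim - 1 contain only background.
    return np.pad(volume, margin, mode="constant", constant_values=background)

vol = np.full((4, 4, 4), 10.0)       # object fills the whole volume
padded = pad_with_background(vol)    # now 6x6x6 with background boundary planes
```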
To apply the algorithm,
 Open an image of interest, and then call Algorithms > Morphological > Skeletonize 3D Pot. The Skeletonize 3D dialog box appears.
 Fill out the dialog box. For the dialog box options, refer to Figure 27.
 Press OK to run the algorithm.
Be patient, because the first algorithm run may take considerable time. When the algorithm finishes running, the resulting skeleton image appears. See Figure 26.
Figure 26: panels (a)-(e).
Algorithm notes
 Most of the image calculation time (e.g., 98%) is spent in the potential field calculation. Therefore, the first time you run the algorithm, select the Save the Vector Fields radio button to save the x, y, and z vector fields and the Level 1 skeleton. On the following runs, select the Load the Vector Field from Files radio button to load the x, y, and z vector fields and the Level 1 skeleton. See also Figure 27.
 Then, you can also vary the Fractions of divergence points to use parameter to change the extensiveness of the skeleton generated. As more divergence points are used in the Level 2 skeleton calculation, the skeleton will become more extensive.
 The algorithm must be run on a single solid object. Use the following options to make sure that your image is a single solid object:
 A user-selected threshold (defined using the Maximum value for outer pixel to be object parameter) helps separate the background voxels from the object voxels.
 The default selected checkbox for slice by slice hole filling is used to convert the image into a single solid object.
 After identifying all image objects in 3D, all but the largest object are deleted.
Parameters


Maximum value for outer pixel to be object

For the image voxels, it determines the maximum intensity value that a voxel can have in order to be considered an object voxel rather than a background voxel. The default value is 1.0.
 
Slice by slice hole filling

If checked, performs slice-by-slice hole filling on the image. Refer to "Algorithm notes".
 
Distance of electrical charge from object boundary

This parameter determines the maximum distance between an interior voxel and a boundary point charge. If a boundary point is at a distance greater than the entered value, it is ignored, i.e., it does not influence the field at that voxel. See also "Algorithm parameters".
 
Field strength

A vector field inside the image is determined by the strength of the repulsive force from nearby charges that affects each interior voxel. The repulsive field strength is inversely proportional to the distance between the voxel and the charge (limited by the Distance of electrical charge from object boundary parameter) raised to a certain power M, which is determined by the Field Strength parameter. The final repulsive force at each interior voxel is computed by summing the influences of all the point charges. If the Field Strength parameter value is set high, it creates a vector field with sharper pathlines, because it follows the local boundary topology more closely. A low value for the Field Strength parameter produces a smoother vector field, with more rounded corners. See also "Algorithm parameters".
 
Fractions of divergence points to use

By varying this parameter, you can vary the number of seed points and, correspondingly, the number of new skeleton segments, generating an entire hierarchy of skeletons for the Level 1 skeleton. See also "Algorithm parameters".
 
Save the vector fields to files

The first time this program is run on an image, select this option to save the x, y, and z vector fields and the Level 1 skeleton.
 
Load the vector field from files

On the following runs select the Load the vector field from files option to load the previously saved x, y, and z vector fields and the Level 1 skeleton.
 
On following runs vary the Fractions of divergence points to use parameter to change the extensiveness of the skeleton generated. As more divergence points are used in the Level 2 skeleton calculation, the skeleton will become more extensive.
 
Open XVF file
 
Load

Loads the file.
 
Remove

Removes the file from loading.
 
Don't save the vector field

If checked, the vector field is not saved.
 
Output all skeleton points

If checked, outputs all skeleton points.
 
Output only segmented end points

If checked, outputs only segmented end points.
 
OK

Performs skeletonization on the selected image based on your choices in this dialog box.
 
Cancel

Disregards changes you made in this dialog box and closes the dialog box.
 
Help

Displays online help for the algorithm.

Skeletonize 3D Pot references
[[1]] This is a port of the C code for pfSkel: Potential Field Based 3D Skeleton Extraction written by Nicu D. Cornea, Electrical and Computer Engineering Department, Rutgers, The State University of New Jersey, Piscataway, New Jersey, 08854, cornea@caip.rutgers.edu. The code was downloaded from [[2]]
Chuang JH, Tsai C, Ko MC (2000) Skeletonization of Three-Dimensional Object Using Generalized Potential Field. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(11):1241-1251.
Nicu D. Cornea, Deborah Silver, and Patrick Min, Curve-Skeleton Applications, Proceedings IEEE Visualization, 2005, pp. 95-102.
Nicu D. Cornea, Deborah Silver, Xiaosong Yuan, and Raman Balasubramanian, Computing Hierarchical Curve-Skeletons of 3D Objects, Springer-Verlag, The Visual Computer, Vol. 21, No. 11, October, 2005, pp. 945-955.
Ultimate erode
The algorithm generates the ultimate eroded points (UEPs) of a Euclidean distance map for an image. These UEPs represent the centers of particles that would be separated by segmentation. A UEP's gray value is equal to the radius of the virtual inscribed circle of the corresponding particle.
Image Types
It requires a binary image (boolean type) as an input.
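A common way to compute UEPs, sketched below with SciPy, is to take the regional maxima of the Euclidean distance map of the foreground: each surviving point marks a particle center, and its value approximates the inscribed radius in pixels. This is an illustrative sketch of the idea, not MIPAV's implementation (which also uses the image resolution and the minimum-distance parameter).

```python
import numpy as np
from scipy import ndimage

def ultimate_eroded_points(binary):
    # Euclidean distance from each foreground pixel to the background.
    dist = ndimage.distance_transform_edt(binary)
    # Regional maxima: pixels equal to the maximum of their 3x3 neighborhood.
    local_max = dist == ndimage.maximum_filter(dist, size=3)
    # Keep only foreground maxima; their value is the distance-map radius.
    return np.where(local_max & (dist > 0), dist, 0.0)

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True                 # a single 5x5 square particle
uep = ultimate_eroded_points(img)    # one UEP at the particle center
```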
Applying Ultimate Erode
To apply the algorithm,
 Call Algorithms > Morphological > Ultimate Erode.
 In the Ultimate Erode dialog box that appears,
 Use the Remove objects closer than text box to specify the minimum distance between objects;
 Specify the image frame for the result image;
 And also specify whether you want to apply Ultimate Erode to the whole image or only to selected VOIs.
 Press OK to run the algorithm.
When the algorithm finishes running, the result image appears in the specified frame.
See also Figure 28.
Figure 28: panels (a) and (b).