Morphology


The following sections present several important algorithms that employ mathematical morphological operations, such as erosion, dilation, opening, closing, particle analysis, calculation of the background distance map, etc.

Introduction to the basic morphological operations

In this introductory section we will explain several important concepts in mathematical morphology that are used in MIPAV algorithms. These concepts are as follows: translation, reflection, complement, and difference.

Let's introduce two objects A and B in Z² with components a = (ai, aj) and b = (bi, bj), respectively.

The translation of A by x = (x1,x2), which is denoted as (A)x, can be defined as

Equation 1


Translation.jpg


The reflection of B, denoted as B^, is defined as

Equation 2


Reflection.jpg


The complement of object A is defined as

Equation 3

Complement.jpg


The difference of two objects or images A and B, denoted as A-B, is defined as

Equation 4


Difference.jpg


Figure 1 illustrates the morphological operations mentioned above.


Figure 1. (a) Image A; (b) image A translated; (c) image B; (d) reflection of image B; (e) image A and its complement; (f) the difference of A and B. The origin of the image is shown as a black dot.

Morphologybasics.jpg
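
For readers who prefer code to set notation, the four operations can be sketched in a few lines of Python. This is an illustrative sketch only (it is not MIPAV code); the coordinate sets and the grid used for the complement are arbitrary examples.

```python
# Illustrative sketch (not MIPAV code): the four basic set operations on
# small point sets A and B, with coordinates given as (row, col) pairs.
A = {(1, 1), (1, 2), (2, 1), (2, 2)}            # object A
B = {(0, 0), (0, 1)}                            # kernel/object B

def translate(obj, x):
    """(A)_x = {c | c = a + x for a in A}  (Equation 1)."""
    return {(a0 + x[0], a1 + x[1]) for a0, a1 in obj}

def reflect(obj):
    """B^ = {x | x = -b for b in B}  (Equation 2)."""
    return {(-b0, -b1) for b0, b1 in obj}

def complement(obj, shape):
    """A^c = {x | x not in A}, restricted to a finite grid  (Equation 3)."""
    grid = {(i, j) for i in range(shape[0]) for j in range(shape[1])}
    return grid - obj

def difference(a, b):
    """A - B = {x | x in A and x not in B}  (Equation 4)."""
    return a - b

print(translate(A, (2, 0)))     # A shifted two rows down
print(reflect(B))               # B mirrored through the origin
print(difference(A, B))         # pixels of A that are not in B
```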

Data types

All morphological operations can be applied to images of the following types:

Data type
Description
Boolean
1 bit per pixel/voxel (1 on, 0 off)
Unsigned byte
1 byte per pixel/voxel (values 0 to 255)
Unsigned short
2 bytes per pixel/voxel (values 0 to 65535)

Background Distance map

The Background Distance map operation converts a binary image (which consists of background and foreground pixels) into an image where every foreground pixel has a value corresponding to the minimum distance from the background. The algorithm uses the Euclidean distance metric to transform the image.

For 2D images, the algorithm first determines the image resolution in the X and Y directions. Then it identifies all edge pixels. And finally, it creates a distance map, where each pixel of the map is associated with a corresponding region of the source image. Each pixel is assigned a distance value corresponding to the Euclidean distance between the center of that pixel and the nearest point of the background. The nearest point is located using the image resolution.

For two 2D points P(x1,y1) and Q(x2,y2), the "classical" Euclidean distance is computed as

Equation 5

EuclidDistanceClassical.jpg


When computing the Euclidean distance, the Background Distance map operation takes the image resolutions in the X and Y dimensions into account, so the distance is calculated as

Equation 6

EuclidDistanceImageResolutions.jpg
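
The resolution-aware distance computation of Equation 6 can be sketched with SciPy's Euclidean distance transform, which accepts a per-axis sampling (the pixel resolutions). This is an illustrative sketch assuming SciPy is available; it is not MIPAV's implementation, and the resolution values below are made up.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: a resolution-aware background distance map. For every
# nonzero (foreground) pixel, distance_transform_edt returns the Euclidean
# distance to the nearest zero (background) pixel; 'sampling' supplies the
# pixel size per axis, as in Equation 6.
binary = np.zeros((64, 64), dtype=bool)
binary[16:48, 16:48] = True                    # a square foreground object

res_y, res_x = 0.5, 1.0                        # hypothetical resolutions (mm)
dist_map = ndimage.distance_transform_edt(binary, sampling=(res_y, res_x))
print(dist_map.max())                          # depth of the most interior pixel
```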


Applying Background Distance map

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Background Distance map.
  3. In the Background Distance map dialog box that appears,
    • Specify the destination of the Background Distance map image;
    • And also specify whether you want to calculate the Background Distance map for the whole image or only for the selected VOIs.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the Background Distance map appears in the specified frame (the default option is a new frame). See also Figure 2.

Figure 2. Applying the Background Distance map operation to the image. You can see 1) the original image (left), 2) the dialog box with the default settings (center) and 3) the background distance map (right)

MorphologyBackgroundDistanceOriginal.jpg

MorphologyBackgroundDistanceMApDB.jpg

MorphologyBackgroundDistanceMap.jpg

Close

The closing of an object A by kernel B is defined as the dilation of A by B, followed by the erosion of the result by B. Closing generally smooths sections of contours, fuses narrow breaks, eliminates small holes and fills gaps in the contour. For 2D images it could be expressed using the following equation:

Equation 7

ClosingFormula.jpg


Figure 3 below illustrates the closing of an object A with a disk kernel B. Note that the resulting image was also smoothed by the circular kernel B.

Figure 3. Illustration of closing: (a) the original image A, (b) the dilation of image A by kernel B, (c) the erosion of the result of (b) by the same kernel B

ClosingIllustration.jpg
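
A minimal sketch of Equation 7 using SciPy binary morphology follows; the kernel and the number of iterations are illustrative choices, not MIPAV defaults.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch of Equation 7: closing = dilation followed by erosion
# with the same kernel. The 3x3 cross corresponds to a 4-connected kernel;
# the iteration count is arbitrary.
kernel = ndimage.generate_binary_structure(2, 1)     # 3x3, 4-connected

img = np.zeros((40, 40), dtype=bool)
img[10:30, 10:30] = True
img[18:22, 18:22] = False                            # a small hole to close

closed = ndimage.binary_dilation(img, structure=kernel, iterations=3)
closed = ndimage.binary_erosion(closed, structure=kernel, iterations=3)
# One-call equivalent:
same = ndimage.binary_closing(img, structure=kernel, iterations=3)
print(bool(closed[20, 20]), np.array_equal(closed, same))
```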


Applying the closing

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological >Close.
  3. In the Close dialog box that appears,
    • Specify the number of dilations (from 1 to 20);
    • Specify the number of erosions (from 1 to 20);
    • Select the structural element or kernel;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the closing to the whole image or only to selected VOIs. For the dialog box options, refer to Figure 4.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 5.


Figure 4. Close dialog box

Number of dilations
Indicates the number of dilations to be performed.

ClosingDialogBox.jpg
Number of erosions
Indicates the number of erosions to be performed.
Kernel selection
Use this list box to select the structuring element that will be used for the closing.
Circle diameter (mm)
This option is available only if you choose the User Sized Circle option in the Kernel Selection box.
Enter a value for the circle diameter, here.
Process image in 2.5D
TBD.
Destination
New image - if selected, indicates that the result image appears in a new image window.
Replace image - if selected, indicates that the result image replaces the current active image.
Process
Whole image
If selected, indicates that the closing should be applied to the whole image.
VOI region(s)
If selected, indicates that the closing should be applied only to selected VOI(s).
OK
Performs the closing on the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.
Figure 5. The original image (a) and the result image after the closing that includes 15 dilations followed by 15 erosions applied with the "3x3 -4 connected" kernel

MorphologyClosingOriginal.jpg

(a)
MorphologyClosingResult.jpg

(b)

Delete Objects

The Delete operation deletes objects larger than a maximum size indicated in the Delete dialog box and also objects smaller than the indicated minimum size.
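
The same idea can be sketched with connected-component labeling in SciPy; the helper function and size thresholds below are illustrative, not MIPAV's implementation or defaults.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: label connected components, then keep only objects
# whose pixel count falls inside [min_size, max_size].
def delete_objects(binary, min_size, max_size):
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if min_size <= s <= max_size]
    return np.isin(labels, keep)

img = np.zeros((50, 50), dtype=bool)
img[2:4, 2:4] = True             # 4-pixel object   -> too small, deleted
img[10:30, 10:30] = True         # 400-pixel object -> too large, deleted
img[40:46, 40:46] = True         # 36-pixel object  -> kept
print(delete_objects(img, min_size=10, max_size=100).sum())   # 36
```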


Applying the Delete algorithm

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Delete Objects.
  3. In the Delete Objects dialog box that appears,
    • Specify the maximum size of the object;
    • Specify the minimum size of the object;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the algorithm to the whole image or only to selected VOIs.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 6.

Figure 6. Applying the Delete Objects operation; (a) the original image, (b) the result image. The parameters are shown in the dialog box

DeleteObjectsOriginal.jpg

(a)
DeleteObjectsDB.jpg

DeleteObjectsResult.jpg

(b)

Dilate

We define dilation as a process that consists of obtaining the reflection of B about its origin, and then shifting this reflection by x. The dilation of A by B is then the set of all x displacements such that B^ and A overlap by at least one nonzero element. This can be expressed using the following equation:

Equation 8

Dilation.jpg

The object B is commonly referred to as a structuring element or kernel in dilation, as well as in the other morphological operations. Figure 7 (a) below shows a simple dilation by the symmetric kernel, so that B^=B. Figure 7 (b) shows a kernel designed to achieve more dilation vertically than horizontally.

Figure 7. (a) An original image A, a square structural element B and its reflection, and the dilation of A by B; (b) an original image A, an elongated structural element B and its reflection, and the dilation of A by B

Dilationillustration.jpg

(a)
Dilationillustration1.jpg

(b)
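
The effect of kernel shape shown in Figure 7 can be sketched with SciPy; the kernels below are illustrative stand-ins for MIPAV's kernel presets.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch of Equation 8. A vertically elongated kernel dilates
# more in the vertical direction than the horizontal, as in Figure 7(b).
A = np.zeros((15, 15), dtype=bool)
A[6:9, 6:9] = True                              # a 3x3 object

square_kernel = np.ones((3, 3), dtype=bool)     # symmetric, so B^ = B
tall_kernel = np.ones((5, 1), dtype=bool)       # elongated vertically

d_square = ndimage.binary_dilation(A, structure=square_kernel)
d_tall = ndimage.binary_dilation(A, structure=tall_kernel)
print(d_square.sum(), d_tall.sum())             # 25 and 21 pixels
```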

Applying the dilation

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Dilate.
  3. In the Dilation dialog box that appears,
    • Specify the number of dilations (from 1 to 20);
    • Select the structural element or kernel;
    • Specify the destination of the dilated image;
    • And also specify whether you want to apply the dilation to the whole image or only to selected VOIs. For the dialog box options, refer to Figure 8.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 9.


Figure 8. Dilation dialog box

Number of dilations
Indicates the number of dilations to be performed.

MOrpholigyDilateDialogBox.jpg
Kernel selection
Use this list box to select the structuring element that will be used for dilation.
Circle diameter (mm)
This option is available only if you choose the User Sized Circle option in the Kernel Selection box.
Enter a value for the circle diameter, here.
Destination
New image
If selected, indicates that the result image appears in a new image window.
Replace image
If selected, indicates that the result image replaces the current active image.
Process
Whole image
If selected, indicates that the dilation should be applied to the whole image.
VOI region(s)
If selected, indicates that the dilation should be applied only to selected VOI(s).
OK
Performs the dilation on the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.

Figure 9. The original image (a) and the result image after 5 dilations applied with the "3x3-4 connected" kernel

MorphologyrDilationBefore.jpg
(a)



MorphologyrDilationAfter.jpg
(b)

Distance Map

The algorithm uses the Euclidean distance metric to calculate a distance map for a selected image or image region. For 2D images,

  1. The algorithm determines the image resolution in X and Y directions.
  2. Then it identifies all edge pixels.
  3. Then, it creates a distance map, where each pixel of the map is associated with a corresponding region of the source image. Each pixel is assigned a distance value corresponding to the Euclidean distance between the center of that pixel and the nearest edge point.

For more information, please refer to "Background Distance map".

Applying distance map

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Distance Map.
  3. Specify whether you want to apply the algorithm to the whole image or only to selected VOIs.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 10.

Figure 10. Applying the Distance Map operation to the image. You can see 1) the original image (left), 2) the dialog box with the default settings (center) and 3) the distance map (right)

MorphologyBackgroundDistanceOriginal.jpg
MOrphologyDistanceMapDB.jpg
MorphologyBackgroundDistanceMap.jpg

Erode

The erosion of A by B is the set of all points x such that B, translated by x, is contained in A. For 2D images it could be expressed using the following equation:

Equation 9

ErosionFormula.jpg
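
A minimal sketch of Equation 9 with SciPy follows; the kernel and iteration count are illustrative.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch of Equation 9: a pixel survives only if the kernel,
# centered on it, fits entirely inside the object; repeating the erosion
# shrinks the object further.
kernel = ndimage.generate_binary_structure(2, 1)   # 3x3, 4-connected
A = np.zeros((20, 20), dtype=bool)
A[4:16, 4:16] = True                               # a 12x12 square

eroded = ndimage.binary_erosion(A, structure=kernel, iterations=5)
print(A.sum(), eroded.sum())                       # 144 shrinks to 4
```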


Applying the erosion

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Erode.
  3. In the Erode dialog box that appears,
    • Specify the number of erosions (from 1 to 20);
    • Select the structural element or kernel;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the erosion to the whole image or only to selected VOIs. For the dialog box options, refer to Figure 11.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 12.


Figure 11. Erosion dialog box

Number of erosions
Indicates the number of erosions to be performed.

Erode.jpg
Kernel selection
Use this list box to select the structuring element that will be used for erosion.
Circle diameter (mm)
This option is available only if you choose the User Sized Circle option in the Kernel Selection box.
Enter a value for the circle diameter, here.
Destination
New image
If selected, indicates that the result image appears in a new image window.
Replace image
If selected, indicates that the result image replaces the current active image.
Process
Whole image
If selected, indicates that the erosion should be applied to the whole image.
VOI region(s)
If selected, indicates that the erosion should be applied only to selected VOI(s).
Process image in 2.5D
TBD.
OK
Performs the erosion on the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.
Figure 12. The original image (a) and the result image after 5 erosions applied with the "3x3 -4 connected" kernel

MOrphologyErosionOriginal.jpg

(a)
MorphologyEroded.jpg

(b)

Evaluate Segmentation

The Evaluate Segmentation operation compares the segmentation results of a test image to the segmentation results of an ideal gold standard (true) image. For each evaluated segmentation pair, the false negative volume fraction, the false positive volume fraction, and the positive volume fraction are sent to the Output window. See Figure 13(e).
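
The volume fractions can be sketched with NumPy as below. This uses one common set of definitions, normalized by the gold standard volume; MIPAV's exact definitions may differ in detail, so treat this as an approximation.

```python
import numpy as np

# Illustrative sketch: volume fractions normalized by the true (gold
# standard) volume; not necessarily MIPAV's exact formulas.
def evaluate_segmentation(true_mask, test_mask):
    true_mask = np.asarray(true_mask, dtype=bool)
    test_mask = np.asarray(test_mask, dtype=bool)
    true_vol = true_mask.sum()
    fn = (true_mask & ~test_mask).sum() / true_vol   # missed true voxels
    fp = (~true_mask & test_mask).sum() / true_vol   # spurious test voxels
    tp = (true_mask & test_mask).sum() / true_vol    # correctly captured
    return fn, fp, tp

truth = np.zeros((10, 10), dtype=bool); truth[2:8, 2:8] = True
test = np.zeros((10, 10), dtype=bool); test[3:9, 3:9] = True
print(evaluate_segmentation(truth, test))            # (fn, fp, tp)
```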

Applying evaluate segmentation

To apply the algorithm,

  1. Open a segmented image of interest and its gold standard segmented image.

Noteicon.gif

Note that the images should be of the following types: Boolean, Unsigned byte, or Short. Before running the algorithm, convert the images using the Utilities> Conversion Tools> Convert Type menu. See also "Convert Type" . You can also use the Paint Conversion tools menu options to convert the paint to the Unsigned Byte mask image. See Figure 13.


  2. Select the gold standard segmented image.
  3. Call Algorithms>Morphological> Evaluate Segmentation.
  4. The Evaluate Segmentation dialog box appears.
  5. In the dialog box, select the test image, and then press OK to run the algorithm.

When the algorithm finishes running, the result statistics appear in the Output window. See also Figure 13.


Figure 13. The Evaluate Segmentation algorithm: (a) - a gold standard segmented image, (b) - its unsigned byte mask, (c) - a segmented image for evaluation, (d) - its unsigned byte mask, and (e) - the Evaluate Segmentation dialog box and the Output window

EvSegmImage1.jpg

(a)
EvSegmImage1Mask.jpg

(b)
EvSegmImage2.jpg

(c)
EvSegmImage2Mask.jpg

(d)
EvSegmOutput.jpg
(e)

Fill holes

To fill holes in objects, MIPAV uses the following procedure (a code sketch follows the list):

  • It segments the image to produce a binary representation of objects;
  • Then, it computes the complement of the binary image as a mask image;
  • It generates a seed image as the border of the image;
  • Then, it propagates the seed into the mask;
  • And finally, it complements the result of the propagation to produce the final result.
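
The procedure above maps directly onto a short SciPy sketch (binary_propagation performs the seed-propagation step); this is an illustration, not MIPAV's code.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch of the hole-filling procedure described above.
def fill_holes(binary):
    mask = ~binary                                      # complement as mask
    seed = np.zeros_like(binary)
    seed[0, :] = mask[0, :]; seed[-1, :] = mask[-1, :]  # seed = image border
    seed[:, 0] = mask[:, 0]; seed[:, -1] = mask[:, -1]
    outside = ndimage.binary_propagation(seed, mask=mask)  # propagate seed
    return ~outside                                     # complement result

ring = np.zeros((20, 20), dtype=bool)
ring[5:15, 5:15] = True
ring[8:12, 8:12] = False                                # a hole
print(ring.sum(), fill_holes(ring).sum())               # 84 -> 100
# SciPy's ndimage.binary_fill_holes implements the same idea directly.
```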

Applying fill holes

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Fill Holes.
  3. The Fill Objects dialog box appears. In the dialog box:
    • Specify if you want to process the image in 2.5D (this option is available for 3D images only);
    • Specify the destination of the result image.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the new image frame. See also Figure 14.

Figure 14. The Fill Holes algorithm: (a) the original image; (b) the Fill Objects dialog box; (c) the result image

FillHolesOriginal.jpg

(a)
FillObjectsDialogBox.jpg

(b)
FillHolesResult.jpg

(c)

Find Edges

The technique finds the edges of the objects in an image using combinations of the following morphological operations: dilation, erosion, and XOR. The algorithm has two options: it can find outer edges or inner edges, depending on the user's choice. The algorithm first determines the image resolution, and then it uses the resolution to calculate the kernel (structuring element) that will be used for the dilation or erosion.

If the Outer Edging option is selected, the algorithm dilates the original image, and then performs an XOR operation between the dilated image and the original image.

If the Inner Edging option is selected, the algorithm erodes the original image, and then performs an XOR operation between the eroded image and the original image.
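
Both options reduce to one line each with SciPy; the kernel below is an illustrative 3x3, 4-connected element.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: outer edging = XOR(dilated, original),
# inner edging = XOR(eroded, original).
kernel = ndimage.generate_binary_structure(2, 1)
obj = np.zeros((20, 20), dtype=bool)
obj[5:15, 5:15] = True

outer_edges = np.logical_xor(ndimage.binary_dilation(obj, kernel), obj)
inner_edges = np.logical_xor(ndimage.binary_erosion(obj, kernel), obj)
print(outer_edges.sum(), inner_edges.sum())
```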

Applying find edges

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> Find Edges.
  3. The Find Edges dialog box appears. In the dialog box:
    • Specify what option you would like to use - Outer Edging or Inner Edging;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the algorithm to the whole image or only to selected VOI(s).
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the new image frame. See also Figure 15.

Figure 15. The Find Edges algorithm: (a) the original image; (b) the Find Edges dialog box; (c) the result image

FindEdgesOriginal.jpg
(a)
FindEdgesDB.jpg
(b)
FindEdgesResult.jpg
(c)

ID objects

The algorithm labels each object in an image with a different integer value and also deletes objects whose sizes fall outside the user-defined thresholds. It converts the image to black and white in order to prepare it for boundary tracing, and then it uses morphology functions, such as open, close, and fill holes, to remove pixels that do not belong to the objects of interest.

Figure 16. ID Objects algorithm

IDObjects.jpg
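
A sketch of the labeling plus size-based deletion with SciPy follows; the thresholds are illustrative, and MIPAV's preprocessing (open, close, fill holes) is omitted here.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: label each connected object with a distinct integer,
# then zero out labels whose sizes lie outside user-defined thresholds.
img = np.zeros((40, 40), dtype=bool)
img[2:5, 2:5] = True; img[10:25, 10:25] = True; img[30:36, 30:36] = True

labels, n_objects = ndimage.label(img)             # labels are 1..n_objects
sizes = ndimage.sum(img, labels, index=np.arange(1, n_objects + 1))
for obj_id, size in enumerate(sizes, start=1):
    if not (10 <= size <= 300):                    # illustrative thresholds
        labels[labels == obj_id] = 0               # delete this object
print(n_objects, np.unique(labels))
```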

Applying id objects

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological> ID objects.
  3. The Identify Objects dialog box appears. In the dialog box:
    • Specify the destination of the result image;
    • Specify whether you want to apply the algorithm to the whole image or only to selected VOI(s);
    • And also enter the max and min size for the particles you would like to exclude from the calculation.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the designated image frame and the statistics appear in the Output window. See also Figure 16.

Noteicon.gif

Note: you might consider smoothing the image and removing excess noise first, and then applying ID Objects.


Morphological Filter

The Morphological filter can be used to correct an image for shading that was introduced in the image-forming procedure. In other words, it corrects for non-uniform illumination and non-uniform camera sensitivity.

The illumination over the microscope field of view, I_ill(x,y), usually interacts in a multiplicative (non-uniform) way with the biological object a(x,y) to produce the image b(x,y):

Equation 10

b(x,y) = I_ill(x,y) · a(x,y),

where the object a(x,y) represents one of several microscope imaging modalities, such as the reflectance model r(x,y), the optical density model 10^(-OD(x,y)), or the fluorescence model c(x,y). The Morphological filter works with the fluorescence model c(x,y), assuming that this model only holds for low concentrations. The camera and microscope also contribute gain and offset to the image, thus the equation for c(x,y) can be rewritten as:

Equation 11

c[m,n] = gain[m,n] · b[m,n] + offset[m,n] = gain[m,n] · I_ill[m,n] · a[m,n] + offset[m,n]

where (a) the camera gain and offset can vary as a function of position and therefore contribute to the shading, and (b) I_ill[m,n], gain[m,n], and offset[m,n] are slowly varying compared to a[m,n].

The algorithm first computes (using morphological smoothing) a smoothed version of c[m,n], where the smoothing scale is large compared to the size of the objects in the image. This smoothed version is an estimate of the image background. Second, it subtracts the smoothed version from c[m,n]. And then, it restores the desired average brightness value. This can be written as the following formula:

Equation 12

â[m,n] = c[m,n] - MorphSmooth{c[m,n]} + constant,

where the morphological smoothing filter MorphSmooth{c[m,n]} involves two basic operations - the maximum filter and the minimum filter.

    • In the maximum filter, defined over a window W of J x K pixels, where both J and K are considered to be of odd size (e.g., 5 x 3), the value in the output image, corresponding to the center pixel A in the input window, is the maximum brightness value found in the input window.
    • In the minimum filter, defined over a similar J x K window (W), the value in the output image, corresponding to the center pixel A in the input window, is the minimum brightness value found in the input window.

Therefore, the maximum and minimum filters can be described as examples of the morphological dilation and erosion respectively.

Dilation:

Equation 13

D(A) = max_{[j,k] ∈ W} {a[m - j, n - k]} = max_W(A)

Erosion:

Equation 14

E(A) = min_{[j,k] ∈ W} {a[m - j, n - k]} = min_W(A)

Finally, the algorithm defines the morphological smoothing filter as follows:

Equation 15

MorphSmooth(A) = min(max(max(min(A)))),

where all of the filter operations are applied over the same J x K filter window W. To better understand equation 15, assume that you have two "subroutines" available, one for the minimum filter and the other one for the maximum filter, and that you apply them in a sequential fashion. See Figure 17.

Figure 17. a: the original image; b: the same image after applying the noise reduction algorithm; c: the image from (b) after applying Morphological filter with parameters shown in Figure 18

MOrphFIlterOriginal.jpg

MOrphFIlterafterNoiseRed.jpg

MOrphFIlterResult.jpg

(a)
(b)
(c)
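
Equations 12-15 can be sketched with SciPy's grey-level rank filters; the window size, the synthetic test image, and the choice of the image mean as the restored constant are illustrative assumptions rather than MIPAV defaults.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch of Equations 12-15: the maximum/minimum filters over
# a J x K window are the dilation/erosion of Equations 13-14; chaining
# min(max(max(min(.)))) gives the MorphSmooth background estimate, which is
# subtracted and the mean brightness restored.
def morph_smooth(c, size=(15, 15)):
    opened = ndimage.maximum_filter(ndimage.minimum_filter(c, size), size)
    return ndimage.minimum_filter(ndimage.maximum_filter(opened, size), size)

rng = np.random.default_rng(0)
c = rng.normal(100.0, 5.0, size=(128, 128))        # stand-in for c[m, n]
background = morph_smooth(c)
corrected = c - background + c.mean()              # Equation 12, constant = mean
print(corrected.shape)
```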

Applying Morphological filter

Tipicon.gif

Before applying Morphological filter you might consider smoothing the image using Gaussian Blur and (or) reducing the image noise using one of the noise reduction algorithms available in MIPAV.


To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological >Morphological Filter.
  3. In the Morphological Filter dialog box that appears,
    • Specify the window size in the X direction;
    • Specify the window size in the Y direction;
    • Specify the window size in the Z direction (for 3D images only);
    • For 2.5D images, specify whether you want to process slices independently or not;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the filter to the whole image or only to selected VOIs. For the dialog box options, refer to Figure 18.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 17.


Figure 18. Morphological Filter dialog box

Filter size
Specify the size of the filter window W in X, Y and Z (for 3D images) direction. See also equation 13 and 14.

MOrphFIlterDialogBox.jpg
Options
Process each slice independently - check this check box if you want to process slices independently. This option only works for 2.5D images.
Destination
New image - if selected, indicates that the result image appears in a new image window.
Replace image - if selected, indicates that the result image replaces the current active image.
Process
Whole image
If selected, indicates that the algorithm should be applied to the whole image.
VOI region(s)
If selected, indicates that the algorithm should be applied only to selected VOI(s).
OK
Applies the morphological filter to the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.

Open

The opening of image A by a structuring element (a kernel) B is simply a combination of two operations: an erosion of A by B, followed by a dilation of the result by B. See also Figure 19.

Figure 19. Illustration of opening: (a) the original image A, (b) the erosion of image A by kernel B, (c) the dilation of the result of (b) by the same kernel B, (d) image A after opening

MorphologyOpeningIllustration.jpg

Opening generally smooths the contour of an image, breaks narrow strips, and eliminates thin protrusions. The equation for the opening is as follows:

Equation 16

OpeningFormula.jpg
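
A minimal sketch of Equation 16 with SciPy follows; the kernel, iteration count, and test image are illustrative.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch of Equation 16: opening = erosion followed by dilation
# with the same kernel; it removes protrusions thinner than the kernel.
kernel = ndimage.generate_binary_structure(2, 1)
A = np.zeros((30, 30), dtype=bool)
A[10:20, 10:20] = True
A[14:16, 20:28] = True                              # a thin protrusion

opened = ndimage.binary_erosion(A, structure=kernel, iterations=2)
opened = ndimage.binary_dilation(opened, structure=kernel, iterations=2)
# One-call equivalent: ndimage.binary_opening(A, structure=kernel, iterations=2)
print(A.sum(), opened.sum())                        # the protrusion is gone
```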


Applying the opening

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological >Open.
  3. In the Open dialog box that appears,
    • Specify the number of erosions (from 1 to 20);
    • Specify the number of dilations (from 1 to 20);
    • Select the structural element or kernel;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the opening to the whole image or only to selected VOIs. For the dialog box options, refer to Figure 20.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 21.


Figure 20. Open dialog box

Number of erosions
Indicates the number of erosions to be performed.

OpenDialogBox.jpg
Number of dilations
Indicates the number of dilations to be performed.
Kernel selection
Use this list box to select the structuring element that will be used for the opening.
Circle diameter (mm)
This option is available only if you choose the User Sized Circle option in the Kernel Selection box.
Enter a value for the circle diameter, here.
Process image in 2.5D
TBD.
Destination
New image - if selected, indicates that the result image appears in a new image window.
Replace image - if selected, indicates that the result image replaces the current active image.
Process
Whole image
If selected, indicates that the algorithm should be applied to the whole image.
VOI region(s)
If selected, indicates that the algorithm should be applied only to selected VOI(s).
OK
Performs the opening on the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.
Figure 21. The original image (a) and the result image after the opening that includes 19 erosions followed by 19 dilations applied with the "3x3 -4 connected" kernel

MorphologyOpenOriginal.jpg
(a)
MorphologyOpenResult.jpg
(b)

Particle Analysis

This method combines several morphological operations to produce information on the particle composition of binary images. A code sketch of the watershed separation step is given below, after Figure 22.

  1. It performs a number of openings and closings with the user-defined structural element. This separates the foreground elements from the background.
  2. Then, it applies skeletonization followed by pruning to the result.
  3. To separate slightly overlapping particles, the method then uses watershed segmentation, which works as follows:
    • In the binary image, the black pixels are replaced with grey pixels of an intensity proportional to their distance from a white pixel (i.e., black pixels close to the edge are light grey, and those closer to the middle are nearer black);
    • This forms the Euclidean distance map (EDM) for the image;
    • From this, it calculates the centers of the objects, or the ultimate eroded points (UEPs), i.e., points that are equidistant from the edges. These points are then dilated until they meet another black pixel, and then a watershed line is drawn. See also "Ultimate erode".
  4. The result statistics appear in the Output MIPAV window. See also Figure 22.
Figure 22. Particle Analysis: (a) - an original binary image. The following steps were performed: (b) erosion; (c) dilation; (d) skeletonize; (e) prune; (f) watershed; (g) - the result image; (h) - particle statistics as it appears in the Output window

PAOriginal.jpg

(a)
PAErode1.jpg

(b)
PADilate1.jpg
(c)
PASkeleton.jpg

(d)
PACheck.jpg

(e)
PAWatershed.jpg

(f)
PAResult.jpg

(g)


PAOutput.jpg

(h)
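
The watershed separation (step 3 of the procedure above) can be sketched with scikit-image and SciPy as below; the test image, the min_distance value, and the use of peak_local_max to pick the UEP markers are illustrative choices, not MIPAV's exact procedure.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Illustrative sketch of the watershed separation step: two touching disks
# are split along the watershed line between their UEPs.
yy, xx = np.mgrid[0:80, 0:80]
blobs = ((((xx - 28) ** 2 + (yy - 40) ** 2) < 15 ** 2) |
         (((xx - 52) ** 2 + (yy - 40) ** 2) < 15 ** 2))

edm = ndimage.distance_transform_edt(blobs)               # Euclidean distance map
peaks = peak_local_max(edm, min_distance=10, labels=blobs.astype(int))
markers = np.zeros_like(blobs, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)    # UEPs as markers

particles = watershed(-edm, markers, mask=blobs)          # separated particles
print(int(particles.max()))                               # number of particles
```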

Image Types

Particle analysis can be applied only to binary images.

Applying particle analysis

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological >Particle analysis.
  3. In the Particle Analysis New dialog box that appears,
    • Specify the number of openings (from 1 to 20);
    • For opening(s), also specify the structural element or kernel;
    • Specify the number of closings (from 1 to 20);
    • For closing(s), also specify the structural element or kernel;
    • Specify the destination of the result image;
    • And also specify whether you want to apply the algorithm to the whole image or only to selected VOIs. For the dialog box options, refer to Figure 23.
  4. Check the Show Intermediate Result Frames box, to view intermediate images.
  5. Press OK to run the algorithm.

When the algorithm finishes running, the particle statistics appear in the Output window and the result image appears in the specified frame. See also Figure 22.


Figure 23. Particle Analysis New dialog box

Number of open
Indicates the number of openings to be performed.

PANewDB.jpg
Kernel selection
Use this list box to select the structuring element that will be used for opening.
Number of close
Indicates the number of closings to be performed.
Kernel selection
Use this list box to select the structuring element that will be used for closing.
Circle diameter (mm) for openings and closings
This option is available only if you choose the User Sized Circle option in the Kernel Selection box.
Enter a value for the kernel diameter, here.
Destination
New image - if selected, indicates that the result image appears in a new image window.
Replace image - if selected, indicates that the result image replaces the current active image.
Process
Whole image
If selected, indicates that the algorithm should be applied to the whole image.
VOI region(s)
If selected, indicates that the algorithm should be applied only to selected VOI(s).
Show Intermediate Result Frames
Check to view intermediate images.
OK
Performs the particle analysis on the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.


Skeletonize

Skeletonize operation skeletonizes the image by means of a lookup table, which is used to repeatedly remove pixels from the edges of objects in a binary image, reducing them to single pixel wide skeletons. The method is based on a thinning algorithm by Zhang and Suen.

The lookup table shown below has an entry for each of the 256 possible 3x3 neighborhood configurations. An entry of `1' signifies to delete the indicated pixel on the first pass, `2' means to delete the indicated pixel on the second pass, and `3' indicates to delete the pixel on either pass.

The lookup table
0, 0, 0, 1, 0, 0, 1, 3, 0, 0, 3, 1, 1, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 2, 0, 3, 0, 3, 3, 0, 0, 0, 0,
0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 3, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0,
2, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 3, 0, 2, 0, 0, 1, 3, 1, 0, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 1,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 3, 1, 3, 0, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 2, 3, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 3, 3, 0, 1, 0, 0, 0, 0, 2, 2, 0, 0,
2, 0, 0, 0
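
scikit-image ships a comparable thinning routine, so the operation can be sketched as follows; this illustrates the idea and is not MIPAV's lookup-table code.

```python
import numpy as np
from skimage.morphology import skeletonize

# Illustrative sketch: scikit-image's skeletonize performs a Zhang-Suen-style
# thinning comparable to the lookup-table approach described above.
blob = np.zeros((30, 30), dtype=bool)
blob[5:25, 10:20] = True                  # a thick vertical bar

skeleton = skeletonize(blob)
print(blob.sum(), skeleton.sum())         # 200 pixels reduce to a thin line
```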


Applying the algorithm

To apply the algorithm,

  1. Open an image of interest.
  2. Call Algorithms>Morphological >Skeletonize.
  3. In the Skeletonize dialog box that appears,
    • Specify the number of pixels to remove (prune);
    • Specify the destination frame for the result image;
    • And also specify whether you want to apply the skeletonizing to the whole image or only to selected VOIs.
  4. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame. See also Figure 24.

Figure 24. Skeletonize operation: (a) the original image, (b) the result after applying the operation with the parameters shown in the dialog box

SkeletonozeOriginal.jpg

(a)
SkeletonizeDB.jpg

SkeletonizeResult.jpg

(b)

Skeletonize 3D pot field

The Skeletonize 3D Pot Field algorithm uses an iterative approach that simultaneously performs a hierarchical shape decomposition and computes a corresponding set of multi-resolution skeletons. A skeleton of a model is extracted from the components of its decomposition, so the two processes and the quality of their results are interdependent. In particular, if the quality of the extracted skeleton does not meet some user-specified criteria, the model is decomposed into finer components and a new skeleton is extracted from these components. The process of simultaneous shape decomposition and skeletonization iterates until the quality of the skeleton becomes satisfactory.

For an input image, the algorithm computes a family of increasingly detailed curve-skeletons. The algorithm is based upon computing a repulsive force field over a discretization of the 3D object (voxelized representation) and using topological characteristics of the resulting vector field, such as critical points and critical curves, to extract the curve-skeleton.

The algorithm performs the following steps:

  1. It identifies the boundary voxels of the 3D object as the source of the repulsive force field.
  2. It computes the repulsive force function at each object voxel and produces a 3D vector field results.
  3. The algorithm detects the critical points of the 3D vector field and connects them using path-lines by integrating over the vector-field. This step produces the core skeleton.
  4. Then, it computes the divergence of the vector field at each voxel. Points with low divergence values are selected as new seeds for new skeleton segments. Varying the divergence threshold (given as a percentage, i.e., the top 20%) creates the Level 1 hierarchy after the core skeleton.
  5. Finally, the algorithm computes the curvature at every boundary voxel and selects new seed points based on a user-supplied curvature threshold, given again as a percentage of the highest curvature value in the dataset (e.g., the top 30%). This adds another level of hierarchy to the core and divergence skeletons, the Level 2 skeleton hierarchy.
Figure 25. (a) - the original image, (b) - the skeleton[1]

Skeletonize3dOriginal.jpg

(a)
Skeletonize3DResult.jpg

(b)
[1] These images are borrowed from the Nicu D. Cornea web site: http://www.caip.rutgers.edu/~cornea/Skeletonization/

Algorithm parameters

The key idea behind the potential field approach used in the algorithm is to generate a vector field inside the image by charging the image boundary. In order to generate that field, the algorithm uses a number of user-defined parameters, such as the electrical point charge or field strength, that should be specified in the Skeletonize 3D dialog box; see Figure 27.


Maximum value for outer pixel to be object

The algorithm places electrical point charges on the image boundary, considering each boundary voxel to be a point charge (a boundary voxel is an image voxel that has a background neighbor). In order to differentiate the background voxels from image voxels, the Maximum value for outer pixel to be object parameter is provided in the Skeletonize 3D dialog box. This parameter allows a user to enter the maximum intensity value for image voxels.

Field strength and Distance of electrical charge from object boundary

The repulsive force at an interior object voxel due to a nearby point charge is defined as a force pushing the voxel away from the charge, with a strength that is inversely proportional to the distance between the voxel and the charge raised to a certain power M, which is called (in this algorithm) the field strength. The Field strength and Distance of electrical charge from object boundary parameters are defined by the user. The final repulsive force at each interior voxel is computed by summing the influences of all the point charges. The resulting vector field is also called a force field.

Note that a high field strength (M) causes, for a given interior voxel, the charges from the local or closer points to have a higher influence than the more distant charges, therefore creating a vector field with sharper path-lines because it follows the local boundary topology more closely. A low value for the field strength parameter will produce a smoother vector field, with more rounded corners, since the vector direction at a particular point is then influenced by more point charges.

Setting the Distance of electrical charge from object boundary parameter to a very low value is not a good idea. For example, imagine a very long cylinder. Setting the threshold smaller than half the length of the cylinder will cause the field not to flow towards the one attracting point in the middle of the cylinder. Instead, it will go towards the center, creating a critical point at each slice along the cylinder.

Fractions of divergence points to use

The divergence of a vector field in a given region of space is a scalar quantity that characterizes the rate of flow leaving that region. A negative divergence value at a point indicates that the flow is moving mainly towards the given point. The algorithm takes the points with low divergence values, which indicate a low spot or "sink"; from these points, the new seed points are chosen using a threshold on the divergence value. The threshold is given as a fraction of the lowest divergence value in the entire vector field. From each of these new seed points, a new field-line is generated, which will connect to the core skeleton.

By varying the Fractions of divergence points to use parameter the user can vary the number of seed points selected, and therefore the number of new skeleton segments, generating an entire hierarchy of skeletons that is called the Level 1 skeleton. Different values can be chosen based upon the application of the curve-skeleton.
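
The divergence-based seed selection can be sketched with NumPy as below; the vector field is synthetic random data purely for illustration (in the algorithm it is the repulsive force field), and the 20% fraction is just an example value.

```python
import numpy as np

# Illustrative sketch: divergence of a 3D vector field on a voxel grid and
# selection of low-divergence seed points by a fractional threshold.
rng = np.random.default_rng(1)
fx, fy, fz = rng.normal(size=(3, 32, 32, 32))

divergence = (np.gradient(fx, axis=0) +
              np.gradient(fy, axis=1) +
              np.gradient(fz, axis=2))

cutoff = np.quantile(divergence, 0.2)            # lowest 20% of divergence values
seeds = np.argwhere(divergence <= cutoff)        # candidate seed voxels
print(seeds.shape)
```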

Applying Skeletonize 3D

Preliminary steps. In order to apply the Skeletonize 3D Pot Field algorithm to an image, the image must have a sufficient number of planes composed solely of background pixels at the x, y, and z boundaries of the image. For the default value of zero for the distance of the electrical charges from the object boundary, there must be background planes at x = 0, x = xDim - 1, y = 0, y = yDim - 1, z = 0, and z = zDim - 1. Call the Utilities > Add Image Margins, Utilities > Pad, or Utilities > Insert Slice menu to run a tool that helps you create a version of the image with a sufficient number of padding pixels.

To apply the algorithm,

  1. Open an image of interest, and then call Algorithms>Morphological >Skeletonize 3D Pot. The Skeletonize 3D dialog box appears.
  2. Fill out the dialog box. For the dialog box options, refer to Figure 27.
  3. Press OK to run the algorithm.

Be patient, because the first algorithm run may take a considerable time. When the algorithm finishes running, the result skeleton image appears. See Figure 26.

Figure 26. (a): the original image; (b): the Pad dialog box; (c): the padded image; (d): the Output window shows the Skeletonize 3D statistics; (e): the Level 1 skeleton obtained using the default parameter values, see Figure 27.

Skel3DOrig 1.jpg


(a)
Skel3PadDB 1.jpg

(b)
Skel3PadedOrig 1.jpg


(c)
Skel3DOutputwindow 1.jpg

(d)
Skel3DOSkel 1.jpg

(e)

Algorithm notes

  • Most of the image calculation time (about 98%) is spent in the potential field calculation. Therefore, the first time you run the algorithm, select the Save the Vector Fields radio button to save the x, y, and z vector fields and the Level 1 skeleton. On the following runs, select the Load the Vector Field from Files radio button to load the x, y, and z vector fields and the Level 1 skeleton. See also Figure 27.
  • Then, you can also vary the Fractions of divergence points to use parameter to change the extensiveness of the skeleton generated. As more divergence points are used in the Level 2 skeleton calculation, the skeleton will become more extensive.
  • The algorithm must be run on a single solid object. Use the following options to make sure that your image is a single solid object:
      • A user selected threshold (defined using the Maximum value for outer pixel to be object parameter) helps separate the background voxels from the object voxels.
      • The default selected checkbox for slice by slice hole filling is used to convert the image into a single solid object.
      • After identifying all image objects in 3D, all but the largest object are deleted.

Figure 27. Skeletonize 3D dialog box

Parameters
Skeletonize3D.jpg
Maximum value for outer pixel to be object
For the image voxels, this parameter determines the maximum intensity value that a voxel can have in order to be considered an image voxel (not a background voxel). The default value is set to 1.0.
Slice by slice hole filling
If checked, performs the slice by slice hole filling for the image. Refer to "Algorithm notes".
Distance of electrical charge from object boundary
This parameter determines the maximum distance between an interior voxel and a boundary point charge. If a boundary point is at a distance greater than the entered value, then it is ignored, i.e., it does not influence the field at this point. See also "Algorithm parameters".
Field strength
A vector field inside the image is determined by the strength of the repulsive force from the nearby charges that affects each interior voxel. The repulsive field strength is inversely proportional to the distance between the voxel and the charge (determined by the Distance of electrical charge from object boundary parameter) raised to a certain power M, which is determined by the Field Strength parameter. The final repulsive force at each interior voxel is computed by summing the influences of all the point charges.
If the Field Strength parameter value is set high, it will create a vector field with sharper path-lines, because it follows the local boundary topology more closely. A low value for the Field Strength parameter will produce a smoother vector field, with more rounded corners. See also "Algorithm parameters".
Fractions of divergence points to use
By varying this parameter you can vary a number of seed points and correspondingly the number of new skeleton segments, generating an entire hierarchy of skeletons for the Level 1 skeleton. See also "Algorithm parameters".
Save the vector fields to files
The first time this program is run on an image select this option to save the x, y, and z vector fields and the Level 1 skeleton.
Load the vector field from files
On the following runs select the Load the vector field from files option to load the previously saved x, y, and z vector fields and the Level 1 skeleton.
On following runs vary the Fractions of divergence points to use parameter to change the extensiveness of the skeleton generated. As more divergence points are used in the Level 2 skeleton calculation, the skeleton will become more extensive.
Open XVF file
Load
Loads the file.
Remove
Removes the file from loading.
Don't save the vector field
If checked, the vector field is not saved to a file.
Output all skeleton points
If checked, sends to the output all skeleton points.
Output only segmented end points
If checked, sends to the output only segmented end points.
OK
Performs the skeletonizing on the selected image based on your choices in this dialog box.
Cancel
Disregards changes you made in this dialog box and closes the dialog box.
Help
Displays online help for the algorithm.

Skeletonize 3D Pot references

[1] This is a port of the C code for pfSkel: Potential Field Based 3D Skeleton Extraction, written by Nicu D. Cornea, Electrical and Computer Engineering Department, Rutgers, The State University of New Jersey, Piscataway, New Jersey, 08854, cornea@caip.rutgers.edu. The code was downloaded from [2].
Chuang J-H, Tsai C, Ko M-C (2000) Skeletonization of Three-Dimensional Object Using Generalized Potential Field. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(11):1241-1251
Nicu D. Cornea, Deborah Silver, and Patrick Min, Curve-Skeleton Applications, Proceedings IEEE Visualization, 2005, pp. 95-102.
Nicu D. Cornea, Deborah Silver, Xiaosong Yuan, and Raman Balasubramanian, Computing Hierarchical Curve-Skeletons of 3D Objects, Springer-Verlag, The Visual Computer, Vol. 21, No. 11, October, 2005, pp. 945-955.

Ultimate erode

The algorithm generates the ultimate eroded points (UEPs) of a Euclidean distance map for an image. These UEPs represent the centers of particles that would be separated by segmentation. The UEP's gray value is equal to the radius of the virtual inscribed circle of the corresponding particle.
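
A sketch of the idea with SciPy: take the Euclidean distance map and keep its local maxima inside the objects; the 5x5 neighborhood used to detect the maxima is an illustrative choice, not MIPAV's.

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch: UEPs as local maxima of the Euclidean distance map;
# each UEP's value approximates the radius of the inscribed circle.
binary = np.zeros((60, 60), dtype=bool)
binary[10:30, 10:30] = True                      # one square particle
binary[35:55, 30:50] = True                      # another particle

edm = ndimage.distance_transform_edt(binary)
local_max = (edm == ndimage.maximum_filter(edm, size=5)) & binary
ueps = np.where(local_max, edm, 0.0)             # UEP gray value ~ radius
print(int((ueps > 0).sum()), float(ueps.max()))
```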

Image Types

It requires a binary image (boolean type) as an input.

Applying Ultimate Erode

To apply the algorithm,

  1. Call Algorithms>Morphological >Ultimate Erode.
  2. In the Ultimate Erode dialog box that appears,
    • Use the Remove objects closer than text box to specify the min distance between the objects;
    • Specify the image frame for the result image;
    • And also specify whether you want to apply Ultimate Erode to the whole image or only to selected VOIs.
  3. Press OK to run the algorithm.

When the algorithm finishes running, the result image appears in the specified frame.

See also Figure 28.


Figure 28. Ultimate Erode - (a) the original binary image, (b) the result after applying the operation with the parameters shown in the dialog box

UltimateErodeOriginal.jpg

(a)
UltimateErodeDialogBox.jpg

UltimateErode Result.jpg

(b)

See also

Segmenting Images Using Contours and Masks