Extract Brain: Extract Brain Surface (BET) and Extract Brain: Extract Brain Surface (BSE): Difference between pages

This algorithm extracts the surface of the brain from a T1-weighted MRI, eliminating the areas outside of the brain. This algorithm is similar to the brain extraction tool developed at the Oxford Center for Functional Magnetic Resonance Imaging of the Brain.
This algorithm strips areas outside the brain from a T1-weighted magnetic resonance image (MRI). It is based on the Brain Surface Extraction (BSE) algorithms developed at the Signal and Image Processing Institute at the University of Southern California by David W. Shattuck. This is MIPAV's interpretation of the BSE process and may produce slightly different results compared to other BSE implementations.


=== Background ===
=== Background ===


The procedure used by this algorithm to extract the brain surface consists of four major steps:
This algorithm works to isolate the brain from the rest of a T1-weighted MRI using a series of image manipulations. It relies on the fact that the brain is the largest region surrounded by a strong edge within an MRI of a patient's head. The BSE algorithm has four phases (a sketch illustrating the edge-detection phase follows this list):


* Step 1, Estimating image parameters
* Step 1, Filtering the image to remove irregularities.
* Step 2, Selecting an initial closed mesh within the brain
* Step 2, Detecting edges in the image.
* Step 3, Evolving a mesh in a surface tangent direction, in a surface normal direction, and in a vertex normal direction
* Step 3, Performing morphological erosions and brain isolation.
* Step 4, Identifying the voxels inside and on the brain surface
* Step 4, Performing surface cleanup and image masking.
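
As an illustration of the edge-detection phase (Step 2), the following minimal sketch marks voxels with a positive Laplacian response as objects (1) and everything else as background (0). It assumes the Laplacian of the diffusion-filtered volume is already available as a flat float array; the class name and array layout are illustrative assumptions, not MIPAV's actual code.

<syntaxhighlight lang="java">
/**
 * Minimal sketch of the BSE edge-detection phase (Step 2): a thresholded
 * zero-crossing of the Laplacian, reduced here to marking voxels with a
 * positive Laplacian response as object (1) and all others as background (0).
 * The Laplacian volume is assumed to have been computed from the
 * diffusion-filtered image produced in Step 1.
 */
public final class LaplacianEdgeMaskSketch {

    public static byte[] binarize(float[] laplacian) {
        byte[] mask = new byte[laplacian.length];
        for (int i = 0; i < laplacian.length; i++) {
            // Positive areas of the Laplacian become objects (1), the rest 0.
            mask[i] = (byte) (laplacian[i] > 0f ? 1 : 0);
        }
        return mask;
    }
}
</syntaxhighlight>
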


==== Step 1, Estimating image parameters ====
==== Step 1, Filtering the image to remove irregularities ====


A number of important parameters must be calculated from histograms composed of 1024 bins. These parameters are used to estimate the initial axes of the ellipsoid and also affect the evolution of the initial ellipsoid toward the surface of the brain. Using an ellipsoid as the initial surface, as opposed to a sphere, significantly improves the segmentation, especially around the eyes and sinus cavities. A sketch of the threshold estimation follows the list below.
The first step that MIPAV performs is to filter the original image to remove irregularities, thus making the next step, edge detection, easier. The filter chosen for this is Filters (Spatial): Regularized Isotropic (Nonlinear) Diffusion. Figure 1-A shows the original image, and Figure 1-B shows the image after it is filtered.


Three intensity thresholds are computed:
==== Step 2, Detecting edges in the image ====


* Minimum threshold: ''t''<sub>min</sub> = 0.5 &times; ''t''<sub>back</sub> (half of the background threshold)
Next, MIPAV performs a thresholded zero-crossing detection of the filtered image's laplacian. This process marks positive areas of the laplacian image as objects by setting them to 1 and identifies nonobject areas by setting their values to 0 (Figure 1-C).
* Background threshold: ''t''<sub>back</sub> = histogram index with the greatest count
* Median threshold: ''t''<sub>med</sub> = the median of the pixel values within the initial ellipsoid that are greater than ''t''<sub>back</sub>
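
The following sketch, assuming non-negative intensities stored in a flat float array, estimates ''t''<sub>back</sub> from a 1024-bin histogram and derives ''t''<sub>min</sub> from it; ''t''<sub>med</sub> is computed later, once the initial ellipsoid is available. The class and method names are hypothetical.

<syntaxhighlight lang="java">
/**
 * Sketch of the histogram-based threshold estimation (Step 1). The 1024-bin
 * histogram and the threshold definitions follow the description above;
 * everything else (names, array layout) is illustrative.
 */
public final class BetThresholdSketch {

    public static final int BINS = 1024;

    /** Background threshold: the intensity of the histogram bin with the greatest count. */
    public static float backgroundThreshold(float[] intensities) {
        float max = 0f;
        for (float v : intensities) {
            if (v > max) max = v;
        }
        int[] histogram = new int[BINS];
        float scale = (max > 0f) ? (BINS - 1) / max : 0f;
        for (float v : intensities) {
            int bin = Math.min(BINS - 1, (int) (v * scale));
            histogram[bin]++;
        }
        int peakBin = 0;
        for (int b = 1; b < BINS; b++) {
            if (histogram[b] > histogram[peakBin]) peakBin = b;
        }
        return (max > 0f) ? peakBin / scale : 0f;   // map the bin index back to an intensity
    }

    /** Minimum threshold: half of the background threshold. */
    public static float minimumThreshold(float tBack) {
        return 0.5f * tBack;
    }

    // The median threshold t_med is computed later, from the voxels inside the
    // initial ellipsoid whose intensities exceed t_back (see Step 2 below).
}
</syntaxhighlight>
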


==== Step 2, Selecting an initial closed mesh ====
==== Step 3, Performing morphological erosions and brain isolation ====


MIPAV uses its own approach to constructing the initial mesh (an ellipsoid), which is used as the initial surface for approximating the brain surface. The final approximation to the brain surface lies near the CSF and scalp. The shape of the mesh at the top of the head is nearly the shape of the scalp.
During this step, the software performs a number of 3D (or, optionally, 2.5D) morphological erosions on the edge image mask to remove small areas identified as objects that are not a part of the brain. It then performs a search for the largest 3D region within the image, which should be the brain (Figure 1-D). It erases everything outside this region and then performs another morphological operation, dilating the brain image back to approximately its original size and shape before the erosion (Figure 1-E).
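
A minimal sketch of the largest-region search in this step follows, assuming the eroded edge mask is a flat byte array with 1 for object voxels: a 6-connected breadth-first search labels the connected components, and everything outside the largest one is erased. The names and array layout are illustrative, not MIPAV's internal representation.

<syntaxhighlight lang="java">
import java.util.ArrayDeque;

/**
 * Sketch of the "largest 3D region" search used in Step 3. A 6-connected
 * breadth-first search labels each connected component of the object mask;
 * every voxel that is not part of the largest component is erased.
 */
public final class LargestRegionSketch {

    public static void keepLargestComponent(byte[] mask, int nx, int ny, int nz) {
        int[] label = new int[mask.length];          // 0 = unlabeled
        int bestLabel = 0, bestSize = 0, nextLabel = 0;
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        int[][] steps = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};

        for (int start = 0; start < mask.length; start++) {
            if (mask[start] == 0 || label[start] != 0) continue;
            int current = ++nextLabel, size = 0;
            label[start] = current;
            queue.add(start);
            while (!queue.isEmpty()) {
                int idx = queue.poll();
                size++;
                int x = idx % nx, y = (idx / nx) % ny, z = idx / (nx * ny);
                // Visit the six face neighbors.
                for (int[] s : steps) {
                    int xn = x + s[0], yn = y + s[1], zn = z + s[2];
                    if (xn < 0 || yn < 0 || zn < 0 || xn >= nx || yn >= ny || zn >= nz) continue;
                    int n = xn + nx * (yn + ny * zn);
                    if (mask[n] != 0 && label[n] == 0) {
                        label[n] = current;
                        queue.add(n);
                    }
                }
            }
            if (size > bestSize) { bestSize = size; bestLabel = current; }
        }
        // Erase everything outside the largest region.
        for (int i = 0; i < mask.length; i++) {
            if (label[i] != bestLabel) mask[i] = 0;
        }
    }
}
</syntaxhighlight>
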


As a way of identifying the scalp voxels that lie at the top of the head, this algorithm locates all the bright voxels using the threshold ''t''<sub>bright</sub>. Voxels in the lower half of the head also show up due to fatty tissue, much of it at the base of the neck. Empirically, the number of bright voxels near the scalp at the top of the head appears to be noticeably smaller than the number in the bottom half of the head. This algorithm stores only those voxels that are near the scalp at the top of the head. The voxel locations are fit with an ellipsoid by means of a least squares algorithm that estimates the coefficients of the quadratic equation representing the ellipsoid.
==== Step 4, Performing surface cleanup and image masking ====


The ellipsoid obtained is reduced in scale by approximately half. The initial ellipsoids in all the test images evolved to reasonable brain surface approximations in a reasonable amount of time. Figure 1 shows the intersection of the ellipsoid with the middle slice of the images.
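
The least squares fit mentioned above can be sketched as a generic quadric fit: each bright voxel location is assumed to satisfy ''a''<sub>1</sub>''x''<sup>2</sup> + ''a''<sub>2</sub>''y''<sup>2</sup> + ''a''<sub>3</sub>''z''<sup>2</sup> + ''a''<sub>4</sub>''xy'' + ''a''<sub>5</sub>''xz'' + ''a''<sub>6</sub>''yz'' + ''a''<sub>7</sub>''x'' + ''a''<sub>8</sub>''y'' + ''a''<sub>9</sub>''z'' = 1, and the nine coefficients are estimated from the normal equations. This is a sketch under an assumed normalization, not MIPAV's exact fitting code; if the fitted quadric is not actually an ellipsoid, the sphere fallback described in the note below applies.

<syntaxhighlight lang="java">
/**
 * Sketch of a least-squares quadric fit to the bright scalp voxel locations,
 * using the normal equations and Gaussian elimination. The "= 1" normalization
 * of the quadric is an assumption made for illustration.
 */
public final class QuadricFitSketch {

    /** points: N x 3 array of bright-voxel coordinates. Returns the 9 coefficients. */
    public static double[] fit(double[][] points) {
        double[][] ata = new double[9][9];
        double[] atb = new double[9];
        for (double[] p : points) {
            double x = p[0], y = p[1], z = p[2];
            double[] row = {x * x, y * y, z * z, x * y, x * z, y * z, x, y, z};
            for (int i = 0; i < 9; i++) {
                atb[i] += row[i];                       // right-hand side is all ones
                for (int j = 0; j < 9; j++) ata[i][j] += row[i] * row[j];
            }
        }
        return solve(ata, atb);
    }

    /** Gaussian elimination with partial pivoting for the 9x9 normal equations. */
    private static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++) {
                if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            }
            double[] tmpRow = a[col]; a[col] = a[pivot]; a[pivot] = tmpRow;
            double tmp = b[col]; b[col] = b[pivot]; b[pivot] = tmp;
            for (int r = col + 1; r < n; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                b[r] -= f * b[col];
            }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= a[r][c] * x[c];
            x[r] = s / a[r][r];
        }
        return x;
    }
}
</syntaxhighlight>
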
Once MIPAV isolates the brain, it needs to clean up the segmentation a bit by performing more morphological operations. It first performs a 2.5D closing with a circular kernel in an attempt to fill in interior gaps and holes that may be present. Since it is better to have too much of the original volume in the extracted brain than to miss some of the brain, MIPAV performs an extra dilation during the closing operation, making the mask image slightly larger. If a smaller mask is desired, the closing kernel size can be reduced (keep in mind that this size is in millimeters and is the diameter of the kernel, not its radius).
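
The 2.5D closing described above can be sketched as a slice-by-slice dilation followed by an erosion with a circular (disk) kernel. The kernel radius is given here in pixels; converting the dialog's kernel diameter in millimeters to pixels using the slice resolution, and the extra dilation MIPAV applies, are omitted. All names are illustrative, not MIPAV's actual code.

<syntaxhighlight lang="java">
/**
 * Sketch of a 2.5D morphological closing: each slice of the binary brain mask
 * is dilated and then eroded with a disk-shaped kernel of the given radius.
 */
public final class Closing25DSketch {

    public static void closeSlices(byte[] mask, int nx, int ny, int nz, int radiusPx) {
        for (int z = 0; z < nz; z++) {
            byte[] slice = new byte[nx * ny];
            System.arraycopy(mask, z * nx * ny, slice, 0, nx * ny);
            byte[] closed = erode(dilate(slice, nx, ny, radiusPx), nx, ny, radiusPx);
            System.arraycopy(closed, 0, mask, z * nx * ny, nx * ny);
        }
    }

    private static byte[] dilate(byte[] s, int nx, int ny, int r) { return morph(s, nx, ny, r, true); }
    private static byte[] erode(byte[] s, int nx, int ny, int r)  { return morph(s, nx, ny, r, false); }

    /** Disk-kernel dilation (dilate = true) or erosion (dilate = false) of a binary slice. */
    private static byte[] morph(byte[] s, int nx, int ny, int r, boolean dilate) {
        byte[] out = new byte[s.length];
        for (int y = 0; y < ny; y++) {
            for (int x = 0; x < nx; x++) {
                boolean hit = false, allInside = true;
                for (int dy = -r; dy <= r && !(dilate && hit); dy++) {
                    for (int dx = -r; dx <= r; dx++) {
                        if (dx * dx + dy * dy > r * r) continue;      // outside the disk
                        int xn = x + dx, yn = y + dy;
                        boolean on = xn >= 0 && yn >= 0 && xn < nx && yn < ny
                                && s[xn + nx * yn] != 0;
                        if (on) hit = true;
                        else allInside = false;
                    }
                }
                out[x + nx * y] = (byte) ((dilate ? hit : allInside) ? 1 : 0);
            }
        }
        return out;
    }
}
</syntaxhighlight>
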


{| width="90%" border="1" frame="hsides" frame="hsides"
As an option, MIPAV can then fill in any holes that still exist within the brain mask. Finally, it uses the mask to extract the brain image data from the original volume (Figure 1-F).
|-
| width="9%" valign="top" |
[[Image:noteicon.gif]]
| width="81%" bgcolor="#B0E0E6" | '''Note:''' If it is unable to calculate the parameters to form an ellipsoid, MIPAV estimates a sphere instead to initialize the surface extraction process.
|}


<br />
==== Selecting parameters ====


<div>
Careful parameter selection is required for the BSE algorithm to produce good results. For example, excessive erosion or dilation, or an overly large closing kernel or edge detection kernel, can remove detail from the brain surface or even remove the brain entirely.


{| border="1" cellpadding="5"
===== Edge detection kernel size parameter =====
|+ <div>'''Figure 1. The initial ellipsoids intersected with the middle slices.''' </div>
|-
|
<div><div align="left">[[Image:exampleExtractBrainSurfaceMRIs.jpg]]</div> </div><div> </div>
|}


</div><div> </div><div> </div><div> </div>
The edge detection kernel size parameter is especially sensitive. Small changes to it (e.g., from the default of 0.6 up to 0.7 or down to 0.5 in the following example) can result in large changes to the extracted brain volume. Refer to Figure 1.


The ellipsoid is topologically equivalent to a sphere centered at the origin and with unit radius. Therefore, the initial mesh that approximates an ellipsoid is constructed by tessellating this sphere and then by applying an affine transformation to map the sphere-mesh vertices to ellipsoid-mesh vertices. The tessellation is performed by starting with an octahedron inscribed in the sphere. The octahedron vertices are (±1, 0, 0), (0, ±1, 0), and (0, 0, ±1). There are initially 6 vertices, 12 edges, and 8 triangles. Each level of subdivision requires computing the midpoints of the edges and replacing each triangle by four subtriangles as shown in Figure 2.
{| width="90%" border="1" frame="hsides" frame="hsides"
 
<div>
 
{| border="1" cellpadding="5"
|+ <div>'''Figure 2. Subdivision of a triangle.''' </div>
|-
|-
|
| width="9%" valign="top" |
<div><div align="left">[[Image:ExtractBrainSurface_subdivide2.jpg]]</div> </div>
[[Image:recommendationicon.gif]]
| width="81%" bgcolor="#B0E0E6" | '''Recommendation:''' To find an optimal set of parameters values, run this algorithm repeatedly on a representative volume of the MRI images that you want to process with different parameter values and with Show intermediate images selected.
|}
|}


</div>
<br />


To avoid a lot of dynamic memory allocation, it is possible to compute the total number of vertices, edges, and triangles that are required for the mesh given the number of subdivisions. Let ''V''<sub>0</sub>, ''E''<sub>0</sub>, and ''T''<sub>0</sub> be the current quantities of vertices, edges, and triangles. Each subdivided edge leads to a new vertex, so the new number of vertices is ''V''<sub>1</sub> = ''V''<sub>0</sub> + ''E''<sub>0</sub>. Each triangle is replaced by four triangles, so the new number of triangles is ''T''<sub>1</sub> = 4''T''<sub>0</sub>. Each edge is split into two edges and each subdivided triangle creates three additional edges, so the new number of edges is <br />''E''<sub>1</sub> = 2''E''<sub>0</sub> + 3''T''<sub>0</sub>. These recurrence formulas are iterated over the desired number of subdivisions to compute the total objects required.
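
A minimal sketch of these recurrences, starting from the octahedron counts given above, follows; the class and method names are illustrative.

<syntaxhighlight lang="java">
/**
 * Sketch of the subdivision recurrences: starting from the octahedron
 * (V = 6, E = 12, T = 8), each subdivision produces V + E vertices,
 * 2E + 3T edges, and 4T triangles, which lets the mesh arrays be
 * allocated once up front.
 */
public final class SubdivisionCountSketch {

    /** Returns {vertices, edges, triangles} after the given number of subdivisions. */
    public static int[] countsAfter(int subdivisions) {
        int v = 6, e = 12, t = 8;                  // octahedron inscribed in the sphere
        for (int i = 0; i < subdivisions; i++) {
            int v1 = v + e;                        // one new vertex per subdivided edge
            int e1 = 2 * e + 3 * t;                // each edge splits, each triangle adds 3
            int t1 = 4 * t;                        // each triangle becomes four
            v = v1; e = e1; t = t1;
        }
        return new int[] {v, e, t};
    }
}
</syntaxhighlight>
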
Figure 1 shows images that were produced from running this algorithm with the default parameters against a 256 x 256 x 47 MRI. In each image, the middle slice is shown.


All edges are subdivided first. When each edge is subdivided, the index of the newly generated vertex must be stored for later use by the triangles sharing the edge being subdivided. A hash map is used, with an edge stored as a pair of vertex indices forming the key of the map. The value of the map is the index of the vertex that is generated as the midpoint. The triangles are then subdivided. The edges of each triangle are looked up in the hash map to find the indices of the three new vertices to be used by the subtriangles.
==== Image types ====
 
The subtriangles are stored as triples of vertex indices. After the triangles are subdivided, the original edges must all be discarded and replaced by the subdivided edges in the new mesh. These steps are repeated for all subdivision levels. Once the subdivided mesh is generated, the vertices adjacent to a given vertex must also be known; this information is required to compute the average of the neighboring vertices. The mean length of all the edges of the mesh is computed using the edge hash map as storage for the edges, so the edge hash map must be kept after triangle subdivision and remain available throughout the mesh updates. Figure 3 shows a rendering of an initial ellipsoid, both solid and wireframe.
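
A sketch of the edge hash map described above follows: an edge is keyed by its pair of vertex indices (order independent) and maps to the index of the midpoint vertex created when the edge is subdivided, so the triangles that share the edge reuse the same new vertex. The long-packed key is an illustrative choice, not MIPAV's implementation.

<syntaxhighlight lang="java">
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the edge-to-midpoint hash map used during mesh subdivision.
 */
public final class EdgeMidpointMapSketch {

    private final Map<Long, Integer> midpointOfEdge = new HashMap<>();
    private int nextVertexIndex;

    public EdgeMidpointMapSketch(int currentVertexCount) {
        this.nextVertexIndex = currentVertexCount;
    }

    /** Returns the midpoint vertex index for edge (v0, v1), creating it on first use. */
    public int midpoint(int v0, int v1) {
        long key = key(v0, v1);
        Integer existing = midpointOfEdge.get(key);
        if (existing != null) {
            return existing;                 // the other triangle already split this edge
        }
        int created = nextVertexIndex++;     // caller also appends the averaged position
        midpointOfEdge.put(key, created);
        return created;
    }

    /** Packs the two vertex indices into one key, independent of their order. */
    private static long key(int v0, int v1) {
        int lo = Math.min(v0, v1), hi = Math.max(v0, v1);
        return (((long) lo) << 32) | (hi & 0xffffffffL);
    }
}
</syntaxhighlight>
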
 
<div>
 
{| border="1" cellpadding="5"
|+ <div>'''Figure 3. An initial ellipsoid: (A) solid and (B) wireframe.''' </div>
|-
|
<div><div align="left">[[Image:exampleEllipsoids3.jpg]]</div> </div>
|}
 
</div>
 
The median intensity threshold, ''t''<sub>med</sub>, is obtained by iterating over all the voxels within the image that have an intensity larger than ''t''<sub>back</sub> and are located inside the initial ellipsoid. The corresponding set of intensities is sorted and the middle value selected as the median intensity threshold.
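
The computation can be sketched as follows. The inside-the-ellipsoid test below reuses the quadric form from the earlier fitting sketch (a point is treated as inside when the quadric evaluates to less than 1), which is an assumption about the representation rather than MIPAV's exact code.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Sketch of the median-threshold computation: intensities of voxels that lie
 * inside the initial ellipsoid and exceed t_back are collected, sorted, and
 * the middle value taken as t_med.
 */
public final class MedianThresholdSketch {

    public static float medianThreshold(float[] image, int nx, int ny, int nz,
                                        double[] quadric, float tBack) {
        List<Float> inside = new ArrayList<>();
        for (int z = 0; z < nz; z++) {
            for (int y = 0; y < ny; y++) {
                for (int x = 0; x < nx; x++) {
                    float v = image[x + nx * (y + ny * z)];
                    if (v > tBack && evaluate(quadric, x, y, z) < 1.0) {
                        inside.add(v);
                    }
                }
            }
        }
        Collections.sort(inside);
        return inside.isEmpty() ? tBack : inside.get(inside.size() / 2);
    }

    /** Evaluates the quadric a1*x^2 + ... + a9*z at the given point. */
    private static double evaluate(double[] a, double x, double y, double z) {
        return a[0] * x * x + a[1] * y * y + a[2] * z * z
             + a[3] * x * y + a[4] * x * z + a[5] * y * z
             + a[6] * x + a[7] * y + a[8] * z;
    }
}
</syntaxhighlight>
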
 
==== Step 3, Evolution of the mesh ====
 
The first step is to compute the average edge length L of the edges in the mesh. This is easily done by iterating over the hash map of edges, looking up the vertices from the indices stored in each edge, calculating the edge length, accumulating the lengths, then dividing by the total number of edges.
 
Second, the vertex normals are needed for evolving the mesh in a direction normal to the surface. This is also easily done by iterating over the triangles. A running sum of non-unit triangle normals is maintained for each vertex. Once the sums are calculated, the vertex normals are normalized by an iteration over the array storing them.
 
Third, the surface tangents, normals, and curvatures must be computed. If [[Image:ExtractBrainSurfaceBET7.jpg]] is a vertex, then [[Image:ExtractBrainSurfaceBET8.jpg]] is the average of all the neighbors of [[Image:ExtractBrainSurfaceBET9.jpg]]. The neighbors are rapidly accessed by using the adjacency structure that keeps track of the array of indices of neighbors for each vertex, so the mean vertex is easily computed. The difference between vertex and mean is [[Image:ExtractBrainSurfaceBET10.jpg]]. This vector is decomposed into a surface normal and surface tangential component. The normal component is in the direction of the vertex normal [[Image:ExtractBrainSurfaceBET11.jpg]],
 
<math>
\overrightarrow {S_N} = ( \overrightarrow{S} \cdot \overrightarrow {V_N}) \overrightarrow {V_N}
</math>


The tangential component is
You can apply this algorithm only to 3D MRI images. The resulting image is of the same data type as the original image.


<math>
==== References ====
\overrightarrow {S_T} = \overrightarrow {S} - \overrightarrow {S_N}
</math>


The estimate of surface curvature is chosen to be:
Refer to the following references for more information about the Brain Surface Extraction algorithm and general background information on brain extraction algorithms.


<math>
A. I. Scher, E. S. C. Korf, S. W. Hartley, L. J. Launer. "An Epidemiologic Approach to Automatic Post-Processing of Brain MRI."
\kappa = \frac {2 \left | \overrightarrow {S_N} \right |} {L^2}
</math>


where ''L'' is the mean edge length of the mesh. The minimum and maximum curvatures are maintained as the various surface quantities are computed for all the vertices. These values are denoted [[Image:ExtractBrainSurfaceBET15.jpg]]<sub>min</sub> and [[Image:ExtractBrainSurfaceBET16.jpg]]<sub>max</sub> and are used to compute two parameters that are used in the update of the mesh in the surface normal direction,
David W. Shattuck, Stephanie R. Sandor-Leahy, Kirt A. Schaper, David A. Rottenberg, Richard M. Leahy. "Magnetic Resonance Image Tissue Classification Using a Partial Volume Model." ''NeuroImage'' 2001; 13(5):856-876.


<math>
<div> </div><div>
E = \frac{1} {2 (\kappa_{min} + \kappa_{max})}, F = \frac {6} {\kappa_{max} - \kappa_{min}}
</math>
 
Once all these values are computed, each vertex of the mesh is updated by
 
<math>
\overrightarrow {V} + = 0.5\overrightarrow{S_T} + c_1 \overrightarrow{S_N} +c_2\overrightarrow {V_N}
</math>
 
 
It is important to note that the ''BET: Brain Extraction Tool'' paper (refer to "Reference" (below)) uses the notation [[Image:ExtractBrainSurfaceBET19.jpg]] to denote the surface normal, usually a non-unit vector, and [[Image:ExtractBrainSurfaceBET20.jpg]] to denote the normalized vector for [[Image:ExtractBrainSurfaceBET21.jpg]]. However, Equation (13) in the ''BET'' paper incorrectly lists the last term in the update equation to use [[Image:ExtractBrainSurfaceBET22.jpg]] when in fact it should be [[Image:ExtractBrainSurfaceBET23.jpg]]. Both vectors are parallel, but not necessarily pointing in the same direction. The discussion about Equation (8) in the ''BET'' paper makes it appear that the vector should be the vertex normal, not the surface normal. In any case, MIPAV uses [[Image:ExtractBrainSurfaceBET24.jpg]].
 
The [[Image:ExtractBrainSurfaceBET25.jpg]] term allows the mesh to move a little in the tangential direction so that the vertices move toward the desired goal of being located at the mean of their neighbors. This allows the mesh to shift about somewhat.
 
The [[Image:ExtractBrainSurfaceBET26.jpg]] is a ''smoothing'' term. The coefficient [[Image:ExtractBrainSurfaceBET27.jpg]] represents a stiffness of the mesh against motion in the surface normal direction. The formula for [[Image:ExtractBrainSurfaceBET28.jpg]] in the ''BET'' paper is
 
<math>
c_1=\frac {1} {2} (1 + \tanh ((F)(\kappa -E)))
</math>
 
 
where [[Image:ExtractBrainSurfaceBET30.jpg]] is the estimate of surface curvature at the vertex. This formula made the surface too stiff and unable to expand to a good approximation of the brain surface, so the formula was modified to add a ''stiffness parameter,'' [[Image:ExtractBrainSurfaceBET31.jpg]], with a default value of 0.1, and to use
 
<math>
c_1=\frac {\sigma} {2} (1 + \tanh (F(\kappa -E)))
</math>
 
Moreover, [[Image:ExtractBrainSurfaceBET33.jpg]] is allowed to increase over time, then decrease later, to properly evolve the surface.
 
The [[Image:ExtractBrainSurfaceBET34.jpg]] term controls the surface evolution based on the MRI data. The algorithm is almost exactly the formula described in the ''BET ''paper. A ray pointing inside the surface is sampled to a certain maximum depth and the sampled intensities are used to construct a minimum ''I''<sub>min</sub> and a maximum ''I''<sub>max</sub>. The coefficient for the update is then
 
<math>
c_2 = 0.1 L \left( -b_t + \frac {I_{min} - t_{min}}{I_{max} - t_{min}} \right )
</math>
where
 
* ''L'' is the mean edge length
* ''b''<sub>t</sub> is what the ''BET'' paper calls the brain selection term (value chosen to be 1/2)
* ''t''<sub>min</sub> is the minimum intensity threshold calculated during histogram analysis
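
The per-vertex update described in this section can be consolidated into the following sketch. The vertex normal, the mean of the neighboring vertices, the mesh-wide quantities ''E'', ''F'', and ''L'', the stiffness parameter, and the ray-sampled intensities ''I''<sub>min</sub> and ''I''<sub>max</sub> are assumed to have been computed as described above; vectors are plain double arrays, and the direction of the difference vector is taken as mean-of-neighbors minus vertex. This is a sketch, not MIPAV's implementation.

<syntaxhighlight lang="java">
/**
 * Consolidated sketch of the per-vertex mesh update:
 *   V += 0.5 * St + c1 * Sn + c2 * Vn
 * with c1 = (sigma/2)(1 + tanh(F(kappa - E))) and
 *      c2 = 0.1 * L * (-bt + (Imin - tmin) / (Imax - tmin)).
 */
public final class VertexUpdateSketch {

    public static void update(double[] v, double[] vn, double[] neighborMean,
                              double meanEdgeLength, double e, double f,
                              double stiffness, double iMin, double iMax,
                              double tMin, double bt) {
        // s: difference between the mean of the neighbors and the vertex.
        double[] s = sub(neighborMean, v);

        // Decompose s into components normal and tangential to the surface.
        double[] sn = scale(vn, dot(s, vn));           // (s . vn) vn
        double[] st = sub(s, sn);                      // s - sn

        // Curvature estimate and the smoothing coefficient c1.
        double kappa = 2.0 * norm(sn) / (meanEdgeLength * meanEdgeLength);
        double c1 = 0.5 * stiffness * (1.0 + Math.tanh(f * (kappa - e)));

        // Image-driven coefficient c2 from the ray-sampled intensity range.
        double c2 = 0.1 * meanEdgeLength * (-bt + (iMin - tMin) / (iMax - tMin));

        // Apply the update to the vertex in place.
        for (int i = 0; i < 3; i++) {
            v[i] += 0.5 * st[i] + c1 * sn[i] + c2 * vn[i];
        }
    }

    private static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    private static double norm(double[] a) { return Math.sqrt(dot(a, a)); }
    private static double[] sub(double[] a, double[] b) { return new double[] {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    private static double[] scale(double[] a, double k) { return new double[] {a[0]*k, a[1]*k, a[2]*k}; }
}
</syntaxhighlight>
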
 
Figure 4 shows a rendering of a brain surface extracted using the methods discussed in this section.
 
<div>


{| border="1" cellpadding="5"
{| border="1" cellpadding="5"
|+ <div>'''Figure 4. Two views of a brain surface mesh.''' </div>
|+ <div>'''Figure 1. Examples of Extract Brain Surface (BSE) image processing''' </div>
|-
|-
|
| rowspan="1" colspan="2" |
<div><div align="left">[[Image:exampleBrainSurfaceMesh4.jpg]]</div> </div>
<div><div><center>[[Image:exampleBSE_BeforeAfter.jpg]]</center></div> </div>
|}
|}


  </div>
  </div>


==== Step 4, Selecting brain voxels ====
<div id="ApplyingBSE"><div>


This algorithm selects, in two phases, those voxels that are inside or on the triangle mesh that represents the surface of the brain. The first phase makes a pass over the triangles in the mesh. Each triangle is rasterized in the 3D voxel grid. The rasterization is done three times, once for each coordinate direction. The idea for a given direction is to cast rays in that direction and find voxels that are approximately on or near the triangle. The final result of all the rasterization is a surface that is a few voxels thick; more importantly, it is a closed surface.
=== Applying BSE ===
 
The end result is a ternary 3D image whose voxels are 0 for background, 1 for surface, and 2 for interior. The distinction between the surface and interior voxels allows an application to color the surface voxels in image slices to see how accurate the segmentation is.  Figure 5 shows a few slices of the original MRI with the brain voxels colored.
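
Given the rasterized, closed surface voxels marked with 1, the interior/background classification can be sketched as a flood fill of the exterior from the volume border; any non-surface voxel the fill cannot reach lies inside the closed surface and is marked 2. The 6-connected fill and the array layout are assumptions for illustration.

<syntaxhighlight lang="java">
import java.util.ArrayDeque;

/**
 * Sketch of the second phase of Step 4: exterior voxels are flood-filled from
 * the volume border, and every remaining unmarked voxel is classified as
 * interior, producing the ternary 0/1/2 image described above.
 */
public final class TernaryMaskSketch {

    private static final byte BACKGROUND = 0, SURFACE = 1, INTERIOR = 2, VISITED = 3;

    public static void classify(byte[] vol, int nx, int ny, int nz) {
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        // Seed the flood fill with every non-surface voxel on the volume border.
        for (int z = 0; z < nz; z++)
            for (int y = 0; y < ny; y++)
                for (int x = 0; x < nx; x++)
                    if ((x == 0 || y == 0 || z == 0 || x == nx - 1 || y == ny - 1 || z == nz - 1)
                            && vol[x + nx * (y + ny * z)] == BACKGROUND) {
                        vol[x + nx * (y + ny * z)] = VISITED;
                        queue.add(x + nx * (y + ny * z));
                    }
        int[][] steps = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        while (!queue.isEmpty()) {
            int idx = queue.poll();
            int x = idx % nx, y = (idx / nx) % ny, z = idx / (nx * ny);
            for (int[] s : steps) {
                int xn = x + s[0], yn = y + s[1], zn = z + s[2];
                if (xn < 0 || yn < 0 || zn < 0 || xn >= nx || yn >= ny || zn >= nz) continue;
                int n = xn + nx * (yn + ny * zn);
                if (vol[n] == BACKGROUND) {             // surface voxels (1) stop the fill
                    vol[n] = VISITED;
                    queue.add(n);
                }
            }
        }
        // Unreached non-surface voxels are inside the closed surface.
        for (int i = 0; i < vol.length; i++) {
            if (vol[i] == VISITED) vol[i] = BACKGROUND;
            else if (vol[i] == BACKGROUND) vol[i] = INTERIOR;
        }
    }
}
</syntaxhighlight>
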
 
<div>
 
{| border="1" cellpadding="5"
|+ <div>'''Figure 5. MRI slices with colored brain voxels.''' </div>
|-
|
<div><div align="left">[[Image:exampleMRIcolor5.jpg]]</div> </div>
|}
 
</div>
 
==== Image types ====
 
You can apply this algorithm only to 3D MRI images.
 
==== Special notes ====
 
You can save the triangle mesh in the sur format that the level surface viewer supports. This allows you to render the extracted surface. The code also gives you access to the ternary mask image to do with as you please.
 
==== Reference ====
 
Refer to the following reference for more information about this algorithm:
 
Smith, Stephen. ''BET: Brain Extraction Tool.'' FMRIB Technical Report TR00SMS2, Oxford Center for Functional Magnetic Resonance Imaging of the Brain, Department of Clinical Neurology, Oxford University, Oxford, England.
 
=== Applying the Extract Brain Surface algorithm ===


To use this algorithm, do the following:
To use this algorithm, do the following:


# Select Algorithms &gt; Extract Brain. The [[#ExtractBrainDialogBox|Extract Brain dialog box]] opens.
# Select Algorithms &gt; Extract Brain Surface (BSE). The Extract Brain Surface (BSE) dialog box opens (Figure 2).
# Complete the information in the dialog box.
# Complete the information in the dialog box.
# Click OK. The algorithm begins to run, and a progress bar appears with the status. When the algorithm finishes running, the progress bar disappears, and the results replace the original image.
# Click OK. The algorithm begins to run, and a progress bar appears with the status. When the algorithm finishes running, the progress bar disappears, and the results replace the original image.


<div id="ExtractBrainDialogBox"></div>
<div>
<div>


{| border="1" cellpadding="5"
{| border="1" cellpadding="5"
|+ <div>'''Extract Brain dialog box ''' </div>
|+ <div>'''Figure 2. Extract Brain Surface (BSE) algorithm dialog box ''' </div>
|-
| rowspan="1" colspan="2" |
<div><div><center>[[Image:dialogboxExtractBrain.jpg]]</center></div> </div>
|-
|-
|
|
<div>'''Axial image orientation''' </div>
<div>Filtering </div>
|
|
<div>Specifies that the image is in the axial orientation. </div><div>Unless you first change the default values of the Initial mesh position boxes, marking or clearing this check box changes their values. </div>
| rowspan="3" colspan="1" |
<div><div><center>[[Image:ExtractBrainBSEDialogbox.jpg]]</center></div> </div>
|-
|-
|
|
<div>'''Estimate initial boundary using a sphere''' </div>
<div>'''Iterations (1-5)''' </div>
|
|
<div>Uses a sphere shape instead of an ellipsoid for estimating the initial boundary. </div><div>Unless you first change the default values of the Initial mesh position boxes, marking or clearing this check box changes their values. </div>
<div>Specifies the number of regularized isotropic (nonlinear) diffusion filter passes to apply to the image. This parameter is used to find anatomical boundaries separating the brain from the skull and tissues. For images with a lot of noise, increasing this parameter will smooth noisy regions while maintaining image boundaries. </div>
|-
|-
|
|
<div>'''Display initial ellipsoid result''' </div>
<div>'''Gaussian standard deviation (0.1-5.0)''' </div>
|
|
<div>Displays the initial ellipsoid (or sphere) to be used in extracting the brain. This option allows you to verify that the initial surface correctly covers the area in the image where the brain is located. </div>
<div>Specifies the standard deviation of the Gaussian filter used to regularize the image. A higher standard deviation gives preference to high-contrast edges for each voxel in the region. </div>
|-
| rowspan="1" colspan="3" |
<div>Edge Detection </div>
|-
|-
|
|
<div>'''Second stage edge erosion''' </div>
<div>'''Kernel size (0.1-5.0)''' </div>
| rowspan="1" colspan="2" |
<div>Specifies the size of the symmetric Gaussian kernel to use in the Laplacian Edge Detection algorithm. An increase in kernel size yields an image that contains only the strongest edges. Equivalent to using a narrow filter on the image, a small kernel size results in more edges. </div>
|-
|
|
<div>Performs an erosion process at the edge of the brain to remove nonbrain voxels. When you select this check box, the Erode at percent above median text box is enabled. </div>
<div>'''Perform fast convolutions (requires more memory)''' </div>
| rowspan="1" colspan="2" |
<div>Specifies whether to perform Marr-Hildreth edge detection with separable image convolutions.  </div><div>The separable image convolution completes about twice as fast, but it requires approximately three times more memory. If memory is not a constraint, select this check box. </div>
|-
|-
|
| rowspan="1" colspan="3" |
<div>'''Iterations''' </div>
<div>Erosion/Dilation </div>
|
<div>Specifies the number of times to run the algorithm when evolving the ellipsoid into the brain surface. The default is 1000. </div>
|-
|-
|
|
<div>'''Depth''' </div>
<div>'''Iterations (1-10)''' </div>
|
| rowspan="1" colspan="2" |
<div>Specifies the maximum depth inside the surface of the brain to use in sampling intensities. The default value is 5. </div>
<div>Specifies the number of: </div>
 
* Erosions that should be applied to the edge image before the brain is isolated from the rest of the volume;
* Dilations to perform afterward.
 
<div>A higher number of iterations will help distinguish brain tissue from blood vessels and the inner cortical surface. Noise resulting from blood vessels or low image contrast may be present when few iterations are used. </div>
|-
|-
|
|
<div>'''Image influence''' </div>
<div>'''Process slices independently''' </div>
|
| rowspan="1" colspan="2" |
<div>Controls the surface evolution by sampling to the specified depth to calculate the maximum and minimum intensities. The default value is 0.10. </div>
<div>Applies the algorithm to each slice of the dataset independently. Separable image operations will again produce results more quickly while using increased memory. Since this part of the brain surface extraction is meant to fill large pits and close holes in the surface, independent processing may not yield optimal results. </div>
|-
|-
|
| rowspan="1" colspan="3" |
<div>'''Stiffness''' </div>
<div>Closing </div>
|
<div>Controls the stiffness of the mesh against motion in the surface normal direction. The default value is 0.15. </div>
|-
|-
|
|
<div>'''Erode at percent above median''' </div>
<div>'''Kernel diameter (in mm) (0.1-50.0)''' </div>
|
| rowspan="1" colspan="2" |
<div>Removes voxels (nonbrain material) that are at the edge of the segmentation and that have an intensity above the specified percentage of the median voxel intensity of the brain. Type the appropriate percentage in this text box. </div><div>This text box is enabled when you select Second stage edge erosion. </div>
<div>Specifies the size of the kernel to use (in millimeters). The value defaults to a number of millimeters that ensures that the kernel is 6 pixels in diameter and takes into account the volume resolutions. Closing operations act to fill smaller pits and close holes in the segmented brain tissue. </div>
|-
|-
|
|
<div>'''Use the volume center of mass''' </div>
<div>'''Fill all interior holes''' </div>
|
| rowspan="1" colspan="2" |
<div>Constructs the mesh using the volume center of mass. When you select this check box, you ''cannot'' specify the Initial ''X, Y,'' or ''Z'' mesh positions. </div>
<div>Fills in any holes that still exist within the brain mask. When optimal parameters for a given image have been used, this option will generally produce a volume of interest that lies between the inner cortical surface and the outer cortical boundary. </div>
|-
|-
|
| rowspan="1" colspan="3" |
<div>'''Initial mesh ''''''X'''''' position''' </div>
<div>Options </div>
|
<div>Constructs the mesh using the ''X'' position that you specify. To specify a value for this text box, clear Use the volume center of mass. </div><div>Unless you first change the default value of this box, marking or clearing either the Axial image orientation or Estimate initial boundary using a sphere check boxes changes the default value.  </div>
|-
|-
|
|
<div>'''Initial mesh ''''''Y'''''' position''' </div>
<div>'''Show intermediate images''' </div>
|
| rowspan="1" colspan="2" |
<div>Constructs the mesh using the ''Y'' position that you specify. To specify a value for this text box, clear Use the volume center of mass. </div><div>Unless you first change the default value of this box, marking or clearing either the Axial image orientation or Estimate initial boundary using a sphere check boxes changes the default value. </div>
<div>When selected, shows, in addition to the final brain image, the images that are generated at various points while the BSE algorithm is running. Selecting this check box may help you find the optimal parameters for running the BSE algorithm on a volume. </div><div>For an image named ''ImageName'', the debugging images displayed would include: </div>
 
* The filtered image (named ''ImageName_filter'')
* The edge image (named ''ImageName_edge'')
* The eroded edge image (named ''ImageName_erode_brain'')
* The isolated brain mask after erosion and dilation (named ''ImageName_erode_brain_dilate'')
* The brain mask after closing (named ''ImageName_close'')
* The closing image is shown before any interior mask holes are filled
|-
|-
|
|
<div>'''Initial mesh ''''''Z'''''' position''' </div>
<div>'''Extract brain to paint''' </div>
|
| rowspan="1" colspan="2" |
<div>Constructs the mesh using the ''Z'' position that you specify. To specify a value for this text box, clear Use the volume center of mass. </div><div>Unless you first change the default value of this box, marking or clearing either the Axial image orientation or Estimate initial boundary using a sphere check boxes changes the default value. </div>
<div>Paints the extracted brain onto the current image. See also Figure 3. </div>
|-
|-
|
|
<div>'''OK''' </div>
<div>'''OK''' </div>
|
| rowspan="1" colspan="2" |
<div>Applies the algorithm according to the specifications in this dialog box. </div>
<div>Applies the algorithm according to the specifications in this dialog box. </div>
|-
|-
|
|
<div>'''Cancel''' </div>
<div>'''Cancel''' </div>
|
| rowspan="1" colspan="2" |
<div>Disregards any changes that you made in the dialog box and closes this dialog box. </div>
<div>Disregards any changes that you made in the dialog box and closes this dialog box. </div>
|-
|-
|
|
<div>'''Help''' </div>
<div>'''Help''' </div>
| rowspan="1" colspan="2" |
<div>Displays online help for this dialog box. </div>
|}
</div><div>
{| border="1" cellpadding="5"
|+ <div>'''Figure 3. The Extract Brain to Paint option: on the left is the original image; on the right is the result image with the brain extracted to paint.''' </div>
|-
|
|
<div>Displays online help for this dialog box. </div>
<div style="font-style: normal; font-weight: normal; margin-bottom: 0pt; margin-left: 0pt; margin-right: 0pt; margin-top: 1pt; text-align: left; text-decoration: none; text-indent: 0pt; text-transform: none; vertical-align: baseline"><font size="2pt"><font color="#000000"><div><center>[[Image:ExtractBrainIntoPaint1.jpg]]</center></div><br /> </font></font></div>
|
<div style="font-style: normal; font-weight: normal; margin-bottom: 0pt; margin-left: 0pt; margin-right: 0pt; margin-top: 1pt; text-align: left; text-decoration: none; text-indent: 0pt; text-transform: none; vertical-align: baseline"><font size="2pt"><font color="#000000"><div><center>[[Image:ExtractBrainIntoPaint2.jpg]]</center></div><br /> </font></font></div>
|}
|}


  </div><div> </div>
  </div>


[[Category:Help]]
[[Category:Help:Algorithms]]
