Class AlgorithmMeanShiftSegmentation

  • All Implemented Interfaces:
    java.awt.event.ActionListener, java.awt.event.WindowListener, java.lang.Runnable, java.util.EventListener

    public class AlgorithmMeanShiftSegmentation
    extends AlgorithmBase
The Java code is ported from C++ code downloaded from http://coewww.rutgers.edu/riul/research/code.html. The relevant web page section says: Edge Detection and Image SegmentatiON (EDISON) System C++ code, can be used through a graphical interface or command line. The system is described in "Synergism in low level vision". For comments, please contact Bogdan Georgescu or Chris M. Christoudias. The EDISON system contains the image segmentation/edge preserving filtering algorithm described in the paper "Mean shift: a robust approach toward feature space analysis" and the edge detection algorithm described in the paper "Edge detection with embedded confidence".

Relevant points from reference [1]: This code works on 2-dimensional gray level and color images. RGB is converted to LUV for processing. Location and range vectors are concatenated in a joint spatial-range domain of dimension d = p + 2, where p = 1 in the gray level case, p = 3 for color images, and p > 3 in the multispectral case. There is a spatial kernel bandwidth hs and a range kernel bandwidth hr. For the 256 by 256 gray level cameraman image, mean shift filtering with a uniform kernel having hs = 8, hr = 4, M = 10 was used. For the 512 by 512 color image baboon, normal filters with hs going from 8 to 32 and hr going from 4 to 16 were used. Only features with large spatial support are represented in the filtered image when hs increases; on the other hand, only features with high color contrast survive when hr is large.

Mean shift filtering is run on the image before mean shift segmentation is performed. In mean shift segmentation, clusters are formed by grouping together pixels which are closer than hs in the spatial domain and hr in the range domain. A label Li is assigned to each pixel depending on which cluster it belongs to. Optionally, spatial regions containing fewer than M pixels are eliminated.
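The two stages above (mode-seeking filtering, then grouping within hs and hr) can be sketched in Java for the gray level case with a uniform kernel. This is an illustrative simplification, not the ported EDISON code; the class and method names are hypothetical, and the real system additionally handles color (RGB converted to LUV) and other refinements omitted here.

```java
import java.util.ArrayDeque;
import java.util.Arrays;

// Hypothetical sketch class, not part of the ported code.
public class MeanShiftSketch {

    // Uniform-kernel mean shift filtering in the joint spatial-range
    // domain (x, y, intensity), i.e. d = p + 2 with p = 1.
    static float[] filter(float[] img, int w, int h, int hs, float hr, int maxIter) {
        float[] out = new float[img.length];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float cx = x, cy = y, cv = img[y * w + x];
                for (int it = 0; it < maxIter; it++) {
                    float sx = 0, sy = 0, sv = 0;
                    int n = 0;
                    // Mean of all samples within hs spatially and hr in range.
                    for (int yy = Math.max(0, (int) cy - hs); yy <= Math.min(h - 1, (int) cy + hs); yy++) {
                        for (int xx = Math.max(0, (int) cx - hs); xx <= Math.min(w - 1, (int) cx + hs); xx++) {
                            float v = img[yy * w + xx];
                            if (Math.abs(v - cv) <= hr) {
                                sx += xx; sy += yy; sv += v; n++;
                            }
                        }
                    }
                    if (n == 0) break;
                    float nx = sx / n, ny = sy / n, nv = sv / n;
                    boolean done = Math.abs(nx - cx) < 0.5f && Math.abs(ny - cy) < 0.5f
                            && Math.abs(nv - cv) < 0.1f;
                    cx = nx; cy = ny; cv = nv;
                    if (done) break;   // mean shift vector near zero: mode reached
                }
                out[y * w + x] = cv;   // pixel takes the range value of its mode
            }
        }
        return out;
    }

    // Grouping step: flood fill joins 8-connected pixels whose filtered
    // values differ by at most hr (adjacency already implies spatial
    // closeness), assigning a label Li to each pixel.
    static int[] label(float[] filtered, int w, int h, float hr) {
        int[] labels = new int[filtered.length];
        Arrays.fill(labels, -1);
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        int next = 0;
        for (int seed = 0; seed < filtered.length; seed++) {
            if (labels[seed] >= 0) continue;
            labels[seed] = next;
            stack.push(seed);
            while (!stack.isEmpty()) {
                int p = stack.pop();
                int px = p % w, py = p / w;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int qx = px + dx, qy = py + dy;
                        if (qx < 0 || qx >= w || qy < 0 || qy >= h) continue;
                        int q = qy * w + qx;
                        if (labels[q] < 0 && Math.abs(filtered[q] - filtered[p]) <= hr) {
                            labels[q] = next;
                            stack.push(q);
                        }
                    }
                }
            }
            next++;
        }
        return labels;
    }

    public static void main(String[] args) {
        // Synthetic 16 by 16 gray level image: left half 10, right half 200.
        int w = 16, h = 16;
        float[] img = new float[w * h];
        for (int i = 0; i < img.length; i++) img[i] = (i % w < w / 2) ? 10f : 200f;
        float[] filt = filter(img, w, h, 4, 20f, 20);
        int[] lab = label(filt, w, h, 20f);
        int regions = 0;
        for (int l : lab) regions = Math.max(regions, l + 1);
        System.out.println("regions = " + regions);  // prints "regions = 2"
    }
}
```

Note that the spatial window in the sketch spans 2*hs + 1 pixels per side, matching the text's observation that hs = 8 corresponds to a 17 by 17 window; the optional pruning of regions smaller than M would be a further pass over the label counts.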
The following were segmented with uniform kernels: the 256 by 256 gray level MIT building image with hs = 8, hr = 7, M = 20 into 225 homogeneous regions; the 256 by 256 color room image with hs = 8, hr = 5, M = 20; the 512 by 512 color lake image with hs = 16, hr = 7, M = 40; and the 512 by 512 color hand image with hs = 16, hr = 19, M = 40. All 256 by 256 images used the same hs = 8, corresponding to a 17 by 17 spatial window, while all 512 by 512 images used hs = 16, corresponding to a 31 by 31 window. The range parameter hr and the smallest significant feature size M control the number of regions in the segmented image. The more an image deviates from the assumed piecewise constant model, the larger the values of hr and M that must be used to discard the effect of small local variations in feature space. Four color landscape images were segmented with hs = 8, hr = 7, M = 100; four other color examples used hs = 8, hr = 7, M = 20.

This code was ported by William Gandler.

References
-------------------------------------------------------------------------------------------------
[1] D. Comaniciu, P. Meer: "Mean shift: a robust approach toward feature space analysis". IEEE Trans. Pattern Anal. Machine Intell., 24(5), May 2002.
[2] P. Meer, B. Georgescu: "Edge detection with embedded confidence". IEEE Trans. Pattern Anal. Machine Intell., 23(12), December 2001.
[3] C. Christoudias, B. Georgescu, P. Meer: "Synergism in low level vision". 16th International Conference on Pattern Recognition, Track 1 - Computer Vision and Robotics, Quebec City, Canada, August 2002.