Color Image Segmentation

Color image segmentation is widely used in multimedia applications. It is based on the color features of image pixels, under the assumption that homogeneous colors in the image correspond to separate clusters and hence to meaningful objects in the image. Color provides information in addition to intensity, and it is useful or even necessary for pattern recognition and computer vision. Today a large number of multimedia data streams are sent over the Internet; because of bandwidth limitations the data must be compressed, which in turn calls for image and video segmentation. Most gray-level image segmentation techniques, such as histogram thresholding, clustering, region growing, edge detection, fuzzy approaches and neural networks, can be extended to color images [1]. Gray-level segmentation methods can be applied directly to each component of a color space, and the results can then be combined in some way to obtain the final result.
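As a minimal illustration of this per-component strategy (the mean-based threshold and the logical-AND combination rule are assumptions made for the example, not methods prescribed here):

```python
import numpy as np

def threshold_channel(channel):
    """Gray-level segmentation of a single component: threshold at the channel mean."""
    return channel > channel.mean()

def segment_color_image(rgb):
    """Apply the gray-level method to each color component, then combine the masks."""
    masks = [threshold_channel(rgb[..., c].astype(float)) for c in range(3)]
    # Combination rule: a pixel is foreground only if all three components agree.
    return np.logical_and.reduce(masks)

# Example on a synthetic image: a bright colored square on a dark background.
image = np.zeros((64, 64, 3))
image[16:48, 16:48] = [0.9, 0.7, 0.8]
print(segment_color_image(image).sum(), "foreground pixels")
```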

1.1 Image Segmentation

Segmentation is the process of dividing an image into meaningful regions. It is often considered the first and most important step in image analysis [20]. Segmentation finds application in a variety of fields, from medicine to defense. In medicine, for instance, it is used for image-guided surgery, surgical simulation, therapy evaluation, neuroscience studies and diagnosis. Another major area of application is machine vision, with examples including automatic character recognition, production-line quality control, automatic processing of fingerprints, target recognition and tracking, and surgical robotics.


A majority of these machine vision problems require partially or fully automatic segmentation techniques, which are either region based or edge based.

1. In region-based techniques, regions are constructed by associating or dissociating neighboring pixels. They work on the principle of homogeneity, under the assumption that neighboring pixels inside a region possess similar characteristics and are dissimilar to the pixels in other regions. Each pixel is compared with its neighbors for similarity of properties such as gray level, color, texture and shape; if the check is positive, the pixel is added to the growing region, as in the sketch below.
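The following is a simplified sketch of seeded region growing on a gray-level image; the seed point, the 4-connectivity and the similarity tolerance are assumptions made for the example.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Grow a region from `seed` by adding 4-connected neighbors whose gray level
    differs from the seed's gray level by at most `tol`."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    seed_value = image[seed]
    queue = deque([seed])
    region[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                # Similarity check: gray-level difference against the seed value.
                if abs(image[ny, nx] - seed_value) <= tol:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

# Example: grow a region from a seed inside a bright square.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(region_grow(img, (16, 16)).sum(), "pixels in the grown region")
```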

If a feature image rather than the original image is used for the segmentation process, each feature value represents not a single pixel but a small neighborhood, whose extent depends on the mask sizes of the operators used. At the edges of objects, however, where the mask includes pixels from both the object and the background, no useful feature can be computed. The correct procedure would be to limit the mask size at the edge to points of either the object or the background. But how can this be achieved if object and background can only be distinguished after the feature has been computed? Obviously, this problem cannot be solved in one step, but only iteratively, using a procedure in which feature computation and segmentation are performed alternately.

In the first step, the features are computed disregarding any object boundaries. A preliminary segmentation is then performed and the features are computed again, now using the segmentation result to limit the masks of the neighborhood operations at the object edges to either object or background pixels, depending on the location of the center pixel. To improve the results, feature computation and segmentation can be repeated until the procedure converges to a stable result, as in the sketch below.
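A minimal sketch of this alternation on a gray-level image, using a local-mean feature and a fixed threshold; the filter size, threshold value, convergence test and the use of SciPy's uniform_filter are assumptions made for the illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def masked_mean(image, mask, size=5):
    """Local mean of `image` computed only over pixels where `mask` is True."""
    m = mask.astype(float)
    total = uniform_filter(image * m, size)
    count = uniform_filter(m, size)
    return np.where(count > 0, total / np.maximum(count, 1e-12), image)

def iterative_feature_segmentation(image, threshold=0.5, size=5, max_iter=10):
    """Alternate between feature computation and segmentation until stable."""
    # First step: compute the feature disregarding any object boundaries.
    feature = uniform_filter(image, size)
    mask = feature > threshold                      # preliminary segmentation
    for _ in range(max_iter):
        # Recompute the feature, limiting the neighborhood at object edges to
        # object pixels for object centers and to background pixels otherwise.
        feature = np.where(mask,
                           masked_mean(image, mask, size),
                           masked_mean(image, ~mask, size))
        new_mask = feature > threshold
        if np.array_equal(new_mask, mask):          # converged to a stable result
            break
        mask = new_mask
    return mask

# Example: segment a blurred bright square.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
img = uniform_filter(img, 5)
print(iterative_feature_segmentation(img).sum(), "object pixels")
```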

2. The edge representation of an image significantly reduces the quantity of data to be processed while retaining essential information about the shapes of objects in the scene. Edge-based techniques rely on discontinuities in image values between distinct regions, and the goal of the segmentation algorithm is to accurately demarcate the boundary separating these regions. The main strength of edge detection is its ability to extract edge lines with good orientation, and a large body of literature on edge detection has appeared over the past three decades. For precision, the perimeter of the detected boundaries should be approximately equal to that of the object in the input image. A large number of edge detection operators are available, each designed to be sensitive to certain types of edges. Edge detection is a fundamental process that detects and outlines objects and the boundaries between objects and the background in the image.

Edge detection is the most familiar approach for detecting significant discontinuities in intensity values. Edges are local changes in image intensity that typically occur on the boundary between two regions.

The main features of an image can be extracted from its edges, which makes edge detection a key step in image analysis: the extracted features are used by higher-level computer vision algorithms. Edge detection supports object detection in applications such as medical image processing and biometrics, and it remains an active area of research because it facilitates higher-level image analysis. There are three types of gray-level discontinuities: points, lines and edges. Spatial masks can be used to detect all three types of discontinuity in an image; a sketch using one such mask follows.
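As one example of a spatial mask, the sketch below applies the well-known 3x3 Sobel masks to estimate the gradient magnitude; the relative threshold and the use of SciPy's convolve are choices made only for this illustration.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel masks for horizontal and vertical gray-level changes.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(image, rel_threshold=0.5):
    """Return a binary edge map: gradient magnitude above a fraction of its maximum."""
    gx = convolve(image, SOBEL_X, mode='reflect')
    gy = convolve(image, SOBEL_Y, mode='reflect')
    magnitude = np.hypot(gx, gy)
    return magnitude > rel_threshold * magnitude.max()

# Example: edges of a bright square appear along its boundary.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(sobel_edges(img).sum(), "edge pixels")
```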

An edge-based segmentation approach can be used to avoid a bias in the size of the segmented object without using a complex thresholding scheme. Edge-based segmentation relies on the fact that the position of an edge is given by an extremum of the first-order derivative or by a zero crossing of the second-order derivative [11].

1.2 Image Segmentation Techniques

1.2.1 Clustering Methods

Clustering is the process of grouping a set of objects in such a way that objects in the same group, called a cluster, are more similar in some sense to each other than to those in other groups (clusters). A cluster is therefore a collection of objects which are “similar” to one another and “dissimilar” to the objects belonging to other clusters. An image can be grouped based on keywords (metadata) or on its content (description); one common content-based approach is to cluster pixel colors, as sketched below.
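A minimal sketch of content-based clustering using a basic k-means loop over pixel colors; the number of clusters, iteration count and random initialization are assumptions made for the example.

```python
import numpy as np

def kmeans_segment(rgb, k=3, iters=20, seed=0):
    """Cluster the pixels of an RGB image by color with a basic k-means loop
    and return a label image of shape (H, W)."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest cluster center (squared distance).
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean color of the pixels assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(h, w)

# Example: cluster a synthetic image made of three flat color patches.
img = np.zeros((30, 90, 3))
img[:, 30:60] = [1.0, 0.0, 0.0]
img[:, 60:] = [0.0, 0.0, 1.0]
print(np.unique(kmeans_segment(img, k=3)).size, "clusters found")
```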

A variety of clustering techniques have been introduced to make the segmentation more effective [14].

1.2.2 Thresholding Methods

Thresholding [10] is the operation of converting a multilevel image into a binary image: it assigns the value 0 (background) or 1 (object or foreground) to each pixel of the image based on a comparison with some threshold value T (an intensity or color value). The main types of thresholding are:

1. Local thresholding
2. Global thresholding
3. Adaptive thresholding

Threshold-based techniques work on the assumption that pixels falling within a certain range of intensity values represent one class and the remaining pixels in the image represent the other class. Thresholding can be applied either locally or globally, and it is one of the most important techniques in image segmentation. In general, the threshold can be expressed as

T = T[x, y, f(x, y), b(x, y)]                                          (1)

where T is the threshold value, x and y are the coordinates of the threshold point, f(x, y) is the gray level of the pixel at (x, y), and b(x, y) denotes a local property of that point (for example, the average gray level of a neighborhood centered on (x, y)). The thresholded image g(x, y) can then be defined as

g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 otherwise                  (2)

Threshold segmentation techniques can be categorized into three classes:

1. Local techniques are based on the local properties of the pixels and their neighborhoods.
2. Global techniques segment the image on the basis of information obtained from the image as a whole, for example its gray-level histogram.
3. Split, merge and growing techniques use both the notion of homogeneity and geometrical proximity in order to obtain good segmentation results.

Image segmentation, as a field of image analysis, is ultimately used to group pixels into regions and thereby determine an image's composition.

Selection of the threshold is crucial in the image segmentation process. The threshold value can be determined either interactively or by an automatic threshold selection method, one example of which is sketched below.
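One well-known automatic selection scheme is Otsu's method, which picks the threshold that maximizes the between-class variance of the gray-level histogram. The sketch below is an illustrative implementation (the bin count and the synthetic test data are assumptions) and then applies equation (2) with the selected value.

```python
import numpy as np

def otsu_threshold(f, bins=256):
    """Automatic threshold selection: maximize the between-class variance."""
    hist, edges = np.histogram(f, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_sep = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        sep = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if sep > best_sep:
            best_sep, best_t = sep, centers[i]
    return best_t

# Apply equation (2) with the automatically selected threshold on bimodal data.
rng = np.random.default_rng(0)
f = np.concatenate([rng.normal(0.2, 0.03, 500), rng.normal(0.8, 0.03, 500)])
T = otsu_threshold(f)
g = (f > T).astype(np.uint8)
print(round(T, 2), g.sum(), "foreground pixels")
```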
