
With the advancement of image processing and vision techniques, various image formats have become available, but they are all pixel representations.

Pixel representation has some drawbacks, such as non-scalability and the absence of a mathematical representation. Vector graphics, on the other hand, use geometrical primitives to express a raster image. Because of these primitives, vector graphics are more compact, editable, scalable, resolution-independent and smaller in size. Vectorization, i.e. raster-to-vector conversion, has been at the center of graphics recognition problems since the beginning. As implied by the name, raster-to-vector conversion consists in analyzing a raster image to convert its pixel representation to a vector representation. The properties of vector graphics make them suitable for many portable applications.

The vectorization process can be divided into the following stages:

Preprocessing
· Binarization
· Noise Filtering
· Segmentation

Processing
· Detection of lines and arcs

Post Processing
· Approximation
· Filtration
· Analysis and Interpretation

Steps in Vectorization

Preprocessing

Binarization of raster images is the first step in most document/map image analysis systems. Selecting an appropriate binarization method for an input image domain is a difficult problem.
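As a concrete illustration of what a global binarization method computes, the following sketch implements Otsu's classical global threshold, which picks the single threshold that best separates dark foreground from bright background. This is an illustrative choice of method, and the toy pixel values are invented for the example; it is not one of the specific methods cited in this text.

```python
def otsu_threshold(pixels):
    """Pick the single global threshold that maximizes the
    between-class variance of the grayscale histogram (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]              # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, t):
    # pixels at or below the threshold become foreground (1)
    return [1 if p <= t else 0 for p in pixels]

# toy "image": dark strokes (~30) on a bright background (~200)
img = [30, 32, 28, 200, 205, 198, 31, 202, 29, 199]
t = otsu_threshold(img)
print(binarize(img, t))  # → [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
```

Because a single threshold is used for the whole image, this kind of method struggles exactly where the text notes: when background noise or illumination is not uniform across the image.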

A good binarization will result in better recognition accuracy for any pattern recognition application. However, it is difficult to evaluate the performance of a low-level image processing technique such as binarization; evaluation criteria may be visual or machine dependent, and recent performance evaluations of binarization methods [2] follow this approach. But these are only subjective criteria. Global binarization methods [3, 4, 5, 6] calculate a single threshold value for the entire image; these methods are not well adapted when background noise is not uniform. Local adaptive binarization methods calculate a local threshold determined by neighbouring pixels [7-15]; these methods can deal with various kinds of noise present in the image. The grayscale/binary image should be filtered to reduce noise, and if the image is colored it has to be represented by monochrome layers.

Processing (Vectorization)

In this step, a raster binary image is transformed into a vector form, i.e., a chain of pixels or a set of lines. Many methods have been developed and implemented since image processing techniques were introduced; they can be classified into the following categories: methods based on the Hough Transformation, thinning or skeleton based methods, matching opposite contour based methods, run graph based methods, mesh pattern based methods and sparse pixel based methods. A general vectorization process can be composed of the following basic steps:

1. Medial axis point sampling, or medial axis representation acquisition. This is the kernel processing for information reduction, after which only the important points that represent the medial axis are determined.

2. Line tracking, which follows (tracks) the medial axis points found in the first stage to yield a chain of points for each vector.

Hough Transformation

In the HT method, each image point is treated independently; this independent combination of evidence means that the method can recognize partial or slightly deformed shapes. The HT converts a difficult global detection problem in image space into a more easily solved local peak detection problem in a parameter space [17]. Dori [18] discusses how the Hough Transform is used in line recognition by transforming spatially extended patterns in binary image data into spatially compact features in a parameter space. Parametric shapes in an image are detected by looking for accumulation points in the parameter space. If a particular shape is present in the image, then the mapping of all of its points into the parameter space must cluster around the parameter values which correspond to that shape.

One way the HT can be used to detect lines is to parameterize each line by its slope and intercept. Straight lines are defined by eq. (1):

y = mx + c    (1)

Thus, every line in the (x, y) plane corresponds to a point in the (m, c) plane. Every point in the (x, y) plane can have an infinite number of possible lines passing through it; the gradients and intercepts of these lines form a line in the (m, c) plane, described by eq. (2):

c = -mx + y    (2)

The (m, c) plane is divided into rectangular 'bins' which accumulate, for each black pixel in the (x, y) plane, votes along the line of eq. (2). When the line of eq. (2) is drawn for each black pixel, the cells through which it passes are incremented.
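The slope-intercept voting scheme can be sketched in a few lines of Python. The image points, the discretized slope range and the bin sizes below are illustrative choices, not values from any cited method:

```python
def hough_slope_intercept(points, m_values, c_min, c_max, n_c_bins):
    """Vote each black pixel (x, y) into (m, c) bins along c = y - m*x."""
    acc = {}
    c_step = (c_max - c_min) / n_c_bins
    for x, y in points:
        for mi, m in enumerate(m_values):
            c = y - m * x                      # intercept implied by this slope
            if c_min <= c < c_max:
                ci = int((c - c_min) / c_step)
                acc[(mi, ci)] = acc.get((mi, ci), 0) + 1
    return acc

# four collinear pixels on y = 2x + 1, plus one noise pixel
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (5, 0)]
m_values = [i * 0.5 for i in range(-4, 5)]     # candidate slopes -2.0 .. 2.0
acc = hough_slope_intercept(pts, m_values, -10.0, 10.0, 40)
(mi, ci), votes = max(acc.items(), key=lambda kv: kv[1])
print(m_values[mi], votes)                     # → 2.0 4
```

The peak bin collects one vote per collinear pixel, which is exactly the local peak detection the text describes; the noise pixel scatters its votes over other bins.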

Considering the noise, lines are identified as peaks in the transformed space whose accumulated count is greater than a predefined threshold value. The space complexity of the HT based method is quadratic in the image resolution. Hart [19] uses angle-radius instead of slope-intercept parameters to simplify the computation, using the normal parameterization instead of the (m, c) parameter space. This parameterization specifies a straight line by the angle θ of its normal and its algebraic distance ρ from the origin. The parameter space can be represented by the following equation:

ρ = x cos θ + y sin θ

This reduces the problem of the unbounded slope and intercept in the original HT to a finite parameter space.
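The angle-radius accumulator can be sketched the same way; a vertical line, which has infinite slope in the (m, c) scheme, shows why the bounded parameters help. The toy points and bin sizes are again illustrative:

```python
import math

def hough_normal(points, n_theta=180, rho_step=1.0, rho_max=20.0):
    """Vote pixels into (theta, rho) bins using rho = x*cos(theta) + y*sin(theta);
    both parameters are bounded, unlike slope and intercept."""
    n_rho = int(2 * rho_max / rho_step)
    acc = {}
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta     # theta sampled over [0, pi)
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int((rho + rho_max) / rho_step)
            if 0 <= ri < n_rho:
                acc[(ti, ri)] = acc.get((ti, ri), 0) + 1
    return acc

# vertical line x = 3: no infinite-slope problem in this parameterization
pts = [(3, 0), (3, 1), (3, 2), (3, 4)]
acc = hough_normal(pts)
(ti, ri), votes = max(acc.items(), key=lambda kv: kv[1])
print(ti, votes)                               # → 0 4  (theta = 0, i.e. rho = x = 3)
```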

However, both techniques are not capable of finding polylines. Li, Lavin and LeMaster [20] presented the Fast Hough Transform (FHT) technique. The FHT splits the parameter space into "hypercubes" aligned in a tree structure: in the 2D case, quadrants of the plane are recursively divided into sub-quadrants to form a search quad-tree.

Exponential growth is prevented by pruning the tree down to promising quadrants only (those with many intersections). The infinite-slope problem can be resolved by using an "inverted" (c, k) space when |k| >= 1. Illingworth [21] proposed a hierarchical approach based on a pyramid structure, with each layer in the pyramid splitting the complete image into a number of sub-images. The hierarchical approach is quite suitable for parallel architectures. The main disadvantages of the HT method are the large memory requirement and the fact that the parameter space is sampled discretely, so location accuracy is not preserved.

Thinning Based Methods

Most of the earlier vectorization systems are based on thinning methods [28, 29, 30], and a few vectorization systems used thinning as the first step of the vectorization process [31]. Thinning is the process of finding the skeleton of an object. A skeleton is a lower-dimensional object which acts as an important shape descriptor: it captures the essential topology and shape information of the object in a simple form and is useful in solving various problems in image processing. Skeletons can be represented in various ways, such as the Medial Axis (MA), the Voronoi diagram [34], the shock graph [33] and the Reeb graph. The skeleton of an object is conceptually defined as the locus of centers of pixels in the object [32], but in general no single definition of the skeleton exists. However, all definitions of the skeleton must fulfill the following requirements [32]: (a) centeredness, (b) preservation of connectivity, (c) consistency of topology and (d) thinness.

There are a variety of methods proposed for image skeleton extraction in the literature. In general, they can be classified into three categories:

Boundary peeling

This method is based on a boundary erosion process: each pass removes boundary pixels until the remaining sequence of pixels is one pixel wide. This is a repetitive, time-intensive process of testing and deleting each layer. The difficulty of this method is that the set of rules defined for removing pixels depends highly on the type of image, so a different set of rules has to be applied for different types of images. However, this method is good for connectivity preservation.
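The peeling rules differ from author to author; as one concrete and widely used rule set, the following minimal sketch applies Zhang-Suen-style passes, deleting boundary pixels only when connectivity and endpoint conditions hold. The toy bar image is invented for the example:

```python
def zhang_suen_thin(img):
    """Iteratively peel boundary pixels (Zhang-Suen rules) until the
    foreground is one pixel wide, preserving connectivity."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise from the pixel above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):                    # two sub-passes per iteration
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    b = sum(n)                 # foreground neighbour count
                    # a = number of 0 -> 1 transitions around the pixel
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:             # delete simultaneously
                img[y][x] = 0
            changed = changed or bool(to_delete)
    return img

# a 3-pixel-thick bar (18 foreground pixels) thins to a 1-pixel line
bar = [[0]*8] + [[0] + [1]*6 + [0] for _ in range(3)] + [[0]*8]
thin = zhang_suen_thin(bar)
print(sum(map(sum, thin)))                     # → 3
```

Note how the per-pixel rules encode exactly the image-dependence the text mentions: a different application may need different deletion conditions, but the simultaneous-deletion structure is what preserves connectivity.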


