Edge detection finds places in an image where the gradient changes sharply; its target is edge information. Image segmentation extracts target objects; its target is the object itself. Edge detection can be regarded as one spatial-domain method of image segmentation, so the relationship between the two is one of inclusion.
The output of edge detection is a binary image, and morphological operations can then be applied to segment that binary image, so edge detection often serves as a preliminary step for segmentation. Segmentation, however, does not have to use edge detection.
Image segmentation: concept:
Image segmentation is the process of dividing an image into several small regions that do not intersect one another. Each "small region" is a connected set of pixels that share some common attribute.
From a set-theoretic point of view, a segmentation is a collection of point sets with the following properties. Let the set R represent the entire image region. Segmenting R means dividing it into N non-empty subsets R1, R2, ..., RN that satisfy the following five conditions.
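For completeness, the five conditions in their standard textbook form (with P(·) the uniformity predicate that formalizes the "common attribute") are:

```latex
% Partition R into non-empty subsets R_1, \dots, R_N such that:
\bigcup_{i=1}^{N} R_i = R                    % (1) the regions cover the whole image
% (2) each R_i is a connected set of pixels
R_i \cap R_j = \varnothing \quad (i \neq j)  % (3) regions do not overlap
P(R_i) = \mathrm{TRUE} \quad (i = 1, \dots, N)            % (4) each region is uniform
P(R_i \cup R_j) = \mathrm{FALSE} \quad \text{for adjacent } R_i, R_j  % (5) adjacent regions cannot merge
```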
Purpose:
Whether for image processing, analysis, understanding, or recognition, the basic work generally rests on image segmentation;
to extract meaningful features of the image, or the feature information required by the application;
the final result of image segmentation is to decompose the image into units with certain characteristics, called the primitives of the image;
compared with the entire image, these primitives are easier to process quickly.
Principle of image segmentation: Image segmentation has received sustained attention for many years, and many kinds of segmentation algorithms have been proposed. Pal divided image segmentation algorithms into six categories: threshold segmentation, pixel segmentation, depth-image segmentation, color-image segmentation, edge detection, and fuzzy-set-based methods; however, the contents of these categories overlap. To cover newly emerging methods, some researchers instead classify image segmentation algorithms into the following six categories: parallel boundary segmentation, serial boundary segmentation, parallel region segmentation, serial region segmentation, segmentation combined with specific theoretical tools, and segmentation of special images.
Features of image segmentation: the divided regions are similar with respect to properties such as gray level and texture, and the interior of each region is connected, without many small holes;
region boundaries are clear;
adjacent regions differ markedly in the property on which the segmentation is based.
Image segmentation methods:
1. Segmentation based on pixel gray value: the threshold method;
2. Region-based segmentation: the boundaries between regions are determined directly, which realizes the segmentation;
3. Edge-based segmentation: edge pixels are detected first and then connected into boundaries, which form the segmentation.
The content of image segmentation includes edge detection, edge tracking, threshold segmentation, and region segmentation.
Edge tracking:
Starting from an edge point in the image, search for the next edge point according to some criterion, and repeat to trace out the target boundary.
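A minimal sketch of this idea in Python, assuming the criterion for the "next edge point" is the unvisited 8-neighbor with the largest gradient magnitude above a threshold (the greedy criterion, `grad_mag`, and `threshold` are illustrative assumptions, not a fixed standard):

```python
import numpy as np

def track_edge(grad_mag, start, threshold):
    """Follow an edge from `start`, greedily stepping to the unvisited
    8-neighbor with the strongest gradient above `threshold`."""
    h, w = grad_mag.shape
    path, visited = [start], {start}
    y, x = start
    while True:
        best, best_val = None, threshold
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w \
                        and (ny, nx) not in visited \
                        and grad_mag[ny, nx] > best_val:
                    best, best_val = (ny, nx), grad_mag[ny, nx]
        if best is None:          # no acceptable neighbor left: stop tracking
            return path
        y, x = best
        visited.add(best)
        path.append(best)
```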
Threshold segmentation:
Given the original image f(x, y) and a gray threshold T, a threshold operation yields the binary image g(x, y): g(x, y) = 1 where f(x, y) >= T, and g(x, y) = 0 otherwise.
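A minimal sketch in NumPy, with T chosen by hand (in practice T is often derived from the histogram, e.g. by Otsu's method):

```python
import numpy as np

def threshold_segment(f, T):
    """Binary image g(x, y): 1 where f(x, y) >= T, 0 elsewhere."""
    return (f >= T).astype(np.uint8)

# Example: segment a synthetic gradient image at T = 128.
f = np.tile(np.arange(256, dtype=np.uint8), (16, 1))
g = threshold_segment(f, 128)
```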
Region segmentation:
Because the threshold method takes little or no account of spatial relationships between pixels, the selection of multiple thresholds is limited.
Region segmentation can make up for this deficiency: it uses the spatial nature of the image and assumes that pixels belonging to the same region should have similar properties. The idea is quite intuitive.
Traditional region segmentation algorithms include region growing and region split-and-merge. Such methods can perform well when little or no prior knowledge is available, for example on complex or natural scenes, but their space and time overhead is relatively large.
Region growing:
This method mainly considers the relationship between a pixel and its spatial neighbors.
One or more pixels are first chosen as seeds; the region then grows according to a similarity criterion, gradually forming a spatially uniform region. Adjacent pixels or sub-regions with similar properties are merged into the growing region until no point or small region remains that can be merged (see the sketch after the step list below).
The similarity measure for pixels within a region may use information such as average gray value, texture, or color.
The main steps:
Choose the right seed point
Determine the similarity criterion (growth criterion)
Determine growth stop conditions
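A minimal region-growing sketch in Python, assuming a single seed and a similarity criterion of "absolute gray-level difference from the running region mean below tol" (both the criterion and `tol` are illustrative choices):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed` over 4-connected neighbors whose gray
    value stays within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(img[ny, nx]) - total / count) <= tol:
                mask[ny, nx] = True          # pixel satisfies the criterion
                total += float(img[ny, nx])  # update the region statistics
                count += 1
                frontier.append((ny, nx))
    return mask
```

Growth stops automatically once the frontier is empty, i.e. when no neighboring pixel satisfies the similarity criterion.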
Region splitting:
Condition: a region is split when some characteristic of it does not satisfy the consistency criterion.
Start: begin from the largest region of the image; in general, from the entire image.
Note:
Determine the splitting criterion (consistency criterion);
Determine the splitting scheme, i.e. how to split a region so that the resulting sub-regions satisfy the consistency criterion as far as possible.
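A sketch of the splitting step, assuming a quadtree splitting scheme and a consistency criterion of "gray-level standard deviation below sigma_max" (both are illustrative; the merge step that usually follows is omitted):

```python
import numpy as np

def split(img, y, x, h, w, sigma_max=12.0, min_size=8, regions=None):
    """Recursively quarter the block at (y, x) of size (h, w) until each
    block satisfies the consistency criterion or reaches minimal size."""
    if regions is None:
        regions = []
    block = img[y:y + h, x:x + w]
    if block.std() <= sigma_max or min(h, w) <= min_size:
        regions.append((y, x, h, w))   # block is uniform enough: keep it
    else:
        h2, w2 = h // 2, w // 2        # otherwise split into four quadrants
        for dy, dx, bh, bw in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                               (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
            split(img, y + dy, x + dx, bh, bw, sigma_max, min_size, regions)
    return regions

# Start from the entire image, as described above:
# regions = split(img, 0, 0, img.shape[0], img.shape[1])
```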
Edge detection: In the theoretical framework of computational vision, extracting basic features such as edges, corners, and texture from the two-dimensional image is the first step of the overall system framework; the map composed of these features is called the primal sketch.
Under certain conditions, the edge points at different "scales" contain all of the information of the original image.
Definition:
• At present only a descriptive definition of an edge exists: it is the boundary between two uniform image regions with different gray levels, i.e. the boundary reflects a local change in gray level.
• A local edge is a small area in the image where the local gray level changes very quickly in a simple (i.e. monotonic) way. Such a local change can be detected by an edge detection operator applied over some window.
Description of an edge:
1) Edge normal direction: the direction in which the gray level changes fastest at a given point, perpendicular to the edge direction;
2) Edge direction: perpendicular to the edge normal; it is the tangent direction of the target boundary;
3) Edge strength: a measure of the local variation of the image along the edge normal.
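All three quantities can be read off the image gradient. A minimal NumPy sketch (central differences via np.gradient; any gradient operator would do):

```python
import numpy as np

def edge_description(img):
    """Edge strength, normal direction, and edge (tangent) direction
    derived from the image gradient."""
    gy, gx = np.gradient(img.astype(float))  # derivatives along rows, columns
    strength = np.hypot(gx, gy)              # 3) edge strength
    normal = np.arctan2(gy, gx)              # 1) direction of fastest gray change
    tangent = normal + np.pi / 2             # 2) edge direction, perpendicular to normal
    return strength, normal, tangent
```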
The basic idea of edge detection is to determine whether a pixel lies on the boundary of an object by examining the state of that pixel and its neighborhood. If a pixel lies on a boundary, the gray values of its neighbors change considerably; if an algorithm can detect and quantify this change, the boundary of the object can be determined.
The edge detection algorithm has the following four steps:
Filtering: edge detection algorithms rest mainly on the first and second derivatives of image intensity, but derivative computation is very sensitive to noise, so a filter is used to improve the noise robustness of the edge detector. Note that most filters reduce edge strength along with the noise, so there is a trade-off between enhancing edges and suppressing noise.
Enhancement: The basis for enhancing the edge is to determine the change value of the neighborhood intensity of each point of the image. The enhancement algorithm can highlight the points where the neighborhood (or local) intensity value has a significant change. Edge enhancement is generally done by calculating the gradient amplitude.
Detection: There are many points in the image whose gradient amplitude is relatively large, and these points are not all edges in a specific application field, so some method should be used to determine which points are edge points. The simplest edge detection criterion is the gradient amplitude threshold criterion.
Positioning: if an application requires it, the edge position can be estimated at sub-pixel resolution, and the edge orientation can also be estimated.
The first three steps are very common in edge detection algorithms, because in most cases the detector only needs to indicate that an edge appears near some pixel, not its precise position or direction. An edge detection error usually means a misclassification error: a false edge is kept as an edge, or a true edge is discarded as a false one. Edge estimation error, by contrast, describes errors in edge position and direction with a probabilistic model. The two are distinguished because they are computed in completely different ways and have completely different error models. A sketch of the first three steps follows.
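A compact sketch of the filter → enhance → detect pipeline (assuming SciPy is available for Gaussian filtering; the sigma and threshold values are illustrative, and real detectors such as Canny add non-maximum suppression and hysteresis):

```python
import numpy as np
from scipy import ndimage  # assumed available for Gaussian smoothing

def detect_edges(img, sigma=1.0, thresh=30.0):
    """Steps 1-3 of the pipeline; step 4 (sub-pixel positioning) is omitted."""
    smoothed = ndimage.gaussian_filter(img.astype(float), sigma)  # 1. filtering
    gy, gx = np.gradient(smoothed)                                # 2. enhancement:
    magnitude = np.hypot(gx, gy)                                  #    gradient amplitude
    return magnitude > thresh                                     # 3. detection: threshold criterion
```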
Three common principles of edge detection:
• Good detection: the probability of missing a real edge point and the probability of falsely marking a non-edge point should both be as low as possible; that is, real edges should be detected, and spurious edges should not appear;
• Accurate localization: the position we mark as an edge should be as close as possible to the center of the true edge in the image;
• A single response per edge: the number of responses to the same edge should be as low as possible, ideally a single-pixel response.
Several commonly used edge detection operators are the Roberts operator, the Sobel operator, the Prewitt operator, the Kirsch operator, and the Laplacian of Gaussian (LoG) operator.
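For reference, a sketch of the kernels behind three of these operators; each operator estimates the gradient with a pair of orthogonal kernels (only the x kernels are shown, the y kernels are their transposes, and the Roberts pair is diagonal):

```python
import numpy as np

ROBERTS_X = np.array([[ 1,  0],
                      [ 0, -1]])      # 2x2 diagonal difference
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])    # uniform smoothing across rows
SOBEL_X   = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]])    # center row weighted more heavily
# Convolve the image with both kernels of one operator and combine the
# responses, e.g. magnitude = np.hypot(resp_x, resp_y).
```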
Image features: attributes of an image that can serve as identifying marks. They can be divided into two types: statistical features and visual features.
• Statistical features are artificially defined features obtained through some transformation, such as the histogram, moments, or spectrum of the image;
• Visual features are natural characteristics that human vision perceives directly, such as the brightness, texture, or contour of a region.
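As a small illustration of the statistical features just mentioned, a NumPy sketch computing a gray-level histogram and the first two moments of the gray distribution:

```python
import numpy as np

def gray_histogram(img):
    """256-bin histogram of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return hist

def gray_moments(img):
    """First- and second-order statistical moments (mean, variance)."""
    return img.mean(), img.var()
```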
Contour extraction:
The algorithm for extracting the contour of a binary image is very simple: hollow out the interior points. If a pixel in the original image is black and all eight of its neighbors are black, the pixel is an interior point and is deleted (set to the white value, 255). Applying this operation to every pixel in the image completes the contour extraction.
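A direct, unoptimized implementation of this rule, assuming the convention in the text (foreground black = 0, background white = 255); border pixels are left untouched for simplicity:

```python
import numpy as np

def extract_contour(binary):
    """Keep only boundary pixels of the black foreground: a black pixel
    whose 8 neighbors are all black is interior and is set to white."""
    out = binary.copy()
    h, w = binary.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if binary[y, x] == 0 and (binary[y-1:y+2, x-1:x+2] == 0).all():
                out[y, x] = 255   # interior point: delete (set to white)
    return out
```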
Template matching:
Template matching compares a smaller image, the template, against the source image to determine whether the source contains a region identical or similar to the template; if such a region exists, its location can be determined and the region extracted.
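A typical sketch using OpenCV's matchTemplate (assuming cv2 is available; the normalized cross-correlation method and the 0.8 acceptance threshold are illustrative choices):

```python
import cv2

def find_template(source_gray, template_gray, threshold=0.8):
    """Return (top_left, score) of the best match, or None if the best
    normalized cross-correlation score falls below `threshold`."""
    result = cv2.matchTemplate(source_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return (max_loc, max_val) if max_val >= threshold else None
```

matchTemplate slides the template over every position of the source and returns a response map; minMaxLoc then picks out the best-scoring location.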
Shape matching:
Shape is also an important feature for describing image content. Three issues must be considered when matching by shape. First, shape is usually associated with a target, so shape features can be viewed as higher-level image features relative to color; obtaining the shape parameters of a target usually requires segmenting the image first, so shape features are affected by the quality of the segmentation. Second, describing a target's shape is a very complicated problem, and so far no exact mathematical definition of image shape that agrees with human perception has been found. Finally, the shape of a target observed from different viewpoints may vary greatly, so accurate shape matching requires solving the invariance to translation, scale, and rotation.
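One standard way to obtain this invariance is to compare contours via their Hu moments, which are invariant to translation, scale, and rotation; a sketch with OpenCV (the contours would typically come from cv2.findContours on a segmented binary image):

```python
import cv2

def shape_distance(contour_a, contour_b):
    """Dissimilarity of two contours based on log-scaled Hu moments;
    smaller values mean more similar shapes."""
    return cv2.matchShapes(contour_a, contour_b, cv2.CONTOURS_MATCH_I1, 0.0)
```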
The shape of a target can often be represented by its outline, which is composed of a series of boundary points. At larger scales it is generally possible to eliminate false detections and detect true boundary points more reliably, but localization is less accurate; at smaller scales the localization of true boundary points is more accurate, but the proportion of false detections increases. One can therefore first detect the true boundary points at a larger scale and then localize them more accurately at a smaller scale. As a multi-scale, multi-channel analysis tool, the wavelet transform is well suited to multi-scale boundary detection in images.