|Vision Development Module 2019 Help|
Use color pattern matching to quickly locate known reference patterns, or fiducials, in a color image. With color pattern matching, you create a model or template that represents the object you are searching for. Then your machine vision application searches for the model in each acquired image, calculating a score for each match. The score indicates how closely the model matches the color pattern found. Use color pattern matching to locate reference patterns that are fully described by the color and spatial information in the pattern.
Grayscale, or monochrome, pattern matching is a well-established tool for alignment, gauging, and inspection applications. In all of these application areas, color simplifies a monochrome problem by improving contrast or separation of the object from the background. Color pattern matching algorithms provide a quick way to locate objects when color is present.
Use color pattern matching when the object under inspection has the following qualities:
The following figure illustrates the advantage of using color pattern matching over color location to locate the resistors in an image. Although color location finds the resistors in the image, the matches are not very accurate because they are limited to color information. Color pattern matching uses color matching first to locate the objects, and then pattern matching to refine the locations, providing more accurate results.
The following figure shows the advantage of using color information when locating color-coded fuses on a fuse box. Figure A shows a grayscale image of the fuse box. In the image of the fuse box in figure A, the grayscale pattern matching tool has difficulty clearly differentiating between fuse 20 and fuse 25 and will return close match scores because of similar grayscale intensities and the translucent nature of the fuses. In the color image, figure B, the addition of color helps to improve the accuracy and reliability of the pattern matching tool.
The color pattern matching tools in NI Vision measure the similarity between an idealized representation of a feature, called a model, and the feature that may be present in an image. A feature is defined as a specific pattern of color pixels in an image.
Color pattern matching is the key to many applications. Color pattern matching provides your application with information about the number of instances and location of the template within an image. Use color pattern matching in the following three general applications: gauging, inspection, and alignment.
Many gauging applications locate and then measure or gauge the distance between objects. Searching and finding a feature is the key processing task that determines the success of many gauging applications. If the components you want to gauge are uniquely identified by their color, color pattern matching provides a fast way to locate the components.
Inspection detects simple flaws, such as missing parts or unreadable printing. A common application is inspecting the labels on consumer product bottles for printing defects. Because most of the labels are in color, color pattern matching is used to locate the labels in the image before a detailed inspection of the label is performed. The score returned by the color pattern matching tool also can be used to decide whether a label is acceptable.
Alignment determines the position and orientation of a known object by locating fiducials. Use the fiducials as points of reference on the object. Grayscale pattern matching is sufficient for most applications, but some alignment applications require color pattern matching for more reliable results.
In automated machine vision applications, the visual appearance of materials or components under inspection can change due to factors such as orientation of the part, scale changes, and lighting changes. The color pattern matching tool maintains its ability to locate the reference patterns and gives accurate results despite these changes.
A color pattern matching tool locates the reference pattern in an image even when the pattern in the image is rotated and slightly scaled. When a pattern is rotated or scaled in the image, the color pattern matching tool detects the following features of an image:
Figure A shows a template image, or pattern. Figures B and C illustrate multiple occurrences of the template. Figure B shows the template shifted in the image. Figure C shows the template rotated in the image.
The color pattern matching tool finds the reference pattern in an image under conditions of uniform changes in the lighting across the image. Because color analysis is more robust when dealing with variations in lighting than grayscale processing, color pattern matching performs better under conditions of non-uniform light changes, such as in the presence of shadows, than grayscale pattern matching.
Figure A shows the original template image. Figure B shows the same pattern under bright light. Figure C shows the pattern under poor lighting.
Color pattern matching finds patterns that have undergone some transformation because of blurring or noise. Blurring usually occurs because of incorrect focus or depth of field changes.
Color pattern matching is a unique approach that combines color and spatial information to quickly find color patterns in an image. It uses the technologies behind color matching and grayscale pattern matching in a synergistic way to locate color patterns in color images.
Color matching compares the color content of an image or regions in an image to existing color information. The color information in the image may consist of one or more colors. To use color matching, define regions in an image that contain the color information you want to use as a reference. The machine vision functions then learn the 3D color information in the image and represent it as a 1D color spectrum. Your machine vision application compares the color information in the entire image or regions in the image to the learned color spectrum, calculating a score for each region. This score indicates how closely the color information in the image region matches the information represented by the color spectrum. To use color matching, you must know the location of the objects in the image before performing the match.
Color location functions extend the capabilities of color matching to applications where you do not know the location of the objects in the image. Color location uses the color information from a template image to look for occurrences of the template in the search image. The basic operation moves the template across the image pixel by pixel and compares the color information at the current location in the image to the color information in the template, using the color matching algorithm. Because searching an entire image for color matches is time consuming, the color location software uses some techniques to speed up the location process. A coarse-to-fine search strategy finds the rough locations of the matches in the image. A more refined search, using a hill climbing algorithm, is then performed around each match to get the accurate location of the match. Color location is an efficient way to look for occurrences of regions in an image with specific color attributes.
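The refinement step described above can be sketched generically. This is an illustrative sketch, not NI Vision's actual implementation: `score` is a hypothetical stand-in for the color matching score evaluated at a candidate template position, and the 8-neighbor climb is an assumed neighborhood structure.

```python
def hill_climb(score, start, max_iters=100):
    """Refine a coarse match location by climbing the match-score surface.

    score(x, y) is any function returning a match score (higher is better);
    here it stands in for a color-matching score at template position (x, y).
    """
    x, y = start
    for _ in range(max_iters):
        # Evaluate the 8-connected neighbors and move to the best one.
        neighbors = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0)]
        best = max(neighbors, key=lambda p: score(*p))
        if score(*best) <= score(x, y):
            break                      # local maximum reached
        x, y = best
    return x, y

# Toy score surface peaking at (10, 7): the coarse guess (0, 0) is refined.
peak = lambda x, y: -((x - 10) ** 2 + (y - 7) ** 2)
print(hill_climb(peak, (0, 0)))        # (10, 7)
```

The coarse-to-fine stage would supply `start`; the climb then only needs to evaluate a handful of positions instead of every pixel.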
NI Vision grayscale pattern matching methods incorporate image understanding techniques to interpret the template information and use that information to find the template in the image. Image understanding refers to image processing techniques that generate information about the features of a template image. These methods include the following:
NI Vision uses a combination of the edge information in the image and an intelligent image sampling technique to match patterns. The image edge content provides information about the structure of the image in a compact form. The intelligent sampling technique extracts points from the template that represent the overall content of the image. The edge information and intelligent sampling technique reduce the inherently redundant information in an image and improve the speed and accuracy of the pattern matching tool. In cases where the pattern can be rotated in the image, a similar technique is used, but with specially chosen template pixels whose values, or relative change in values, reflect the rotation of the pattern. The result is fast and accurate grayscale pattern matching.
NI Vision pattern matching accurately locates objects in conditions where they vary in size (±5%) and orientation (between 0° and 360°) and when their appearance is degraded.
Color pattern matching uses a combination of color location and grayscale pattern matching to search for the template. When you use color pattern matching to search for a template, the software uses the color information in the template to look for occurrences of the template in the image. The software then applies grayscale pattern matching in a region around each of these occurrences to find the exact position of the template in the image. The following figure illustrates the general flow of the color pattern matching algorithm. The size of the searchable region is determined by the software, based on the inputs you provide, such as search strategy and color sensitivity.
There are standard ways to convert RGB to grayscale and to convert one color space to another. The transformation from RGB to grayscale is linear. However, some transformations from one color space to another are nonlinear because some color spaces represent colors that cannot be represented in other spaces.
The following equations convert an RGB image into a grayscale image on a pixel-by-pixel basis.
grayscale value = 0.299R + 0.587G + 0.114B
This equation is part of the NTSC standard for luminance. An alternative conversion from RGB to grayscale is a simple average:
grayscale value = (R + G + B) / 3
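Both conversions are simple per-pixel weighted sums; a minimal sketch:

```python
def rgb_to_gray_ntsc(r, g, b):
    """NTSC luminance weighting from the first equation above."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def rgb_to_gray_average(r, g, b):
    """Simple channel average from the second equation."""
    return (r + g + b) / 3

# A pure green pixel shows how differently the two conversions weight color:
print(rgb_to_gray_ntsc(0, 255, 0))     # 149.685
print(rgb_to_gray_average(0, 255, 0))  # 85.0
```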
There is no matrix operation that allows you to convert from the RGB color space to the HSL color space. The following equations describe the nonlinear transformation that maps the RGB color space to the HSL color space.
V1 = 2R – G – B
V2 = √3 × (G – B)

φ = tan–1(V2/V1)   if V1 ≠ 0
φ = π/2   if V1 = 0 and V2 > 0
φ = –π/2   if V1 = 0 and V2 < 0

H = 256 × φ/(2π)   if V1 ≥ 0 and V2 ≥ 0
H = 256 × (π + φ)/(2π)   if V1 < 0
H = 256 × (2π + φ)/(2π)   if V1 ≥ 0 and V2 < 0
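A sketch of the hue computation. Two details are assumptions based on the conventional hue-angle geometry rather than the text: the √3 factor in V2 and the 2π divisor in the 0–256 scaling, chosen so that pure red maps to hue 0 and the three primaries sit a third of the scale apart.

```python
import math

def rgb_to_hue(r, g, b):
    """Hue on a 0-256 scale, following the piecewise definition above."""
    v1 = 2 * r - g - b
    v2 = math.sqrt(3) * (g - b)           # sqrt(3) factor is an assumption
    if v1 != 0:
        phi = math.atan(v2 / v1)
    elif v2 > 0:
        phi = math.pi / 2
    elif v2 < 0:
        phi = -math.pi / 2
    else:
        return 0.0                         # achromatic pixel: hue undefined
    if v1 >= 0 and v2 >= 0:
        return 256 * phi / (2 * math.pi)
    elif v1 < 0:
        return 256 * (math.pi + phi) / (2 * math.pi)
    else:                                  # v1 >= 0 and v2 < 0
        return 256 * (2 * math.pi + phi) / (2 * math.pi)

print(round(rgb_to_hue(255, 0, 0)))    # 0   (red)
print(round(rgb_to_hue(0, 255, 0)))    # 85  (green, one third of 256)
print(round(rgb_to_hue(0, 0, 255)))    # 171 (blue, two thirds of 256)
```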
The following equations map the HSL color space to the RGB color space.
h = 2π × H/256
s = S/255
s' = (1 – s)/3
f(h) = (1 + s × cos(h)/cos(π/3 – h))/3

For 0 ≤ h ≤ 2π/3:
r = f(h)
b = s'
g = 1 – r – b

For 2π/3 < h ≤ 4π/3:
h' = h – 2π/3
r = s'
g = f(h')
b = 1 – r – g

For 4π/3 < h ≤ 2π:
h' = h – 4π/3
g = s'
b = f(h')
r = 1 – g – b

The resulting values satisfy r + g + b = 1, so they carry only chromaticity information. Their luminance is

l = 0.299r + 0.587g + 0.114b

and the factor l' = L/l scales r, g, and b to the requested luminance L.
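A sketch of the inverse transform. Two points are assumptions worth flagging: the `+` sign inside `f` follows the conventional HSI-style inverse (a minus there would produce negative components near h = 0), and the final multiplication by l' = L/l rescales the chromaticity triple to the requested luminance.

```python
import math

def hsl_to_rgb(H, S, L):
    """Map H (0-256 scale) and S, L (0-255) back to R, G, B."""
    h = 2 * math.pi * H / 256
    s = S / 255
    s_prime = (1 - s) / 3
    f = lambda x: (1 + s * math.cos(x) / math.cos(math.pi / 3 - x)) / 3
    if h <= 2 * math.pi / 3:
        r, b = f(h), s_prime
        g = 1 - r - b
    elif h <= 4 * math.pi / 3:
        hp = h - 2 * math.pi / 3
        r, g = s_prime, f(hp)
        b = 1 - r - g
    else:
        hp = h - 4 * math.pi / 3
        g, b = s_prime, f(hp)
        r = 1 - g - b
    # r + g + b == 1; rescale so the result has the requested luminance L.
    l = 0.299 * r + 0.587 * g + 0.114 * b
    scale = L / l if l else 0.0
    return r * scale, g * scale, b * scale

# Fully saturated red: hue 0, saturation 255, luminance 0.299 * 255.
print([round(c) for c in hsl_to_rgb(0, 255, 76.245)])   # [255, 0, 0]
```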
The following 3 × 3 matrix converts RGB to CIE XYZ without applying gamma correction.
|X|   |0.412453 0.357580 0.180423| |R|
|Y| = |0.212671 0.715160 0.072169| |G|
|Z|   |0.019334 0.119193 0.950227| |B|
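The conversion is a plain matrix–vector product; a sketch:

```python
# RGB -> CIE XYZ matrix from above (no gamma correction applied).
RGB_TO_XYZ = [
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
]

def rgb_to_xyz(r, g, b):
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in RGB_TO_XYZ)

# R = G = B = 1 gives the reference white; note Y (luminance) is exactly 1.
print(rgb_to_xyz(1.0, 1.0, 1.0))   # roughly (0.9505, 1.0000, 1.0888)
```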
By projecting the tristimulus values onto the unit plane X + Y + Z = 1, color can be expressed in a 2D plane. The chromaticity coordinates are defined as follows:
x = X / (X + Y + Z)
y = Y / (X + Y + Z)
z = Z / (X + Y + Z)
You can obtain z from x and y by z = 1 – x – y. Hence, chromaticity coordinates are usually given as (x, y) only. The chromaticity values depend on the hue, or dominant wavelength, and the saturation. Chromaticity values are independent of luminance.
The diagram of (x, y) is referred to as the CIE 1931 chromaticity diagram, or the CIE (x, y) chromaticity diagram, as illustrated in the following figure.
The three color components R, G, and B define a triangle inside the CIE diagram of the previous figure. Any color within the triangle can be formed by mixing R, G, and B. The triangle is called a gamut. Because the gamut is only a subset of the CIE color space, combinations of R, G, and B cannot generate all visible colors.
To transform values back to the RGB space from the CIE XYZ space, use the following matrix operation:
|R|   | 3.240479 –1.537150 –0.498535| |X|
|G| = |–0.969256  1.875992  0.041556| |Y|
|B|   | 0.055648 –0.204043  1.057311| |Z|
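Since the two matrices are inverses of each other to the printed precision, applying them in sequence should recover the input, which makes a quick sanity check; a sketch:

```python
RGB_TO_XYZ = [
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
]
XYZ_TO_RGB = [
    [ 3.240479, -1.537150, -0.498535],
    [-0.969256,  1.875992,  0.041556],
    [ 0.055648, -0.204043,  1.057311],
]

def apply(matrix, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in matrix)

rgb = (0.25, 0.50, 0.75)
round_trip = apply(XYZ_TO_RGB, apply(RGB_TO_XYZ, rgb))
print([round(c, 4) for c in round_trip])   # roughly [0.25, 0.5, 0.75]
```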
Notice that the transformation matrix has negative coefficients. Therefore, some XYZ colors transform into R, G, and B values that are negative or greater than 1. This means that not all visible colors can be produced using the RGB color space.
To transform RGB to CIE L*a*b*, you first must transform the RGB values into the CIE XYZ space. Use the following equations to convert the CIE XYZ values into the CIE L*a*b* values.
L* = 116 × (Y/Yn)^(1/3) – 16   for Y/Yn > 0.008856
L* = 903.3 × (Y/Yn)   otherwise
a* = 500 × (f(X/Xn) – f(Y/Yn))
b* = 200 × (f(Y/Yn) – f(Z/Zn))
where
f(t) = t^(1/3)   for t > 0.008856
f(t) = 7.787t + 16/116   otherwise
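A direct transcription of these equations; the D65 white point used as the default reference is an assumption (it is what R = G = B = 1 produces through the RGB to CIE XYZ matrix above):

```python
def xyz_to_lab(x, y, z, white=(0.950456, 1.0, 1.088754)):
    """CIE XYZ -> CIE L*a*b* relative to a reference white (D65 assumed)."""
    xn, yn, zn = white

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    yr = y / yn
    L = 116 * yr ** (1 / 3) - 16 if yr > 0.008856 else 903.3 * yr
    a = 500 * (f(x / xn) - f(y / yn))
    b = 200 * (f(y / yn) - f(z / zn))
    return L, a, b

# The reference white itself maps to L* = 100 with zero chroma.
print(xyz_to_lab(0.950456, 1.0, 1.088754))   # (100.0, 0.0, 0.0)
```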
where Xn, Yn, and Zn are the tristimulus values of the reference white.
L* represents the light intensity. NI Vision normalizes the result of the L* transformation to range from 0 to 255. The hue and chroma can be calculated as follows:

Hue = tan–1(b*/a*)
Chroma = √((a*)^2 + (b*)^2)

Because the CIE L*a*b* color space is approximately uniform, a color difference formula can be given as the Euclidean distance between the coordinates of two colors:

ΔE*ab = √((ΔL*)^2 + (Δa*)^2 + (Δb*)^2)
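The color-difference formula is a plain Euclidean distance; a minimal sketch:

```python
import math

def delta_e_ab(lab1, lab2):
    """Euclidean CIE L*a*b* color difference (Delta E*ab)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two colors differing by 3 in a* and 4 in b*: a 3-4-5 triangle.
print(delta_e_ab((50, 10, 10), (50, 13, 14)))   # 5.0
```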
To transform CIE L*a*b* values to RGB, first convert the CIE L*a*b* values to CIE XYZ using the following equations:
X = Xn × (P + a*/500)^3
Y = Yn × P^3
Z = Zn × (P – b*/200)^3
where P = (L* + 16)/116
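A sketch of the inverse, again assuming the D65 reference white. Note that these equations only invert the cube-root branch of the forward transform (Y/Yn > 0.008856):

```python
def lab_to_xyz(L, a, b, white=(0.950456, 1.0, 1.088754)):
    """CIE L*a*b* -> CIE XYZ for the cube-root branch of the transform."""
    xn, yn, zn = white
    p = (L + 16) / 116
    return xn * (p + a / 500) ** 3, yn * p ** 3, zn * (p - b / 200) ** 3

# White round-trips: L* = 100 with zero chroma returns the white point.
print(lab_to_xyz(100, 0, 0))   # (0.950456, 1.0, 1.088754)
```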
Then, use the conversion matrix given in the RGB and CIE XYZ section to convert CIE XYZ to RGB.
The following equations convert the RGB color space to the CMY color space. Normalize all color values to lie between 0 and 1 before using this conversion.

C = 1 – R
M = 1 – G
Y = 1 – B

To obtain RGB values from a set of CMY values, subtract the individual CMY values from 1.
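With normalized values, the conversion and its inverse are one-liners:

```python
def rgb_to_cmy(r, g, b):
    """RGB -> CMY for values normalized to the range [0, 1]."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    """The inverse: subtract each CMY value from 1."""
    return 1 - c, 1 - m, 1 - y

print(rgb_to_cmy(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0) -- pure red has no cyan
```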
The following matrix operation converts the RGB color space to the YIQ color space.
|Y|   |0.299  0.587  0.114| |R|
|I| = |0.596 –0.275 –0.321| |G|
|Q|   |0.212 –0.523  0.311| |B|
The following matrix operation converts the YIQ color space to the RGB color space.
|R|   |1.0  0.956  0.621| |Y|
|G| = |1.0 –0.272 –0.647| |I|
|B|   |1.0 –1.105  1.702| |Q|
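Both directions transcribed as sketches; the round trip is only approximate because the published coefficients are truncated to a few decimal places:

```python
def rgb_to_yiq(r, g, b):
    """RGB -> YIQ using the forward matrix above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    """YIQ -> RGB using the inverse matrix above."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.105 * i + 1.702 * q
    return r, g, b

rgb = (0.2, 0.4, 0.6)
print([round(c, 2) for c in yiq_to_rgb(*rgb_to_yiq(*rgb))])   # [0.2, 0.4, 0.6]
```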