A digitized image has three basic properties: resolution, definition, and number of planes.
The spatial resolution of an image is determined by its number of rows and columns of pixels. An image composed of m columns and n rows has a resolution of m × n. This image has m pixels along its horizontal axis and n pixels along its vertical axis.
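The row/column convention above can be illustrated with a minimal sketch using NumPy (the array library and the 640 × 480 dimensions are illustrative assumptions, not part of NI Vision):

```python
import numpy as np

# A grayscale image stored as a 2-D array: n rows by m columns.
n_rows, m_cols = 480, 640
image = np.zeros((n_rows, m_cols), dtype=np.uint8)

# The spatial resolution is m x n: m pixels along the horizontal axis,
# n pixels along the vertical axis.
m, n = image.shape[1], image.shape[0]
print(f"resolution: {m} x {n}")
```

Note that array indexing is (row, column), so the horizontal pixel count m is the second dimension of the array.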
The definition of an image indicates the number of shades that you can see in the image. The bit depth of an image is the number of bits used to encode the value of a pixel. For a given bit depth of n, the image has an image definition of 2^n, meaning a pixel can have 2^n different values. For example, if n equals 8 bits, a pixel can have 256 different values ranging from 0 to 255. If n equals 16 bits, a pixel can have 65,536 different values ranging from 0 to 65,535 or from -32,768 to 32,767.
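The relationship between bit depth and definition can be checked directly:

```python
# Image definition as a function of bit depth n: a pixel can take 2**n values.
for n in (8, 16):
    definition = 2 ** n
    print(f"{n}-bit: {definition} values, unsigned range 0..{definition - 1}")

# For signed 16-bit pixels, the same 65,536 values span -32,768..32,767.
signed_min = -(2 ** 15)
signed_max = 2 ** 15 - 1
```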
NI Vision can process images with 8-bit, 10-bit, 12-bit, 14-bit, 16-bit, floating point, or color encoding. The manner in which you encode your image depends on the nature of the image acquisition device, the type of image processing you need to use, and the type of analysis you need to perform. For example, 8-bit encoding is sufficient if you need to obtain the shape information of objects in an image. However, if you need to precisely measure the light intensity of an image or region in an image, you should use 16-bit or floating-point encoding.
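A small sketch shows why floating-point encoding matters for precise intensity measurement: 8-bit encoding rounds each value to one of 256 levels, introducing a quantization error that a float representation avoids (the normalized intensity value here is a made-up example):

```python
# Hypothetical normalized light intensity in the range 0..1.
true_intensity = 0.3141592653589793

# 8-bit encoding: round to one of 256 discrete levels (0..255).
level = round(true_intensity * 255)
decoded_8bit = level / 255

# Floating-point encoding would store true_intensity essentially exactly;
# 8-bit encoding loses the difference below.
quantization_error = abs(true_intensity - decoded_8bit)
print(f"8-bit level: {level}, quantization error: {quantization_error:.6f}")
```

For shape-based inspection this error is irrelevant, which is why 8-bit encoding suffices there.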
Use color encoded images when your machine vision or image processing application depends on the color content of the objects you are inspecting or analyzing.
NI Vision does not directly support other types of image encoding, particularly images encoded as 1-bit, 2-bit, or 4-bit images. In these cases, NI Vision automatically transforms the image into an 8-bit image—the minimum bit depth for NI Vision—when opening the image file.
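NI Vision's internal conversion is not specified here, but a plausible sketch of promoting a 1-bit image to 8-bit is to scale the value range so that the two levels map to the extremes of the 8-bit range (the array values are illustrative):

```python
import numpy as np

# Hypothetical 1-bit image: each pixel is 0 or 1.
one_bit = np.array([[0, 1, 1, 0],
                    [1, 0, 0, 1]], dtype=np.uint8)

# Promote to 8-bit by scaling the value range: 0 -> 0, 1 -> 255.
eight_bit = one_bit * 255
```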
The number of planes in an image corresponds to the number of arrays of pixels that compose the image. A grayscale or pseudo-color image is composed of one plane. A true-color image is composed of three planes—one each for the red, green, and blue components.
In a true-color image, the color component intensities of a pixel are coded into three different values. An RGB image is the combination of three arrays of pixels corresponding to the red, green, and blue components. An HSL image is instead defined by the hue, saturation, and luminance values of each pixel.
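The two color representations can be related with the standard library's colorsys module (the pixel value is an illustrative assumption; colorsys works on 0..1 floats and returns hue, lightness, saturation in that order):

```python
import colorsys

# One true-color pixel: red, green, and blue component intensities,
# one 8-bit value from each of the three planes. Pure red here.
r, g, b = 255, 0, 0

# Convert the RGB components to hue/lightness/saturation.
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(f"hue={h}, lightness={l}, saturation={s}")
```

Pure red maps to hue 0, lightness 0.5, and full saturation, showing how the same pixel is described by three values in either model.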