
Document Type: Prentice Hall
Author: Scott E. Umbaugh
Book: Computer Vision and Image Processing
Copyright: 1999
ISBN: 0-13-264599-8
NI Supported: No
Publish Date: Sep 6, 2006



Image Enhancement Through Gray-Scale Modification



Image enhancement techniques are used to emphasize and sharpen image features for display and analysis. Image enhancement is the process of applying these techniques to facilitate the development of a solution to a computer imaging problem. Consequently, the enhancement methods are application specific and are often developed empirically. Figure 4.1-1 illustrates the importance of the application by the feedback loop from the output image back to the start of the enhancement process and models the experimental nature of the development. In this figure we define the enhanced image as E(r, c). The range of applications includes using enhancement techniques as preprocessing steps to ease the next processing step or as postprocessing steps to improve the visual perception of a processed image, or image enhancement may be an end in itself. Enhancement methods operate in the spatial domain by manipulating the pixel data or in the frequency domain by modifying the spectral components (Figure 4.1-2). Some enhancement algorithms use both the spatial and frequency domains.

The techniques fall into three types: point operations, where each pixel is modified according to a particular equation that does not depend on other pixel values; mask operations, where each pixel is modified according to the values of its neighbors (using convolution masks); and global operations, where all the pixel values in the image (or subimage) are taken into consideration. Spatial domain processing methods include all three types, but frequency domain operations, by the nature of the frequency (and sequency) transforms, are global operations. Of course, frequency domain operations can become "mask operations," based only on a local neighborhood, by performing the transform on small image blocks instead of the entire image.
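The three types can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the book; the tiny image and the particular operations chosen (negative, 3x3 median, histogram) are assumptions picked to represent each category.

```python
import numpy as np

# Small synthetic 8-bit image (values chosen only for illustration).
img = np.array([[ 10,  50,  90],
                [ 30, 120, 200],
                [ 60, 180, 250]], dtype=np.uint8)

# Point operation: each output pixel depends only on the corresponding
# input pixel (here, the image negative).
negative = 255 - img

# Mask operation: each output pixel depends on a neighborhood of the
# input pixel (here, a 3x3 median filter with edge replication).
padded = np.pad(img.astype(float), 1, mode="edge")
med3 = np.empty(img.shape)
for r in range(img.shape[0]):
    for c in range(img.shape[1]):
        med3[r, c] = np.median(padded[r:r + 3, c:c + 3])

# Global operation: the result depends on every pixel in the image
# (here, the gray-level histogram).
hist, _ = np.histogram(img, bins=256, range=(0, 256))
```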

Enhancement is used as a preprocessing step in some computer vision applications to ease the vision task, for example, to enhance the edges of an object to facilitate guidance of a robotic gripper. Enhancement is also used as a preprocessing step in
Figure 4.1-1 The Image Enhancement Process


applications where human viewing of an image is required before further processing. For example, in one application, high-speed film images had to be correlated with a computer-simulated model of an aircraft. The process was labor intensive because the high-speed film generated many images per second, and difficult because the images were all dark. The task was made considerably easier by enhancing the images before correlating them to the model, enabling the technician to process many more images in one session.

Image enhancement is used for postprocessing to generate a visually desirable image. For instance, we may perform image restoration to eliminate image distortion and find that the output image has lost most of its contrast. Here, we can apply some basic image enhancement methods to restore the image contrast. Alternately, after a compressed image has been restored to its "original" state (decompressed), some postprocessing enhancement may significantly improve the look of the image. For example, the standard JPEG compression algorithm may generate an image with undesirable "blocky" artifacts, and postprocessing it with a smoothing filter (lowpass or mean) will improve the appearance.
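A mean (lowpass) filter of the kind mentioned above can be sketched on a toy "blocky" image. The image and filter size here are assumptions chosen to mimic, in miniature, the abrupt block boundaries that JPEG can produce.

```python
import numpy as np

# Toy "blocky" image: two flat regions with an abrupt boundary,
# mimicking a JPEG block artifact (values are illustrative).
img = np.zeros((4, 8))
img[:, :4] = 100.0
img[:, 4:] = 140.0

# 3x3 mean (lowpass) filter with edge replication; averaging across
# the boundary replaces the hard step with a gradual transition.
padded = np.pad(img, 1, mode="edge")
smooth = np.empty_like(img)
for r in range(img.shape[0]):
    for c in range(img.shape[1]):
        smooth[r, c] = padded[r:r + 3, c:c + 3].mean()
```

Pixels well inside a flat block are unchanged (their whole neighborhood has one value), while pixels at the block boundary take intermediate values, which is what softens the blocky appearance.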

Overall, image enhancement methods are used to make images look better. What works for one application may not be suitable for another, so the development of enhancement methods requires problem domain knowledge as well as image enhancement expertise. Assessment of the success of an image enhancement algorithm is often "in the eye of the beholder," so image enhancement is as much an art as it is a science.

Figure 4.1-2 Image Enhancement


Gray-Scale Modification

Gray-scale modification (also called gray-level scaling) methods belong in the category of point operations; they work by changing each pixel's gray-level value through a mapping equation. The mapping equation is typically linear (nonlinear equations can be modeled by piecewise linear models) and maps the original gray-level values to other, specified values. Typical applications include contrast enhancement and feature enhancement.
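Because a point operation depends only on the input gray level, any mapping equation can be applied efficiently by precomputing a 256-entry lookup table and indexing it with the image. This is a sketch of that idea; the slope and offset of the example linear mapping are arbitrary choices, not values from the text.

```python
import numpy as np

def apply_mapping(img, mapping):
    """Apply a gray-level mapping equation to an 8-bit image via a LUT."""
    # Evaluate the mapping once per possible gray level (0..255) ...
    lut = np.array([mapping(r) for r in range(256)])
    # ... clamp to the valid output range, then index with the image.
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[0, 100, 200]], dtype=np.uint8)

# Example linear mapping s = 1.5 r - 20 (coefficients are illustrative).
out = apply_mapping(img, lambda r: 1.5 * r - 20)
```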

The primary operations applied to the gray scale of an image are to compress or stretch it. We typically compress gray-level ranges that are of little interest and stretch the ranges where we desire more information. This is illustrated in Figure 4.2-1a, where the original image data are shown on the horizontal axis and the modified values on the vertical axis. The linear equations corresponding to the lines shown on the graph represent the mapping equations. If the slope of the line is between zero and one, the operation is called gray-level compression; if the slope is greater than one, it is called gray-level stretching. In Figure 4.2-1a, the range of gray-level values from 28 to 75 is stretched, while the other gray values are left alone. The original and modified images are shown in Figures 4.2-1b, c, where we can see that stretching this range exposed previously hidden visual information. In some cases we may want to stretch a specific range of gray levels, while clipping the values at the low

Figure 4.2-1 Gray-Scale Modification


and high ends. Figure 4.2-1d illustrates a linear function that stretches the gray levels between 50 and 200, while clipping any values below 50 to 0 and any values above 200 to 255. The original and modified images are shown in Figures 4.2-1e, f, where we see the resulting enhanced image.
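The clip-and-stretch mapping just described can be sketched directly; the range [50, 200] comes from the text, while the sample pixel values are arbitrary. Note that the slope of the mapping, 255/(200 - 50) = 1.7, is greater than one, so the retained range is being stretched.

```python
import numpy as np

img = np.array([[20, 30, 50, 125, 200, 240]], dtype=np.uint8)

# Linear stretch of [50, 200] onto the full range [0, 255]:
# values below 50 clip to 0, values above 200 clip to 255.
lo, hi = 50, 200
out = np.clip((img.astype(float) - lo) * 255.0 / (hi - lo), 0, 255)
out = out.astype(np.uint8)
```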

Figure 4.2-1 (Continued)


Another type of mapping equation, used for feature extraction, is called intensity-level slicing. Here we are selecting specific gray-level values of interest and mapping them to a specified (typically high/bright) value. For example, we may have an application where it has been empirically determined that the objects of interest are in the gray-level range of 150 to 200. Using the mapping equations illustrated in Figures 4.2-2a, c, we can generate the resultant images in Figures 4.2-2b, d. With this type of operation, we can either leave the "background" gray-level values alone (Figure 4.2-2c) or turn them black (Figure 4.2-2e). Note that they do not need to be turned black; any gray-level value may be specified.
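Both variants of intensity-level slicing are easy to sketch. The range of interest [150, 200] comes from the example in the text; the sample image and the choice of 255 as the "bright" output value are assumptions.

```python
import numpy as np

img = np.array([[100, 150, 175, 200, 230]], dtype=np.uint8)

# Select the gray levels of interest, assumed here to be [150, 200].
in_range = (img >= 150) & (img <= 200)

# Variant 1: map the selected range to bright white, leaving the
# "background" gray-level values unchanged.
keep_bg = img.copy()
keep_bg[in_range] = 255

# Variant 2: map the selected range to white and the background to
# black (any other background value could be specified instead).
black_bg = np.where(in_range, 255, 0).astype(np.uint8)
```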

Figure 4.2-2 Intensity-Level Slicing




Excerpt from the book published by Prentice Hall Professional. Copyright Prentice Hall Inc., A Pearson Education Company, Upper Saddle River, New Jersey 07458. This material is protected under the copyright laws of the U.S. and other countries, and any uses not in conformity with the copyright laws are prohibited, including but not limited to reproduction, downloading, duplication, adaptation, and transmission or broadcast by any media, devices, or processes.