  Color Quantization Overview




Even today articles appear from time to time in computer graphics magazines dealing with algorithms that try to solve the color quantization problem more efficiently (e.g. /4/). This shows that there is still a need for such algorithms and that people continue to work on color quantization techniques. Quantization of color images is necessary whenever the display on which an image is presented supports fewer colors than the original image contains. This is the case when a true color (full color) image is displayed on a device with a color lookup table; the number of colors in the image then has to be reduced. During color reduction, the colors that appear most frequently in the image should be identified and selected, so that the substituted colors produce no or only small errors. This process is called color quantization.

Color quantization is used for previewing and for controlling the rendering process. Quantization means approximating a true intensity value with a displayable intensity value.

The objective of color quantization is to display a full color image (24 bits per pixel) with a restricted number of colors (256, 64, 16) and to approximate the original color impression as closely as possible, so that the loss is hardly perceptible to the viewer.

Generally speaking quantization can be viewed as a stepwise process:

  1. In the first step, statistics on the colors used in the image to be quantized are generated (histogram analysis).
  2. a) Based on this analysis the color look-up table is filled with values.
    b) The true color values are mapped to the values of the color table; each color value is mapped to the nearest entry of the color table.
  3. The original image is quantized: each pixel is transformed to the index of the appropriate color table entry.
  4. Optionally an error diffusion technique can be applied.
Strictly speaking, the original image is already quantized in this sense, because the input data for quantization is a rectangular array of red, green and blue color separations, typically with values in the range [0, 255] (as produced, for example, by a scanner). The algorithms discussed here regard that image as the true image and try to approximate it as closely as possible.

To assess quantization techniques the following quality criteria are considered:

  • Human Perception
  • Run Time
  • Memory Requirement
As test images for the quality measurements we used
  • a computer generated color image that is a composition of a scanned image, a computer generated figure, the picture of a butterfly and the transparently mapped EUROGRAPHICS logo, and
  • a color shade (gradient).

Each slide contains the image that is to be quantized (left upper corner), the reduced color image (right upper corner), the error distribution between the original image and the manipulated image (lower left corner), and the applied methods (lower right corner).

Different quantization methods are investigated:

  • Static Color Table
  • Median cut (/3/)
  • Popularity (/3/)
  • Octree (/2/)

and combined with error diffusion techniques:

  • Dithering (/1/)
  • Floyd Steinberg (/1/)


  1. Static color look-up table




A very simple way to solve this problem is to divide the RGB cube into equal slices in each dimension. The cross product of these color levels can be used as the entries of the color look-up table. For example, the red and the green axis can each be divided into 8 levels and the blue axis (to which the human eye is less sensitive) into 4 levels, so that 8 * 8 * 4 = 256 colors uniformly spread over the color space are available. Mapping an image value to a value of this selection is simply done by rounding every component.
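A minimal C sketch of this mapping, assuming 8-bit R, G and B input components, is given below; for simplicity it truncates each component to its level instead of rounding and uses the cell centers as palette colors. All names are illustrative and not taken from the original text.

#include <stdint.h>

/* Map a 24-bit RGB value to an index of the 8 * 8 * 4 = 256 entry table. */
static uint8_t static_table_index(uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t ri = r >> 5;   /* 8 levels for red   (3 bits) */
    uint8_t gi = g >> 5;   /* 8 levels for green (3 bits) */
    uint8_t bi = b >> 6;   /* 4 levels for blue  (2 bits) */
    return (uint8_t)((ri << 5) | (gi << 2) | bi);
}

/* The palette color belonging to an index: the center of the corresponding cell. */
static void static_table_color(uint8_t index, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = (uint8_t)(((index >> 5) & 7) * 32 + 16);
    *g = (uint8_t)(((index >> 2) & 7) * 32 + 16);
    *b = (uint8_t)((index & 3) * 64 + 32);
}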

An important drawback of this method is that artificial edges appear in the image. When a 24-bit full color image is displayed on a monitor with a color palette of up to 256 entries, visible edges are produced by very small changes in color, e.g. in smooth color shades, and have to be removed.

Even though the algorithm is very fast, the result is not acceptable.


  2. Median cut algorithm




The idea behind the median cut algorithm is to have each color of the synthesized look-up table represent an equal number of pixels of the original image. The algorithm subdivides the color space iteratively into smaller and smaller boxes.

The algorithm starts with a box that encloses all the different color values of the original image. The "size" of the box is given by the minimum and maximum of each color coordinate. To split the box we have to decide along which "side" to subdivide it: the points are sorted along the longest dimension of the box and the partitioning into two halves is made at the median point, so that approximately equal numbers of points fall on each side of the cutting plane.

This step is repeated until K boxes are generated, where K is at most the number of color entries in the available colormap. The representative color of each box is calculated by averaging the colors in that box.

Variations of the algorithm can be made by changing the criterion of where to split the box.

An alternative to sorting the color values from the minimum to the maximum value could be to look at the coordinate with the largest variance. Another alternative could be to minimize the sum of the variances of the two new boxes.
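A minimal C sketch of the subdivision step, choosing the longest box dimension as the splitting criterion, is given below; the Color type, the recursive driver and all other names are illustrative, not taken from /3/.

#include <stdlib.h>

typedef struct { unsigned char c[3]; } Color;     /* c[0] = R, c[1] = G, c[2] = B */

static int sort_channel;                          /* channel used by the comparator */

static int cmp_color(const void *a, const void *b)
{
    return ((const Color *)a)->c[sort_channel] - ((const Color *)b)->c[sort_channel];
}

/* Recursively split colors[0..n) into `boxes` boxes and append one averaged
   representative per box to *palette (advancing the pointer). */
static void median_cut(Color *colors, int n, int boxes, Color **palette)
{
    if (boxes == 1 || n <= 1) {
        long sum[3] = { 0, 0, 0 };
        for (int i = 0; i < n; i++)
            for (int k = 0; k < 3; k++) sum[k] += colors[i].c[k];
        for (int k = 0; k < 3; k++)
            (*palette)->c[k] = (unsigned char)(n ? sum[k] / n : 0);
        (*palette)++;
        return;
    }

    /* find the channel with the largest extent (maximum - minimum) */
    unsigned char lo[3] = { 255, 255, 255 }, hi[3] = { 0, 0, 0 };
    for (int i = 0; i < n; i++)
        for (int k = 0; k < 3; k++) {
            if (colors[i].c[k] < lo[k]) lo[k] = colors[i].c[k];
            if (colors[i].c[k] > hi[k]) hi[k] = colors[i].c[k];
        }
    sort_channel = 0;
    for (int k = 1; k < 3; k++)
        if (hi[k] - lo[k] > hi[sort_channel] - lo[sort_channel]) sort_channel = k;

    /* sort along that channel and cut at the median point */
    qsort(colors, n, sizeof(Color), cmp_color);
    int median = n / 2;
    median_cut(colors, median, boxes / 2, palette);
    median_cut(colors + median, n - median, boxes - boxes / 2, palette);
}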


  3. Popularity algorithm




The initial idea of that algorithm is to build the colormap by finding the K most frequently appearing colors in the original image.

For this purpose the colors are stored in a histogram. The K most frequently occurring colors are extracted and made the entries of the color table. Now the true image can be quantized.
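A minimal C sketch of the histogram step is given below; to keep memory small it assumes that the colors are reduced to 5 bits per channel (32768 bins) before counting, and all names are illustrative.

#include <stdint.h>
#include <string.h>

#define BINS (1 << 15)

/* Fills `palette` with up to k colors packed as 5-5-5 values and returns
   how many entries were actually written. `rgb` holds npixels RGB triples. */
static int popularity_palette(const uint8_t *rgb, long npixels, int k, uint16_t *palette)
{
    static uint32_t hist[BINS];
    memset(hist, 0, sizeof hist);

    /* build the histogram of the 5-5-5 reduced colors */
    for (long i = 0; i < npixels; i++) {
        uint16_t bin = (uint16_t)(((rgb[3*i]     >> 3) << 10) |
                                  ((rgb[3*i + 1] >> 3) <<  5) |
                                   (rgb[3*i + 2] >> 3));
        hist[bin]++;
    }

    /* repeatedly pick the most frequent remaining bin (simple selection;
       a partial sort would be faster for a large k) */
    int found = 0;
    for (; found < k; found++) {
        uint32_t best = 0;
        int best_bin = -1;
        for (int b = 0; b < BINS; b++)
            if (hist[b] > best) { best = hist[b]; best_bin = b; }
        if (best_bin < 0) break;      /* fewer than k distinct colors */
        palette[found] = (uint16_t)best_bin;
        hist[best_bin] = 0;           /* exclude it from further rounds */
    }
    return found;
}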

The only problem that has still to be solved is how to map the other colors that appear in the original image.

To keep the error small we have to apply a method that finds, for the color of the current pixel, the nearest of the K most frequently used color values. In general, each pixel therefore has to be tested to find the shortest distance to one of these K color values.

The error is measured as the squared distance in Euclidean space:


d(quant, orig) = (quant_r - orig_r)^2 + (quant_g - orig_g)^2 + (quant_b - orig_b)^2,

with quant and orig as color triples in the RGB color space.

The brute force method of computing the distance between a particular pixel value and all K representatives is time consuming (exhaustive search).

To make the algorithm practical, searching for the nearest neighbour must be fast. Basically, distance tests against all values of the color look-up table have to be performed, or better, a smaller list of potential candidates that minimize d(x, y) can be preselected.

This can be achieved by generating an N x N x N lattice of cubical cells in the color space. Each cell contains the subset of the K most frequently used color values of the true image that fall within that cell. In addition, further values from the K most frequently used colors are considered when their distance to the cell is smaller than a distance d; d is computed as the distance between the candidate nearest to the center of the cell and the farthest corner of the cell (locally sorted search).

Thereby computation time can be saved. The amount of memory can be reduced by avoiding the computation and management of unused cells.

This algorithm (nearest neighbor algorithm) works as described below:


/* currpixel holds the color values of the current pixel */
dmin = MAXINT
/* nearest_candidate: color value and index into the look-up table */
nearest_candidate = NULL
while ( list_of_candidates is not empty )
{
    candidate          = HEAD of list_of_candidates
    list_of_candidates = TAIL of list_of_candidates
    distance           = d(currpixel, candidate.Color_value)
    if (distance < dmin)
    {
        dmin              = distance
        nearest_candidate = candidate
    }
}
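For comparison, a minimal C sketch of the exhaustive (brute force) variant that tests all K palette entries is given below; the names are illustrative.

typedef struct { unsigned char r, g, b; } RGB;

static int nearest_palette_index(RGB pixel, const RGB *palette, int k)
{
    long dmin = 3L * 255 * 255 + 1;              /* larger than any possible distance */
    int  best = 0;

    for (int i = 0; i < k; i++) {
        long dr = (long)pixel.r - palette[i].r;
        long dg = (long)pixel.g - palette[i].g;
        long db = (long)pixel.b - palette[i].b;
        long d  = dr * dr + dg * dg + db * db;   /* squared Euclidean distance */
        if (d < dmin) { dmin = d; best = i; }
    }
    return best;
}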


  4. Octrees




The third quantization method we would like to introduce is an approach known as octree quantization. The idea is to build a tree structure that always contains at most K different colors. If a further color is to be added to the tree, its color value is merged with the most similar one already in the tree, and both values are replaced by their mean.

The most important data structure is the octree node. Each inner node of the octree contains up to eight successors; the leaf nodes keep the color value (colorvalue), the color index (colorindex), and a counter (colorcount) for the pixels that have already been mapped to that leaf. Because each of the red, green and blue values lies between 0 and 255, the maximum depth of the tree is eight. At level i, bit i of each RGB component is used as the selector for the successors.
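A possible C layout of such a node is sketched below; the field names follow the terms used in the text (colorvalue, colorindex, colorcount), while everything else is an assumption of this sketch.

typedef struct OctreeNode {
    struct OctreeNode *successor[8];   /* NULL where no child exists            */
    unsigned long      colorvalue[3];  /* summed R, G, B of the mapped pixels   */
    unsigned long      colorcount;     /* number of pixels mapped to this leaf  */
    int                colorindex;     /* entry in the color look-up table      */
    int                is_leaf;
} OctreeNode;

/* Selector for the successor at tree level `level` (0 = root): one bit of each
   of R, G and B forms a 3-bit child index. */
static int octree_child(unsigned char r, unsigned char g, unsigned char b, int level)
{
    int bit = 7 - level;
    return (((r >> bit) & 1) << 2) | (((g >> bit) & 1) << 1) | ((b >> bit) & 1);
}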

The octree is constructed while reading the image that is to be quantized. Only those parts of the octree that are really needed are created. Initially the first K values are represented exactly (at level eight). When the number of leaf nodes (currentK) exceeds K, the tree has to be reduced, which means that leaves at the largest depth are replaced by their predecessor:


children = 0
for successor[i = 1..8] do
{
    if successor[i] != NULL then
    {
        children = children + 1
        add the colorvalues and the colorcount of successor[i] to this node
        successor[i] = NULL
    }
}
currentK = currentK - children + 1
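For the node layout sketched above, the reduction step could look as follows; it assumes the successors of the node are leaves, and the global counter currentK as well as the function name are illustrative.

#include <stdlib.h>   /* free() */

static int currentK;

static void octree_reduce(OctreeNode *node)
{
    int children = 0;

    for (int i = 0; i < 8; i++) {
        OctreeNode *s = node->successor[i];
        if (s != NULL) {
            children++;
            node->colorvalue[0] += s->colorvalue[0];   /* merge the child's sums */
            node->colorvalue[1] += s->colorvalue[1];
            node->colorvalue[2] += s->colorvalue[2];
            node->colorcount    += s->colorcount;
            free(s);
            node->successor[i] = NULL;
        }
    }
    node->is_leaf = 1;                         /* the node becomes a leaf itself */
    currentK = currentK - children + 1;
}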

In the end the octree for the entire image has been generated. The K leaves are used as the entries of the color look-up table; the representative color of a leaf is computed as its summed color values divided by the color count (the mean). The color index is also stored in the octree.

The quantization of the image is performed in a second step. Again the image has to be read. The color values of each pixel are used for traversing the octree data structure. The search along a path through the tree is finished when a leaf node is reached.


  5. Error Distribution




Applying quantization algorithms still results in errors. Even if the chosen color value is as close to the original as possible, it is only an approximation of the true color value. This is especially visible to the viewer where discontinuities appear: where the original image contains only small color changes (e.g. in a color shade), the quantized image may show clearly different color values. That is the reason why a quantized image has to be post-processed by error distribution.


  5.1 Error Diffusion via a Dither Matrix




The basic strategy of dithering is to trade intensity resolution for spatial resolution. Originally, dithering was developed to increase the apparent color resolution of a display without reducing the spatial resolution.

A dither matrix is applied to an image that is displayed on a monitor which does not have the full color spectrum available but only a selectable subset. By averaging the intensities of neighbouring pixels one can get the impression of colors that are not represented in the colormap; the human eye performs the spatial blending if the resolution of the display is high enough. For this purpose each color value is compared with a threshold from the dither matrix. This works as long as the viewer keeps a certain distance from the monitor, which is the normal case. Experiments with a monitor that has only a color look-up table with 256 entries show good results.

For the 2 x 2 size, the dither matrix, called D(2), is
| 0 2 |
| 3 1 |
Larger dither matrices can be produced by applying a recurrence relation to compute D(2n) from D(n), with the matrix U(n) defined as an n x n matrix of 1s, that is,
| 1 1 1 ... 1 |
| 1 1 1 ... 1 |
| 1 1 1 ... 1 |
| .  .  .  ... . |
| 1 1 1 ... 1 |
The recurrence relation is

D(n) = | 4*D(n/2) + D(2)_00 * U(n/2)   4*D(n/2) + D(2)_01 * U(n/2) |
       | 4*D(n/2) + D(2)_10 * U(n/2)   4*D(n/2) + D(2)_11 * U(n/2) |
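A small C sketch of this recurrence, generating D(n) for n a power of two, is given below; the function name and the fixed maximum size are assumptions of the sketch.

#define DMAX 16   /* largest supported matrix size */

static void build_dither_matrix(int n, int D[DMAX][DMAX])
{
    static const int D2[2][2] = { { 0, 2 },
                                  { 3, 1 } };
    if (n == 2) {
        for (int y = 0; y < 2; y++)
            for (int x = 0; x < 2; x++) D[y][x] = D2[y][x];
        return;
    }

    int half = n / 2;
    build_dither_matrix(half, D);   /* D(n/2) now occupies the upper left corner */

    /* fill the four quadrants according to the recurrence relation */
    for (int y = half - 1; y >= 0; y--)
        for (int x = half - 1; x >= 0; x--) {
            int d = D[y][x];
            D[y][x]               = 4 * d + D2[0][0];   /* upper left  */
            D[y][x + half]        = 4 * d + D2[0][1];   /* upper right */
            D[y + half][x]        = 4 * d + D2[1][0];   /* lower left  */
            D[y + half][x + half] = 4 * d + D2[1][1];   /* lower right */
        }
}

For n = 4 this produces exactly the DitherMatrix used in the code below.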

The entire algorithm is described below:

void set_color_palette( )
{
    /* 8 red * 8 green * 4 blue levels = 256 palette entries,
       DAC component values in the range 0..63 */
    int r, g, b, colorindex = 0;

    for (r = 0; r < 64; r += 9)
        for (g = 0; g < 64; g += 9)
            for (b = 0; b < 64; b += 21)
            {
                setrgbpalette(colorindex, r, g, b);
                colorindex = colorindex + 1;
            }
}

void put_dither_pixel(int x, int y, unsigned char *pix)
{
    unsigned char r, g, b;
    unsigned char ri, gi, bi;
    unsigned char DitherMatrix[4][4] =
        { {  0,  8,  2, 10 },
          { 12,  4, 14,  6 },
          {  3, 11,  1,  9 },
          { 15,  7, 13,  5 } };
    unsigned char Ditherval;

    /* threshold from the 4 x 4 dither matrix, selected by the pixel position */
    Ditherval = DitherMatrix[x & 3][y & 3];

    /* reduce the 8-bit components to the 6-bit DAC range 0..63 */
    r = pix[0] >> 2;
    g = pix[1] >> 2;
    b = pix[2] >> 2;

    /* map each component to its palette level; drop to the next lower level
       when the (scaled) remainder within the cell does not exceed the threshold */
    ri = (r >> 3);
    if (ri != 0 && ((r & 7) << 1) <= Ditherval) ri--;
    gi = (g >> 3);
    if (gi != 0 && ((g & 7) << 1) <= Ditherval) gi--;
    bi = (b >> 4);
    if (bi != 0 && (b & 15) <= Ditherval) bi--;

    /* combine the three levels into the 8 x 8 x 4 palette index */
    putpixel(x, y, ((ri << 5) | (gi << 2) | bi) & 255);
}


  5.2 Floyd Steinberg




The error, that is the difference between the exact pixel value and the approximated value actually displayed, is spread to the color values of four pixels below and to the right of the pixel in question: 7/16 of the error is added to the pixel to the right, 3/16 to the pixel below and to the left, 5/16 to the pixel immediately below, and 1/16 to the pixel below and to the right. Figure XX was created using this method. To avoid unwanted effects from diffusing the error over several pixels, we have to ensure that the total error distributed to the four pixels is exactly the error that occurred. Therefore the fourth pixel receives the error minus the sum of the already distributed shares:
fourtherror = error - 15/16 * error = 1/16 * error.

Even better results can be achieved by alternating the scan direction and the error diffusion from left to right and vice versa (serpentine scanning).
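A minimal C sketch of the basic left-to-right diffusion for a single channel is given below; the quantize callback and all other names are illustrative, and an RGB image would be processed per component.

/* img is a w x h single-channel image; quantize() returns the displayable
   value nearest to its argument. */
static void floyd_steinberg(float *img, int w, int h, float (*quantize)(float))
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            float old_val = img[y * w + x];
            float new_val = quantize(old_val);
            float err     = old_val - new_val;          /* error made at this pixel */
            img[y * w + x] = new_val;

            /* spread the error: 7/16 right, 3/16 below left,
               5/16 below, 1/16 below right */
            if (x + 1 < w)     img[y * w + x + 1]           += err * 7.0f / 16.0f;
            if (y + 1 < h) {
                if (x > 0)     img[(y + 1) * w + x - 1]     += err * 3.0f / 16.0f;
                               img[(y + 1) * w + x]         += err * 5.0f / 16.0f;
                if (x + 1 < w) img[(y + 1) * w + x + 1]     += err * 1.0f / 16.0f;
            }
        }
}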

The Floyd-Steinberg approach is a serial process in which any pixel value affects all the pixels to the lower right of that pixel in the image. Ordered dither techniques, in contrast, localize the effect of any single pixel.

One big drawback of this approach is that the error made by approximating the accurate values is accumulated over the entire image. All errors made at the pixels to the left of and above the pixel in question have to be considered again and can lead to a significantly larger local error. Such an error disturbs the impression of the complete image more than making more frequent but smaller errors in approximating the color values of individual pixels.


  Conclusion




Generally speaking, the procedure of finding the best entries for the look-up table is time and memory consuming. Better quantization results require more run time and memory (e.g. the octree algorithm). It also has to be considered that, in a larger context, the best color quantization for a specific picture may conflict with the default color values of the system and perhaps also with the optimized quantization of another picture.

If the algorithms are evaluated in terms of resource consumption (memory and computation time) and quality (what does the user notice when comparing the original image with the color-reduced image?) with respect to the results obtained on the test images, the following assessment can be given:

Octree algorithm
+ On the one hand the algorithm leads to the best results but on the other hand
- it is very memory and time consuming.

The algorithm is well suited for images that have to be displayed with the best quality when the run time of the quantization process does not matter. Artifacts may appear as local discontinuities in a color shade. A combination of the octree algorithm with the dithering technique is not advisable because the color table entries are not equidistant. In comparison with the other algorithms this approach needs a larger implementation effort.

Static color table in combination with dithering
+ This approach gives very reasonable results with respect to quantization errors and
+ the relation between computation time, memory requirements and the achieved result is the best.

The combination of a static color table with dithering works well for interactive work and fast viewing of arbitrary images.

Static color table in combination with Floyd Steinberg
+ The combination of a static color table and Floyd Steinberg error diffusion is also fast,
- but it leads to visible artifacts, especially at low display resolutions, that are very disturbing.

Median cut algorithm
- It leads to discontinuities.

Popularity algorithm
- This algorithm also leads to discontinuities.
- In comparison with a static color table combined with dithering it leads to extremely bad results. In particular, color values that are not used very often in the original image are not mapped to another, similar color value (for example look at the butterfly in the test image).

Quantization can be immensely improved by applying error distribution techniques.

Sometimes efforts are made to speed up the process: the above mentioned methods are modified to improve computation time and/or reduce memory requirements.

Examples for this are:

  • The popularity algorithm can gain further speed when only a random sample of the pixels of the image to be quantized is considered (but this may reduce quality!).
  • The median cut algorithm can be improved by stopping the subdivision into sub-boxes when the human eye is no longer able to distinguish between the resulting colors. The algorithm can also take into consideration that the human eye can distinguish more red shades than blue shades and therefore make more cuts along the red axis than along the blue axis of the color cube.

Raster Display and Interaction
For rapid previewing of rendering and image processing results, a static color table combined with dithering seems to achieve good results even on displays with low resolution.

It also allows working interactively with the system. Error distribution with Floyd Steinberg leads to visual artifacts (locally large errors).

Bibliography

/1/ J. D. Foley, A. van Dam, S. K. Feiner, J. F. Hughes, Computer Graphics - Principles and Practice, Second Edition, Addison-Wesley, 1990, ISBN 0-201-12110-7.
/2/ M. Gervautz and W. Purgathofer, A Simple Method for Color Quantization: Octree Quantization, in New Trends in Computer Graphics, N. Magnenat-Thalmann and D. Thalmann, eds., Springer-Verlag, Berlin, 1988, pp. 219-231.
/3/ P. Heckbert, Color Image Quantization for Frame Buffer Display, Computer Graphics (Proc. SIGGRAPH), Vol. 16, No. 3, July 1982, pp. 297-307.
/4/ Z. Xiang and G. Joy, Color Image Quantization by Agglomerative Clustering, IEEE Computer Graphics and Applications, May 1994, pp. 44-48.