Anyone who has ever matched paint colors knows that color matching can be a tedious and subjective process. Performing that task for the 24,576 pixels on the CoCo screen will require some automation. The first step toward automating that selection is to obtain a mathematical definition for each available color, thereby transforming the subjective process of color matching into an objective one.
In lieu of better information, it would be tempting simply to guesstimate the RGB values for the colors generated by the VDG. One might presume that each color is relatively close to an easily defined position in the RGB color space, and such a definition might even be close enough to achieve reasonable mappings. Fortunately, such a slipshod process is unnecessary -- some MESS folks figured out a more refined palette definition based on mathematics and the VDG's datasheet.
The math required to compare two colors might not be obvious to everyone -- it wasn't originally obvious to me. After all, is "red" closer to "purple" or to "orange"?
The RGB mapping of each color can be treated as a 3-dimensional coordinate in the RGB "color space". Once you wrap your head around that, the solution becomes clearer -- colors located nearest to one another in a color space are the colors that match each other best. The Euclidean distance between the RGB values for each color is used to determine which of the colors in the CoCo's palette is the best match for a color in the image being converted.
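The nearest-color search described above can be sketched in a few lines of Python. The function names here are my own for illustration; the actual converter may structure this differently.

```python
import math

def color_distance(c1, c2):
    # Treat each color as a 3-D coordinate and compute the Euclidean distance
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def nearest_color(pixel, palette):
    # The best match is simply the palette entry nearest to the pixel
    return min(palette, key=lambda c: color_distance(pixel, c))

# Example: a reddish pixel matched against a tiny three-color palette
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0)]
print(nearest_color((200, 10, 10), palette))  # → (255, 0, 0)
```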
Now, get this -- there is more than one color space available to model color. This seems like an odd curiosity until one realizes that each color space emphasizes different aspects of color. This means that for any given pair of colors, the relative distances between those colors will differ depending on which color space is used to model them. I find that the emphasis on luminance in the YIQ color space gives it the best color matching results on the CoCo, so I convert my RGB color values to YIQ values before doing color matching comparisons.
I mentioned dithering in an earlier post. Dithering can be messy and distracting, and for old folks it looks a bit like fuzzy analog television reception. But, it can be very effective at improving color perception for a given image. So, I tend to dither my images while converting them for the CoCo.
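The post doesn't specify which dithering algorithm is in use, but classic Floyd-Steinberg error diffusion is a representative example of the technique: each pixel's quantization error is spread to its not-yet-processed neighbors, which is what creates that fuzzy, perception-improving texture. A minimal sketch, assuming pixels are stored as a 2-D list of RGB tuples:

```python
def floyd_steinberg_dither(pixels, width, height, nearest):
    # `nearest` maps any color to the closest available palette entry.
    for y in range(height):
        for x in range(width):
            old = pixels[y][x]
            new = nearest(old)
            pixels[y][x] = new
            err = tuple(o - n for o, n in zip(old, new))
            # Classic Floyd-Steinberg weights: right 7/16, below-left 3/16,
            # below 5/16, below-right 1/16
            for dx, dy, w in ((1, 0, 7/16), (-1, 1, 3/16),
                              (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    pixels[ny][nx] = tuple(
                        p + e * w for p, e in zip(pixels[ny][nx], err))
    return pixels
```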
The discussion above is just as applicable to a statically configured VDG mode as it is to the 8-color mode we have been describing. But the 8-color mode has one more wrinkle -- the palette must be set correctly for each block of pixels. I don't know of any clever way to predict which palette option is going to be the best for a given set of pixels. So, I simply do two sets of color matches for each group of pixels!
Since I can do eight palette changes for each line, I divide each line into 8 groups of 16 pixels each. I then do the color matches and calculate the accumulated color error for each set. After that, I choose the palette that produces the least total color error for each set of pixels. Finally, I record which palette I chose for each set and store the corresponding image data to the output. I later emit assembly code to perform the palette switching at the appropriate times for each line of the display. In effect, I compile the image into a binary program for the CoCo.
|Test Image In 8 On-Screen Colors|
Initially I was concerned that the 16-pixel groups on each line would result in obvious blocks of colors from each palette on different parts of the screen. But, I think that the ability to choose different palette combinations on each line combines with the dithering to mitigate any tendency to group colors on the screen.
Well, things are looking a bit better. But I think that we can still improve upon these results -- eight colors is still a bit paltry. Next time we will look at a way to combine a couple of CoCo video modes to extend the CoCo palette even further.