Sunday, July 29, 2012

Big Reveal

I think we have had enough background now.  Let's move on to see what our efforts have brought!

Mixing It Up

The basic idea is to use two video modes to display somewhat different versions of the same original image.  By alternating between these two images, we should produce an image that our eyes will blend into something that better represents the original than either does by itself.

The first implementation of this idea involved simply taking the output from my earlier efforts and modifying the code a bit to implement the continuous mode switching.  The modifications were simple: relocate the data so that both screens could be resident in memory at once, and chain the code sections together so that each mode would be displayed in turn for 1/60th of a second.  In this implementation, none of the image data generation was changed at all -- both modes were still independent attempts to reproduce the original image.

Mode Swapping w/ Independent Images

The combination of the two "best effort" images does yield an improved result.  But what we really want is to use the second image to compensate for quantization error in the first image.  As it stands, the combined color for any given pixel is often limited to being only as good as the best match for that pixel in either of the two "best effort" images.  Only if the original pixel's color lies between the colors in the "best effort" images will the combined image's pixel actually be an improvement.  Otherwise, in most cases the resulting color for that pixel will be worse than that of the same pixel in the better of the "best effort" images!
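A toy illustration makes the limitation concrete.  This sketch uses a single hypothetical 8-bit channel (not the actual palette values) and assumes the eye blends two rapidly alternated intensities to roughly their average:

```python
# Toy illustration with a single hypothetical 8-bit channel: the eye
# blends two rapidly alternated intensities to roughly their average.

def perceived(a, b):
    return (a + b) / 2

original = 100

# If the two independent "best effort" matches straddle the original,
# the blend improves on either frame alone:
blend = perceived(80, 140)        # 110.0 -- error 10 vs. 20 or 40

# But if both matches land on the same side of the original, the
# blend is worse than the better frame by itself:
blend_bad = perceived(140, 160)   # 150.0 -- error 50 vs. 40

print(blend, blend_bad)
```

When the two matching steps run independently, nothing prevents them from landing on the same side of the original, so the second case is common.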

To improve this, I decided to make the image data generated for one mode depend on the image data generated for the other.  The first mode's image is still generated in a "best effort" fashion.  Once that image is generated, the color error for each pixel is calculated, doubled, and applied to the input for the generation of the second image.  This distorts the second image in a way that brings the combined image closer to the original than either "best effort" image would be likely to achieve.
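A minimal sketch of that step, assuming a single channel and hypothetical palette level sets (stand-ins, not the real VDG colors).  Since the eye averages the two frames, applying the doubled error relative to the first image steers the blend back toward the original:

```python
# Sketch of the error-redistribution idea on a single channel.
# SEMI_PALETTE / CG_PALETTE are hypothetical level sets, not the
# real VDG colors; the actual code works on full images in YIQ.

SEMI_PALETTE = [0, 85, 170, 255]     # stand-in "semi-graphics" levels
CG_PALETTE = [0, 64, 128, 192, 255]  # stand-in "color graphics" levels

def nearest(value, palette):
    """Best-effort match: the palette entry closest to value."""
    return min(palette, key=lambda p: abs(p - value))

def render_pair(original):
    # First image: plain best-effort match.
    first = nearest(original, SEMI_PALETTE)
    # The eye averages the two frames, so the second frame must
    # overshoot by the full error: first + 2*(original - first)
    # equals 2*original - first, the value whose average with
    # 'first' would reproduce the original exactly.
    target = first + 2 * (original - first)
    second = nearest(target, CG_PALETTE)
    return first, second

first, second = render_pair(100)
print(first, second, (first + second) / 2)   # blend lands near 100
```

For an original of 100, the best-effort frame lands on 85 (error 15), the distorted target becomes 115, the second frame matches 128, and the blend comes out at 106.5 -- closer than either frame alone.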

Given the above, one must wonder which mode's image should be generated first.  The "semi-graphics" mode is both lower resolution and less flexible in its color placement.  To me, it seems like a good bet to leave that in the role of "best effort" image.  The "color graphics" mode's higher resolution and its somewhat greater flexibility in placing colors should make it more effective at matching the distorted second image.  The above is conjecture, so YMMV -- I am pleased with the results I have gotten so far.

Mode Swapping w/ Error Redistribution

Fighting Flicker

Flicker is an inevitable by-product of this technique, but it is still a distraction.  Even if we can't avoid it entirely, we should attempt to minimize it.

AFAICT, flicker is more prominent when the luminance (aka "brightness") of the color on one screen differs significantly from the luminance of the color in the same position on the other screen.  Obviously, large areas of such combinations produce more noticeable flicker as well.

As I mentioned previously, I am already using the YIQ color space for matching colors.  The 'Y' part corresponds to luminance, and it is the biggest factor for matching colors in that space.  Hopefully this is already helping to reduce luminance differences between the two screens.  In any case, I don't have any other ideas to address this part of the problem...
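For illustration, here is one way to do nearest-color matching in YIQ using Python's standard colorsys conversion.  The palette RGB triples and the extra Y weight are hypothetical stand-ins -- not the actual VDG colors or whatever weighting my code uses:

```python
# Nearest-palette matching in YIQ using Python's standard colorsys
# conversion. The palette RGB triples and Y_WEIGHT are hypothetical
# stand-ins, not the actual VDG colors or the project's real weighting.
import colorsys

PALETTE = [
    (0.0, 0.0, 0.0),  # black
    (0.0, 1.0, 0.0),  # green
    (1.0, 0.0, 0.0),  # red
    (1.0, 1.0, 0.5),  # buff-ish
]

Y_WEIGHT = 2.0  # extra penalty on luminance mismatch (guessed value)

def yiq_distance_sq(c1, c2):
    y1, i1, q1 = colorsys.rgb_to_yiq(*c1)
    y2, i2, q2 = colorsys.rgb_to_yiq(*c2)
    return Y_WEIGHT * (y1 - y2) ** 2 + (i1 - i2) ** 2 + (q1 - q2) ** 2

def best_match(color):
    return min(PALETTE, key=lambda p: yiq_distance_sq(color, p))

print(best_match((0.1, 0.9, 0.1)))   # a bright green matches green
```

Weighting the Y term more heavily penalizes luminance mismatches, which should also discourage pairings of bright and dark pixels across the two screens -- exactly the combinations that flicker worst.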

It is notable that the VDG uses different border colors for "semi-graphics" (black border) and "color graphics" (black or buff, depending on CSS value) modes.  By default, switching between these modes produces a large flickery mess around the border of the screen.  Fortunately, clever timing allows for reconfiguring the VDG so that a single mode is always selected while the border is being drawn.  The images above use this technique to minimize flicker on the screen.

Winding Down

The end of July is approaching, and with it will come the end of the Retrochallenge event.  I'm fairly happy with where things stand, so I probably won't be adding much more to this project between now and then.  I probably will try to clean up some code, push out a git tree, and maybe write a little "wrap-up" post...stay tuned!


  1. Very impressive! I would love to see you apply this to some of the CoCo 3 modes (I know this has been done to an extent, but I'd love to see your take on it)...

    1. I should try a flicker mode between two screens of the CoCo3 256-color composite video mode. That should allow for more than 32k colors...hmmm... :-)

  2. This comment has been removed by the author.

    1. This was originally the reply to the other comment -- didn't realize that deleting it would leave a "this comment has been removed" scar on the page...

  3. What fun John. When I wrote my 2 games (Chess and Checkers), the CoCo I had was only a wire wrapped prototype. Even if I could spare the bytes, I wouldn't have risked "tricks" going into the ROM cartridges.

    Brings back memories, though.

    Strangely, I never owned a CoCo myself and the prototype went back to Tandy.

  4. Peter, it's great to hear from you! I spent tons of time with your Chess cartridge back when I was a kid. I remember being really excited the first time I managed to beat a _computer_ -- wow!! :-)

    Anyway, I'm glad to think that my stuff has brought you a little smile in return -- you earned it!

  5. John,

    If you alternate the two images scan-line by scan-line, then the overall image brightness difference is minimized, which reduces the apparent flicker.