# Dice, Rubik's Cubes, and Error Diffusion

During this lockdown I was mulling over another math-related art project I could do: something more sophisticated than a jigsaw puzzle, but similarly cathartic, and an excuse to write some code.
About a year ago I engaged in a similar project, making a string selfie. I’m pleased with how the string project worked out (and how it inspired others to give it a go).
Stimulated by a few things I saw on the internet, the challenge I landed on was dice art but, as we will see, the project morphed a little.

## Dice Art

The concept of dice art is pretty simple: take your image of choice, pixelate it down to the desired resolution, then gray scale it.
Regular dice have six sides, with between one and six spots, and they tessellate perfectly in a flat plane. When you look at a die, the different sides have different ‘brightness’ levels: the more spots a face has, the darker its appearance (assuming we are using white dice with black spots).
To a first-order approximation, a two-spot face is twice as dark as a one-spot face, a three-spot three times as dark, and so on all the way up to six.
We can process our source image, normalizing and quantizing its color range down to just six shades of gray, with each die representing one pixel (the whitest pixels become one-spot faces, and the darkest become six-spot faces). To do this, we average the brightness of all the source pixels under the space each die will cover, then select the appropriate face of the die to represent that pixel in the destination image.
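As a sketch, here is how that averaging-and-quantizing step might look in Python with NumPy (the function name and the `die_px` block-size parameter are my own inventions for illustration, not from any dice-art library):

```python
import numpy as np

def image_to_dice(gray, die_px):
    """Map a grayscale image (2-D array, values 0-255) to die faces 1-6.

    Each die covers a die_px x die_px block of source pixels; the block's
    average brightness is quantized so the brightest blocks become
    one-spot faces and the darkest become six-spot faces.
    """
    h, w = gray.shape
    rows, cols = h // die_px, w // die_px
    faces = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = gray[r*die_px:(r+1)*die_px, c*die_px:(c+1)*die_px]
            # 255 (white) -> 1 spot, 0 (black) -> 6 spots
            faces[r, c] = 6 - int(block.mean() / 256 * 6)
    return faces
```

The returned array is the shopping list: count the occurrences of each face value and you know how many of each die you need to buy.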

Image: photomosaic
All we need to do next is go online and buy a metric tonne of bulk dice, then spend many hours mosaicking them together to create our desired piece of art.
Before buying, or placing dice, I wanted to model what the results would look like.

## Threshold

My first attempt to convert the image to six shades of gray was with simple thresholding: each pixel is snapped to whichever of the six quantized values it is nearest to (for the computer programmers, with a power-of-two palette this is equivalent to an arithmetic shift right on binary numbers; we divide, and the less significant bits of each pixel's value are lost in the process).
This technique minimizes the error of any single pixel: each pixel is mapped to the die face whose brightness is closest to its own.
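A minimal sketch of this per-pixel thresholding, assuming a 0–255 grayscale input and evenly spaced output levels (the function name is my own):

```python
import numpy as np

def threshold_quantize(gray, levels=6):
    """Snap each pixel (0-255) independently to the nearest of `levels`
    evenly spaced gray values. No neighbor information is used, so any
    rounding error is simply thrown away."""
    step = 255 / (levels - 1)
    return np.round(np.asarray(gray) / step) * step
```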
I tried this out, and there is no polite way to say it: the results are garbage! You lose all the high-frequency subtlety of the image and end up with massive patches of identical values in vast seas of sameness. It’s like getting into a time machine and looking at a four-color CGA graphics game (this probably dates me).
Sadly, a little internet research suggests this is exactly how many people seem to have made their dice art. They printed out the desired image in gray scale at 1:1, glued the printout to a backing board, then eyeballed each die to place by judging the average darkness of the square of printout underneath it. It takes real talent to consistently grade each square and average out the darkness, and it is certainly a technique that implements the threshold algorithm described above, but it produces poor results (as we will see demonstrated a little later in this article).
The problem with the threshold implementation is that it throws lots of data away; any difference between the color we want and the color we get is simply lost. We can do better with a technique called dithering, but first let’s take a little detour into the world of newspapers.

## Newspapers and halftones

Many newspapers, and many books, are printed in just black (and white). Ink is either on the paper (shows black), or it is not (shows the paper through, typically something close to white). This is great for text and high-contrast diagrams, but hopeless for analog images such as photos. Yet if you look at a photo in a black and white print newspaper, it certainly appears gray scaled and analog, so what is going on? The newspaper is not printing with hundreds of different shades of gray ink; it is just putting down black ink (or not). (Below is an old picture of me taken from a local newspaper; I think I’m about seven or eight years old. I’m the one on the left).
If you were to zoom in on the image below you would see it is actually made of a collection of thousands of dots of ink of varying sizes. Areas that need to be dark host big dots of ink, and areas that are lighter have much smaller dots. Because the dots are very tiny, our eyes/brain merge them, and we interpret the image as a smooth, continuous-tone image.
Below is a test pattern showing the technique (in this example using white dots, not black). In the top left you can see the desired image 1:1 and then the blown-up version showing the dots.
This technique is called halftoning. It is similar to the skill used by talented engraving artists to give the illusion of contrast on coins, metal work, and print blocks by varying the density of the lines they carve. William Henry Fox Talbot, a scientist and photography pioneer, is credited with the idea of halftone printing.
Sadly, we can’t use halftoning for the dice project as we can’t vary the size of the dice (and the dice are pretty big compared to the pixels we’re using). We also don’t have the luxury of all-black-face dice (even if we could get them in a wide range of sizes), and different sized dice would not tessellate either.
We can, however, leverage the principle where the eyes/brain average out adjacent pixels to produce a more continuous image. We’ll use a technique called error diffusion.

## Error Diffusion

With error diffusion, we don’t eliminate information, we share it around. The process starts out as before; we have a value for the pixel in the source image that we’d ideally like. Typically, we can’t match it exactly, so we pick the nearest we can find. Next, we take the error (delta) between the brightness of the pixel we actually place, and the brightness we wanted, and share this error (propagate it) to pixels that are adjacent. In this way, on average, we’ve got the same brightness, but spread over a few pixels. We don’t lose any energy from the system.
There are various matrices and error propagation formulas, but one of the most commonly used is called Floyd-Steinberg dithering, first published in 1976 by Robert W. Floyd and Louis Steinberg. It was exceptionally popular in the early days of computing when graphics displays were often limited to 256 colors, not the millions we are typically blessed with today.
It’s a pretty simple system, and below is how it works. You take the error (the difference between the color you wanted and the color you selected), then give 7/16 of this error to the pixel on the right (when scanning top-to-bottom, left-to-right), 5/16 to the pixel immediately below, 3/16 to the one below-left, and 1/16 to the one below-right. Notice that 7+5+3+1=16, so all of the error gets propagated and blended into the adjacent pixels.
Because the algorithm moves top-to-bottom and left-to-right, and only distributes the error forward and down, it never needs to revisit a pixel it has already processed. It can process the image in a single pass, and the entire image does not need to be held in memory, as it only works on two lines at a time. Finally, the division is by 16, which is very easy for early CPUs to perform (a simple shift). You can see why this is an extremely popular algorithm. The exact weights were also carefully chosen so that a 50% gray dithered to black and white gets converted to a black and white checkerboard.
This rob-Peter-to-pay-Paul strategy creates dithering patterns that, because of how the eye/brain averages them out, give the illusion of more continuous shades of gray. It works spookily well.
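The whole algorithm fits in a few lines. Below is a minimal, unoptimized Python/NumPy sketch of Floyd-Steinberg as described above; the function name and the evenly spaced palette are my own choices:

```python
import numpy as np

def floyd_steinberg(gray, levels=2):
    """Floyd-Steinberg error diffusion on a grayscale image (0-255).

    Scans top-to-bottom, left-to-right; each pixel is snapped to the
    nearest of `levels` evenly spaced gray values and the quantization
    error is shared 7/16 right, 3/16 below-left, 5/16 below, and
    1/16 below-right.
    """
    img = np.asarray(gray, dtype=float).copy()
    h, w = img.shape
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = round(old / step) * step     # nearest palette entry
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```

Note that error falling off the right and bottom edges is simply dropped here; more on that in the gotchas at the end.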

## Test Images

Let’s look at both the thresholding and the Floyd-Steinberg algorithms in action to compare them. We’ll use six test images.
The first is a flower photo taken by Christina Rutz; next we have a couple of classic paintings, followed by a girl in Glacier National Park, and finally a test card selfie (with gray scale strips up the side).

## Two colors

How does each of these images look in just two colors using the techniques described above? Below are the results. On the left is the original full-color source (at a browser-friendly resolution of 300×300 pixels). In the middle is the two-color thresholded image; to generate it, the source image was gray scaled to values 0–255, and anything below 127 is rendered as a black pixel, everything else as white. On the right is the Floyd-Steinberg dithered version using error diffusion.
(Click on any of the gray scaled versions to bring up an image blown up and scaled 300% so you can see the individual pixels more easily).
Source image
Thresholded
Floyd-Steinberg dithered

I hope you can agree that the threshold algorithm is pretty pathetic at this color depth. Even if we increased the resolution of the image, all we’d end up with is a more fractal-like shape; it would still just be a partial silhouette. If we increased the resolution of the Floyd-Steinberg version, the pixels would get smaller and we’d end up with something closer to the quality of a printed newspaper. Below is an intermediate version of the lake image at a slightly higher resolution; you get the idea. The higher the resolution, the smaller the dots and the more continuous the grays appear to the brain.
Not bad for just two colors. What if we try with more? Let's increase our palette to four colors.

## Four colors

In addition to black and white, there are now two intermediate shades of gray in the palette.
Source image
Thresholded
Floyd-Steinberg dithered

I think you will agree that, even with just four colors, Floyd-Steinberg is starting to produce a pretty decent image. The thresholding is still pretty useless for all the images (except van Gogh's Starry Night, which has a naturally quantized palette and high contrast between adjacent colors). All the other images have large pools of the same color and horrible banding (such as in the sky of the lake picture).
Let's increase to six shades of gray, which is what we need for the dice art project.

## Six Colors

We've now moved to six shades of gray, and things are starting to get a little better. There are still large pools of the same color, and banding in the sky regions (where there are slow transitions between similar shades). The van Gogh image is looking pretty good even in threshold form, and the Girl with a Pearl Earring is looking stunning in six-color Floyd-Steinberg dithering.
Natural photographs, with smooth continuous shades, benefit greatly from the dithering. Paintings made with fixed palettes don't benefit as much, as every occurrence of a paint color snaps to the same color in the modified palette.
Source image
Thresholded
Floyd-Steinberg dithered

## Returning to Dice

Now that we've converted our images to six shades of gray, we can use these as the map for our dice project.
Here are files for the Girl with a Pearl Earring as 300×300 dice. If you have 90,000 dice to spare (and a lot of time), you can create your own masterpiece! WARNING: These images are large in resolution.
If you zoom into areas of these images, the difference is quite noticeable. The threshold image has huge areas of the same die value, while the Floyd-Steinberg version dithers between values as the error propagates across.

Threshold
Dithered
Possible Optimisation?: The two, three, and six spot faces have an orientation; there are two chiral ways each can be placed. Using an edge-detection technique such as Sobel, it would be possible to estimate the angle of any edge at a transition to one of these values and pick the 'better' of the two orientations. This might produce a slightly better image (or at least add a little random noise to break up the uniformity of identical numbers all being orientated the same way, breaking up textures).
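As a rough sketch of that idea, a standard Sobel kernel pair can estimate the local edge angle from the patch of source pixels under a die (the `edge_angle` function is my own hypothetical helper; choosing which chiral orientation counts as 'better' for a given angle is left open):

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_angle(patch):
    """Estimate the local edge angle (radians) of a 3x3 grayscale patch.

    The gradient direction could then be used to pick between the two
    chiral placements of a two-, three-, or six-spot face.
    """
    gx = (patch * SOBEL_X).sum()
    gy = (patch * SOBEL_Y).sum()
    return np.arctan2(gy, gx)
```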

## Dice Trivia

⚄ ⚄
Did you know the design of the five-spot pattern on a die is called a Quincunx?
Base six is called Senary. In NCAA basketball, the players' uniform numbers are restricted to be senary numbers of at most two digits, so that the referees can signal which player committed an infraction by using hand signals (0-5), based on digits extended, or a clenched fist.

## Sixteen colors

Leaving behind the dice, let's take a step up to sixteen shades of gray. All variants look a lot better with a larger palette to select from. There's still some banding in the regions that slowly change color, and these transitions manifest as edges, but overall it's a great improvement. The Floyd-Steinberg dithered versions look superb.
Source image
Thresholded
Floyd-Steinberg dithered

## Moving On

With the dice work done, I decided I wanted a little more of a challenge, and my next thought was to use dominoes. This is a much more interesting problem. A standard double-six domino set contains 28 bones.

Image: Diann Bayes
However, there is only one of each domino in a set, so if we built an image of, say, 84×84 pixels (7,056 pixels in total), we could tile it using 126 sets of dominoes. The catch is that we'd have exactly 126 double-sixes and exactly 126 double-blanks … In addition to quantizing the colors, we'd need to make compromises about which domino goes where; there's not going to be a perfect solution, so we'd need some kind of 'quality' metric to optimize for. There's also the added wrinkle that dominoes can be laid out in landscape or portrait orientation. This sounds like a fun project, but one for another day.
My next idea was Rubik's Cubes. Rubik's cubes have six colors, and it's possible to manipulate a cube so that any configuration of the six colors can be displayed on one face. In this way, a cube could be used as a macro 3×3 pixel to make an image.

## Floyd-Steinberg in color

The Floyd-Steinberg algorithm works just as well with color images as it does with gray scale. You start with your fixed palette of destination colors, find the nearest palette color using a Euclidean distance formula (the root of the sum of the squares of the errors in the red, green, and blue channels), then propagate each color channel's error forward.
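The nearest-color lookup might look like this in Python/NumPy. The sticker RGB values below are my own rough approximations, since the exact colors vary by cube manufacturer:

```python
import numpy as np

# Approximate RGB sticker colors of a standard cube (assumed values).
PALETTE = np.array([
    [255, 255, 255],   # white
    [255, 213,   0],   # yellow
    [  0, 155,  72],   # green
    [  0,  70, 173],   # blue
    [183,  18,  52],   # red
    [255,  88,   0],   # orange
], dtype=float)

def nearest_color(pixel, palette=PALETTE):
    """Pick the palette entry with the smallest Euclidean distance in RGB."""
    d2 = ((palette - pixel) ** 2).sum(axis=1)   # squared distance per entry
    return palette[d2.argmin()]
```

With these sticker values, a pure black pixel maps to green, which is consistent with green being the darkest color on the cube.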
As before, we'll use the same six test images and generate both a threshold image (based on selecting the nearest cube color) and the Floyd-Steinberg output.
Source image
Thresholded
Floyd-Steinberg dithered

The green on a Rubik's cube is the darkest color available, so it is used as the substitute for black in the source image. As before, the thresholding does a poor job, leaving large patches of color. The dithering does a remarkable job with what it is given and, despite there being only six distinct colors, seems to trick the brain into thinking there are more colors in the image than there really are as it merges and averages them out.
If I win the lottery, can buy 90,000 Rubik's cubes, and have enough money to employ a large staff of helpers or volunteers, I could create something like the image below (900×900 pixels), made from 300×300 cubes.
The colors used on a standard Rubik's cube are not well distributed around the color wheel. The red and orange are very close, the green (as mentioned above) is the darkest color yet not very dark, and yellow is poorly represented in the image. If we were going to have 90,000 cubes made, why not make them a special order and customize the six colors rather than using the standard cube colors? Obviously, you could optimize the exact palette for each image you wanted to produce, but for a generic palette we can again draw input from the print industry.
There is a six-color printing process called Hexachrome (sometimes called CMYKOG, for Cyan, Magenta, Yellow, Black, Orange, Green), which is an expansion of the more common CMYK process.
Leaving aside all the differences between additive and subtractive color spaces, and between RGB and CMYK, if we adjust our six-color palette to something close to the CMYKOG specification, we can produce even better results than with the restrictive palette of the standard cube.
If we use the Hexachrome palette, we get an image like this:
This image suffers a little because we have no white! In printing, white comes from the absence of ink; here we are using cubes, so every pixel must be represented by one. White appears in the image above through the careful blending of magenta, cyan, and yellow.

## Gotchas

Floyd-Steinberg is brilliant, but there are a couple of minor gotchas to be aware of:
• Purists will recognize that I’m not gamma-correcting the brightness levels and am assuming a linear relationship between brightness values across the channels. This isn't true, but it is very much a second-order effect compared to the coarseness of the rest of the calculations in this exercise. The issue can easily be solved with a couple of functions that convert to and from gamma-corrected values, so the error is accumulated on linear values before being propagated.
• If a large patch of color happens to match exactly one of the quantized colors in your palette, then accumulated error walks (bleeds) across the entire patch and is applied at the next transition, far from where it was generated. It's a fun thought exercise to come up with mitigations: scanning alternate lines left-to-right and right-to-left (serpentine scanning) to reduce the impact? Adding a small amount of random noise? Detecting the condition and squelching the error propagation? …
• The basic algorithm does not deal gracefully with the pixels around the edge of the image, where part of the error kernel falls outside. You can simply crop these pixels off, or attempt to infer/interpolate sensible sentinel values by reflecting data from inside the image.
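For the first gotcha, the conversion helpers follow directly from the sRGB transfer function (inputs normalized to 0–1); these formulas come from the sRGB specification:

```python
def srgb_to_linear(c):
    """sRGB-encoded channel (0-1) to linear light, per the sRGB transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Linear-light channel (0-1) back to sRGB encoding."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Converting to linear before diffusing the error, and back afterwards, keeps the averaged brightness honest.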
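And for the second gotcha, here is a sketch of the serpentine-scanning mitigation: odd rows are processed right-to-left with the error kernel mirrored, so accumulated error stops always bleeding in the same direction (the function name and evenly spaced palette are my own choices):

```python
import numpy as np

def floyd_steinberg_serpentine(gray, levels=2):
    """Floyd-Steinberg error diffusion with serpentine (boustrophedon) scanning.

    Even rows scan left-to-right, odd rows right-to-left, with the
    7/16, 3/16, 5/16, 1/16 kernel mirrored to match the scan direction.
    """
    img = np.asarray(gray, dtype=float).copy()
    h, w = img.shape
    step = 255 / (levels - 1)
    for y in range(h):
        rightward = (y % 2 == 0)
        xs = range(w) if rightward else range(w - 1, -1, -1)
        dx = 1 if rightward else -1        # 'forward' direction on this row
        for x in xs:
            old = img[y, x]
            new = round(old / step) * step  # nearest palette entry
            img[y, x] = new
            err = old - new
            if 0 <= x + dx < w:
                img[y, x + dx] += err * 7 / 16
            if y + 1 < h:
                if 0 <= x - dx < w:
                    img[y + 1, x - dx] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if 0 <= x + dx < w:
                    img[y + 1, x + dx] += err * 1 / 16
    return img
```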