Here, I help you understand color blindness and describe a process by which you can make good color choices when designing for accessibility.
The opposite of color blindness is seeing all the colors, and I can help you find 1,000 (or more) maximally distinct colors.
You can also delve into the mathematics behind the color blindness simulations and learn about copunctal points (the invisible color!) and lines of confusion.
Color blindness is the inability to distinguish colors. The converse of this is a set of maximally distinct colors — interesting in its own right and the subject of this section.
Below I also provide helpful diagrams that visualize how color differences vary. These show the difference between two modern formulae for color differences: `\Delta E_{94}` and `\Delta E_{00}`.
When we reached the 3 petabase milestone in our sequencing, I made a graphic of a DNA double helix of 3,000 base pairs that wound into the shape of "3Pb". I wanted to color each base on a strand with a different, reasonably saturated and bright color — the base on the opposite strand would be its RGB inverse.
I didn't want just different — I wanted maximally distinct. While it's very easy to pick any number of RGB colors that are different, it's trickier to make sure that they're all as different from each other as possible.
Here, I outline a method for selecting such a large number of distinct colors and provide these color sets for you to download. You can pick a set of anywhere from `N=20` to `N=5,000` maximally distinct colors.
The selection method described below is essentially the same as that offered by the excellent iwanthue project, which provides a helpful visual tutorial on the idea of color differences. Unfortunately, the tool timed out when attempting to find 3,000 different colors, so I thought I'd generate a set myself.
These sets are from a subset of the 815,267 colors (see methods) limited to `L = 20-80` and `C \ge 20`. Extremely bright colors (e.g. pure yellows) are not included. For sets drawn from the full complement of 815,267 colors, see below.
Shown below are sets of `N=5-50` maximally distinct colors. The first swatch in the set is the one closest to pure RGB red (255,0,0). The next swatch is the first swatch's closest `\Delta E` neighbour, and so on — the order is sensitive to small changes in color difference as well as to the color of the first swatch, so it can be quite different for each set. The average `\Delta E` between closest neighbours is shown on the left — the fewer the colors, the larger the color clusters from which they were sampled and thus the larger the `\Delta E`.
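This ordering is a greedy nearest-neighbour chain and is easy to sketch. In the sketch below (function names are mine, not from the original code), I use CIE76 (plain Euclidean distance in Lab) for brevity in place of the `\Delta E` actually used above; the Lab coordinate of pure sRGB red is approximately (53.2, 80.1, 67.2).

```python
import math

RED_LAB = (53.2, 80.1, 67.2)  # approximate Lab of RGB (255,0,0)

def de76(p, q):
    # CIE76: Euclidean distance in Lab
    return math.dist(p, q)

def order_swatches(colors):
    # start at the color closest to pure red, then repeatedly
    # append the nearest unused color
    order = [min(colors, key=lambda c: de76(c, RED_LAB))]
    remaining = set(colors) - {order[0]}
    while remaining:
        nearest = min(remaining, key=lambda c: de76(c, order[-1]))
        order.append(nearest)
        remaining.remove(nearest)
    return order
```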
The k-means clustering is stochastic — each time it's run you get a slightly different result as the algorithm searches for the global minimum of the clustering error (it won't find it, but it will find many local minima).
Some larger sets of maximally distinct colors are shown below, ordered by either `\Delta E` or LCH.
The benefit of sorting by closest neighbours is that you can see the extent to which the color space is sampled — as the set of colors grows, a color's nearest neighbour becomes more and more similar. Sorting by LCH (hue first, then chroma, then luminance, each rounded to the nearest 5) helps to see how many colors from a given part of the hue wheel are chosen. For example, note how there are generally more reds than blues.
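As a sketch, assuming colors stored as `(L, C, H)` triples, the LCH sort key looks like this:

```python
def lch_sort(colors):
    # sort by hue, then chroma, then luminance,
    # each rounded to the nearest 5
    def round5(x):
        return 5 * round(x / 5)
    return sorted(colors,
                  key=lambda c: (round5(c[2]), round5(c[1]), round5(c[0])))
```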
A more in-depth description of color differences is below. For the color selection I use CIE00 (`\Delta E_{00}`) but CIE94 would probably be just as good (and faster).
The process of selecting this set of maximally distinct colors starts with sampling all LCH colors in the range `LCH = [0,100] \times [0,100] \times [0,359]` in steps of `0.5` in each dimension. This creates an initial set of 28.8 million colors — more than the 16.8 million colors of 24-bit RGB. So, too many!
First, I eliminate all colors that have identical integer RGB values. This leaves me with 6.1 million colors.
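A sketch of these first two steps (grid sampling, conversion to RGB and collapsing duplicates) is below. The conversion chain LCH to Lab to XYZ to sRGB uses the standard D65 constants; the function names are mine, I take a 200 × 200 × 720 grid as one way to get the quoted 28.8 million samples, and I drop grid points that fall outside the sRGB gamut, which I'm assuming is how out-of-range colors were handled.

```python
import math

def lch_to_rgb(L, C, H):
    """LCH -> integer sRGB triple, or None if out of gamut."""
    # LCH -> Lab
    a = C * math.cos(math.radians(H))
    b = C * math.sin(math.radians(H))
    # Lab -> XYZ (D65 white point)
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def finv(t):
        return t**3 if t**3 > 0.008856 else (t - 16 / 116) / 7.787
    X, Y, Z = 0.95047 * finv(fx), 1.00000 * finv(fy), 1.08883 * finv(fz)
    # XYZ -> linear sRGB
    rgb = (3.2406 * X - 1.5372 * Y - 0.4986 * Z,
           -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
           0.0557 * X - 0.2040 * Y + 1.0570 * Z)
    if not all(0 <= v <= 1 for v in rgb):
        return None  # outside the sRGB gamut
    # gamma-encode and round to integer 0-255 channels
    def gamma(v):
        return 12.92 * v if v <= 0.0031308 else 1.055 * v**(1 / 2.4) - 0.055
    return tuple(round(255 * gamma(v)) for v in rgb)

# 200 x 200 x 720 grid points in steps of 0.5 = 28.8 million samples
unique = {}
for Li in range(200):
    for Ci in range(200):
        for Hi in range(720):
            rgb = lch_to_rgb(Li / 2, Ci / 2, Hi / 2)
            if rgb is not None:
                unique.setdefault(rgb, (Li / 2, Ci / 2, Hi / 2))
# `unique` now keeps one LCH sample per integer RGB color (~6.1 million)
```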
Then, to make the analysis more practical, I find a subset of the 6.1 million by eliminating any color that is within `\Delta E_{00} < 0.5` of a color already in the set. Remember that neither Lab nor LCH is perceptually uniform, so among the 6.1 million colors, some will be more similar to each other than others.
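A minimal sketch of this thinning pass, with the `\Delta E_{00}` function passed in as a parameter (e.g. `delta_e_cie2000` from the colormath package). The naive scan below is quadratic; a real run over 6.1 million colors would bucket colors by rounded Lab coordinates so that only nearby candidates are compared.

```python
def thin(lab_colors, de00, min_de=0.5):
    # keep a color only if nothing already kept is within min_de of it
    kept = []
    for c in lab_colors:
        if all(de00(c, k) >= min_de for k in kept):
            kept.append(c)
    return kept
```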
This filtering gives me 815,267 RGB colors (download list). In this set, the average distance between nearest color neighbours is `\Delta E = 0.54` and 99% of these distances are `\Delta E < 0.8`. Only 683 colors (<0.1%) have a nearest neighbour with `\Delta E > 1` and 41 colors have a nearest neighbour with `\Delta E > 2`. These are in areas where `\Delta E` changes quickly with small changes in RGB. For example, the two colors (46,0,0) and (47,0,0) have `\Delta E = 2.0` but differ by only `\Delta R = 1`.
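This last example is easy to check with an off-the-shelf `\Delta E_{00}` implementation. The snippet below uses the colormath package (my choice here, not necessarily what was used to build these sets):

```python
from colormath.color_objects import LabColor, sRGBColor
from colormath.color_conversions import convert_color
from colormath.color_diff import delta_e_cie2000

lab1 = convert_color(sRGBColor(46, 0, 0, is_upscaled=True), LabColor)
lab2 = convert_color(sRGBColor(47, 0, 0, is_upscaled=True), LabColor)
print(delta_e_cie2000(lab1, lab2))  # about 2, even though delta R is only 1
```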
I'm sticking to integer RGB values — given that I'm sampling a very large number of colors, the few places where carrying a decimal would even out the local sampling have a negligible effect. As well, my original purpose in creating these color sets was design, and most applications (Photoshop, Illustrator, etc.) limit you to integer RGB.
To find the set of `N` maximally distinct colors, I apply k-means clustering using `\Delta E_{00}` as the distance metric. For sets `N = 5-100` I run k-means 100 times and choose the run with the lowest error. For `N=110-200` I run k-means 25 times, and for `N=225-1000` I run it 5 times. For `N>1000` I run k-means only once.
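A sketch of the clustering loop is below. For brevity I default to CIE76 distance (swap in a `\Delta E_{00}` implementation via the `dist` argument to match the text), update each center as the Lab mean of its cluster, and at the end snap each center to the nearest sampled color so that every pick is a real RGB color; this snapping detail is my assumption, not a documented part of the method.

```python
import math
import random

def kmeans_colors(colors, n, rounds=50, dist=math.dist):
    # colors: list of Lab triples
    centers = random.sample(colors, n)
    for _ in range(rounds):
        # assign each color to its nearest center
        clusters = [[] for _ in range(n)]
        for c in colors:
            i = min(range(n), key=lambda j: dist(c, centers[j]))
            clusters[i].append(c)
        # move each center to the Lab mean of its cluster
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    error = sum(min(dist(c, ctr) for ctr in centers) for c in colors)
    picks = [min(colors, key=lambda c: dist(c, ctr)) for ctr in centers]
    return picks, error

# restart strategy from the text, e.g. 100 runs for small N, keep the best:
# best, _ = min((kmeans_colors(lab_colors, 30) for _ in range(100)),
#               key=lambda r: r[1])
```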
In 1942 D.L. MacAdam published Visual sensitivities to color differences in daylight in which he demarcated regions of indistinguishable colors in CIE `xy` chromaticity space at 25 locations — these form the MacAdam ellipses (download ellipse positions).
The difference between colors is called delta E (`\Delta E`) and is expressed by a number of different formulas — each successive formula slightly better at capturing how we perceive color differences.
The simplest is the CIE76 `\Delta E` and this is the Euclidean distance between two colors in the CIE Lab color space. The Lab space is not perceptually uniform, so this formula, while doing a reasonable job overall, doesn't capture how our sensitivity to color differences drops for very saturated colors (saturation in Lab is called chroma, `C = \sqrt{a^2+b^2}`). A difference of `\Delta E_{76} = 2.3` is called the JND (just noticeable difference) and is considered the limit of color discrimination (on average).
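In code, both the CIE76 difference and chroma are one-liners (a sketch; Lab colors as `(L, a, b)` triples):

```python
import math

def de76(lab1, lab2):
    # CIE76: Euclidean distance in CIE Lab
    return math.dist(lab1, lab2)

def chroma(lab):
    # C = sqrt(a^2 + b^2), the distance from the neutral (gray) axis
    L, a, b = lab
    return math.hypot(a, b)
```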
More sophisticated versions of `\Delta E`, such as CIE94 and CIE00, attempt to address the non-uniformity of Lab space, with the aim of having `\Delta E = 1` correspond to a just noticeable difference (JND). Below, I show unit ellipses for CIE94 and CIE00 in Lab and `xy` chromaticity space — colors on the boundary of an ellipse differ from its center by `\Delta E = 1`.
CIE94 and CIE00 are very similar to each other, except for blues and purples, where the `\Delta E_{00}` ellipses are more eccentric, and in the saturated greens, where they are a little wider. In both cases the ellipses point in the direction of saturation — this corresponds to the fact that we discriminate hue better than saturation in Lab space. CIE00 is a more complicated calculation (called by some an ugly one) and therefore slower — I'd guess that for most applications CIE94 is sufficient.
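For reference, here is a sketch of the graphic-arts flavor of CIE94 (weights `k_L = k_C = k_H = 1`, `K_1 = 0.045`, `K_2 = 0.015`). Note that the formula is asymmetric in its arguments, since the weighting functions use the chroma of the first color:

```python
import math

def de94(lab1, lab2):
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    # delta H^2 recovered from the Lab and chroma differences
    dH2 = max(0.0, (a1 - a2)**2 + (b1 - b2)**2 - dC**2)
    SC = 1 + 0.045 * C1
    SH = 1 + 0.015 * C1
    return math.sqrt(dL**2 + (dC / SC)**2 + dH2 / SH**2)
```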
For more details about color differences see Colour difference `\Delta E` — A survey by W.S. Mokrzycki and M. Tatol.
The needs of the many outweigh the needs of the few. —Mr. Spock (Star Trek II)
This month, we explore a related and powerful technique to address bias: propensity score weighting (PSW), which applies weights to each subject instead of matching (or discarding) them.
Kurz, C.F., Krzywinski, M. & Altman, N. (2025) Points of significance: Propensity score weighting. Nat. Methods 22:1–3.
Celebrate π Day (March 14th) and sequence digits like it's 1999. Let's call some peaks.
Points of Significance is an ongoing series of short articles about statistics in Nature Methods that started in 2013. Its aim is to provide clear explanations of essential concepts in statistics for a nonspecialist audience. The articles favor heuristic explanations and make extensive use of simulated examples and graphical explanations, while maintaining mathematical rigor.
Topics range from basic, but often misunderstood, such as uncertainty and P-values, to relatively advanced, but often neglected, such as the error-in-variables problem and the curse of dimensionality. More recent articles have focused on timely topics such as modeling of epidemics, machine learning, and neural networks.
In this article, we discuss the evolution of topics and details behind some of the story arcs, our approach to crafting statistical explanations and narratives, and our use of figures and numerical simulations as props for building understanding.
Altman, N. & Krzywinski, M. (2025) Crafting 10 Years of Statistics Explanations: Points of Significance. Annual Review of Statistics and Its Application 12:69–87.
I don’t have good luck in the match points. —Rafael Nadal, Spanish tennis player
In many experimental designs, we need to keep in mind the possibility of confounding variables, which may give rise to bias in the estimate of the treatment effect.
If the control and experimental groups aren't matched (roughly speaking, similar enough), this bias can arise.
Sometimes this can be dealt with by randomizing, which on average can balance this effect out. When randomization is not possible, propensity score matching is an excellent strategy to match control and experimental groups.
Kurz, C.F., Krzywinski, M. & Altman, N. (2024) Points of significance: Propensity score matching. Nat. Methods 21:1770–1772.
P-values combined with estimates of effect size are used to assess the importance of experimental results. However, their interpretation can be invalidated by selection bias when testing multiple hypotheses, fitting multiple models or even informally selecting results that seem interesting after observing the data.
We offer an introduction to principled uses of p-values (targeted at the non-specialist) and identify questionable practices to be avoided.
Altman, N. & Krzywinski, M. (2024) Understanding p-values and significance. Laboratory Animals 58:443–446.