This section contains various artwork based on `\pi`, `\phi` and `e` that I created over the years.
Some of the numerical art reveals interesting and unexpected observations. For example, the sequence 999999 appears in π starting at digit 762, a position known as the Feynman Point. Or that if you calculate π to 13,099,586 digits, you will find love.
`\pi` day art and `\pi` approximation day art is kept separate.
If you look hard enough, you can find anything.
In this case, you can find names of famous mathematicians and even the digits of mathematical constants in the digits of `\pi`. These posters are very similar to the Love in `\pi` series I did for the 2013 `\pi` day.
To "find" instances of text in `\pi`, you first need a way to represent letters numerically.
To do this, I arbitrarily map A to 1, B to 2 and so on. A space is a zero. Thus the string "HELLO" becomes 85121215, because H=8, E=5, L=12 and O=15. Here I don't pay attention to the case of the letters.
It turns out that the first "HELLO" in `\pi` is at position 34,851,875.
...05419663858273297437 85121215 18330978282693807709...
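In code, the encoding scheme above can be sketched as follows. This is a minimal Python sketch; the function name `encode` is mine, not part of any published tool, and the digit string is just the excerpt shown above, not the real position in `\pi`.

```python
def encode(text):
    """Encode text with the scheme described above:
    A=1, B=2, ..., Z=26, space=0; letter case is ignored."""
    out = []
    for ch in text.upper():
        if ch == " ":
            out.append("0")
        elif "A" <= ch <= "Z":
            out.append(str(ord(ch) - ord("A") + 1))
    return "".join(out)

# "HELLO" -> H=8, E=5, L=12, L=12, O=15
print(encode("HELLO"))  # 85121215

# Finding the encoded text is then a plain substring search over the digits.
# This excerpt reproduces the digits around position 34,851,875 shown above.
digits = "05419663858273297437" "85121215" "18330978282693807709"
print(digits.find(encode("HELLO")))  # 20 (0-based offset within this excerpt)
```

A substring search like `str.find` is enough here because the encoded query is itself just a run of digits.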
You can find your own favourite sequence using the `\pi` search page.
One of the posters in this section indicates the position of the last names of 135 famous mathematicians: Abel, d'Alembert, Apollonius, Archimedes, Archytas, Aristotle, Aryabhata, Atiyah, Babbage, Banach, Banneker, Bernoulli, Bhascara, Birkhoff, Boole, Borel, Brahmagupta, Brouwer, Brunelleschi, Cantor, Cardano, Cartan, Cauchy, Cavalieri, Cayley, Chebyshev, Chern, Cohen, Conway, Dedekind, Democritus, Descartes, Diophantus, Dirichlet, Einstein, Eisenstein, Eratosthenes, Erdos, Escher, Euclid, Eudoxus, Euler, De Fermat, Fibonacci, Fourier, Frege, Galilei, Galois, Gauss, Germain, Godel, Grassmann, Grothendieck, Hadamard, Halley, al-Haytham, Hamilton, Hardy, Hausdorff, Hermite, Heron, Hilbert, Hipparchus, Hopper, Hui, Huygens, Hypatia, Jacobi, Jordan, Kepler, Khayyam, al-Khwarizmi, Klein, Kolmogorov, Kummer, Lagrange, Lambert, Laplace, Lasker, Lebesgue, Legendre, Leibniz, Lie, Liouville, Littlewood, Lorenz, Lovelace, Madhava, Magnus, Maxwell, Minkowski, De Moivre, Monge, De Morgan, Napier, Nash, Von Neumann, Newton, Noether, Pacioli, Panini, Pappus, Pascal, Peano, Perelman, Plato, Plucker, Poincare, Poisson, Polya, Poncelet, Ptolemy, Pythagoras, Ramanujan, Riemann, Robinson, Russell, Selberg, Serre, Siegel, Steiner, Sylvester, Tao, Tarski, Taylor, Thales, Turing, Venn, Viete, Wallis, Weierstrass, Weil, Weyl, Whitehead, Wiles, Witten.
The search is limited to the first 1,000,000,000 digits of `\pi`, and if the sequence of digits corresponding to the name isn't found, I trim the last letter off the name and try again.
For example, "Chebyshev" which is 385225198522 is not found but the next attempt "Chebyshe" 3852251985 is found at digit 7,737,114.
...50964281262402457441 3852251985 89678438272780298551...
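The trimming fallback can be sketched like this. It is a minimal Python sketch, not the actual poster-generation code; `encode` reimplements the A=1 … Z=26 scheme described above, and the toy digit string simply reproduces the Chebyshev excerpt rather than the real digits of `\pi`.

```python
def encode(text):
    # A=1 ... Z=26, space=0, case-insensitive (the scheme described above)
    return "".join("0" if c == " " else str(ord(c) - 64)
                   for c in text.upper() if c == " " or c.isalpha())

def find_with_trimming(name, digits):
    """Try the full encoded name; on failure, trim the last letter
    and retry. Returns (matched prefix, 1-based digit position),
    or None if not even a single letter matches."""
    for end in range(len(name), 0, -1):
        pos = digits.find(encode(name[:end]))
        if pos != -1:
            return name[:end], pos + 1
    return None

# Toy digit string that contains "Chebyshe" = 3852251985
# but not the full "Chebyshev" = 385225198522
digits = "50964281262402457441" "3852251985" "89678438272780298551"
print(find_with_trimming("Chebyshev", digits))  # ('Chebyshe', 21)
```

The same trim-and-retry loop works for the constants poster, except there the query is already a digit string, so each failed attempt drops the last digit rather than the last letter.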
The other poster shows the locations of 210 mathematical constants. Here the search doesn't require encoding: the query is simply the sequence of digits in the constant. The leading zero before the decimal point is trimmed for constants `\lt 1`. If the sequence isn't found (I use the precision listed on the constant's Wikipedia page), I trim the last digit and repeat.
For example, Euler's number 2.71828182 is found at digit 246,890,641.
...22156499305 271828182...
What immortal hand or eye, could frame thy fearful symmetry? — William Blake, "The Tyger"
This month, we look at symmetric regression, which, unlike simple linear regression, is reversible: the fit remains unaltered when the variables are swapped.
Simple linear regression can summarize the linear relationship between two variables `X` and `Y` — for example, when `Y` is considered the response (dependent) and `X` the predictor (independent) variable.
However, there are times when we are not interested in (or able to) distinguishing between dependent and independent variables, either because they have the same importance or because they play the same role. This is where symmetric regression can help.
Luca Greco, George Luta, Martin Krzywinski & Naomi Altman (2025) Points of significance: Symmetric alternatives to the ordinary least squares regression. Nat. Methods 22:1610–1612.
Fuelled by philanthropy, findings into the workings of BRCA1 and BRCA2 genes have led to groundbreaking research and lifesaving innovations to care for families facing cancer.
This set of 100 one-of-a-kind prints explores the structure of these genes. Each artwork is unique; if you put them all together, you get the full sequence of the BRCA1 and BRCA2 proteins.
The needs of the many outweigh the needs of the few. —Mr. Spock (Star Trek II)
This month, we explore a related and powerful technique to address bias: propensity score weighting (PSW), which applies weights to each subject instead of matching (or discarding) them.
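The weighting idea can be sketched in a few lines of Python. This is a toy illustration, not the article's procedure: the cohort, outcomes and propensity scores below are hypothetical, and the scores are assumed to have been estimated already (in practice, typically by logistic regression on the confounders).

```python
def ipw_weights(treated, scores):
    """Inverse probability of treatment weights:
    1/e for treated subjects, 1/(1 - e) for controls,
    where e is the estimated propensity score."""
    return [1 / e if t else 1 / (1 - e) for t, e in zip(treated, scores)]

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# hypothetical cohort: treatment flag, estimated propensity score, outcome
treated = [1, 1, 1, 0, 0, 0]
scores  = [0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
outcome = [5.0, 4.0, 4.5, 3.0, 2.5, 2.0]

w = ipw_weights(treated, scores)
treat_mean = weighted_mean([y for y, t in zip(outcome, treated) if t],
                           [wi for wi, t in zip(w, treated) if t])
ctrl_mean  = weighted_mean([y for y, t in zip(outcome, treated) if not t],
                           [wi for wi, t in zip(w, treated) if not t])
print(treat_mean - ctrl_mean)  # IPW estimate of the average treatment effect
```

Unlike matching, every subject contributes to the estimate; the weights reweight each group so that it resembles the combined population.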
Kurz, C.F., Krzywinski, M. & Altman, N. (2025) Points of significance: Propensity score weighting. Nat. Methods 22:638–640.
Celebrate π Day (March 14th) and sequence digits like it's 1999. Let's call some peaks.
I don’t have good luck in the match points. —Rafael Nadal, Spanish tennis player
Points of Significance is an ongoing series of short articles about statistics in Nature Methods that started in 2013. Its aim is to provide clear explanations of essential concepts in statistics for a nonspecialist audience. The articles favor heuristic explanations and make extensive use of simulated examples and graphical explanations, while maintaining mathematical rigor.
Topics range from the basic but often misunderstood, such as uncertainty and P values, to the relatively advanced but often neglected, such as the errors-in-variables problem and the curse of dimensionality. More recent articles have focused on timely topics such as modeling of epidemics, machine learning, and neural networks.
In this article, we discuss the evolution of topics and details behind some of the story arcs, our approach to crafting statistical explanations and narratives, and our use of figures and numerical simulations as props for building understanding.
Altman, N. & Krzywinski, M. (2025) Crafting 10 Years of Statistics Explanations: Points of Significance. Annual Review of Statistics and Its Application 12:69–87.
I don’t have good luck in the match points. —Rafael Nadal, Spanish tennis player
In many experimental designs, we need to keep in mind the possibility of confounding variables, which may give rise to bias in the estimate of the treatment effect.
This bias can arise when the control and experimental groups are not matched, that is, when they are not sufficiently similar.
Sometimes this can be dealt with by randomization, which balances out such effects on average. When randomization is not possible, propensity score matching is an excellent strategy for matching control and experimental groups.
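One common flavour of the idea, greedy 1:1 nearest-neighbour matching on the propensity score, can be sketched as follows. This is an illustrative Python sketch with made-up scores, not the exact procedure from the article, and it assumes the propensity scores have already been estimated.

```python
def match_nearest(treated_scores, control_scores):
    """Greedy 1:1 nearest-neighbour matching on propensity score,
    without replacement. Returns a list of (treated_idx, control_idx)
    pairs; unmatched controls are discarded from the analysis."""
    available = dict(enumerate(control_scores))  # controls still unmatched
    pairs = []
    for i, e in enumerate(treated_scores):
        if not available:
            break
        # pick the control whose score is closest to this treated subject's
        j = min(available, key=lambda k: abs(available[k] - e))
        pairs.append((i, j))
        del available[j]  # match without replacement
    return pairs

treated_scores = [0.70, 0.55]
control_scores = [0.20, 0.52, 0.68, 0.90]
print(match_nearest(treated_scores, control_scores))  # [(0, 2), (1, 1)]
```

Matching without replacement means each control is used at most once; the unmatched controls (here, the subjects with scores 0.20 and 0.90) are dropped, which is exactly the discarding that propensity score weighting avoids.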
Kurz, C.F., Krzywinski, M. & Altman, N. (2024) Points of significance: Propensity score matching. Nat. Methods 21:1770–1772.