Safe, fallen down this way, I want to be just what I am.

data visualization
**+** art

To see a World in a grain of sand,

And Heaven in a wild flower,

Hold Infinity in the palm of your hand

And Eternity in an hour.

—William Blake

In collaboration with Max Cooper, we tell a story about infinity in 6 minutes (and 34 seconds).

What's the story about? Natural numbers, integers, rationals, reals, power sets and the cardinality of the continuum — a real visual parade of numbers.

But math is a serious matter — so we include two of Cantor's diagonal proofs in the video. Watch as we show that the set of rational numbers is the same size as the set of natural numbers (making the rationals *countable*) but that the set of real numbers is larger (making the reals *uncountable*).
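The flavor of both arguments fits in a few lines of Python (an illustrative sketch, not the video's code): walking the anti-diagonals of the numerator/denominator grid pairs every positive rational with a natural number, while flipping the diagonal digits builds a sequence that escapes any given list.

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Enumerate the positive rationals by walking anti-diagonals of the
    numerator/denominator grid -- the pairing that makes Q countable."""
    seen = set()
    s = 2  # s = numerator + denominator, constant along each anti-diagonal
    while True:
        for p in range(1, s):
            q = Fraction(p, s - p)
            if q not in seen:  # skip duplicates like 2/2 = 1/1
                seen.add(q)
                yield q
        s += 1

def diagonal_escape(rows):
    """Given finitely many binary sequences, build one that differs from
    row i in digit i -- Cantor's diagonal argument in miniature."""
    return [1 - rows[i][i] for i in range(len(rows))]

print(list(islice(rationals(), 6)))          # 1, 1/2, 2, 1/3, 3, 1/4
print(diagonal_escape([[0, 1, 0],
                       [1, 1, 1],
                       [0, 0, 0]]))          # [1, 0, 1]: not any listed row
```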

In science one tries to tell people,

in such a way as to be understood by everyone,

something that no one ever knew before.

But in poetry, it’s the exact opposite.

—Paul Dirac, quoted in H. Eves, *Mathematical Circles Adieu*

The style of Aleph 2 is that of a low-fi terminal. We used the Classic Console font by Csaba Széll, which I extended to include more set theory symbols (download extended font).

The animation was closely synchronized to the music, all the while trying to avoid looking too much like something from the Matrix movie.

To help you interpret what you are seeing, I walk you through the video and introduce you to some advanced concepts of set theory. This way, next time you see an equation like `|\mathbb{R}| = | \mathbb{P}(\mathbb{N}) | = 2^{\aleph_0}`, you'll sprout a smile of joy.
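As a preview, the chain of cardinalities visited in the video can be written in one line; the strict inequality is exactly what the second diagonal proof establishes.

```latex
|\mathbb{N}| = |\mathbb{Z}| = |\mathbb{Q}| = \aleph_0
  \;<\; 2^{\aleph_0} = |\mathbb{P}(\mathbb{N})| = |\mathbb{R}|
```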

These pages describe the mathematics behind infinity, including sets, cardinality, countability, $\aleph$ and the Continuum Hypothesis.

I also present the animation system I built for the video, which was coded from scratch.

If you like infinity, you might like dimensions too.

Check out my video for Max Cooper's Ascent, which animates 5-dimensional cubes doing their confusing things.

news
**+** thoughts

*Huge empty areas of the universe called voids could help solve the greatest mysteries in the cosmos.*

My graphic accompanying How Analyzing Cosmic Nothing Might Explain Everything in the January 2024 issue of Scientific American depicts the entire Universe in a two-page spread — full of nothing.

The graphic uses the latest data from SDSS 12 and is an update to my Superclusters and Voids poster.

Michael Lemonick (editor) explains on the graphic:

“Regions of relatively empty space called cosmic voids are everywhere in the universe, and scientists believe studying their size, shape and spread across the cosmos could help them understand dark matter, dark energy and other big mysteries.

To use voids in this way, astronomers must map these regions in detail—a project that is just beginning.

Shown here are voids discovered by the Sloan Digital Sky Survey (SDSS), along with a selection of 16 previously named voids. Scientists expect voids to be evenly distributed throughout space—the lack of voids in some regions on the globe simply reflects SDSS’s sky coverage.”

Sofia Contarini, Alice Pisani, Nico Hamaus, Federico Marulli, Lauro Moscardini & Marco Baldi (2023) Cosmological Constraints from the BOSS DR12 Void Size Function. *Astrophysical Journal* **953**:46.

Nico Hamaus, Alice Pisani, Jin-Ah Choi, Guilhem Lavaux, Benjamin D. Wandelt & Jochen Weller (2020) *Journal of Cosmology and Astroparticle Physics* **2020**:023.

Sloan Digital Sky Survey Data Release 12

Alan MacRobert (Sky & Telescope), Paulina Rowicka/Martin Krzywinski (revisions & Microscopium)

Hoffleit & Warren Jr. (1991) The Bright Star Catalog, 5th Revised Edition (Preliminary Version).

*H*_{0} = 67.4 km/(Mpc·s), *Ω*_{m} = 0.315, *Ω*_{v} = 0.685. Planck collaboration Planck 2018 results. VI. Cosmological parameters (2018).
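As a quick sanity check on the Planck 2018 value above, the Hubble time $1/H_0$ implied by $H_0$ = 67.4 km/(Mpc·s) works out to roughly 14.5 billion years (a back-of-envelope sketch, not part of the graphic):

```python
# Back-of-envelope Hubble time 1/H0 from the Planck 2018 value used above.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one billion years

H0 = 67.4                            # km / (Mpc * s)
hubble_time_s = KM_PER_MPC / H0      # 1/H0 has units of seconds
print(hubble_time_s / SEC_PER_GYR)   # ~14.5 Gyr
```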

*It is the mark of an educated mind to rest satisfied with the degree of precision that the nature of the subject admits and not to seek exactness where only an approximation is possible. —Aristotle*

In regression, the predictors are (typically) assumed to have known values that are measured without error.

Practically, however, predictors are often measured with error. This has a profound (but predictable) effect on the estimates of relationships among variables – the so-called “error in variables” problem.

Error in measuring the predictors is often ignored. In this column, we discuss when ignoring this error is harmless and when it can lead to large bias that can cause us to miss important effects.
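A small simulation makes the "predictable" part concrete. Under classical measurement error, the OLS slope is attenuated by the factor var(x)/(var(x)+var(u)); in the sketch below (illustrative, not from the column) a true slope of 2 shrinks to about 1 when the error variance equals the predictor variance.

```python
import random

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(1)
n, beta = 100_000, 2.0
var_x, var_u = 1.0, 1.0                               # predictor and error variances
x = [random.gauss(0, var_x ** 0.5) for _ in range(n)]  # true predictor
y = [beta * xi + random.gauss(0, 0.5) for xi in x]     # response uses the TRUE x
w = [xi + random.gauss(0, var_u ** 0.5) for xi in x]   # we only observe x + error

# Attenuation: E[slope on w] = beta * var_x / (var_x + var_u) = 1.0 here
print(ols_slope(x, y))  # close to 2.0
print(ols_slope(w, y))  # close to 1.0 -- biased toward zero
```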

Altman, N. & Krzywinski, M. (2024) Points of significance: Error in predictor variables. *Nat. Methods* **20**.

Altman, N. & Krzywinski, M. (2015) Points of significance: Simple linear regression. *Nat. Methods* **12**:999–1000.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. *Nat. Methods* **13**:541–542.

Das, K., Krzywinski, M. & Altman, N. (2019) Points of significance: Quantile regression. *Nat. Methods* **16**:451–452.

*Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry. – Richard Feynman*

Following up on our Neural network primer column, this month we explore a different kind of network architecture: a convolutional network.

The convolutional network replaces the hidden layer of a fully connected network (FCN) with one or more filters (a kind of neuron that looks at the input within a narrow window).

Even though convolutional networks have far fewer neurons than an FCN, they can perform substantially better for certain kinds of problems, such as sequence motif detection.
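To make the filter idea concrete, here is a minimal 1D convolution over a one-hot encoded DNA sequence (an illustrative sketch, not the column's code): the filter's weights act as a motif template, and the activation peaks wherever the motif occurs in the window.

```python
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-element indicator vectors."""
    return [[1.0 if base == c else 0.0 for c in BASES] for base in seq]

def conv1d(seq_onehot, kernel):
    """Slide a weight window (filter) along the sequence and return the
    activation at each position -- the core of a convolutional layer."""
    k = len(kernel)
    return [
        sum(kernel[i][b] * seq_onehot[p + i][b]
            for i in range(k) for b in range(4))
        for p in range(len(seq_onehot) - k + 1)
    ]

# A filter whose weights mirror the motif "TATA" responds most strongly
# at the position where that motif occurs.
motif_filter = one_hot("TATA")
scores = conv1d(one_hot("GGTATACG"), motif_filter)
print(scores.index(max(scores)))  # 2: "TATA" starts at position 2
```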

Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Convolutional neural networks. *Nature Methods* **20**:1269–1270.

Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. *Nature Methods* **20**:165–167.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. *Nature Methods* **13**:541–542.

*Nature is often hidden, sometimes overcome, seldom extinguished. —Francis Bacon*

In the first of a series of columns about neural networks, we introduce them with an intuitive approach that draws from our discussion about logistic regression.

Simple neural networks are just a chain of linear regressions. And, although neural network models can get very complicated, their essence can be understood in terms of relatively basic principles.

We show how neural network components (neurons) can be arranged in the network and discuss the ideas of hidden layers. Using a simple data set we show how even a 3-neuron neural network can already model relatively complicated data patterns.
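The "chain of linear regressions" view is easy to demonstrate: each neuron below is a linear model passed through a logistic activation, and three of them (with hand-picked weights for illustration; the column's example differs) already capture XOR, a pattern no single linear model can fit.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """One neuron = a linear regression on its inputs passed through an
    activation (here the logistic function, as in logistic regression)."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def tiny_net(x1, x2):
    """A 3-neuron network (2 hidden + 1 output) whose hand-picked weights
    reproduce XOR."""
    h1 = neuron([x1, x2], [20, 20], -10)     # acts like OR
    h2 = neuron([x1, x2], [-20, -20], 30)    # acts like NAND
    return neuron([h1, h2], [20, 20], -30)   # acts like AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(tiny_net(a, b)))  # outputs 0, 1, 1, 0
```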

Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. *Nature Methods* **20**:165–167.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. *Nature Methods* **13**:541–542.

Our cover on the 11 January 2023 *Cell Genomics* issue depicts the process of determining parent-of-origin using differential methylation of alleles at imprinted regions (iDMRs), imagined as a circuit.

Designed in collaboration with Carlos Urzua.

Akbari, V. *et al.* (2023) Parent-of-origin detection and chromosome-scale haplotyping using long-read DNA methylation sequencing and Strand-seq. *Cell Genomics* **3**(1).

Browse my gallery of cover designs.

My cover design on the 6 January 2023 *Science Advances* issue depicts DNA sequencing read translation in high-dimensional space. The image shows 672 bases of sequencing barcodes, generated by three different single-cell RNA sequencing platforms, encoded as oriented triangles on the faces of three 7-dimensional cubes.

More details about the design.

Kijima, Y. *et al.* (2023) A universal sequencing read interpreter. *Science Advances* **9**.


Martin Krzywinski | contact | Canada's Michael Smith Genome Sciences Centre ⊂ BC Cancer Research Center ⊂ BC Cancer ⊂ PHSA
