Given a location `x` defined in the context of `h` chromosomes, the probability that position `x` is covered at least `\phi` times is `P_{h,\phi}`, given by $$ P_{h,\phi} = \left( 1 - \sum_{k=0}^{\phi-1} \frac{1}{k!} \left( \frac{\rho}{h} \right)^k e^{-\rho/h} \tag{1} \right)^h $$
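Equation (1) is easy to evaluate directly. The following is a minimal Python sketch (the function name `coverage_prob` is ours, not taken from the downloadable scripts); the inner sum is the Poisson probability of fewer than `\phi` reads on a single chromosome copy at rate `\rho/h`.

```python
from math import exp, factorial

def coverage_prob(rho, h, phi):
    """P_{h,phi} from Eq. (1): probability that a position is covered
    by at least phi reads at rho-fold redundancy on h chromosomes."""
    lam = rho / h  # per-copy Poisson rate
    p_below = sum(lam ** k / factorial(k) * exp(-lam) for k in range(phi))
    return (1.0 - p_below) ** h

# Haploid genome at 3-fold redundancy: fraction covered at least once
print(f"{coverage_prob(3, 1, 1):.4f}")  # 0.9502
# Fraction covered exactly 3 times: P_{1,3} - P_{1,4}
print(f"{coverage_prob(3, 1, 3) - coverage_prob(3, 1, 4):.4f}")  # 0.2240
```

The two printed values match Example 1 below.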
For more details, see Wendl, M.C. and R.K. Wilson. 2008. Aspects of coverage in medical DNA sequencing. BMC Bioinformatics 9: 239.
For a given sequencing redundancy `\rho` (i.e. `\rho`-fold coverage, computed relative to the length of the haploid genome) of an aneuploidy = 6 genome, the fraction of the genome represented by at least `\phi` reads is given by `P_{h,\phi}`. Coverage by fewer than `\phi` reads is `1-P_{h,\phi}`, and coverage by exactly `\phi` reads is `P_{h,\phi} - P_{h,\phi+1}`. Entries for which fractional coverage is `\lt 10^{-5}` are not shown.
A rudimentary Monte Carlo simulation of genome coverage is also available, and is a useful supplement to the exact probabilities shown here.
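The simulation idea can be sketched in a few lines of Python. This is not the downloadable script; the genome length, read length and seed below are arbitrary assumptions for illustration. Reads are dropped uniformly at random and per-base depth is tallied.

```python
import random

def mc_coverage(rho, phi, genome_len=100_000, read_len=100, seed=42):
    """Monte Carlo estimate of the fraction of a haploid genome
    covered by at least phi reads at rho-fold redundancy."""
    rng = random.Random(seed)
    depth = [0] * genome_len
    # rho-fold redundancy: total sequenced bases = rho * genome_len
    for _ in range(rho * genome_len // read_len):
        start = rng.randrange(genome_len - read_len + 1)
        for i in range(start, start + read_len):
            depth[i] += 1
    return sum(d >= phi for d in depth) / genome_len

# Should land close to the analytical P_{1,1} = 0.9502 at rho = 3
print(mc_coverage(rho=3, phi=1))
```

Edge positions are slightly undercovered because reads cannot start past `genome_len - read_len`, so the estimate runs marginally below the analytical value.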
http://mkweb.bcgsc.ca/coverage/?aneuploidy=12&depth=200
EXAMPLE 1
Suppose you carried out 3-fold redundant (`\rho=3`) sequencing of a haploid genome (`h=1`). 95.02% of the genome will be covered by at least one read (`P_{1,1}`) while 22.40% will be covered by exactly 3 reads (`P_{1,3} - P_{1,4}`).
EXAMPLE 2
You are sequencing a sample with a tumor content of 25% and are interested in the sequencing depth required to detect heterozygous mutations in the tumor. This scenario is equivalent to an aneuploidy = 8 genome: any given position is represented by 8 chromosome copies, so a heterozygous tumor allele is present on only 1 of them. If you sequence at `\rho = 200`, then 95% of bases will be covered at a depth of at least `\phi = 14` (`P_{8,14} = 0.9494`). If you are satisfied with `\phi = 5`, then you need only `\rho = 100`, since `P_{8,5} = 0.9580`.
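Both quoted probabilities follow directly from Eq. (1). As a self-contained sanity check (the short helper name `p` is chosen here for brevity):

```python
from math import exp, factorial

def p(rho, h, phi):
    # Eq. (1): per-copy Poisson rate rho/h, raised to the h-th power
    lam = rho / h
    return (1 - sum(lam ** k / factorial(k) * exp(-lam) for k in range(phi))) ** h

print(f"{p(200, 8, 14):.4f}")  # 0.9494
print(f"{p(100, 8, 5):.4f}")   # 0.9580
```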
ANALYTICAL vs STOCHASTIC
View plot that compares analytical vs stochastic results.
HAPLOID vs DIPLOID
View plot that compares 100x and 200x coverage of haploid and diploid genomes.
CODE
Download Perl scripts for analytical (to produce the tables below for any `\rho`) and stochastic coverage calculations.
View table for sequencing redundancy `\rho` = 1 2 3 4 5 6 7 8 9 10 20 25 50 75 100 of an aneuploidy = 6 genome.
`\rho` = 1
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 1.0000 | 0.0000 | 1.0000 |
1 | 0.0000 | 1.0000 | 0.0000 |
`\rho` = 2
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.9995 | 0.0000 | 1.0000 |
1 | 0.0005 | 0.9995 | 0.0005 |
`\rho` = 3
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.9963 | 0.0000 | 1.0000 |
1 | 0.0037 | 0.9963 | 0.0037 |
`\rho` = 4
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.9867 | 0.0000 | 1.0000 |
1 | 0.0133 | 0.9867 | 0.0133 |
`\rho` = 5
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.9673 | 0.0000 | 1.0000 |
1 | 0.0326 | 0.9673 | 0.0327 |
2 | 0.0001 | 0.9999 | 0.0001 |
`\rho` = 6
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.9362 | 0.0000 | 1.0000 |
1 | 0.0635 | 0.9362 | 0.0638 |
2 | 0.0003 | 0.9997 | 0.0003 |
`\rho` = 7
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.8934 | 0.0000 | 1.0000 |
1 | 0.1054 | 0.8934 | 0.1066 |
2 | 0.0012 | 0.9988 | 0.0012 |
`\rho` = 8
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.8405 | 0.0000 | 1.0000 |
1 | 0.1562 | 0.8405 | 0.1595 |
2 | 0.0032 | 0.9967 | 0.0033 |
3 | 0.0000 | 1.0000 | 0.0000 |
`\rho` = 9
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.7802 | 0.0000 | 1.0000 |
1 | 0.2124 | 0.7802 | 0.2198 |
2 | 0.0074 | 0.9925 | 0.0075 |
3 | 0.0000 | 1.0000 | 0.0000 |
`\rho` = 10
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.7152 | 0.0000 | 1.0000 |
1 | 0.2698 | 0.7152 | 0.2848 |
2 | 0.0148 | 0.9851 | 0.0149 |
3 | 0.0002 | 0.9998 | 0.0002 |
`\rho` = 20
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.1958 | 0.0000 | 1.0000 |
1 | 0.4391 | 0.1958 | 0.8042 |
2 | 0.2916 | 0.6349 | 0.3651 |
3 | 0.0674 | 0.9265 | 0.0735 |
4 | 0.0059 | 0.9939 | 0.0061 |
5 | 0.0002 | 0.9998 | 0.0002 |
`\rho` = 25
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0895 | 0.0000 | 1.0000 |
1 | 0.3046 | 0.0895 | 0.9105 |
2 | 0.3714 | 0.3941 | 0.6059 |
3 | 0.1887 | 0.7654 | 0.2346 |
4 | 0.0416 | 0.9541 | 0.0459 |
5 | 0.0041 | 0.9957 | 0.0043 |
6 | 0.0002 | 0.9998 | 0.0002 |
`\rho` = 50
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0014 | 0.0000 | 1.0000 |
1 | 0.0119 | 0.0014 | 0.9986 |
2 | 0.0485 | 0.0134 | 0.9866 |
3 | 0.1244 | 0.0619 | 0.9381 |
4 | 0.2155 | 0.1863 | 0.8137 |
5 | 0.2533 | 0.4018 | 0.5982 |
6 | 0.1989 | 0.6551 | 0.3449 |
7 | 0.1027 | 0.8540 | 0.1460 |
8 | 0.0345 | 0.9567 | 0.0433 |
9 | 0.0076 | 0.9913 | 0.0087 |
10 | 0.0011 | 0.9988 | 0.0012 |
11 | 0.0001 | 0.9999 | 0.0001 |
`\rho` = 75
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0000 | 0.0000 | 1.0000 |
1 | 0.0003 | 0.0000 | 1.0000 |
2 | 0.0017 | 0.0003 | 0.9997 |
3 | 0.0072 | 0.0020 | 0.9980 |
4 | 0.0224 | 0.0093 | 0.9907 |
5 | 0.0541 | 0.0316 | 0.9684 |
6 | 0.1046 | 0.0857 | 0.9143 |
7 | 0.1620 | 0.1903 | 0.8097 |
8 | 0.1987 | 0.3523 | 0.6477 |
9 | 0.1897 | 0.5509 | 0.4491 |
10 | 0.1387 | 0.7407 | 0.2593 |
11 | 0.0766 | 0.8794 | 0.1206 |
12 | 0.0316 | 0.9560 | 0.0440 |
13 | 0.0097 | 0.9876 | 0.0124 |
14 | 0.0022 | 0.9973 | 0.0027 |
15 | 0.0004 | 0.9996 | 0.0004 |
16 | 0.0000 | 0.9999 | 0.0001 |
`\rho` = 100
`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
2 | 0.0000 | 0.0000 | 1.0000 |
3 | 0.0003 | 0.0001 | 0.9999 |
4 | 0.0011 | 0.0003 | 0.9997 |
5 | 0.0037 | 0.0014 | 0.9986 |
6 | 0.0102 | 0.0051 | 0.9949 |
7 | 0.0240 | 0.0154 | 0.9846 |
8 | 0.0485 | 0.0394 | 0.9606 |
9 | 0.0844 | 0.0878 | 0.9122 |
10 | 0.1261 | 0.1722 | 0.8278 |
11 | 0.1604 | 0.2983 | 0.7017 |
12 | 0.1712 | 0.4587 | 0.5413 |
13 | 0.1513 | 0.6298 | 0.3702 |
14 | 0.1093 | 0.7811 | 0.2189 |
15 | 0.0639 | 0.8904 | 0.1096 |
16 | 0.0300 | 0.9544 | 0.0456 |
17 | 0.0113 | 0.9844 | 0.0156 |
18 | 0.0034 | 0.9957 | 0.0043 |
19 | 0.0008 | 0.9990 | 0.0010 |
20 | 0.0002 | 0.9998 | 0.0002 |
21 | 0.0000 | 1.0000 | 0.0000 |
All animals are equal, but some animals are more equal than others. —George Orwell
This month, we will illustrate the importance of establishing a baseline performance level.
Baselines are typically generated independently for each dataset using very simple models. Their role is to set the minimum level of acceptable performance and help with comparing relative improvements in performance of other models.
Unfortunately, baselines are often overlooked and, in the presence of a class imbalance, must be established with care.
Megahed, F.M., Chen, Y-J., Jones-Farmer, A., Rigdon, S.E., Krzywinski, M. & Altman, N. (2024) Points of significance: Comparing classifier performance with baselines. Nat. Methods 20.
Celebrate π Day (March 14th) and dig into the digit garden. Let's grow something.
Huge empty areas of the universe called voids could help solve the greatest mysteries in the cosmos.
My graphic accompanying How Analyzing Cosmic Nothing Might Explain Everything in the January 2024 issue of Scientific American depicts the entire Universe in a two-page spread — full of nothing.
The graphic uses the latest data from SDSS 12 and is an update to my Superclusters and Voids poster.
Michael Lemonick (editor) explains on the graphic:
“Regions of relatively empty space called cosmic voids are everywhere in the universe, and scientists believe studying their size, shape and spread across the cosmos could help them understand dark matter, dark energy and other big mysteries.
To use voids in this way, astronomers must map these regions in detail—a project that is just beginning.
Shown here are voids discovered by the Sloan Digital Sky Survey (SDSS), along with a selection of 16 previously named voids. Scientists expect voids to be evenly distributed throughout space—the lack of voids in some regions on the globe simply reflects SDSS’s sky coverage.”
Sofia Contarini, Alice Pisani, Nico Hamaus, Federico Marulli, Lauro Moscardini & Marco Baldi (2023) Cosmological Constraints from the BOSS DR12 Void Size Function. Astrophysical Journal 953:46.
Nico Hamaus, Alice Pisani, Jin-Ah Choi, Guilhem Lavaux, Benjamin D. Wandelt & Jochen Weller (2020) Journal of Cosmology and Astroparticle Physics 2020:023.
Sloan Digital Sky Survey Data Release 12
Alan MacRobert (Sky & Telescope), Paulina Rowicka/Martin Krzywinski (revisions & Microscopium)
Hoffleit & Warren Jr. (1991) The Bright Star Catalog, 5th Revised Edition (Preliminary Version).
H0 = 67.4 km/(Mpc·s), Ωm = 0.315, Ωv = 0.685. Planck collaboration Planck 2018 results. VI. Cosmological parameters (2018).
constellation figures
stars
cosmology
It is the mark of an educated mind to rest satisfied with the degree of precision that the nature of the subject admits and not to seek exactness where only an approximation is possible. —Aristotle
In regression, the predictors are (typically) assumed to have known values that are measured without error.
Practically, however, predictors are often measured with error. This has a profound (but predictable) effect on the estimates of relationships among variables – the so-called “error in variables” problem.
Error in measuring the predictors is often ignored. In this column, we discuss when ignoring this error is harmless and when it can lead to large bias that causes us to miss important effects.
Altman, N. & Krzywinski, M. (2024) Points of significance: Error in predictor variables. Nat. Methods 20.
Altman, N. & Krzywinski, M. (2015) Points of significance: Simple linear regression. Nat. Methods 12:999–1000.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nat. Methods 13:541–542.
Das, K., Krzywinski, M. & Altman, N. (2019) Points of significance: Quantile regression. Nat. Methods 16:451–452.
Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry. – Richard Feynman
Following up on our Neural network primer column, this month we explore a different kind of network architecture: a convolutional network.
The convolutional network replaces the hidden layer of a fully connected network (FCN) with one or more filters (a kind of neuron that looks at the input within a narrow window).
Even though convolutional networks have far fewer neurons than an FCN, they can perform substantially better on certain kinds of problems, such as sequence motif detection.
Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Convolutional neural networks. Nature Methods 20:1269–1270.
Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. Nature Methods 20:165–167.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nature Methods 13:541–542.