Wednesday, September 13, 2017

Sequencing human diploid genomes

Most eukaryotes, including humans, are diploid: they carry two copies of each autosome. Thousands of human genomes have been sequenced, but in almost all cases the resulting genome sequence is a mixture of sequences from homologous chromosomes. If a site is heterozygous (different alleles on each chromosome), the alleles are entered as variants.

It would be much better to have complete sequences of each individual chromosome (= a diploid sequence) in order to better understand genetic heterogeneity in the human population. Until recently, there were only two examples in the databases: the first was Craig Venter's genome (Levy et al., 2007) and the second was that of an Asian male (YH) (Cao et al., 2015).

Diploid sequences are much more expensive and time-consuming to produce than standard reference-guided sequences. That's because you can't just align sequence reads to the human reference genome to obtain position information; instead, you pretty much have to construct a de novo assembly of each chromosome. Using modern technology, it's relatively easy to generate millions of short sequence reads and match them up to the reference genome, yielding a genome sequence that combines information from both chromosomes. That's why it's now possible to sequence a genome for less than $1000 (US). De novo assemblies require much more data and far more computing power.
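To see why the standard approach produces a mixture, consider a toy sketch (invented sequences, not any real pipeline): reads from both homologs are compared against one reference, so mismatches are recorded as a flat list of variants and the information about which chromosome each allele came from (the phase) is lost.

```python
# Toy sketch (invented data): reference-based calling collapses the two
# homologous chromosomes into one sequence plus a list of variants.
reference = "ACGTACGTAC"
maternal  = "ACGAACGTAC"   # differs from reference at position 3
paternal  = "ACGTACGGAC"   # differs from reference at position 7

# Compare each homolog to the reference and record mismatches.
variants = []
for name, hap in [("maternal", maternal), ("paternal", paternal)]:
    for i, (r, h) in enumerate(zip(reference, hap)):
        if r != h:
            variants.append((i, r, h))

# The calls are stored as unordered variants; which chromosome each
# allele sits on is not represented.
print(variants)  # [(3, 'T', 'A'), (7, 'T', 'G')]
```

A true diploid assembly would instead keep the two haplotype strings separate, which is exactly what requires de novo methods.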

A group at a private company, 10x Genomics of Pleasanton, California, has developed new software to assemble diploid genome sequences. They used the technology to add seven new diploid sequences to the databases (Weisenfeld et al., 2017). The resulting assemblies are just draft genomes with plenty of gaps, but this is still a significant achievement.

Here's the abstract,
Weisenfeld, N.I., Kumar, V., Shah, P., Church, D.M., and Jaffe, D.B. (2017) Direct determination of diploid genome sequences. Genome Research, 27:757-767. [doi: 10.1101/gr.214874.116]

Determining the genome sequence of an organism is challenging, yet fundamental to understanding its biology. Over the past decade, thousands of human genomes have been sequenced, contributing deeply to biomedical research. In the vast majority of cases, these have been analyzed by aligning sequence reads to a single reference genome, biasing the resulting analyses, and in general, failing to capture sequences novel to a given genome. Some de novo assemblies have been constructed free of reference bias, but nearly all were constructed by merging homologous loci into single “consensus” sequences, generally absent from nature. These assemblies do not correctly represent the diploid biology of an individual. In exactly two cases, true diploid de novo assemblies have been made, at great expense. One was generated using Sanger sequencing, and one using thousands of clone pools. Here, we demonstrate a straightforward and low-cost method for creating true diploid de novo assemblies. We make a single library from ∼1 ng of high molecular weight DNA, using the 10x Genomics microfluidic platform to partition the genome. We applied this technique to seven human samples, generating low-cost HiSeq X data, then assembled these using a new “pushbutton” algorithm, Supernova. Each computation took 2 d on a single server. Each yielded contigs longer than 100 kb, phase blocks longer than 2.5 Mb, and scaffolds longer than 15 Mb. Our method provides a scalable capability for determining the actual diploid genome sequence in a sample, opening the door to new approaches in genomic biology and medicine.


Cao, H., Wu, H., Luo, R., Huang, S., Sun, Y., Tong, X., Xie, Y., Liu, B., Yang, H., and Zheng, H. (2015) De novo assembly of a haplotype-resolved human genome. Nature Biotechnology, 33:617-622. [doi: 10.1038/nbt.3200]

Levy, S., Sutton, G., Ng, P.C., Feuk, L., Halpern, A.L., Walenz, B.P., Axelrod, N., Huang, J., Kirkness, E.F., Denisov, G., Lin, Y., MacDonald, J.R., Pang, A.W.C., Shago, M., Stockwell, T.B., Tsiamouri, A., Bafna, V., Bansal, V., Kravitz, S.A., Busam, D.A., Beeson, K.Y., McIntosh, T.C., Remington, K.A., Abril, J.F., Gill, J., Borman, J., Rogers, Y.-H., Frazier, M.E., Scherer, S.W., Strausberg, R.L., and Venter, J.C. (2007) The diploid genome sequence of an individual human. PLoS Biol, 5:e254. [doi: 10.1371/journal.pbio.0050254]

22 comments:

  1. Very interesting and illuminating. I hadn't considered this issue before.

    One quibble: I'm not certain you're correct that most eukaryotes are diploid. Certainly most animals are, but not all. I think most fungal species spend most of their life cycles haploid. Plants alternate: every other generation is diploid, with intervening generations haploid. The haploid generations of seed plants are microscopic, so perhaps should be ignored. Quite a lot of protists are haploid.

    -jaxkayaker

  2. I'm not clear on how this works. How do you assemble contigs for the two homologous chromosomes from a mass of short reads if the differences are, on average, farther apart than the reads are long?

    Replies
    1. No. I was hoping you had read the paper and would tell me how they do it. Is it open access? Otherwise I can't read it anyway.

    2. OK, it is open access. I read it, or tried to. The methods section is so condensed that I can't be sure what it says, but I think they made libraries of long bits of chromosomes (how long I can't say, but presumably long enough that overlapping fragments can reliably be assigned to the correct matching chromatids). They then separately label the bits of each library, Illumina-sequence them all in a batch, separate the short reads by label, and assemble contigs from reads carrying the same labels. Is that right?

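The scheme described above (label the bits of each library, then separate the short reads by label) amounts to bucketing reads by a partition barcode before assembly. A minimal sketch, with invented barcode and read strings (not the Supernova algorithm itself):

```python
from collections import defaultdict

# Hypothetical (barcode, read) pairs as they might come off the sequencer;
# reads sharing a barcode came from the same partition of long molecules.
tagged_reads = [
    ("BX:AAAC", "ACGTTGCA"),
    ("BX:AAAC", "TGCATTGG"),
    ("BX:GGTA", "CCATGGAA"),
    ("BX:AAAC", "GGCATTAC"),
]

# Bucket reads by their partition barcode; downstream assembly can then
# use the buckets as long-range linking information.
by_barcode = defaultdict(list)
for barcode, read in tagged_reads:
    by_barcode[barcode].append(read)

print(len(by_barcode["BX:AAAC"]))  # 3 reads from the same partition
```

In the real protocol a partition holds several long molecules, so the barcode narrows a read's origin down to a small set of fragments rather than a single one.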
    3. Yeah, I think that's what they did. I don't understand the methodology either and their description of what they did in the rest of the paper is confusing.

      This is a general theme in scientific papers these days. Even when the subject is something I'm familiar with, it's impossible to figure out exactly what the authors did. This is especially true of genomics papers, where the algorithms used (or developed) are absolutely crucial. The average informed reader has no idea whether the results are accurate. (Also true of phylogeny papers.)

      In this case, it's even hard to figure out how much of the seven genomes were sequenced. I think it's about 80%.

    4. That looks about right. Break the genome into long fragments, then do short-read sequencing of each long fragment, using tags to say which short reads belong to which long fragment.

      There is some funky maths around the fact that they sequenced each genome to a depth of 56x, but each individual long fragment was only sequenced to a depth of 0.36x.

      By analogy to earlier sequencing methods, it's like making a BAC library at 150x genome depth, and then sequencing a third of each BAC. Presumably it's more important to have a larger number of molecules from any given region, than to have perfect sequence for any given molecule.

      That necessitates some very complicated assembly algorithms though!

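The funky maths in the exchange above can be checked on the back of an envelope using the figures quoted (56x genome depth, 0.36x per long fragment): dividing one by the other gives the number of long molecules that must overlap any given locus, which is where the BAC-library analogy comes from.

```python
genome_depth = 56.0        # total short-read depth across the genome
per_molecule_depth = 0.36  # read depth contributed by any one long fragment

# If each overlapping molecule contributes 0.36x, then reaching 56x overall
# requires roughly this many molecules spanning a typical locus.
molecules_per_locus = genome_depth / per_molecule_depth
print(round(molecules_per_locus))  # 156
```

That is, each position is covered by about 156 partially sequenced long molecules, consistent with the "BAC library at 150x genome depth" comparison.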
  3. It is important to have long assemblies to characterize genetic variation in populations, yes.

    But to just find out what a typical human sequence is, it isn't critical to do all the sequencing from one diploid genome and resolve it into pairs of chromosomes. Since people actually come together and have offspring, and in the process haplotypes from the two parents get recombined with each other, all of us end up having genomes that are a patchwork combination of chunks from various ancestors.

    People used to triumphantly raise as an objection to sequencing the human genome "but it's not all from the same individual!". As if this doomed it to meaninglessness. To see how humans differ from (say) chimps, a genome in which different parts come from different individuals is fine.

    To characterize genetic variation, I do agree with the need to have long sequences (not necessarily whole genomes). But getting diploid sequences with resolved phase is less critical. If we have a good sample of long haplotypes from one population, we can count on people's mating activity to guarantee that pairs of random haplotypes will be a good prediction of what the diploid genomes will look like.

    Replies
    1. The authors address this issue in their paper. Did you read it?

      I agree that diploid sequences won't be important in most cases but one important case is when you want to look at how recombination has shuffled the genome.

    2. I was going off-topic into the worry that people used to have that we were sequencing the human genome by getting different parts of the sequence from different individuals, and that this was supposedly a problem.

      I agree that there are good and valid reasons for looking at samples of long genotypic stretches from individuals.

    3. For species delimitation, it is much more informative if you can get pairs of sequences from each individual than just a bunch of sequences from some unknown number of species/populations.

      Also, phylogenetic analysis of allopolyploids is much improved by getting multiple sequences per individual.

    4. @Graham Jones: True, the diploidy helps a lot in deciding whether you have one species or a mixture of two species. But you don't need whole genomes at all -- even a few dozen loci will do the job, and 100 loci would be more than enough.

    5. @Joe: If the loci are long enough, yes. I think a few dozen loci, each of length 100kbp would work in most cases. Certainly you don't need whole genomes, but the desirable sequences are long enough to run up against the problem John Harshman mentions (and which the paper perhaps solves).

    6. Yes, for most purposes I think that ploidy is overkill. But for human disease studies there's a fair case to be made that one reason GWAS has proven so underwhelming is its inability to include phased haplotype sequence information. Personally, I'm glad to see this issue being raised, as human geneticists have become all too comfortable ignoring the gaping genomic hole that lies at the center of much of their big data.

  4. As for the authors' conclusion that

    Our method provides a scalable capability for determining the actual diploid genome sequence in a sample, opening the door to new approaches in genomic biology and medicine.

    This is way too grandiose. Almost anything that can be done with complete diploid genome sequences can already be done with population samples of moderately long haplotypes.

    Replies
    1. This sort of excessive hype is becoming all too common these days. I suppose we're all getting used to it so we just ignore it but that's really not an excuse.

      This is not how science is supposed to be done.

  5. I would have assumed they did this to look for 'haplotypes': specific combinations of sometimes-distant SNPs that cause disease or lead to resistance, etc. Presumably many of these must occur in cis (rather than trans).

  6. Hi Larry,
    It's been 60 years since Crick presented the central dogma, so I thought you'd like to say something about it:
    http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2003243

    Replies
    1. I read Matthew's article and Jerry Coyne's post. I'd like to write up a post for Sandwalk but I'm very busy these days. I leave for Edinburgh (Scotland) tomorrow.
