This will be the first in a new series of blog posts discussing my thoughts on the utility of genomes at various stages of completion (in terms of both genome assembly and annotation). These posts will mostly address issues that pertain to eukaryotic genomes...are there any other kind? ;-)
I often find myself torn between two conflicting viewpoints about the utility of unfinished genomes. First, let's look at the any-amount-of-sequence-is-better-than-no-sequence-at-all argument. This is clearly true in many cases. If you sequence only 1% of a genome, and that 1% contains something you're interested in (a gene, repeat, binding site, sequence variant, etc.), then you may well think that the sequencing effort was tremendously useful.
Indeed, one of my all-time favorite papers in science is an early bioinformatics analysis of the gene sequences available at the time. Published way back in 1980, this paper (Codon catalog usage and the genome hypothesis) studied "all published mRNA sequences of more than about 50 codons". Today, that would be a daunting exercise. Back then, the dataset comprised just 90 genes! Most of these were viral sequences, with just six vertebrate species represented (and only four sequences from human).
The abstract of this paper concluded:
Each gene in a genome tends to conform to its species' usage of the codon catalog; this is our genome hypothesis.
This mostly remains true today, and the original work on this tiny dataset established a pattern that spawned an entire sub-discipline of genomics, that of codon-usage bias (now with over 7,000 publications). So clearly, you can do lots of great and useful science with only a tiny amount of genome sequence information. So what's the problem?
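To make the concept a bit more concrete, here is a minimal Python sketch (my illustration, not anything from the 1980 paper) of how one might tally codon usage for a couple of coding sequences and compare them. The sequences are invented and assumed to be in-frame.

```python
from collections import Counter

def codon_usage(cds):
    """Tally codon counts for an in-frame coding sequence (DNA, 5'->3')."""
    cds = cds.upper().replace("U", "T")
    usable = len(cds) - (len(cds) % 3)          # ignore any trailing partial codon
    return Counter(cds[i:i + 3] for i in range(0, usable, 3))

def relative_usage(counts):
    """Convert raw codon counts into frequencies (fraction of all codons)."""
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

# Two made-up 'genes' from the same hypothetical species; under the genome
# hypothesis we would expect their codon preferences to look broadly similar.
gene_a = "ATGGCCGCCGAAGAACTGCTGAAGTAA"
gene_b = "ATGGCTGCCGAGGAACTTCTGAAATAA"

for name, seq in [("gene_a", gene_a), ("gene_b", gene_b)]:
    usage = relative_usage(codon_usage(seq))
    print(name, {codon: round(freq, 2) for codon, freq in sorted(usage.items())})
```

In a real analysis you would normalize within synonymous codon families (e.g. relative synonymous codon usage) rather than over all codons, but the point stands: the usage table is a property of the species, not just of individual genes.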
pause-to-switch-hats-to-argue-the-other-point
Well, 1% of a genome may be better than 0%, and 2% is better than 1%, and so on. But I want 100% of a genome (yes, I'm greedy like that). However, I begrudgingly accept that generating a complete and accurate genome assembly (not to mention a complete set of gene annotations) currently falls into the nice-idea-kid-but-we-can't-all-be-dreamers category.
The danger in not getting to 100% completion is that there is a perception, by scientists as well as the general public, that these genomes are indeed all finished. This disconnect between the actual state of completion and the perceived state of completion can lead to reactions of the wait-a-minute-I-thought-this-was-meant-to-be-finished!?! variety. Indeed, it can be highly confusing when people go to download the genome of their species of interest, under the impression that the genome was 'finished' many years ago, only to discover that they can't find what they're looking for.
Someone might be looking for their favorite gene annotation, but maybe this 'finished' genome hasn't actually been annotated. Or maybe it's been annotated by four different gene finders and left in a state where the user has to decide which ones to trust. Maybe the researcher is interested in chromosome evolution and is surprised to find that the genome doesn't consist of chromosome sequences, just scaffolds. Maybe they find that there are two completely different versions of the same genome, assembled by different groups. Or maybe they find that the download link provided by the paper no longer works and they can't even find the genome in question.
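As an aside, this is the kind of situation where a quick sanity check on whatever you've downloaded can save a lot of confusion. Below is a rough Python sketch that counts the sequences in an assembly FASTA and computes the N50; the file name is hypothetical and the script assumes a plain, uncompressed FASTA.

```python
def fasta_lengths(path):
    """Yield the length of each sequence in a plain (uncompressed) FASTA file."""
    length = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if length:
                    yield length
                length = 0
            else:
                length += len(line.strip())
    if length:
        yield length

def n50(lengths):
    """N50: the length L such that sequences of length >= L cover at least half the assembly."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

# 'my_genome.fa' is a placeholder for whatever assembly you actually downloaded.
lengths = list(fasta_lengths("my_genome.fa"))
print(f"{len(lengths)} sequences, {sum(lengths):,} bp total, N50 = {n50(lengths):,} bp")
```

If this reports tens of thousands of sequences with a modest N50, you are looking at scaffolds (or contigs), not chromosomes, whatever the accompanying paper may have implied.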
The great biologist Sydney Brenner has often spoken of the need to meet the CAP criteria in efforts such as genome sequencing. What are these criteria?
- C - Complete: if you're going to do it, do a thorough job so that someone doesn't have to come along later to redo it.
- A - Accurate: this is kind of obvious, but there are so many published genomes out there that are far from accurate.
- P - Permanent: do it once, and forever.
The last point is probably not something that is thought about as much as the first two criteria. It relates to where these genomes end up being stored and the file formats that people use, but it also applies to other, subtler issues. For example, let's assume that research group 'X' has sequenced a genome to an impressive depth but made a terrible assembly. As long as their raw reads remain available, someone else can (in theory) attempt a better assembly, or attempt to remake the exact same assembly (science should be reproducible, right?).
However, reproducibility is not always easy in bioinformatics. Even if all of the methodologies are carefully documented, the software involved may no longer be available, or it may only run on an architecture that no longer exists. If you are attempting to make a better genome assembly, you could face issues if some critical piece of information was missing from the SRA Experiment metadata.
A potentially more problematic situation would be if the metadata was incorrect in some way (e.g. a wrong insert size was listed).
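To illustrate the kind of defensive check I have in mind, here is a small Python sketch that flags missing or internally inconsistent library metadata before it gets used for a reassembly. The field names and example values are hypothetical and are not the actual SRA schema.

```python
# Hypothetical metadata fields for one sequencing library; these names are
# invented for illustration and do not correspond to the real SRA schema.
REQUIRED_FIELDS = ("platform", "layout", "insert_size", "read_length")

def check_library_metadata(record):
    """Return a list of problems found in one library's metadata dictionary."""
    problems = [f"missing field: {field}" for field in REQUIRED_FIELDS if field not in record]
    insert_size = record.get("insert_size")
    read_length = record.get("read_length")
    if insert_size is not None and read_length is not None and insert_size < read_length:
        # An insert shorter than a single read is almost certainly a recording error.
        problems.append(f"suspicious insert_size ({insert_size}) < read_length ({read_length})")
    return problems

library = {"platform": "ILLUMINA", "layout": "PAIRED",
           "insert_size": 150, "read_length": 250}
for message in check_library_metadata(library) or ["metadata looks plausible"]:
    print(message)
```

Of course, a check like this only catches values that contradict each other; a plausible-looking but wrong insert size will sail straight through, which is exactly why accurate metadata matters in the first place.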
In subsequent posts, I'll explore how different genomes hold up to these criteria. I will also suggest my own 'five levels of genome completeness' criteria (for genome sequences and annotations).