More madness with MAPQ scores (a.k.a. why bioinformaticians hate poor and incomplete software documentation)

I have previously written about the range of mapping quality scores (MAPQ) that you might see in BAM/SAM files, as produced by popular read mapping programs. A very quick recap:

  1. Bowtie 2 generates MAPQ scores in the range 0–42
  2. BWA generates MAPQ scores in the range 0–37
  3. Neither piece of software describes the range of possible scores in its documentation
  4. The SAM specification defines the possible range of the MAPQ score as 0–255 (though 255 should indicate that mapping quality was not available)
  5. I advocated that you should always take a look at your mapped sequence data to see what range of scores is present before doing anything else with your BAM/SAM files (see the short sketch after this list)
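
As a minimal sketch of that kind of sanity check, here is how you might tally the MAPQ values present in a BAM file with Python and the pysam library (pysam is assumed to be installed, and 'mapped_reads.bam' is a hypothetical file name):

```python
#!/usr/bin/env python
# Tally the MAPQ values present in a BAM file before doing anything
# else with it. Assumes pysam is installed; 'mapped_reads.bam' is a
# hypothetical example file name.
from collections import Counter

import pysam

mapq_counts = Counter()

with pysam.AlignmentFile("mapped_reads.bam", "rb") as bam:
    for read in bam:
        if not read.is_unmapped:
            mapq_counts[read.mapping_quality] += 1

# Print each observed MAPQ value and how many reads have it
for mapq in sorted(mapq_counts):
    print(mapq, mapq_counts[mapq])
```

The same tally can be produced on the command line by extracting column 5 (the MAPQ field) from the output of samtools view.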

So what is my latest gripe? Well, I've recently been running TopHat (version 2.0.13) to map some RNA-Seq reads to a genome sequence. TopHat uses Bowtie (or Bowtie 2) as the tool to do the initial mapping of reads to the genome, so you might expect it to generate the same range of MAPQ scores as the standalone version of Bowtie.

But it doesn't.

From my initial testing, it seems that the BAM/SAM output file from TopHat only contains MAPQ scores of 0, 1, 3, or 50. I find this puzzling and incongruous. Why produce only four MAPQ scores (compared to >30 different values that Bowtie 2 can produce), and why change the maximum possible value to 50? I turned to the TopHat manual, but found no explanation regarding MAPQ scores.

Turning to Google, I found this useful Biostars post which suggests that five MAPQ values are possible with TopHat (you can also have a value of 2 which I didn't see in my data), and that these values correspond to the following:

  • 0 = maps to 10 or more locations
  • 1 = maps to 4-9 locations
  • 2 = maps to 3 locations
  • 3 = maps to 2 locations
  • 50 = unique mapping

The post also reveals that, confusingly, TopHat previously used a value of 255 to indicate uniquely mapped reads. However, I then found another Biostars post which says that a MAPQ score of 2 isn't possible with TopHat, and that the meanings of the scores are as follows:

  • 0 = maps to 5 or more locations
  • 1 = maps to 3-4 locations
  • 3 = maps to 2 locations
  • 255 = unique mapping

This post was in reference to an older version of TopHat (1.4.1) which probably explains the use of the 255 score rather than 50. The comments on this post reflect some of the confusion over this topic. Going back to the original Biostars post, I then noticed a recent comment suggesting that MAPQ scores of 24, 28, 41, 42, and 44 are also possible with TopHat (version 2.0.13).
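
To make the inconsistency concrete, here are the two competing interpretations written out as Python lookup tables. This is just a sketch of the speculation above; neither mapping is confirmed by TopHat's documentation, and the extra values reported in that recent comment don't fit either scheme:

```python
# Two *unofficial* interpretations of TopHat MAPQ values, taken from
# separate Biostars posts; neither is confirmed by TopHat's own
# documentation, and they disagree with each other.

# First post (TopHat 2.x): MAPQ value -> number of mapping locations
TOPHAT2_MAPQ_TO_HITS = {
    0: "10 or more locations",
    1: "4-9 locations",
    2: "3 locations",
    3: "2 locations",
    50: "unique mapping",
}

# Second post (TopHat 1.4.1): an older, conflicting interpretation
TOPHAT141_MAPQ_TO_HITS = {
    0: "5 or more locations",
    1: "3-4 locations",
    3: "2 locations",
    255: "unique mapping",
}
```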

As this situation shows, when there is no official explanation that fully describes how a piece of software should work, it can lead to mass speculation by others. Such speculation can sometimes be inconsistent, which ends up making things even more confusing. This is what drives bioinformaticians crazy.

I find it deeply frustrating when so much of this confusion could be removed with better documentation by the people who developed the original software. In this case the documentation needs just one additional paragraph; something along the lines of…

Mapping Quality scores (MAPQ)
TopHat outputs MAPQ scores in its BAM/SAM files with possible values of 0, 1, 2, 3, or 50. The first four values indicate reads that map to 10 or more, 4–9, 3, or 2 locations respectively, whereas a value of 50 represents a unique match. Please note that older versions of TopHat used a value of 255 for unique matches. Further note that the standalone versions of Bowtie and Bowtie 2 (used by TopHat) produce a different range of MAPQ scores (0–42).

Would that be so hard?

New paper provides a great overview of the current state of genome assembly

The following paper by Stephen Richards and Shwetha Murali has just appeared in the journal Current Opinion in Insect Science:

Best practices in insect genome sequencing: what works and what doesn’t

In some ways I wish they had chosen a different title, as the focus of this paper is much more on genome assembly than on genome sequencing. Furthermore, it provides a great overview of all of the current strategies in genome assembly, so it should be of interest even to non-insect researchers who want to know the best way of putting a genome together. Here is part of the legend from a very informative table in the paper:

Table 1 — De novo genome assembly strategies:
Assembly software is designed for a specific sequencing and assembly strategy. Thus sequence must be generated with the assembly software and algorithm in mind, choosing a sequence strategy designed for a different assembly algorithm, or sequencing without thinking about assembly is usually a recipe for poor un-publishable assemblies. Here we survey different assembly strategies, with different sequence and library construction requirements.

Bioinformatics software names: the good, the bad, and the ugly

The Good

Given that I spend so much time criticising bad bioinformatics names, I should probably make more of an effort to flag those occasional names that I actually like! Here are a few:

RNAcentral: an international database of ncRNA sequences

A good reminder that a bioinformatics tool doesn't have to use acronyms or initialisms! The name is easy to remember and makes it fairly obvious what you might expect to find in this database.


KnotProt: a database of proteins with knots and slipknots

A simple, clever, and memorable name. And once again, no acronym!


WormBase and FlyBase

Some personal bias here (I spent four years working at WormBase), but you have to admire the simplicity and elegance of the names. 'WormBase' sort of replaced its predecessor ACeDB (A Caenorhabditis elegans DataBase). I say 'sort of' because ACeDB was the name for both the underlying software (which continued to be used by WormBase) and the specific instance of the database that contained C. elegans data. This led to the somewhat confusing situation (circa 2000) of there being many public ACeDB databases for many different species, only one of which was the actual ACeDB resource with worm data.


The Bad

These are all worthy of a JABBA award:

The human DEPhOsphorylation database DEPOD: a 2015 update

I find it amusing that they couldn't even get the acronym correctly capitalized in the title of the paper. As the abstract confirms, the second 'D' in 'DEPOD' comes from the word 'database', which should therefore be capitalized. So it is another tenuous selection of letters to form the name of the database, but I guess at least the name is unique, and Google searches for 'depod database' don't have any trouble finding this resource.


IMGT®, the international ImMunoGeneTics information system® 25 years on

It's a registered trademark and that little R appears at every mention of the name in the paper. This initialism is the first I've seen where all letters of the short name come from one word in the full name.


DoGSD: the dog and wolf genome SNP database

I have several issues with this:

  1. It's a poor acronym (not explicitly stated in the paper): Dog and wolf Genome Snp Database
  2. The word 'dog' contributes a 'D' to the name, but then you end up with 'DoG' in the final name. It looks odd.
  3. What did the poor wolf do to not get featured in the database name?
  4. The lower-case 'o' means that you can potentially read this as dog-ess-dee or do-gee-ess-dee.
  5. Why focus the name on just two types of canine species? What if they want to add SNPs from jackals or coyotes? Are they going to change the name of the database? They could have just called this something like 'The Canine SNP Database' and avoided all of these problems.

The Ugly

Maybe not JABBA-award winners, but they come with their own problems:

MulRF: a software package for phylogenetic analysis using multi-copy gene trees

Sometimes using the lower-case letter 'L' in any name is just asking for trouble. Depending on the font, it can look like the number 1 or even a pipe character '|'. The second issue concerns the pronounceability of this name. Mull-urf? Mull-ar-eff? It doesn't trip off the tongue.


DupliPHY-Web: a web server for DupliPHY and DupliPHY-ML

This tool is all about looking for gene duplications from a phylogenetic perspective, hence 'Dupli' + 'PHY'. I actually think this is quite a good choice of name, except for the inconsistent and visually jarring use of mixed case. Why not just 'Dupliphy'?


ChiTaRS 2.1—an improved database of the chimeric transcripts and RNA-seq data with novel sense–antisense chimeric RNA transcripts

It's not spelt out in detail, but one can assume that 'ChiTaRS' derives from the following letters: CHImeric Transcripts And Rna-Seq data. So it is not a bogus bioinformatics acronym in that respect. But I find it visually unappealing; mixed capitalization like this never scans well.


DoRiNA 2.0—upgrading the doRiNA database of RNA interactions in post-transcriptional regulation

The paper doesn't explicitly state how the word 'DoRiNA' is formed other than saying:

we have built the database of RNA interactions (doRiNA)

So one can assume that those letters are derived from 'Database Of Rna INterActions'. On the plus side, it is a unique name that is easily searchable with Google. On the negative side, it seems strange to have 'RNA' as part of your database name, only with an additional letter inserted in between.

Metassembler: Merging and optimizing de novo genome assemblies

There's a great new paper on bioRxiv by Alejandro Hernandez Wences and Michael Schatz. They directly address something I wondered about as we were running the Assemblathon contests, namely: can you combine some of the submitted assemblies to make an even better assembly? Well, the answer seems to be a resounding 'yes'.

For each of three species in the Assemblathon 2 project we applied our algorithm to the top 6 assemblies as ranked by the cumulative Z-score reported in the paper…

We evaluated the correctness and contiguity of the metassembly at each merging step using the metrics used by the Assemblathon 2 evaluation…

In all three species, the contiguity statistics are significantly improved by our metassembly algorithm
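
For anyone unfamiliar with the cumulative Z-score ranking mentioned in the first quote: for each evaluation metric, every assembly's value is converted to a Z-score relative to the other assemblies, and those Z-scores are then summed per assembly. Here is a minimal Python sketch of the idea, using made-up assembly names and metric values rather than anything from the paper (and assuming, for simplicity, that higher is better for every metric):

```python
# Sketch of a cumulative Z-score ranking in the style of Assemblathon 2.
# The assembly names and metric values below are invented for illustration.
from statistics import mean, stdev

# metric -> {assembly: value}; higher is assumed to be better here
metrics = {
    "NG50":        {"asm_A": 2.1e6, "asm_B": 1.4e6, "asm_C": 0.9e6},
    "genes_found": {"asm_A": 21500, "asm_B": 22800, "asm_C": 20100},
}

totals = {name: 0.0 for name in metrics["NG50"]}
for values in metrics.values():
    m, s = mean(values.values()), stdev(values.values())
    for name, value in values.items():
        totals[name] += (value - m) / s  # this assembly's Z-score for this metric

# Rank assemblies by cumulative Z-score, best first
for name, z in sorted(totals.items(), key=lambda item: -item[1]):
    print(name, round(z, 2))
```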

Hopefully their Metassembler tool will be useful in improving many other poor quality assemblies that are out there!