Is the naming of bioinformatics software getting out of CoNtRol?

There is a new paper in the journal Bioinformatics. This is the title of that paper:

CoNtRol: an open source framework for the analysis of chemical reaction networks

Regular readers will know that I have no stomach for bogus bioinformatics acronyms and initialisms, so is CoNtRol worthy of a JABBA award? Well, I can't give it such an award because CoNtRol is not an acronym or an initialism. At least I don't think it is.

The abstract describes CoNtRol as a web-based framework for analysis of chemical reaction networks (CRNs). So even though the capitalized letters in CoNtRol give you CNR, maybe it's really all about CRNs???

The CoNtRol website makes things a little more confusing by starting their introduction with the text: CoNtRol (CRN tool) is a web application. Are you now thinking what I'm thinking? Is CoNtRol the world's first bioinformatics software based on an anagram (CoNtRol = CRN tool)? If this isn't the reason, then I can only assume that someone decided to just randomly capitalize various letters in the name.

Whatever the reason for the name, the more practical issue is that a tool with a name like this can be hard to find with web search engines. It doesn't show up on the first page of Google results if you search for control bioinformatics web app. Nor does it show up if you search for control chemical network app. There is something to be said for giving software novel names.

 

101 questions with a bioinformatician #8: Nick Loman

This post is part of a series that interviews some notable bioinformaticians to get their views on various aspects of bioinformatics research. Hopefully these answers will prove useful to others in the field, especially to those who are just starting their bioinformatics careers.


Nick Loman is an Independent Research Fellow in the Institute of Microbiology and Infection at the University of Birmingham, UK. You may know Nick for his involvement in producing the only world map of high-throughput sequencers (at least I'm assuming that this is the only map of its kind…I'm too lazy to check). Maybe you know him for the exclusive interview that he managed to secure with some of Oxford Nanopore's head honchos at the 2012 AGBT meeting (the scene of a certain wowser moment in high-throughput sequencing). Or maybe you just know Nick for his epicurean passions.

I like to think of Nick as the Jack of Clubs in the deck of cards that is the bioinformatics blogging community (this works as a metaphor, right?). Actually, on some days he's more like the Ten of Diamonds, but then he goes and writes great pieces like this (co-authored with fellow 101 alumnus Mick Watson):

If you are interested in bioinformatics, and if you want to keep up with the latest developments in high-throughput sequencing technology, then you really should be keeping a close eye on people like Nick (though not too close, give the man some privacy!).

You can find out more about Nick by following him on Twitter (@pathogenomenick) or keeping up with his excellent blog (Pathogens: Genes and Genomes). And now, on to the 101 questions...

 

 

001. What's something that you enjoy about current bioinformatics research?

I mainly enjoy the daily battles with crashing servers with cryptic memory errors, incompatible software versions, buggy scripts (mine and others) and full hard drives. 

Hah! That was the famous British sarcasm you will have read about.

The obvious answer is that the projects I get involved in are incredibly diverse, and I get to interact with many interesting people, because sequencing and bioinformatics skills are in such demand.

Another thing I enjoy is that I can reach out, via Twitter and blogging, to discuss with all the great computational biologists in the world struggling with the same problems. I have no idea what it must be like to feel isolated and slog away in a windowless laboratory without that kind of communication.

 

010. What's something that you *don't* enjoy about current bioinformatics research?

I whinge quite a lot on my Twitter feed, but I wish bioinformaticians (including myself) wouldn't spend so much time reinventing the wheel (Keith: it's bioinformatics sin number 1 on this list), and would instead try to muck in together to solve really important problems.

A model of bioinformatics research a bit more like the Linux kernel might work. Imagine an international network of committed bioinformaticians working together. We would achieve great things quickly. But the academic model of recognition is broken for things like this, where everyone needs their own papers to justify their positions.

 

011. If you could go back in time and visit yourself as an 18 year old, what single piece of advice would you give yourself to help your future bioinformatics career?

I guess I would have got into the details of Bayesian statistics and machine learning earlier. These skills are very useful and I am only picking them up properly just now (I am on a Medical Research Council Training Fellowship).

Probably would have slipped myself a copy of Grays Sports Almanac too.

More prosaically: I discovered GNU parallel way too late, and it is an essential tool. And screen.

 

100. What's your all-time favorite piece of bioinformatics software, and why?

There's very little you can't get done with BLAST. It has its funny little quirks, but you know where you are with it. 

 

 

101. IUPAC describes a set of 18 single-character nucleotide codes that can represent a DNA base: which one best reflects your personality?

Well, it would be rather British to suggest T. But I prefer coffee.

Developing CEGMA: how working on old code can drive you mad and some tips on how to avoid this

Today marks the day when the original paper that describes the CEGMA software (Core Eukaryotic Genes Mapping Approach) becomes my most cited paper (as tracked by Google Scholar):

Does this fact make me happy? Not really. In fact, you may be surprised to learn that I find working on CEGMA a little bit depressing. I say this on a day when, purely coincidentally, I am releasing a new version of CEGMA. Why the grumpy face Keith? (I hear you ask). Let's take a trip down memory lane to find out why:

  • Early 2004: A paper is published that describes the KOGs database of euKaryotic Orthologous Groups.
  • Early 2005: I become the first person to join the Korf Lab after Ian Korf moves to Davis in 2004.
  • Mid 2005: Genís Parra becomes the second person to join the lab.
  • 2005–2006: The three of us work on the idea which became CEGMA. This project was primarily driven forward by Genís; during this time our initial CEGMA manuscript was rejected by two journals.
  • Late 2006: Our CEGMA paper is accepted!
  • Early 2007: CEGMA paper is published — as an aside, the URL for CEGMA that we include in the paper still works!
  • 2007: We work on the CEGMA spin-off idea: that it can be used to assess the 'gene space' of draft genomes.
  • 2008: We write a new manuscript, get rejected twice (again), and finally get accepted in late 2008.
  • Early 2009: The 2nd CEGMA paper gets published!
  • Mid 2010: Genís leaves the lab.

By the time Genís left Davis, our original CEGMA paper had been cited 11 times (one of which was by our second CEGMA paper). I think that we had all expected the tool to have been a little more popular, but our expectations had been dampened somewhat by the difficulties in getting the paper published. Anyway, no sooner had Genís left the lab than the paper started getting a lot more attention:

Growth in citations to the two CEGMA papers.

This was no doubt due to its use as a tool in the Assemblathon 1 paper (a project, starting in late 2010, in which I was also involved). However, any interest generated from the Assemblathon project probably just reflected the fact that everyone and their dog had started sequencing genomes and producing — how best to describe them? — 'assemblies of questionable quality'.

This is also about the time when I started to turn into this guy:

This was because it had fallen on me to continue to deal with all CEGMA-related support requests. Until 2010, there hadn't really been any support requests because almost no-one was using CEGMA. This changed dramatically and I started to receive lots of emails that:

  • Asked questions about interpreting CEGMA output
  • Reported bugs
  • Asked for help installing CEGMA
  • Suggested new features
  • Asked me to run CEGMA for them

I started receiving lots of requests of that last kind because CEGMA is admittedly a bit of a pig to install (on non-Mac-based Unix systems at least). In the last 6 months alone, I've run CEGMA 80 times for various researchers who (presumably) are unable to install it themselves.

After the version 2.3 release — necessary to transition to the use of NCBI BLAST+ instead of WU-BLAST — and 2.4 release — necessary to fix the bugs I introduced in v2.3! — I swore an oath never to update CEGMA again. This was mostly because we no longer have any money to work on the current version of CEGMA. However, it was also because it is not much fun to spend your days working on code that you barely understand.

It should be said that we do have plans for a completely new version of CEGMA that will — subject to our grant proposal being successful — be redeveloped from the ground up, and will include many completely new features. Perhaps most importantly — for me at least — a version 3.0 release of CEGMA will be much more maintainable.

And now we get to the main source of my ire when dealing with CEGMA. It is built on a complex web of Perl scripts and modules, which make various system calls to run BLAST, genewise, geneid, and hmmsearch (from HMMER). I still find the scripts difficult to understand — I didn't write any of the original code — and therefore I find it almost impossible to maintain. One of the reasons I had to make this v2.5 update is that the latest versions of Perl have deprecated a particular feature, causing CEGMA to break for some people.

Most fundamentally, the biggest problem with CEGMA (v2.x) is that it is centered around use of the KOGs database, a resource that is now over a decade old. This wasn't an issue when we were developing the software in 2005, but it is an issue now. Our plans for CEGMA v3.0 will address this by moving to a much more modern source of orthologous group information.

In making this final update to v2.x of CEGMA, I've tried adopting some changes to bring us up to date with the modern age. Although the code remains available from our lab's website, I've also pushed the code to GitHub (which wasn't in existence when we started developing CEGMA!). In doing this, I've also taken the step to give our repository a DOI and therefore make the latest version citable in its own right. This is done through use of Zenodo.

Although I hope that this is the last thing that I ever have to write about CEGMA v2.x, it is worth reflecting on some of the ways that the process of managing and maintaining CEGMA could have been made easier:

  1. Maintain documentation for your code that is more than just an installation guide and a set of embedded comments. From time to time, I've had some help from Genís in understanding how the code is working, but the complexity of this software really requires a detailed document that explains how and why everything works the way it does. There have been times when I have been unable to help people with CEGMA-related questions because I still can't understand what some of the code is doing.
  2. Start a FAQ file from day one. This is something that, foolishly, I have only recently started. I could have probably saved myself many hours of email-related support if I had sorted this out earlier.
  3. Put your code online for others to contribute to. Although GitHub wasn't around when we started CEGMA, I could have put the code up there at some point before today!
  4. Don't assume that people will use a mailing list for support, or even contact you directly. One thing I did do many years ago was to set up a CEGMA mailing list. However, I'm still surprised that many people just report their CEGMA problems on sites like SEQanswers or BioStars. I probably should have started checking these sites earlier.
  5. Don't underestimate how much time can be spent supporting software! I probably should have started setting aside a fixed portion of time each week to deal with CEGMA-related issues, rather than trying to tackle things as and when they landed on my doorstep.
  6. Assume that you will not be the last person to manage a piece of software. There are many things you can do to start good practices very early on, including using email addresses for support which are not tied to a personal account, ensuring that your changes to the code base have meaningful (and helpful) commit messages, and making sure that more than one person has access to wherever the code is going to end up.

In some ways it is very unusual for software to have this type of popularity where people only start using it several years after it is originally developed. But as CEGMA shows, it can happen, and hopefully these notes will serve as a bit of a warning to others who are developing bioinformatics software.

Fun with an 'error message' from NCBI BLAST+

Consider this very simple DNA sequence in FASTA format:

>sequence1
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
ttttttagaaaaattatttttaagaatttttcattttaggaatattgtta
tttcagaaaatagctaaatgtgatttctgtaattttgcctgccaaattcg
tgaaatgcaataaaaatctaatatccctcatcagtgcgatttccgaatca
gtatatttttacgtaatagcttctttgacatcaataagtatttgcctata
tgactttagacttgaaattggctattaatgccaatttcatgatatctagc
cactttagtataattgtttttagtttttggcaaaactattgtctaaacag

If you try converting this to a BLAST database using the 'makeblastdb' command from the latest version of NCBI's BLAST+ suite, you will see the following line included in the output:

Error: (1431.1) FASTA-Reader: Warning: FASTA-Reader: First data line in seq is about 100% ambiguous nucleotides (shouldn't be over 40%)

Now consider what happens if you run the same makeblastdb command on this sequence (which just switches the first two lines of sequence1):

>sequence2
ttttttagaaaaattatttttaagaatttttcattttaggaatattgtta
nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
tttcagaaaatagctaaatgtgatttctgtaattttgcctgccaaattcg
tgaaatgcaataaaaatctaatatccctcatcagtgcgatttccgaatca
gtatatttttacgtaatagcttctttgacatcaataagtatttgcctata
tgactttagacttgaaattggctattaatgccaatttcatgatatctagc
cactttagtataattgtttttagtttttggcaaaactattgtctaaacag

Although this sequence has the exact same proportion of As, Cs, Gs, Ts, and Ns, it does not produce the error message. What about the following sequence?

>sequence3
nnnac
ttttttagaaaaattatttttaagaatttttcattttaggaatattgtta
tttcagaaaatagctaaatgtgatttctgtaattttgcctgccaaattcg
tgaaatgcaataaaaatctaatatccctcatcagtgcgatttccgaatca
gtatatttttacgtaatagcttctttgacatcaataagtatttgcctata
tgactttagacttgaaattggctattaatgccaatttcatgatatctagc
cactttagtataattgtttttagtttttggcaaaactattgtctaaacag

Well, surprise surprise, this sequence produces the error message again (though it now tells you that the first line 'is about 60% ambiguous nucleotides'). You will still see the same message even if you added 1 billion As, Cs, Gs, and Ts on to the end of sequence 3. This seems to be the code responsible for the error message (taken from this page):
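I haven't reproduced the snippet here, but the behaviour it implements can be sketched in Python. Note that this is my own reconstruction based on the behaviour observed above (the first-line-only check, the 40% threshold, and the >3 bp condition), not the actual NCBI C++ code:

```python
# A rough Python sketch of the FASTA-Reader check, reconstructed from
# the observed behaviour -- NOT the actual NCBI BLAST+ source code.

AMBIG = set("RYSWKMBDHVN")  # IUPAC ambiguity codes (everything except A, C, G, T/U)

def check_first_data_line(line):
    """Mimic the check: only the first line of sequence data is
    inspected, and only if it is longer than 3 bp."""
    line = line.strip().upper()
    if len(line) <= 3:
        return None  # short first lines escape the check entirely
    fraction = sum(1 for base in line if base in AMBIG) / len(line)
    if fraction >= 0.4:
        return (f"Error: FASTA-Reader: Warning: First data line in seq is "
                f"about {round(fraction * 100)}% ambiguous nucleotides "
                f"(shouldn't be over 40%)")
    return None  # no complaint, regardless of what follows on later lines

print(check_first_data_line("n" * 50))  # sequence1: triggers (about 100%)
print(check_first_data_line("t" * 50))  # sequence2: no warning
print(check_first_data_line("nnnac"))   # sequence3: triggers (about 60%)
```

Running this against the three example sequences reproduces what makeblastdb reports: sequence1 and sequence3 trigger the message, sequence2 does not.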

In case it wasn't obvious, here is why this annoys me:

  1. The comment in the code indicates that this should be treated as a warning (less serious), but the message starts with the prefix 'Error' (more serious). So which is it: a warning or an error?
  2. It only considers the first line of sequence data. I appreciate that this is the easiest thing to check, but it is not very useful if all of your ambiguous bases are at the end of the sequence (or anywhere past the first line).
  3. What is the rationale for choosing 40% as the threshold for warning? It seems a little too arbitrary.
  4. It only produces this warning if the first line is at least 40% ambiguous and is also longer than 3 bp! This means that it can be triggered by a line that starts 'NNNAC', as in my sequence3 example above.
  5. It considers all ambiguity codes as being equal. So if I switched my first line of sequence3 from NNNAC to RWBAC, it would still be rejected even though the sequence RWB contains much more information than NNN.
  6. The way the output text bluntly says 'shouldn't be over 40%' comes across as very matter-of-fact, as if you've transgressed some unknown law of bioinformatics.

So here are my suggestions for an alternative (which admittedly requires some more coding):

  1. If a sequence is less than 1,000 bp check all of the sequence, otherwise check the first 1,000 bp of sequence (if not more).
  2. Report the output as a warning and not an error.
  3. Change the warning message. E.g. 'The first 1,000 bp of your sequence contains a high proportion (X%) of ambiguous bases. Such sequences may not be very useful for any downstream analysis that you perform with BLAST+.'
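The alternative could be sketched like this (again, my own illustration of the three suggestions above, not anything that exists in BLAST+; the 1,000 bp window and 40% threshold are just the values proposed in the list):

```python
# A sketch of the suggested alternative check: inspect up to the first
# 1,000 bp of the whole sequence (not just the first line), and report
# a warning rather than an error. My own illustration, not BLAST+ code.

AMBIG = set("RYSWKMBDHVN")  # IUPAC ambiguity codes

def ambiguity_warning(sequence, window=1000, threshold=0.4):
    """Return a warning string if the examined portion of the sequence
    is more than `threshold` ambiguous, otherwise None."""
    seq = sequence.replace("\n", "").upper()[:window]
    if not seq:
        return None
    fraction = sum(1 for base in seq if base in AMBIG) / len(seq)
    if fraction > threshold:
        return (f"Warning: the first {len(seq)} bp of your sequence contains "
                f"a high proportion ({round(fraction * 100)}%) of ambiguous "
                f"bases. Such sequences may not be very useful for any "
                f"downstream analysis that you perform with BLAST+.")
    return None

# sequence3 no longer triggers: its 3 Ns are diluted by the 300 bp of
# real sequence that follow the short first line.
print(ambiguity_warning("nnnac" + "a" * 300))   # None
print(ambiguity_warning("n" * 500 + "a" * 500)) # 50% ambiguous: warns
```

With this version, a handful of leading Ns in an otherwise normal sequence passes silently, while a sequence that genuinely is mostly ambiguous still gets flagged.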