The three ages of CEGMA: thoughts on the slow-burning popularity of some bioinformatics tools

The past

CEGMA is a bioinformatics tool that was originally developed to annotate a small set of genes in novel genome sequences that lack any annotation. The logic is that if you can annotate at least a small number of genes, and have some confidence in their gene structure, you can then use them as a training set for an ab initio gene finder, which can go on to annotate the rest of the genome.

This tool was developed in 2005 and it took rejections from two different journals before the paper was finally published in 2007. We soon realized that the set of highly conserved eukaryotic genes that CEGMA used could also be adapted to assess the completeness of genome assemblies. Strictly speaking, we can use CEGMA to assess the completeness of the 'gene space' of genomes. Another publication followed in 2009, but CEGMA still didn't gain much attention.

CEGMA was used as one of the assessment tools in the 2011 Assemblathon competition, and then again for the Assemblathon 2 contest. It's possible that these publications led to an explosion in the popularity of CEGMA. Alternatively, it may have become more popular simply because more and more people have started to sequence genomes, and there is a growing need for tools to assess whether genome assemblies are any good.

The following graph shows the increase in citations to our two CEGMA papers since they were first published. I think it is unusual to see this sort of delayed growth in citations to a paper. The citations accrued so far in 2014 suggest that this year's count will be double that of 2013.

[Figure: citations per year to the two CEGMA papers since publication]

The present

All of this is both good and bad. It is always good to see a bioinformatics tool actively being used, and it is always nice to have your papers cited. However, it's bad because the principal developer left our group many years ago, and I have been left to support CEGMA without sufficient time or resources to do so. I will be honest and say that it can be a real pain just to get CEGMA installed (especially on some flavors of Linux). You need to separately install NCBI BLAST+, HMMER, geneid, and genewise, and you can't just use any version of these tools either. These installation problems prompted me to make it easier for people to submit CEGMA jobs to us, which I then run locally on their behalf.
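
As an aside, it can save some pain to confirm that these four dependencies are actually installed and visible on your PATH before attempting a CEGMA run. Below is a minimal, hypothetical Python sketch of such a check; the executable names and version flags are my assumptions and may vary between releases of each tool.

```python
#!/usr/bin/env python3
"""Minimal sketch: check that CEGMA's external dependencies are installed.

The executable names and version flags below are assumptions; adjust them
to match the versions of each tool that your copy of CEGMA expects.
"""
import shutil
import subprocess

# Tool -> flag that should print something identifying the version
DEPENDENCIES = {
    "blastn": "-version",     # NCBI BLAST+
    "hmmsearch": "-h",        # HMMER
    "geneid": "-v",           # geneid
    "genewise": "-version",   # genewise (part of the Wise2 package)
}

def check_dependencies():
    """Report which tools are found and return a list of the missing ones."""
    missing = []
    for tool, flag in DEPENDENCIES.items():
        path = shutil.which(tool)
        if path is None:
            print(f"MISSING: {tool}")
            missing.append(tool)
            continue
        try:
            result = subprocess.run([tool, flag], capture_output=True,
                                    text=True, timeout=10)
            output = (result.stdout or result.stderr or "").splitlines()
            summary = output[0] if output else "no version output"
        except (OSError, subprocess.SubprocessError) as err:
            summary = f"could not query version: {err}"
        print(f"found:   {tool} at {path} ({summary})")
    return missing

if __name__ == "__main__":
    if check_dependencies():
        raise SystemExit("One or more CEGMA dependencies appear to be missing")
```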

These submission requests made me realize that many people are using CEGMA to assess the quality of transcriptomes as well as genomes. This is not something we ever advocated, but it seems to work. These submissions have also let me look at whether CEGMA is performing as expected with respect to the N50 lengths of the genomes/transcriptomes being assessed (I would prefer to use NG50, but I can't because I don't know the expected genome size for these submissions).
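
For anyone unfamiliar with these metrics: N50 can be computed from the assembly alone, whereas NG50 replaces the total assembly size with an estimate of the true genome size, which is exactly the piece of information I don't have for these submissions. Here is a minimal sketch using made-up contig lengths:

```python
# Minimal sketch of the N50/NG50 distinction, using made-up contig lengths.

def n50(lengths, target=None):
    """Return the length L such that sequences of length >= L account for
    at least half of `target` (by default, the total assembly size).
    Passing an estimated genome size as `target` gives NG50 instead."""
    total = target if target is not None else sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return None  # assembly is smaller than half the target genome size

contig_lengths = [800, 600, 500, 300, 200, 100]   # toy data, lengths in bp
print(n50(contig_lengths))                # N50 = 600
print(n50(contig_lengths, target=5000))   # NG50 = 100, assuming a 5 kbp genome
```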

[Figure: CEGMA results plotted against the N50 lengths of submitted genome and transcriptome assemblies]

Generally speaking, if your genome assembly contains longer sequences, then those sequences have more chance of containing some of the 248 core genes that CEGMA searches for. This is not exactly rocket science, but I still find it surprising, and somewhat worrying, that there are a lot of extremely low-quality genome assemblies out there, some of which might not be useful for anything.

The future

We are currently preparing a grant that would hopefully allow us to give CEGMA a much-needed overhaul. We presently lack the resources to properly support the existing version of CEGMA, but we have many good ideas for how we could improve it. Most notably, we would want to build new sets of core genes based on modern resources such as the eggNOG database. The core genes that CEGMA uses were determined from an analysis of the KOGs database, which is now over a decade old! A lot has changed in the world of genomics since then.

The problem that arises when Google Scholar indexes papers published to pre-print servers

The Assemblathon 2 paper, on which I was lead author, was ultimately published in the online journal Gigascience. However, like an increasing number of papers, it was first released on the arXiv.org pre-print server.

If you use the very useful Google Scholar service and have published a paper that appears in two places, then you may have run into the same problem that I have. Namely, Google Scholar appears to track citations only to the first place where the paper was published.

It should be said that it is great that Google tracks citations to these pre-print articles at all, though see another post of mine that illustrates just how powerful (and somewhat creepy) Google Scholar's indexing is. However, most people would expect that when a paper is formally published, Google Scholar should track citations to the published version as well, preferably separately from the pre-print version of the article.

For a long time with the Assemblathon 2 paper, Google Scholar only seemed to show citations to the pre-print version of the paper, even when I knew that others were citing the Gigascience version. So I contacted Google about this, and after a bit of a wait, I heard back from them:

Hi Keith,

It still get indexed though the information is not yet shown:

http://scholar.google.com/scholar?q=http%3A%2F%2Fwww.gigasciencejournal.com%2Fcontent%2F2%2F1%2F10+&btnG=

If one version (the arXiv one in this case) was discovered before the last major index update, the information for the other versions found after the major update would not appear before the next major update.

Their answer still raises some issues, and I'm waiting to hear back on my follow-up question: how often does the index get updated? When I checked Google Scholar today, it initially appeared as if they were still only tracking the pre-print version of our paper:

[Figure: screenshot (2014-01-27) of the Google Scholar entry for the Assemblathon 2 paper]

However, on closer inspection, I see that 9 out of the 10 most recent citations are citing the Gigascience version of the paper. So in conclusion:

  1. Google Scholar will eventually track the formally published version of a paper, even if the paper first appeared on a pre-print server.
  2. Somewhat annoyingly, they do not separate out the citations, and so one Google Scholar entry ends up tracking both versions of a paper.
  3. The Google Scholar entry that is tracking the combined citations only lists the pre-print server in the 'Journal' name field; you have to check individual citations to see if they are citing the formal version of the publication.
  4. Google Scholar has a 'major' indexing cycle and you may have to wait for the latest version of the index to be updated before you see any changes.

JABBA, ORCA, and more bad bioinformatics acronyms

JABBA awards — Just Another Bogus Bioinformatics Acronym — are my attempt to poke a little bit of fun at the crazy (and often nonsensical) acronyms and initialisms that are sometimes used in bioinformatics and genomics. When I first started handing out these awards in June 2013, I didn't realize that I was not alone in drawing attention to these unusual epithets.

http://orcacronyms.blogspot.com

ORCA is the Organization for Really Contrived Acronyms, a fun blog set up by an old colleague of mine, Richard Edwards. ORCA sets out to highlight strange acronyms across many different disciplines, whereas my JABBA awards focus on bioinformatics. Occasionally there is some overlap, so I will point you to the latest ORCA post, which details a particularly strange initialism for a bioinformatics database:

ADAN - prediction of protein-protein interAction of moDular domAiNs

Be sure to read Richard's thoughts on this name, and check out some of the other great ORCA posts, including one of my favorites (GARFIELD).

ACGT: a new home for my science-related blog posts

Over the last year I've increasingly found myself blogging about science, and about genomics and bioinformatics in particular, on my main website (keithbradnam.com). The result has been a very disjointed blog portfolio: posts about my disdain for contrived bioinformatics acronyms would sit alongside pictures of my bacon extravaganza.

No longer will this be the case. ACGT will be the new home for all of my scientific contemplations. So what is ACGT all about? Maybe you are wondering Are Completed Genomes True? or maybe you are just on the lookout to see someone Assessing Computational Genomics Tools. If so, then ACGT may be a home for such things (as well as Arbitrary, Contrived, Genome Tittle-Tattle perhaps).

I've imported all of the relevant posts from my main blog (I'll leave the originals in place for now), and hopefully all of the links work. Please let me know if this is not the case. Now that I have a new home for my scientific musings, particularly those relating to bioinformatics, I hope this will encourage me to write more. See you around!

Keith Bradnam