Bad bioinformatics software names revisited

I have recently been sorting through lots of old notes files, including many from my time as a genomics researcher at UC Davis. One note file was called ‘Strategies for naming bioinformatics software’ and I initially assumed it was one of the posts published on this blog.

However, I couldn’t find it as an actual post, and when I did a quick web search I instead discovered this episode of ‘The Bioinformatics Lab’ podcast from earlier this year:

I have been out of the field of genomics/bioinformatics for many years now and didn’t know about The Bioinformatics Lab podcast, which describes itself as ‘ramblings on all things bioinformatics’.

The conversation between the hosts (Kevin Libuit and Andrew Page) is good, and listening to it brought back lots of memories of the many topics I’ve written about on this blog. At the end of the episode, Andrew concludes:

“It’s kind of hard. People should put a bit of effort into it”

100% this! Naming software should definitely not be an afterthought. Andrew goes on:

“Before you do any development on anything, go and choose a really good name and make sure it doesn’t conflict with any trademarks or existing tools, you can Google it easily and it’s not offensive in any language.”
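One practical way to act on Andrew’s advice is to script a quick collision check against the package indexes your users are likely to search. Below is a minimal sketch in Python that only queries PyPI’s public JSON API; a proper check should also cover Bioconda, trademark registries, and a plain old web search. The candidate names are hypothetical.

```python
# check_name.py — a rough sketch of a software-name collision check.
# It only asks PyPI; Bioconda, GitHub, and trademark databases would
# all need checking too before committing to a name.

import urllib.error
import urllib.request

def pypi_name_taken(name: str) -> bool:
    """Return True if `name` already exists as a package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True            # 200 OK means the name is taken
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False           # no such package
        raise                      # anything else: don't guess

if __name__ == "__main__":
    # Hypothetical candidates, purely for illustration.
    for candidate in ["blast", "my-shiny-new-assembler"]:
        verdict = "TAKEN" if pypi_name_taken(candidate) else "apparently free"
        print(f"{candidate}: {verdict}")
```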

These are the types of things that I have written about extensively on this blog. If you are interested, perhaps start with:

Then you can read any of the nearly forty posts I wrote handing out ‘JABBA awards’ (JABBA stands for ‘Just Another Bogus Bioinformatics Acronym’).

This award series started all the way back in 2013 and the inaugural award went to a tool with the crazy capitalisation of 'BeAtMuSiC'.

There’s also a series of posts on duplicate names in bioinformatics where people haven’t checked whether their software name is stepping on someone else’s toes.

This includes a post about the audacious attempt to name a new piece of bioinformatics software BLAST. There is also a post about the five different tools that are all called ‘SNAP’.

Admittedly, I’ve been out of the loop for so long that there may well be many more SNAPs out there by now!

The moral of this blog post is that names are important and it is very easy to mess them up, which could mean that fewer people ever discover your tool in the first place.

CEGMA is dying…just very, very slowly

This is my first post on this blog in almost three years and it is now almost nine years since I could legitimately call myself a genomics researcher or bioinformatician.

However, I feel that I need to 'come out of retirement' for one quick blog post on a topic that has spanned many earlier posts…CEGMA.

As I outlined in my last post on this blog, the CEGMA tool, which I helped develop back in 2005 and which was first published in 2007, continues to be used.

This is despite many attempts to tell/remind people not to use it anymore! There are better tools out there (probably many that I'm not even aware of). Fundamentally, the weakness of CEGMA is that it is based on a set of orthologs that was published over two decades ago.

And yet, every week I receive Google Scholar alerts that tell me that someone else has cited the tool again. We (myself and Ian Korf) should perhaps take some of the blame for keeping the software available on the Korf Lab website (I wonder how many other bioinformatics tools from 2007 can still be downloaded and successfully run?).

CEGMA citations (2011-2024)

When I saw that citations had peaked in 2017 and when I saw better tools come along, I thought it would be only a couple of years until the death knell tolled for CEGMA. I was wrong. It is dying…just very, very slowly. There were 119 citations last year and there have been 88 so far this year.

Academics (including former academics) obviously love to see their work cited. It is good to know that you have built tools that were actively used. But please, stop using CEGMA now! My co-authors and I no longer need the citations to justify our existence.

Come back to this blog in another three years when I will no doubt post yet another post about CEGMA ('For the love of all that is holy why won't you just curl up and die!').

New BUSCO vs (very old) CEGMA

If I’m only going to write one or two posts a year on this blog, then it makes sense to return to my recurring theme: don’t use CEGMA, use BUSCO!

In 2015 I was foolishly optimistic that the development of BUSCO would mean that people would stop using CEGMA — a tool that we started developing in 2005 and which used a set of orthologs published in 2003! — and that we would reach ‘peak-CEGMA’ citations that year.

That didn’t happen. At the end of 2017, I again asked whether we had reached peak-CEGMA, because we had seen ten consecutive years of increasing citations.

Well I’m happy to announce that 2017 did indeed see citations to our 2007 CEGMA paper finally peak:

CEGMA citations by year (from Google Scholar)

Although we have definitely passed peak CEGMA, it still receives over 100 citations a year, and people really should be using tools like BUSCO instead.

This neatly leads me to mention that a recent publication in Molecular Biology and Evolution describes an update to BUSCO:

From the introduction:

With respect to v3, the last BUSCO version, v5, features: 1) a major upgrade of the underlying data sets in sync with OrthoDB v10; 2) an updated workflow for the assessment of prokaryotic and viral genomes using the gene predictor Prodigal (Hyatt et al. 2010); 3) an alternative workflow for the assessment of eukaryotic genomes using the gene predictor MetaEuk (Levy Karin et al. 2020); 4) a workflow to automatically select the most appropriate BUSCO data set, enabling the analysis of sequences of unknown origin; 5) an option to run batch analysis of multiple inputs to facilitate high-throughput assessments of large data sets and metagenomic bins; and 6) a major refactoring of the code, and maintenance of two distribution channels on Bioconda (Grüning et al. 2018) and Docker (Merkel 2014).
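For anyone persuaded to make the switch, here is a minimal sketch of what a BUSCO v5 run can look like, assuming BUSCO has been installed from Bioconda and using the batch and automatic lineage-selection features described above. The paths and output name are placeholders; treat `busco --help` in your installed version as the authoritative reference for these flags.

```python
# run_busco.py — a hedged sketch of a BUSCO v5 batch run, not an official recipe.

import subprocess

cmd = [
    "busco",
    "-i", "assemblies/",     # pointing -i at a directory of FASTAs enables batch mode
    "-m", "genome",          # assessment mode for genome assemblies
    "--auto-lineage",        # let BUSCO choose the most appropriate data set
    "-o", "busco_results",   # name for the output directory
]
subprocess.run(cmd, check=True)
```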

Please, please, please…don’t use CEGMA anymore! It is enjoying a well-earned retirement at the Sunnyvale Home for Senior Bioinformatics Tools.

Three cheers for JABBA awards


These days, I mostly think of this blog as a time capsule to my past life as a scientist. Every so often though, I’m tempted out of retirement for one more post. This time I’ve actually been asked to bring back my JABBA awards by Martin Hunt (@martibartfast)…and with good reason!

There is a new preprint on bioRxiv…

I’m almost lost for words about this one. You know that it is a tenuous attempt at an acronym or initialism when you don’t use any letters from the 2nd, 3rd, 4th, or 5th words of the full software name!

The approach here is very close to just choosing a random five-letter word (a quick script after the list below shows how weak the constraint really is). The authors could also have had:

  • CLAMP: hierarChical taxonomic cLassification for virAl Metagenomic data via deeP learning
  • HOTEL: hierarcHical taxOnomic classificaTion for viral mEtagenomic data via deep Learning
  • RAVEN: hieraRchical tAxonomic classification for Viral metagenomic data via dEep learNing
  • ALIEN: hierArchical taxonomic cLassification for vIral metagEnomic data via deep learniNg
  • LARVA: hierarchicaL taxonomic classificAtion for viRal metagenomic data Via deep leArning
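What makes such alternatives so easy to generate is that the ‘acronym’ only has to be a subsequence of the full title: its letters just need to appear somewhere, in order. A few lines of Python make the point (this toy checker is mine, not anything from the preprint):

```python
# acronym_check.py — a toy demonstration of how weak a constraint these
# retconned acronyms impose: any word whose letters appear in order
# within the full title will do.

def could_be_acronym(word: str, phrase: str) -> bool:
    """True if the letters of `word` occur, in order, within `phrase`."""
    letters = iter(phrase.lower())
    return all(ch in letters for ch in word.lower())

TITLE = ("hierarchical taxonomic classification for viral "
         "metagenomic data via deep learning")

for candidate in ["CLAMP", "HOTEL", "RAVEN", "ALIEN", "LARVA"]:
    print(candidate, could_be_acronym(candidate, TITLE))  # all True
```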

Okay, as this might be my only blog post of 2020, I’ll say CHEERio!

DOGMA: a new tool for assessing the quality of proteomes and transcriptomes

A new tool, recently published in Nucleic Acids Research, caught my eye this week:

The tool, by a team from the University of Münster, uses protein domains and domain arrangements in order to assess 'completeness' of a proteome or transcriptome. From the abstract…

Even in the era of next generation sequencing, in which bioinformatics tools abound, annotating transcriptomes and proteomes remains a challenge. This can have major implications for the reliability of studies based on these datasets. Therefore, quality assessment represents a crucial step prior to downstream analyses on novel transcriptomes and proteomes. DOGMA allows such a quality assessment to be carried out. The data of interest are evaluated based on a comparison with a core set of conserved protein domains and domain arrangements. Depending on the studied species, DOGMA offers precomputed core sets for different phylogenetic clades.

Unlike CEGMA and BUSCO, which run against unannotated assemblies, DOGMA first requires a set of gene annotations. The paper focuses on the web server version of DOGMA but you can also access the source code online.
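To make the notion of ‘completeness’ concrete, here is a deliberately toy sketch of the idea shared by DOGMA, CEGMA, and BUSCO: report what fraction of a conserved core set is recovered in the data set being assessed. This is not DOGMA’s actual algorithm (which also scores domain arrangements), and the identifiers below are invented placeholders.

```python
# completeness_sketch.py — a toy illustration of core-set completeness.
# The domain accessions here are placeholders, not DOGMA's real core sets.

core_set = {"PF00069", "PF00076", "PF00400", "PF07714", "PF00018"}

# Domains annotated in the proteome being assessed (hypothetical).
observed = {"PF00069", "PF00400", "PF00018", "PF00595"}

found = core_set & observed
completeness = 100 * len(found) / len(core_set)

print(f"Recovered {len(found)}/{len(core_set)} core domains "
      f"({completeness:.1f}% complete)")
# -> Recovered 3/5 core domains (60.0% complete)
```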

It's good to see that other groups are continuing to look at new ways of assessing the quality of large genome/transcriptome/proteome datasets.

What's in a name?

Initially, I thought the name was just a word that both echoed 'CEGMA' and reinforced the central dogma of molecular biology. Hooray, I thought: a bioinformatics tool that just has a regular word as a name, without relying on a contrived acronym.

Then I saw the website…

  • DOGMA: DOmain-based General Measure for transcriptome and proteome quality Assessment

This is even more tenuous than the older, unrelated version of DOGMA:

  • DOGMA: Dual Organellar GenoMe Annotator

Beyond Generations: My Vocabulary for Sequencing Tech

Many writers have attempted to divide Next Generation Sequencing into Second Generation Sequencing and Third Generation Sequencing. Personally, I don't think this is helpful and it just confuses matters. I'm not the biggest fan of Next Generation Sequencing (NGS) to start with because, like "post-modern architecture" (or heck, "modern architecture"), the term isn't future-proofed.

Keith Robison gives an interesting deep dive on how sequencing technologies have been named and potentially could be named.

This post reminded me of my previous takes on the confusing and inconsistent labelling of these technologies:

Reflections on the 2019 Festival of Genomics conference in London


For the third year in a row, I attended the Festival of Genomics conference in London. This year saw the conference change venue, moving from the ExCel Arena to the Business Design Centre in Islington.

The new venue was notably smaller, leading to many sessions being heavily overcrowded. There were also fewer 'fun' activities compared to previous years: no graffiti wall and no 'recharging' stations (massage stands for people, power points for phones).

The opening keynote was given by Professor Mark Caulfield (Chief Scientist at Genomics England).

From 100K to 500K

Reflecting on the completion of the 100,000 Genomes Project, Professor Caulfield revealed that the 100,000th genome was completed at 2:40 am on the 2nd December.

He also shared that, at its peak, the project was completing 6,000 genomes a month, and that the running total has now reached 103,311 genomes.

The next phase will see 500,000 genomes completed within the NHS over the next five years, with an 'ambition' to go on to sequence five million genomes.

Looking at the global picture of human genome sequencing, Professor Caulfield projected that there will be 60 million completed genomes by 2023.

I wrote more about the conference in a blog post for The Institute of Cancer Research:

Damn and blast…I can't think of what to name my software


As many people have pointed out on Twitter this week, there is a new preprint on bioRxiv that merits some discussion:

The full name of the test that is the subject of this article is the Bron/Lyon Attention Stability Test. You have to admit that 'BLAST' is a punchy and catchy acronym for a software tool.

It's just a shame that it is also an acronym for another piece of software that you may have come across.

It's a bold move to give your software the same name as another tool that has been cited at least 135,000 times!

This is not the first, nor will it be the last, example of duplicate names in bioinformatics software, many of which I have written about before.

The 100,000 Genomes Project has finished

This week I helped write a blog post for The Institute of Cancer Research to mark the completion of the 100,000 Genomes Project. This blog post was co-written by a former colleague, Dr Sam Dick, who wrote the majority of the article:

Read the blog post:

Reflecting on this milestone achievement, I also took to Twitter this week for a lengthy (and admittedly rambling) thread that reflected on how far genomics has come as a field. Click on the tweet below to see the full Twitter thread:

Hopping back for another JABBA award


So I was supposed to have retired from handing out JABBA awards, which recognise instances of ‘Just Another Bogus Bioinformatics Acronym’. However, I saw something this week that clearly merits an award.

And so, from a new paper recently published in PLoS ONE I give you:

The three lower-case letters signal that there is going to be some name wrangling going on…so let’s see how the authors arrive at this name:

GRASShopPER: GPU overlap GRaph ASSembler using Paired End Reads

That’s how it is described in the paper, so I guess it could also have been called ‘GOGAUPER’? This is another example of a clumsily constructed acronym that could have been avoided altogether.
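For what it’s worth, the ‘GOGAUPER’ reading is just the initialism you get by taking the first letter of each word in the expanded name, which a couple of lines of Python will confirm:

```python
# Take the first letter of each word in the expanded name.
name = "GPU overlap GRaph ASSembler using Paired End Reads"
print("".join(word[0] for word in name.split()).upper())  # GOGAUPER
```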

‘Grasshopper’ is a cool and catchy name for any software tool, and it doesn’t really need to be retconned into an awkward acronym.

It does, however, give me one new animal for the JABBA menagerie!