Another hard-to-pronounce bioinformatics software name

This was from a few months ago, published in the journal Nucleic Acids Research.

So how do you pronounce 'FunFHMMer'? I can imagine several possibilities:

  1. Fun-eff-aitch-em-em-er
  2. Fun-eff-aitch-em-mer
  3. Fun-eff-hammer
  4. Fünf-hammer

Reading the manuscript suggests that 'FunF' stems from 'FunFam(s)' which in turn is derived from 'functional families'. This would suggest that options 1 or 3 above might be the correct way to pronounce this software's name.

The fully expanded description of this web server's name becomes a bit of a mouthful:

Class Architecture Topology Homologous Superfamily Functional Families Hidden Markov Model (maker?)

We asked 272 bioinformaticians…name something that makes you angry: more reflections on the poor state of software documentation.

I'd like to share the details of a recent survey conducted by Nick Loman and Thomas Connor that tried to understand current issues with bioinformatics practice and training.

The survey was announced on Twitter and attracted almost 300 responses. Nick and Tom have kindly placed the results of the survey on Figshare so that others can play with the data (it seems fitting to talk about this today as it is International Open Access Week):

When you ask a bunch of bioinformaticians the question 'What things most frustrate you or limit your ability to carry out bioinformatics analysis?', you can be sure that you will attract some passionate, and often amusing, answers (I particularly liked one person's response: "Not enough Heng Li").

I was struck by how many people raised the issue of poor, incomplete, or otherwise terrible software documentation as a problem (there were at least 42 responses that mentioned this). The availability of 'good documentation' was also listed as the 2nd most important factor when choosing software to use.

I recently wrote about whether this problem is something that really needs to be dealt with by journals and by the review process. It shouldn't be enough that software is available and that it works, we should have some minimal expectation for what documentation should accompany bioinformatics software.

Keith's 10 point checklist for reviewing software

If you are ever in a position to review a software-based manuscript, please check for the following:

  1. Is there a plain text README file that accompanies the software and which explains what the program does and who created it?
  2. Is there a comprehensive manual available somewhere that describes what every option of the program does?
  3. Is there a clear version number or release date for the software?
  4. Does the software provide clear installation instructions (where relevant) that actually work?
  5. Is the software accompanied by an appropriate license?
  6. For command-line programs, does the program give some sensible output when no arguments are provided?
  7. For command-line programs, does the program give some sensible output when -h and/or --help is specified (see this old post of mine for more on this topic)?
  8. For command-line programs, does the built-in help/documentation agree with the external documentation (text/PDF), i.e. do they both list the same features/options?
  9. For script based software (Perl, Python etc.), does the code contain a reasonable level of comments that allow someone with relevant coding experience to understand what the major sections of the program are trying to do?
  10. Is there a contact email address (or link to support web page) provided so that a user can ask questions and get more help?
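Points 6 and 7 of the checklist are cheap to satisfy in practice. As a rough sketch (the program name `mytool` and its arguments are hypothetical, not from any real package), a Python script using the standard library's argparse module gets `-h`/`--help` for free and can print usage when run with no arguments instead of failing silently:

```python
import argparse
import sys


def build_parser():
    # argparse generates -h/--help output automatically (checklist point 7)
    parser = argparse.ArgumentParser(
        prog="mytool",  # hypothetical program name, for illustration only
        description="Count sequences in a FASTA file.",
    )
    parser.add_argument("fasta", nargs="?", help="input FASTA file")
    # a clear version number (checklist point 3)
    parser.add_argument("--version", action="version", version="mytool 1.0.0")
    return parser


def main(argv=None):
    parser = build_parser()
    args = parser.parse_args(argv)
    # Checklist point 6: with no arguments, show usage rather than nothing
    if args.fasta is None:
        parser.print_usage()
        return 1
    print(f"Processing {args.fasta}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Perl and most other scripting languages have equivalent option-parsing modules; the point is simply that sensible no-argument and `--help` behaviour costs a few lines, so there is little excuse for omitting it.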

I'm not expecting every piece of bioinformatics software to tick all 10 of these boxes, but most of these are relatively low-hanging fruit. If you are not prepared to provide useful documentation for your software, then you should also be prepared for people to choose not to use your software, and for reviewers to reject your manuscript!

Your help needed: readers of ACGT can take part in a scientific study and win prizes

I’ve teamed up with researcher Paige Brown Jarreau (@fromthelabbench on Twitter) to create a survey of ACGT readers, the results of which will be combined with feedback from readers of other science blogs.

Paige is a postdoctoral researcher at the Manship School of Mass Communication, Louisiana State University and her research focuses on the intersection of science communication, journalism, and new media. She also writes on her popular From the Lab Bench blog.

By participating in this 10–15 minute survey, you’ll be helping me improve ACGT, but more importantly you will be contributing to our understanding of science blog readership. You will also get FREE science art from Paige's Photography for participating, as well as a chance to win a t-shirt and a $50 Amazon gift card!

Click on the following link to take the survey: http://bit.ly/mysciblogreaders

Thanks!

Keith

P.S. Even if you don't take part in the survey, you should still check out Paige's amazing photography; her picture of a Western lowland gorilla is stunning.