What's in a name? Better vocabularies = better bioinformatics?

At about 7:00 this morning I was somewhat relieved to learn that my scheduled lab talk had been postponed (my boss was not around). But we were still having the lab meeting anyway.

At about 8:00 this morning, I stumbled across this blog post by @biomickwatson on twitter. I really enjoyed the post and thought I would mention it in the lab meeting. That prompted me to think about some other topics related to Mick's post.

Before I knew it, I had made about 30 slides and ended up speaking for most of the lab meeting. I thought I'd add some notes and post the talk on SlideShare.

[Slides: 'What's in a name? Better vocabularies = better bioinformatics?' by Keith Bradnam, embedded from SlideShare]

I get very frustrated by people who rely heavily on GO term analysis without having a good understanding of what Gene Ontology terms are, or how they get assigned to database objects. There are too many published analyses which see an enrichment of a particular GO term as some reliable indicator that there is a difference in datasets X & Y. Do they ever check to see how these GO terms were assigned? No.

New recipient of the Just Another Bogus Bioinformatics Acronym (JABBA) award

It was only a few weeks ago that I gave out the last JABBA award. One of the recipients that time was a database — featuring excessive use of mixed-case characters — called 'mpMoRFsDB'.

Well it seems that if you work on 'MoRFs' (Molecular Recognition Features) then you must love coming up with fun acronyms. This week in BMC Bioinformatics we have another MoRFs related tool that is worthy of a JABBA award:

The oh-so-catchy 'MFSPSSMpred' (Masked, Filtered and Smoothed Position-Specific Scoring Matrix-based Predictor) is the kind of name that requires you to first sit down and take a deep breath before attempting to pronounce it. Just imagine having to tell someone about this tool:

"Hi Keith, can you recommend any bioinformatics tools for identifying MoRFs?"

"Why certainly, have you tried em-eff-ess-pee-ess-ess-em-pred?"

Congratulations MFSPSSMpred, you join the ranks of former JABBA winners.

Some free code editors for Macs (that work in a UC Davis computer lab)

Every year I help teach a course[1] to grad students that hopefully leaves them with an understanding of how to use Unix and Perl to solve bioinformatics-related problems. We use a Mac-based computer lab because all Macs come with Unix and Perl installed. Many of our students are new to programming and many are new to Macs. Because of this, and because they need a code editor to write their Perl scripts, we have previously pointed them towards Fraise. Despite its age[2], this relatively lightweight editor proved fine for the last few years that we taught this course.

This year, however, Fraise proved problematic. The computer lab has now been upgraded to OS X 10.8, which adds extra safeguards over which apps can be run. This Gatekeeper technology has been set up in the lab to allow only 'signed' apps[3]. The version of Fraise that we were using required administrator privileges for it to be opened (not possible in this computer lab).

My first thought was to direct students to download and install TextWrangler. This is an excellent, powerful, and well maintained code editor for Macs. Most importantly, it is free and also a signed app. However, it does try to install a helper app which caused a persistent dialog window to keep popping up during the installation. Clicking ‘cancel’ worked…but only after about 20 attempts[4]. I like TextWrangler as an app, but prefer the cleaner look of Fraise. So today I set out to find code editors for Macs that:

  • were free
  • could be run on the Macs in our computer lab (i.e. had to be signed apps)
  • were relatively simple to use and/or were easy on the eye

Here is what I came up with. These are all apps that seem to be under current development (updated at some point in 2013):

App              Size (MB)   Free?         Notes
Komodo Edit      301.1       Yes           Big because it is part of a larger IDE tool which is not free[5]
Sublime Text 2   27.3        Sort of[6]    Gaining in popularity (a version 3 beta is also available)
TextMate 2       30.3        Yes           While technically an 'alpha'[7] release, it seems very stable
TextWrangler     19.2        Yes           Very robust and venerable app; free since 2005
Tincta 2         5.6         Yes           Small app, similar to Fraise in appearance

 

If I had to suggest one, it would probably be Sublime Text 2 (though I will encourage students to buy it if they like it). TextMate 2 is a good second choice, particularly because it is also a very clean-looking app. Of course, at some point we need to tell students about the joys of real text editors such as vi, vim, and emacs…but that might lead to hostilities![8]

  1. This course material is available for free and became the basis for our much more expansive book on the same topic.

  2. Fraise is itself a fork of Smultron, which stopped development in 2009 but reappeared as a paid app in the Mac App Store in 2011.

  3. Those apps that are approved by Apple, even if they are not in the Mac App Store.

  4. Seriously, it takes a lot of clicks to make this dialog box go away. It then produces more pop-up dialogs asking whether you want to register, or install command-line tools.

  5. Currently $332 for a single license.

  6. This is a paid app, but can be used in trial mode indefinitely with occasional reminders.

  7. TextMate 2 has been in alpha status since 2009.

  8. Editor wars should be avoided if possible.

How well do UC Davis Graduate Groups communicate their work to the wider world?

PhD students in our lab are mostly split between a couple of UC Davis's many graduate groups. A conversation with some of the students today about 'outreach' and 'social media' led me to wonder how well these graduate groups are communicating their presence to the outside world. The simplest ways of doing this would be:

  • maintain a current website for your graduate group (i.e. with news items)
  • use Facebook (ideally with an open group)
  • establish a blog
  • use twitter

I looked at 11 different graduate groups to see how well they ticked the above boxes. I might be missing some blogs, Facebook groups, and twitter accounts, but if I can't find the relevant details from a Google/Facebook/Twitter search, then I'm assuming that others won't discover them either. This is what I found:

Headline links take you to the home page for the respective graduate group.

Biochemistry, Molecular, Cellular and Developmental Biology (BMCDB)

  • No news page but has an actively maintained blog (easily linked to from above site)
  • Facebook group (open)
  • Active twitter account

Biomedical Engineering (BME)

Biostatistics

  • No news page, though there is a short 'announcements' box on the main page
  • No Facebook group
  • No twitter account

Ecology

Epidemiology

  • No news page
  • Facebook group (closed)
  • No twitter account

Integrative Genetics and Genomics (IGG, formerly GGG)

  • Has a news page, but with only one item from 2013; the remaining items are from 2009 and 2008!
  • Facebook group (closed, and you have to search for GGG or IGG to find it)
  • No twitter account

Immunology

  • No news page
  • No Facebook group
  • No twitter account

Microbiology

  • No news page
  • Facebook group (closed)
  • No twitter account
  • Has separate website
  • Other: the website told me I had to enable JavaScript to view their home page, even though JavaScript was enabled

Nutritional Biology

  • No news page
  • No Facebook group
  • No twitter account

Plant Biology

  • No news page
  • No Facebook group
  • No twitter account

Population Biology

  • No news page
  • No Facebook group
  • No twitter account

Please let me know of any updates or additions that I can make to this list.

So overall it is pretty poor. BMCDB outshines the others, though BME and Ecology also have a good presence on the web. In many ways, I think it looks worse to do these things badly than not to do them at all. Closed Facebook groups don't send out an inviting message, and a graduate group 'news' page whose newest items are five years old also sends the wrong signal.

It takes time and effort to maintain a social media presence, but it doesn't take much effort to at least maintain a news page or simple twitter account (even posting just 1–2 times a week is better than nothing).

Furthermore, the ability to show that you can communicate your work to the wider world is of increasing relevance when applying for grants. It can also raise your profile with your peers and be a useful addition to a resume that helps you stand out from other applicants. Finally, starting a blog or twitter account also helps you hone your writing skills (twitter is great for making you think about how to condense complex thoughts into 'bite size' chunks).

I hope that some of UC Davis's graduate groups make more of an effort in this area (and of course the same can be said for many of UC Davis's departmental and lab websites).

 

Updated 26th September: Added details of some graduate groups that do have blogs and/or websites but which, unhelpfully, are not linked to from their official graduate group webpage.

New JABBA awards for crimes against bioinformatics acronyms

JABBA is an acronym for Just Another Bogus Bioinformatics Acronym. I hand out JABBA awards to bioinformatics papers that reach just a little bit too far when trying to come up with acronyms (or initialisms). See details of previous winners here.

So without further delay, let's see who the new recipients are. As always, journals like Bioinformatics produce many strong candidates for JABBA awards. Here are three winners from the latest issue:

  1. mpMoRFsDB: a database of molecular recognition features in membrane proteins - What is it with the need for so much use of mixed-case characters these days? It makes names harder to read, and this database has a name which doesn't really roll off the tongue.
  2. CoDNaS: a database of conformational diversity in the native state of proteins - More mixed case madness. I'm really unsure how this database should be pronounced. 'Cod Naz'? 'Code Nass'? 'Coe Dee-en-ays'? The abstract suggests that the acronym stems from the name 'Conformational Diversity of Native State', so I guess we should be thankful that they avoided the potential confusion of naming this database 'CoDoNS'.
  3. GALANT: a Cytoscape plugin for visualizing data as functional landscapes projected onto biological networks - If you want your bioinformatics tool to have a cool sounding name, just give it that name. Don't feel that you somehow have to tenuously arrive at that name from a dubious method of selecting just the letters that make it work. The abstract of this paper reveals that GALANT stems from 'GrAph LANdscape VisualizaTion'. So the full name only consists of three words, and the initial letter of one of those words doesn't even make it into the acronym/initialism. They may as well call their tool 'GREAT' (GRaph landscapE visualizATion).

The joys of dealing with well-annotated 'reference' genomes

Important update included at the bottom of this post

Arabidopsis thaliana has one of the best curated eukaryotic genome sequences, so you might expect working with Arabidopsis data to be a joy. Well, not always. Consider gene AT1G79920. This gene has two coding variants (AT1G79920.1 and AT1G79920.2) which only seem to differ in their 5' UTR regions:

[Screenshot: gene models for the two coding variants of AT1G79920]

The primary source for Arabidopsis genome information is The Arabidopsis Information Resource (TAIR) but another site called Phytozome has started collating all plant-related genome data (often pulling it from sites like TAIR).

Both sites allow you to download coding sequences for each gene from their FTP sites, or view the same sequence information on their web sites. You could also download GFF files and, using the coordinate information, extract the coding sequences from the whole genome sequence.
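As a rough illustration of that GFF-based route, here is a minimal Python sketch that splices a CDS together from 1-based, inclusive coordinates of the kind found in a GFF file. The genome string and coordinates below are invented for illustration, not real Arabidopsis data:

```python
# Toy genome: a dict mapping sequence id to sequence.
# The sequence and coordinates here are made up for illustration.
genome = {"Chr1": "NNNATGAAATTTNNNNGGGTAANNN"}

# (seqid, start, end) tuples for the CDS features of one transcript,
# using GFF-style 1-based, inclusive coordinates.
cds_features = [("Chr1", 4, 12), ("Chr1", 17, 22)]

def extract_cds(genome, features):
    """Concatenate CDS segments, converting 1-based inclusive
    coordinates to Python's 0-based half-open slices."""
    return "".join(genome[seqid][start - 1:end]
                   for seqid, start, end in features)

cds = extract_cds(genome, cds_features)
print(cds)                 # spliced coding sequence
print(len(cds) % 3 == 0)   # quick frameshift sanity check
```

Real GFF files add strand, phase, and parent-transcript bookkeeping, but the coordinate arithmetic is the part that most often goes wrong.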

A simple sanity check when working with coding sequences is that the length of each coding sequence should be divisible by 3; otherwise there might be a frameshift. There seems to be an issue with AT1G79920 in that it sometimes has a frameshift. I say sometimes because it depends on where you look at the data.
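The check itself is trivial to script. Here is a minimal Python sketch that reads FASTA-formatted coding sequences and reports any whose length is not a multiple of 3 (the identifiers and sequences below are made up for illustration):

```python
def read_fasta(text):
    """Parse FASTA-formatted text into a dict of {id: sequence}.
    Deliberately simple: takes the first word of each header line."""
    seqs = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            current = line[1:].split()[0]
            seqs[current] = []
        elif current is not None:
            seqs[current].append(line)
    return {name: "".join(parts) for name, parts in seqs.items()}

def check_cds_lengths(seqs):
    """Return ids of sequences whose length is not a multiple of 3."""
    return [name for name, seq in seqs.items() if len(seq) % 3 != 0]

# Invented example: geneB.1 is 14 bp, so it fails the check.
example = """>geneA.1
ATGAAATTTGGGTAA
>geneB.1
ATGAAATTTGGGTA
"""

print(check_cds_lengths(read_fasta(example)))
```

Running something like this over a downloaded CDS file takes seconds and would have flagged AT1G79920 immediately.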

Here is a sequence alignment for exons 5 and 6 of both coding variants of this gene. The sequence identifiers include a 'P' or 'T' to indicate whether the information came from Phytozome or TAIR, and they also denote whether it was taken from the web or FTP site. I also inserted 'gtag' into one sequence to mark where the intron between these exons would occur.

[Screenshot: sequence alignment of both coding variants, comparing TAIR and Phytozome web and FTP versions]

The first boxed rectangle highlights a base that looks like a SNP, but these are coding variants with UTR variation, so the underlying genome sequence in the coding regions should be identical.

The second box is perhaps more disturbing. The web site versions of exon 7 all have a 1 bp deletion which would lead to a frameshift. I guess this error started at TAIR and propagated to Phytozome, but the fact that both sites also have a correct version available on their FTP site is confusing and troubling.

My boss first discovered this by looking at the GFF files for the Arabidopsis genome and this was one of 25 genes with a 'not-divisible-by-3' length error. So it pays to always check — and double check — your data.

Time to send an email to TAIR and Phytozome to report this.

Update

I heard back from TAIR and Phytozome and it seems that there are a small number of likely genome sequence errors in the latest (TAIR10) release of the A. thaliana genome. When these affect genes and would introduce a frameshift error, TAIR make manual edits to the resulting files of CDSs, transcripts, and proteins. They do this when there is clear homology from genes in other species that suggests the change should be made.

So if you work with downloaded FASTA files from the FTP site, you won't see these errors. If you work from GFF files (which is presumably what some of their web displays do), you'll run into these issues. There is a small file (TAIR10sequenceedits.txt) included as part of the TAIR10 release which documents these changes.

Thanks to a speedy and helpful response from both sites. Perhaps I should retitle this post The Joys of Dealing with Fast and Knowledgeable Genome Curators?

Crappy science spam from Photon Journals

I've started receiving more and more science-related spam recently. Some of these are semi-legitimate, like those from the OMICS group, but should still be avoided (here's a good cautionary tale of what to expect if you go to an OMICS Group conference).

Sometimes I'm just disappointed by how little effort these spammers make to ensure their emails look professional. Over the last couple of weeks I've received four emails from 'Photon Journals' asking me to submit my research to some of their (many) journals. To give you an idea of how lame their attempts at spamming people are, consider that:

  1. Their email comes from a Hotmail account.
  2. They address me as 'Dear Dr. Krbradnam' (clearly they have phished my academic email address from somewhere and just assumed that this is my surname).
  3. Their email is written in purple!
  4. Two emails on the same day ask me to submit material to two completely different journals (see below).
  5. The 'website' for the journal is hosted on Google Sites.
  6. The website looks like it was designed by a five-year-old on acid (see below).
[Screenshots: two Photon Journals spam emails and the journal's Google Sites webpage]

The modus operandi of these fake journals is simply to charge you to submit your paper (and it really can be any paper; no-one is going to check it, let alone read it). I guess we should be grateful that some of these are so obviously fake that they are easy to ignore. Several sites are now collating lists of all of these bogus journals (there are a lot).

New JABBA award recipients for the bogus use of bioinformatics acronyms

The latest issue of the journal Bioinformatics was released today and as with most issues, it features a large number of bioinformatics tools and resources. Many of these tools feature questionable use of acronyms and initialisms and so it is time to hand out some new JABBA awards.

A quick reminder: JABBA stands for 'Just Another Bogus Bioinformatics Acronym'. I recently gave out the inaugural JABBA award and have since discovered that JABBA is not the only game in town (if the game is critically evaluating the overreaching use of acronyms in science). Step forward ORCA - the Organisation of Really Contrived Acronyms, a blog started by an old bioinformatics colleague of mine. I highly recommend checking it out to see more acronym-derived crimes.

Anyway, here are several nominations for JABBA awards from the latest issue of Bioinformatics:

  1. MaGnET: Malaria Genome Exploration Tool - My main issue with this one is the ungainly capitalization. There is also the issue that the tool is in no way related to magnets, so if you don't remember that it is called MaGnET then it is hard to find. A Google search for 'malaria tool' doesn't feature MaGnET on the first page of hits.
  2. MMuFLR: missense mutation and frameshift location reporter - Okay, so part of me thinks that this is probably meant to be pronounced 'muffler'? If not, then it doesn't really roll off the tongue. "Oh, you're looking to find missense and frameshift mutations? Then you should try em-em-yoo-ef-el-ar". It doesn't help that when you follow the link in the article to find the resource (a Galaxy workflow), there are no mentions of MMuFLR on the resulting page (unless you search for it).
  3. mRMRe: an R package for parallelized mRMR ensemble feature selection - Some of the same reasons that applied to MMuFLR also apply to mRMRe. Try saying this three times fast and you'll see what I mean. It seems that mRMRe is an 'ensemble' implementation of something called mRMR (minimum redundancy maximum relevance). Still not sure why the first 'm' is free from the need of capitalization (or the 'e' for 'ensemble').
  4. TIGAR: transcript isoform abundance estimation method with gapped alignment of RNA-Seq data by variational Bayesian inference - Always a bit suspicious when you have a tool name that features 15 words but somehow gets reduced to 5 letters for the abbreviated name. One could equally make a case that this tool should be called TIRBI. In any case, TIGAR is quite an unusual spelling and you might think that a Google search for TIGAR would put this resource near the top of the results. However, TIGAR is also The Inland Gateway Association of Realtors as well as The International Gymnastics Academy of the Rockies. But perhaps more importantly, TIGAR already has a scientific connection, as it is also the name of a gene (TIGAR: TP53-induced glycolysis and apoptosis regulator).

Thanks to Bioinformatics journal for providing some new JABBA recipients.

Mining Altmetric data to discover what types of research article get the most social media engagement

Altmetric is a service that tracks the popularity of published research articles via the impact that those articles make on social media sites. Put simply, the more an article is tweeted and blogged about, the higher its Altmetric score will be. The scoring system is fairly complex as it also tracks who is reading your article on sites such as Mendeley.

I was pleased to see that the recent Assemblathon 2 paper — on which I am the lead author — gained a lot of mentions on twitter. Curious, I looked up its Altmetric score and was surprised to see that it was 61. This puts it in the 99th percentile of all articles tracked by Altmetric (almost 1.4 million articles). I imagine that the score may rise a little more in the coming weeks (at the time of writing the paper is only four days old).

I then wondered how well the Assemblathon 1 paper fared. Published in September 2011, it has an Altmetric score of 71. This made me curious as to where both papers ranked in the entire list of 1,384,477 articles tracked by the service. So I quickly joined Altmetric's free trial (they offer a few paid services) and was able to download details of the top 25,000 articles. This revealed that the two Assemblathon papers came in at a not-too-shabby 5,616th and 10,250th place overall. If you're interested, this paper — with an off-the-charts Altmetric score of 11,152 — takes the top spot.

Just to satisfy my curiosity, I made a word cloud based on the titles of the research papers that appear in the top 10,000 Altmetric articles.
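The first step behind a word cloud like this is just counting word frequencies across the titles. A rough Python sketch of that step, using invented example titles (the stop-word list is minimal and hypothetical):

```python
from collections import Counter
import re

# A tiny, hypothetical stop-word list; a real one would be much longer.
STOP_WORDS = {"a", "an", "and", "the", "of", "in", "for", "on", "to", "with"}

def title_word_counts(titles):
    """Count case-insensitive word frequencies across a list of titles,
    skipping stop words. Word-cloud tools then scale each word's
    display size by its count."""
    counts = Counter()
    for title in titles:
        for word in re.findall(r"[a-z]+", title.lower()):
            if word not in STOP_WORDS:
                counts[word] += 1
    return counts

# Invented example titles, not real Altmetric data.
titles = [
    "A meta-analysis of human health outcomes",
    "Global trends in human disease risk",
]
print(title_word_counts(titles).most_common(3))
```

Feeding the real top-10,000 titles through something like this (and then into any word-cloud renderer) reproduces the figure below.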

[Image: word cloud built from the titles of the top 10,000 Altmetric articles]

Perhaps unsurprisingly, this reveals that analyses (and meta-analyses) of data relating to human health prompt a lot of engagement via social media. Okay, time to go and write my next paper 'A Global Study of Human Health Risks'.