Data access for the 1,000 Plants (1KP) project

From the abstract of a new paper in GigaScience:

The 1,000 plants (1KP) project is an international multi-disciplinary consortium that has generated transcriptome data from over 1,000 plant species, with exemplars for all of the major lineages across the Viridiplantae (green plants) clade. Here, we describe how to access the data used in a phylogenomics analysis of the first 85 species, and how to visualize our gene and species trees.

The paper doesn't provide a link to what seems to be the actual project website; instead, it mentions directories within the iPlant Collaborative project where you can access the data. The project website reveals that this project can be referred to as either '1000 plants', 'oneKP', or '1KP' (but not '1000P'?).

Being a pedantic kind of guy, I was intrigued by the paper's vague mention of 'over 1,000 plant species'. How many species exactly? The paper doesn't say. But if you go to one of the iPlant pages for 1KP, you will see this:

Altogether, we sequenced 1320 samples (from 1162 species)

So this project seems to have exceeded the boundaries suggested by its name. How about the '1.2KP' project?

Identical Classifications In Science: Some advice for Jonathan Eisen

Jonathan Eisen — a colleague at the UC Davis Genome Center — has a quandary. He came up with a name for one of his projects but now needs to consider renaming it. The problem is that ICIS (Innovating Communication in Scholarship) sounds a bit like…well you all know what it sounds like. So Jon has appealed for suggestions on how to rename their project.

He should take comfort that he may not be the only one facing this dilemma. After all, the International Cooperative ITP Study Group (ICIS) has been an ongoing collaboration between hematologists since 1997. I wonder whether they are considering a name change? Maybe Jon could also ask the folk at the International Conference on Information Systems (ICIS) who have been meeting since 1980. Or they could talk to the people that came up with the Intelligent Coin Identification System (ICIS), or the Intensive Care Infection Score (ICIS), or the Integrated Crate Interrogation System (ICIS), or the 20-year-old International Crop Information System (ICIS), or the people who named this gene.

These are just some of the academic uses of ICIS that I could find from a couple of quick searches. I expect that there are more out there. This reflects one of the most primal desires of all scientists…the need to come up with an acronym or initialism for their project. This urge is all too commonly associated with the additional need to make the name 'fun' (particularly a desire to name things after animals). Acronyms can also backfire for other reasons, such as when you don't fully appreciate how a name might sound in other countries.

The shorter your acronym, the more likely that it has been used by other people before you (even within the same field). My suggestion would be to consider the shocking alternative of not using an acronym at all! After all, sometimes people can come up with new names that seem to catch on.

Making genome assemblies in the year 2014

I often like to encourage students to explain their work without using any complex scientific vocabulary. If you can explain what you do to your parents or grandparents, then this is great practice for explaining your work to other scientists from outside your field.

I also encourage students to think of analogies and metaphors for their work as these can really help others to grasp difficult concepts. Yesterday, I wrote a post called Making cakes in the year 2014 which was (hopefully) an obvious attempt to explain some of the complexities and problems inherent in the field of genome assembly.

It almost feels wrong to even attempt to convert millions of ~100 bp DNA fragments into — in the case of some species — a small number of sequences that span billions of bp. Every single step in the process is fraught with errors and difficulties. Every single step is controlled by software with numerous options that are often unexplored. Every single step has many alternative pieces of software available.

Let's focus on just one of the earliest steps in any modern sequencing pipeline: the need to remove adapter contamination from your sequenced reads. There are at least thirty-four different tools that can be used for this step, and there are over 240 different threads on SEQanswers.com that contain the words 'trim' and 'adapter' (suggesting that this process is not straightforward, and that many people need help).

I had a look at some of these tools. The program Btrim has 12 different command-line options that can all affect how the program trims adapter sequences (it has 27 different command-line options in total). Skewer has 9 different command-line options that will affect the output of the program. The trimmer Concerti has 8 options that will also affect the output. Do we even have a good idea of what is the best way to remove adapter sequences? Maybe we need a 'trimmathon' to help test all of these tools! 
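
To make concrete the kind of decision an adapter trimmer has to make, here is a toy Python sketch (not based on any of the tools above) that removes a 3' adapter from a read. The adapter and read sequences are purely illustrative; real trimmers also have to cope with sequencing errors, quality scores, and paired-end reads, which is exactly where all of those command-line options come from.

```python
# Toy illustration of 3' adapter trimming; not a real trimmer.
# The adapter and reads below are made-up examples.

ADAPTER = "AGATCGGAAGAGC"  # illustrative adapter sequence

def trim_adapter(read, adapter=ADAPTER, min_overlap=3):
    """Remove the adapter, or a 3' prefix of it, from the end of a read."""
    # Case 1: the full adapter occurs somewhere in the read.
    pos = read.find(adapter)
    if pos != -1:
        return read[:pos]
    # Case 2: only the start of the adapter hangs off the 3' end of the read.
    for n in range(len(adapter) - 1, min_overlap - 1, -1):
        if read.endswith(adapter[:n]):
            return read[:-n]
    # Case 3: no adapter found; return the read unchanged.
    return read

if __name__ == "__main__":
    reads = [
        "ACGTACGTACGTAGATCGGAAGAGCTTTT",  # full adapter (plus junk) inside the read
        "TTTTCCCCGGGGAGATCG",             # partial adapter at the 3' end
        "ACGTACGTACGTACGTACGT",           # no adapter at all
    ]
    for r in reads:
        print(r, "->", trim_adapter(r))
```

Even this trivial version has a parameter (min_overlap) whose value changes the output, and it makes no attempt to allow mismatches; scale that up and you soon arrive at the dozens of options offered by the real tools.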

If there is a point to this post, maybe it is that genome assembly is an amazingly complex, time-consuming, and fundamentally difficult problem. But even the 'little steps' that have to be done before you even start assembling your sequences are far from straightforward. Don't convince yourself for a moment that a single tool — with default parameters — will do all of the hard work for you.

PLOS Computational Biology: Ten Simple Rules for Writing a PLOS Ten Simple Rules Article

Is there practical advice for contributing to the Ten Simple Rules collection already available? What can we learn from the existing articles in the collection? If only there was an article with ten simple rules for writing a PLOS Ten Simple Rules article. If only that article could be peppered with insightful comments from the founder of the collection: Philip E. Bourne.

This is that article.

This is very meta. I think I will wait for the 'Ten Simple Rules for Writing a Ten Simple Rules Article about writing a PLOS Ten Simple Rules Article'.

Making cakes in the year 2014

I've been trying to make a cake. There are lots of published recipes out there for how to make this cake, but the one that I used came with only a very blurry image of what the finished cake should look like. So I really had to hope that the recipe was a good one, because I wasn't entirely sure if I would be able to tell whether it worked or not.

To get started, I used one of those online shopping services that can deliver all the ingredients to your door. Even though they claimed that they stocked everything on my shopping list, they then informed me that there were a small number of ingredients that they were not able to physically access at the moment. Frustratingly, they weren't able to tell me which ingredients would be missing when they delivered them. How odd. 

Something else that seemed unusual was that my cake recipe specified that I needed almost 100 times the amount of ingredients compared to what will end up in the finished cake. Seems a bit wasteful, but who am I to argue with the recipe?

Before I could actually start the baking process, I found that there were a few issues that I had to overcome. Lots of the ingredients had become stuck to the packaging and I had to use a tool which could separate the two. Only, some of the time it didn't get rid of all the packaging, and some of the time it ended up getting rid of not just the packaging but some of the ingredient as well. There are actually several tools on the market for doing this, but they all seem to perform slightly differently.

After I got rid of the packaging, I noticed that lots of the ingredients had started to spoil and had to be thrown away, but some of them could be salvaged by cutting off the bad parts. There also seemed to be a lot of implements that you can buy to help with the cutting. It wasn't obvious which one was the best, so I used the first one that Google suggested.

At this point it was kind of frustrating to notice that a small proportion of my ingredients weren't cake ingredients at all. I had to throw them all away, but I think that some of them may have ended up in the final cake.

When it came to the actual baking, I was a bit overwhelmed by the fact that there were dozens of different manufacturers who all claimed that I could make a better cake if only I used their brand of oven. Nearly all of these ovens just let you put your raw ingredients in one slot — after you have removed packaging, the spoilt ingredients, and the non-cake ingredients — and voila, out comes your cake!

I chose one of the more popular ovens on the market and waited patiently for many hours as my cake baked happily in the oven. When the timer buzzed and I took the cake out, I was surprised to see that many of the raw ingredients were left behind in the oven's 'waste overflow unit'. The real surprise, however, was that the finished cake didn't really look anything like the — admittedly blurry — photo that came with the recipe.

The cake had many different layers, but they weren't quite all the same size and some of them seemed to have been assembled in the wrong order. The pattern on the cake decoration — yes, this oven also decorates the cake — was inconsistent at best. It would mostly use one color of icing, but every now and then it would insert a different color. The same thing happened with the fillings: it would randomly switch from one flavor to another, and then back again. It was almost like there were two different cakes which had been squished together to make a new one.

When I finally showed the cake to one of my baking friends, I was hoping that he would enjoy it. However, all he kept asking me was "How big are the layers?". When I told him, he replied "My cake has bigger layers so yours can't be very good", and then he left. How rude. I took it to another friend and she just said "Your cake is smaller than mine so mine must be better". She also left without trying it. Finally, I took it to another baking colleague. Before I could show him the cake he just said "My cake has most of the common ingredients expected in all cakes, how many does yours have?". I didn't know, so he left.

Making cakes is a very strange business.

How would you pronounce the name of this bioinformatics tool?

From the latest issue of Bioinformatics we have a new tool that is an R package for the analysis of GWAS studies. Rather than name the tool, I want you all to first see it exactly as it appears in the journal:

The first character in the name of this software is one which can often be hard to identify, particularly when certain fonts make it look like it could be the letters L or I, or even the number 1.

This is not a name that is worthy of a JABBA-award, but it does fall into my category of posts which I call almost JABBA, for software names that have various other issues. The particular issue in this case is that the name is hard to read and therefore hard to pronounce. I feel that the use of lower-case characters makes it more likely that the reader will attempt to pronounce this as a word, rather than read it as an initialism. E.g. maybe you saw this name and read it as 'Lurgpurr' or 'Ergpurr'.

The reason behind the name is not explained in the article, but when you go to the linked software page, all is revealed:

It's a bit odd that one of the five words that appear in this name ('Gaussian') doesn't get mentioned anywhere in the paper. But more importantly, why did they feel the need to use lower-case characters? 'LRGPR' would have been much easier to read and comprehend than the font-dependent 'lrgpr'.

Why the UCSC Genome Browser FTP site is one of my least favorite places to visit

If you visit the Golden Path directory of the UCSC Genome Browser FTP site (ftp://hgdownload.cse.ucsc.edu//apache/htdocs/goldenPath), you will come across the following quirks:

  1. Multiple genomes for the same species are not grouped together under a parent directory for each species, so the number of items in this directory (~250) gives no indication of the number of species represented (~125).
  2. Species identifiers are ambiguous. You have to know that 'mm9' refers to Mus musculus and not Macaca mulatta.
  3. Species identifiers are also inconsistent. Some species get just two lower-case characters (e.g. 'mm' = Mus musculus, 'dm' = Drosophila melanogaster) whereas most get six characters (e.g. 'felCat' = Felis catus, 'sacCer' = Saccharomyces cerevisiae).
  4. Humans, hallowed species that we are, simply get 'hg' (presumably for 'human genome').
  5. The six-character format reverses centuries (!) of naming convention by making the genus part of the name start with a lower-case character and the specific part of the name start with an upper-case character.
  6. Some species also have date-versioned directories in addition to numerical-suffixed directories. So do you want to download the 'hg7' version of the human genome, or instead get 'hg7oct2000_oo21' (don't ask me what the 'oo21' part means)?

If you want a challenge, try writing some bioinformatics software that goes from the Latin name of a species to the correct directory on their FTP site! I guess the UCSC team are hoping that six characters will be enough to uniquely identify any future species that end up here, so let's hope they don't start sequencing too many more Drosophila species.
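
To illustrate why this is harder than it should be, here is a minimal, hypothetical Python sketch of the kind of lookup such a tool would need. The naming rule and the exceptions below are only the ones mentioned in this post; the real FTP site has plenty more special cases, which is why a genuine implementation would need a complete, hand-curated table.

```python
# Hypothetical sketch: map a Latin binomial to a UCSC-style directory prefix.
# The general rule (three letters of the genus in lower case, plus three
# letters of the species with the first capitalized) is undermined by
# exceptions, and the real site has more of them than are listed here.

EXCEPTIONS = {
    "Homo sapiens": "hg",               # presumably 'human genome'
    "Mus musculus": "mm",
    "Drosophila melanogaster": "dm",
}

def ucsc_prefix(latin_name):
    """Return a best guess at the UCSC directory prefix for a species."""
    if latin_name in EXCEPTIONS:
        return EXCEPTIONS[latin_name]
    genus, species = latin_name.split()
    # e.g. 'Felis catus' -> 'felCat', 'Saccharomyces cerevisiae' -> 'sacCer'
    return genus[:3].lower() + species[:3].capitalize()

if __name__ == "__main__":
    for name in ["Homo sapiens", "Mus musculus", "Felis catus",
                 "Saccharomyces cerevisiae", "Drosophila melanogaster"]:
        print(name, "->", ucsc_prefix(name))
```

And even with the prefix in hand, you would still have to work out which numerical suffix (mm9? mm10?) points at the assembly you actually want.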

Compare this madness — and it is madness — to the calming orderliness of the Ensembl Genomes FTP site (e.g. ftp://ftp.ensemblgenomes.org//pub/release-23/metazoa/fasta):

A view from UCSC Genome Browser FTP site…

…compared to a view from the Ensembl Genomes FTP site

I think the key point from this story is that a lot of bioinformatics research can be hard enough without the added complexities of working with unstructured data. When you start building any new resource in bioinformatics, be it an FTP site, a web site, or a GitHub repository, you should plan for the future! I.e. expect things to expand, grow, and greatly increase in complexity.

Even if you intend for a resource to only ever contain information for a single species, assume that it will end up containing hundreds of species. You should also assume that people may wish to automate the querying of your data. If you plan for these things from the moment you start building your resource, you might make some bioinformaticians happy — and you certainly don't want to make us angry…you wouldn't like us when we're angry.

How does the popularity of the UC Davis Genome Center vary with geographic location?

If I perform a Google search for the two words genome center, I see that the UC Davis Genome Center (henceforth UCDGC) is the top hit. But this is to be expected because Google has been personalizing search results for some time now, so this result is obviously tailored to me (if you didn't know, I work at the UCDGC).

If you are signed in to Google when you perform a search, the results will be heavily influenced by your search history and by what Google knows about you and your interests. Even if you sign out of Google, the search engine giant can track some information via cookies. Even if you disable cookies or use a private browsing mode, Google is still altering your search results because it knows your location (even if only approximately).

This explains why I will almost always see UCDGC as the top result when I search for 'genome center'. To get around this, I could use a search engine that doesn't track my activity, or I could use a private browsing mode in combination with a little-known feature of Google, that of changing your search location. It's possible to perform a search as if I was located in any major city or state in America.

So this allows me to see how often the UCDGC appears in the #1 position as I move around the country. I first performed a search for 'genome center' as if I was located in each state (e.g. set location to be 'Alabama', 'Alaska', 'Arkansas' etc.):

Ranking of UC Davis Genome Center among Google search results when searching for 'genome center' in each state

When you search for 'genome center', the UCDGC is the top search result in every state! One caveat to this approach is that it may not be all that meaningful to set your location to be an entire state. So I repeated the approach but this time I set my location to be the most populous city in each state:

Ranking of UC Davis Genome Center among Google search results when searching for 'genome center' in the most populous city of each state (as indicated by position of marker within each state). 

This shows that UCDGC is the #1 search result for cities in 36/50 states. The places where UCDGC is not #1 are all cities that have a notable genome center of their own (or are located close to one). A few notes relating to this:

  1. The New York Genome Center dominates results not only in New York City (NY), but also in Newark (NJ), Bridgeport (CT), and Philadelphia (PA)
  2. The #1 result in Baltimore (MD) is for the Institute of Genome Sciences at the University of Maryland
  3. St. Louis (MO) sees The Genome Institute at Washington University take the top spot
  4. In the north west, a search from Seattle gives the Seattle Structural Genomics Center for Infectious Disease as the most popular result. But if you head to Spokane (Washington's 2nd city), then the UCDGC becomes the #1 result again
  5. In Texas, the Department of Genomic Medicine at the Houston Methodist Research Institute pushes UCDGC to 4th place. However, move to San Antonio or Dallas and the UCDGC regains first place
  6. Chicago (IL) has the Institute for Genomics and Systems Biology at #1
  7. In Minneapolis (MN) it is the University of Minnesota Genomics Center who is the top dog
  8. The home of the King (Memphis, TN) is also home to the W. Harry Feinstone Center for Genomic Research which takes the #1 position. Once again, if you move to this state's second city (Nashville), the UCDGC regains the top spot in the search results.
  9. Las Vegas, NV is home to the University of Nevada Las Vegas Genomics Core Facility. Moving to Nevada's second city (Henderson) puts UCDGC back on top.
  10. In Salt Lake City (UT) you can find the Utah Genome Depot at the University of Utah dominating the rankings.
  11. Finally, in Atlanta (GA), it is the Emory University Integrated Genomics Core which denies the UCDGC the #1 position

The UC Davis Genome Center is not just the top hit when you search for 'genome center' from various locations in the USA. If you use the Google location option to go truly global, you will see that we rank as the top search result for 'genome center' in London, Paris, Berlin, Moscow, Delhi, Seoul, Cairo, Buenos Aires, Bogota, Rio de Janeiro, Cape Town, Kuala Lumpur, and Sydney!

While this could all be the result of UC Davis spending millions of dollars to adopt search engine optimization strategies to unduly influence our position in the search results, I prefer to believe that it reflects our reputation for world-class genomics research and training.

Real bioinformaticians and old bioinformaticians

A passing mention of the phrase 'real bioinformaticians' by Michael Hoffman (@michaelhoffman) yesterday, prompted me to elevate the concept to be worthy of its own hashtag. This is what happened next:

You will notice that Sara G's response (@sargoshoe) humorously introduced the concept of #oldbioinformaticians, and this in turn spawned an even longer set of tweets (see below). I think that many of the more — how shall we put this — wise and distinguished members of the bioinformatics community enjoyed the chance for a trip down memory lane.

Musical encores in bioinformatics and other sciences

I've previously flagged a few examples of independently developed bioinformatics software tools that share the same name. My recent post about the JABBA-award winning software called MUSIC prompted some people to let me know that this is another name that has been used repeatedly by different groups.

So thanks to Nicolas Robine and commenter LMikeF, we can see that MUSIC is a very popular name for bioinformatics tools:

  1. MuSiC: a tool for multiple sequence alignment with constraints (2004)
  2. RE-MuSiC: a tool for multiple sequence alignment with regular expression constraints (2007)
  3. MuSiC: identifying mutational significance in cancer genomes (2012)
  4. MUSIC: Identification of Enriched Regions in ChIP-Seq Experiments using a Mappability-Corrected Multiscale Signal Processing Framework (2014)
  5. MUSiCC: Towards an accurate estimation of average genomic copy-numbers in the human microbiome (2014)

The first two publications sadly suffer from link rot and the provided URLs no longer work. These two publications are also by the same group, which raises the question of what they would call a 3rd iteration of their software (RE-RE-MuSiC?).

A little bit of additional searching reveals that MUSIC is a popular name in other scientific endeavors as well:

  1. MUSIC: MUltiScale Initial Conditions — software to generate initial conditions for cosmological simulations
  2. MUSIC: MUltiScale SImulation Code — fluid dynamics software: warning this website will make you nauseous!
  3. MUSIC: Muerte Subita en Insuficiencia Cardiaca — a longitudinal study to assess risk predictors of death in patients with heart failure
  4. MUSIC: MUtation-based SQL Injection vulnerabilities Checking tool — a tool to help check for vulnerabilities in web based applications

I guess people like the name MUSIC and will go to almost any lengths to make an acronym/initialism for it.