Some short slide decks from a recent Bioinformatics Core workshop

Last week I helped teach at a workshop organized by the Bioinformatics Core facility of the UC Davis Genome Center. The workshop was on:

  • Using the Linux Command Line for Analysis of High Throughput Sequence Data

I like that the Bioinformatics Core makes all of their workshop documentation available for free, even if you didn't attend the workshop. So have a look at the docs if you want to learn about genome assembly, RNA-Seq, or the basics of the Unix command line (these were just some of the topics covered).

Anyway, I tried making some fun slide decks to kick off some topics. They are included below.

 

This bioinformatics lesson is brought to you by the letter 'D'

'D' is for 'Default parameters', 'Danger', and 'Documentation'

 

This bioinformatics lesson is brought to you by the letter 'T'

'T' is for 'Text editors', 'Time', and 'Tab-completion'

 

This bioinformatics lesson is brought to you by the letter 'W'

'W' is for 'Workflows', 'What?', and 'Why?'

Developments in high throughput sequencing – June 2015 edition

If you're at all interested in the latest developments in sequencing technology, then you should be following Lex Nederbragt's In between lines of code blog. In particular, you should always take time to read his annual snapshot overview of how the major players are all faring.

This is the fourth edition of this visualisation… As before, full run throughput in gigabases (billion bases) is plotted against single-end read length for the different sequencing platforms, both on a log scale.

The 2015 update looks interesting because of the addition of a certain new player!

L50 vs N50: that's another fine mess that bioinformatics got us into

N50 is a statistic that is widely used to describe genome assemblies. It describes an average length of a set of sequences, but the average is not the mean or median length. Rather, it is the length of the sequence that takes the sum length of all sequences — when summing from longest to shortest — past 50% of the total size of the assembly. The reasons for using N50, rather than the mean or median length, are something that I've written about before in detail.

The number of sequences evaluated at the point when the sum length exceeds 50% of the assembly size is sometimes referred to as the L50 number. Admittedly, this is somewhat confusing: N50 describes a sequence length whereas L50 describes a number of sequences. This oddity has led to many people inverting the usage of these terms. This doesn't help anyone and leads to confusion and to debate.
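The two definitions above can be sketched in a few lines of code. This is a minimal illustration (the function name and the example lengths are my own, and it uses the common convention that the cumulative sum must reach at least 50% of the total):

```python
def n50_l50(lengths):
    """Return (N50, L50) for a collection of sequence lengths.

    N50: the length of the sequence that takes the cumulative sum
    (summing from longest to shortest) past 50% of the assembly size.
    L50: the number of sequences needed to reach that point.
    """
    lengths = sorted(lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for count, length in enumerate(lengths, start=1):
        running += length
        if running >= half:
            return length, count  # N50 is a length, L50 is a count

# Six contigs totalling 30 bp; half the assembly is 15 bp.
# Summing longest-first: 10 + 8 = 18 >= 15, so N50 = 8 and L50 = 2.
print(n50_l50([2, 3, 4, 8, 10, 3]))
```

Note how the same loop yields both numbers, which is perhaps why the two terms get swapped so easily: N50 reports the length at the crossing point, L50 reports how many sequences it took to get there.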

I believe that the aforementioned definition of N50 was first used in the 2001 publication of the human genome sequence:

We used a statistic called the ‘N50 length’, defined as the largest length L such that 50% of all nucleotides are contained in contigs of size at least L.

I've since had some independent confirmation of this from Deanna Church (@deannachurch):

I also have a vague memory that other genome sequences — that were made available by the Sanger Institute around this time — also included statistics such as N60, N70, N80 etc. (at least I recall seeing these details in README files on an FTP site). Deanna also pointed out that the Celera Human Genome paper (published in Science, also in 2001) describes something that we might call N25 and N90, even though they didn't use these terms in the paper:

More than 90% of the genome is in scaffold assemblies of 100,000 bp or more, and 25% of the genome is in scaffolds of 10 million bp or larger

I don't know when L50 first started being used to describe lengths, but I would bet it was after 2001. If I'm wrong, please comment below and maybe we can settle this once and for all. Without evidence for an earlier use of L50 to describe lengths, I think people should stick to the 2001 definition of N50 (which I would also argue is the most common definition in use today).

Updated 2015-06-26 - Article includes new evidence from Deanna Church.

And the award for needless use of subscript in the name of a bioinformatics tool goes to…

The following paper can be found in the latest issue of Bioinformatics:

MoRFs are molecular recognition features, and the tool that the authors developed to identify them is called:

MoRFCHiBi

So the tool's name includes a subscripted version of 'CHiBi', a name which is taken from the shorthand name for the Center for High-Throughput Biology at the University of British Columbia (this is where the software was presumably developed). The website for MoRFCHiBi goes one step further by describing something called the MoRFChiBi,mc predictor. I'm glad that they felt that some italicized text was just the thing to complement the subscripted, mixed case name.

The subscript seems to serve no useful purpose and just makes the software name harder to read, particularly because it combines a lot of mixed capitalization. It also doesn't help that 'ChiBi' can be read as 'kai-bye' or 'chee-bee'. I'm curious whether the CHiBi will be adding their name as a subscripted suffix to all of their software, or just this one?

A great slide deck about how to put together the new format NIH Biosketch

This is a little bit off-topic, but I found it useful…

Earlier this year, Janet Gross and Gary Miller from the Rollins School of Public Health at Emory University put together a very useful guide on how to put together the new format NIH Biosketch:

There is also a nice page of grant writing tools available. I liked how they highlight the important point that cramming as many words as possible into the five pages should not be the goal. Aesthetics and layout matter!

101 questions with a bioinformatician #27: Michael Barton

Michael Barton is a Bioinformatics Systems Analyst at the Joint Genome Institute (that makes him a JGI BSA?). His work involves developing automated methods for the quality checking of sequencing data and evaluating new bioinformatics software. He may introduce himself as Michael, but as his twitter handle suggests, he is really Mr. Bioinformatics.

His nucleotid.es website is doing amazing things in the field of genome assembly by using Docker containers to try to parcel up genome assembly pipelines. This is enabling the ‘continuous, objective and reproducible evaluation of genome assemblers using docker containers’. Related to this is the bioboxes project — a great name by the way — which may just succeed in revolutionizing how bioinformatics is done. From the bioboxes manifesto:

Software has proliferated in bioinformatics and so have the problems associated with it: missing or unobtainable code, difficult to install dependencies, unreproducible workflows, all with terrible user experiences. We believe a community standard, using software containers, has the opportunity to solve these problems and increase the standard of scientific software as a whole.

You can find out more about Michael by visiting his Bioinformatics Zen blog or by following him on twitter (@bioinformatics). And now, on to the 101 questions…

Read More

Registration is now open for the UK 2015 Genome Science meeting

Registration for the 2015 Genome Science meeting is now open. This is the meeting formerly known as Genome Science: Biology, Applications and Technology, which in turn was formerly known as The UK Next Generation Sequencing Meeting. I expect that next year it will just be known as Genome.

It's a fun meeting which all the cool kids go to, and it's in Brum so you will at least be able to get a decent curry.

The scientific sessions will be as follows:

  • 20 years of bacterial genomics: Dr Nick Loman, University of Birmingham
  • Environmental genomics: Dr Holly Bik, University of Birmingham
  • Functional genomics: Associate Professor Aziz Aboobaker, University of Oxford
  • New technologies: Dr Mike Quail, Wellcome Trust Sanger Institute
  • Plant and animal genomics: Mr Mick Watson, Edinburgh Genomics
  • Novel computational methods: Professor Chris Ponting, University of Oxford
  • Single cell genomics: Professor Neil Hall, University of Liverpool

It is not altogether inconceivable that evening entertainment will be provided by The Nick & Mick Show, where Messrs. Loman and Watson might showcase their latest venture — Nanopore: the musical.

When will ‘open science’ become simply ‘science’?

A great commentary piece by Mick Watson (@BioMickWatson) in Genome Biology where he discusses the six O's (Open data, Open access, Open methodology, Open source, Open peer review, and Open education). On the former:

…it is no longer acceptable for scientists to hold on to data until they have extracted every last possible publication from it. The data do not belong to the scientist, they belong to the funder (quite often the taxpayer). Datasets should be freely available to those who funded them. Scientists who hoard data, far from pushing back the boundaries of human knowledge, instead act as barriers to discovery.

Amen.