Winning an award that shouldn't exist: progress towards 'open data' and 'open science'

It was announced yesterday that the Assemblathon 2 paper has won the 2013 BioMed Central award for ‘Open Data’ (sponsored by Lab Archives). For more details on this see here and here.

While it is flattering to be recognized for our efforts to conduct science transparently, it still feels a little strange that we need to have awards for this kind of thing. All data that results from publicly funded science research should be open data. Although I feel there is growing support for the open science movement, much still needs to be done.

One of the things that needs to become commonplace is for scientists to put their data and code in stable online repositories that are, ideally, citable as independent resources (i.e. with a DOI). For too long, people have used their lab websites as the end point for all of their non-sequence data[1] (something that I have also been guilty of).

Part of the problem is that even when you take steps to submit data to an online repository of some kind, not all journals allow you to cite it. This tweet by Vince Buffalo from yesterday illustrated one such issue (see this Storify page for more details of the resulting discussion):


Tools like arXiv.org, bioRxiv, Figshare, SlideShare, GitHub, and GigaDB are making it easier to make our data, code, presentations, and preliminary results more available to others. I hope that we see more innovation in this area and I hope that more people take an ‘open’ approach to other aspects of science, not just the sharing of data[2]. Luckily, with people around like Jonathan Eisen and C. Titus Brown, we have some great role models for how to do this.

How will we know when we are all good practitioners of open science? When we no longer need to give out awards to people just for doing what we should all be doing.


  1. For the most part, journals require authors to submit nucleotide and protein sequences to an INSDC database, though this doesn’t always happen.  ↩

  2. I have written elsewhere about the steps that the Assemblathon 2 took to try to be open throughout the whole process of doing the science, writing the paper, and communicating the results.  ↩

101 questions with a bioinformatician #4: Michael Hoffman

This post is part of a series that interviews some notable bioinformaticians to get their views on various aspects of bioinformatics research. Hopefully these answers will prove useful to others in the field, especially to those who are just starting their bioinformatics careers.


Michael Hoffman is a principal investigator at the Princess Margaret Cancer Centre in Toronto. His research group is based in the glamorous-sounding Toronto Medical Discovery Tower, and the focus of his current work is on developing machine learning techniques to better understand chromatin biology. The highest compliment that I can pay to Michael is that he understands the need to properly document his code; the description for his Segway software states:

Our software has extensive documentation and was designed from the outset with external users in mind.

I wish more bioinformaticians had this attitude! You can find out more about Michael by following him on Twitter (@michaelhoffman).

 

001. What's something that you enjoy about current bioinformatics research?

I love how easy it is to experiment with new ideas. The activation energy for writing and managing a useful piece of code, or for looking at results, keeps decreasing. Improvements at the lower levels of abstraction keep making it easier to think about more complex problems rather than low-level implementation details.

 

010. What's something that you *don't* enjoy about current bioinformatics research?

The amount of time wasted by moving data around, converting it from one format to another. Was it Nick Loman who referred to bioinformatics as "advanced file copying"? I hate that stuff. I can't believe no one has solved this problem yet.

 

011. If you could go back in time and visit yourself as an 18 year old, what single piece of advice would you give yourself to help your future bioinformatics career?

I was a biochemistry undergraduate in a chemistry and biochemistry department. I would have been served better by more statistics classes and fewer advanced chemistry classes. I still learned some cool stuff in those classes though, and I got to quantify the hotness of commercial salsas via HPLC. Best lab teaching experiment ever.

 

100. What's your all-time favorite piece of bioinformatics software, and why?

Can I bend the rules and name my all-time favorite bioinformatics data resource? That would be Margaret Dayhoff's Atlas of Protein Sequence and Structure (here is a good review of how this resource was developed). Dayhoff and colleagues were the first people to realize that we needed to gather all the available protein sequence information in a database so that we could do cool stuff with it. The whole field traces its origin to Dayhoff's work starting in the 1950s. Of course, back then you could print out all the sequence information available in a book. Try doing that today (well, there is this, KB).

Bioinformatics has been around longer than people realize.

 

 

101. IUPAC describes a set of 18 single-character nucleotide codes that can represent a DNA base: which one best reflects your personality?

I'm going to go with R because of my interest in pure science.

 

2014-04-22 11.04 - Article updated to correct a typo and to fix the web link for Michael's research group.

When is a genome complete...and does it even matter? Part 1: the 1% rule vs Sydney Brenner's CAP criteria

This will be the first in a new series of blog posts that discuss my thoughts on the utility of genomes at various stages of completion (both in terms of genome assembly and annotation). These posts will mostly be addressing issues that pertain to eukaryotic genomes...are there any other kinds? ;-)

I often find myself torn between two conflicting viewpoints about the utility of unfinished genomes. First, let's look at the any-amount-of-sequence-is-better-than-no-sequence-at-all argument. This is clearly true in many cases. If you sequence only 1% of a genome, and if that 1% contains something you're interested in (gene, repeat, binding site, sequence variant, etc.), then you may well think that the sequencing effort was tremendously useful.

Indeed, one of my all-time favorite papers in science is an early bioinformatics analysis of gene sequences in GenBank. Published way back in 1980, this paper (Codon catalog usage and the genome hypothesis) studied "all published mRNA sequences of more than about 50 codons". Today, that would be a daunting exercise. Back then, the dataset comprised just 90 genes! Most of these were viral sequences, with just six vertebrate species represented (and only four sequences from human).

The abstract of this paper concluded:

Each gene in a genome tends to conform to its species' usage of the codon catalog; this is our genome hypothesis.
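
For anyone who hasn't come across the term, 'codon usage' simply describes the relative frequencies with which a gene (or genome) uses each of the 64 possible codons. Here is a minimal sketch of how you could tally it for a single coding sequence; this is my own illustrative Python, not anything from the 1980 paper:

```python
from collections import Counter

def codon_usage(cds: str) -> dict:
    """Tally codon frequencies in a coding sequence, reading it in
    frame and ignoring any trailing partial codon."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

# Toy example of a biased sequence: of the leucine codons used here,
# CTG accounts for 0.5 of all codons; ATG, TTA, and TAA are ~0.167 each
print(codon_usage("ATGCTGCTGCTGTTATAA"))
```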

This mostly remains true today, and the original work on this tiny dataset established a pattern that spawned an entire sub-discipline of genomics: that of codon-usage bias (now with over 7,000 publications). Clearly, you can do lots of great and useful science with only a tiny amount of genome sequence information. So what's the problem?

pause-to-switch-hats-to-argue-the-other-point

Well, 1% of a genome may be better than 0%, and 2% is better than 1%, and so on. But I want 100% of a genome (yes, I'm greedy like that). However, I begrudgingly accept that generating a complete and accurate genome assembly (not to mention a complete set of gene annotations) currently falls into the nice-idea-kid-but-we-can't-all-be-dreamers category.

The danger in not getting to 100% completion is that there is a perception — by scientists as well as the general public — that these genomes are indeed all finished. This disconnect between the actual state of completion and the perceived state of completion can lead to reactions of the wait-a-minute-I-thought-this-was-meant-to-be-finished!?! variety. Indeed, it can be highly confusing when people go to download the genome of their species of interest, under the impression that it was 'finished' many years ago, only to find that they can't find what they're looking for.

Someone might be looking for their favorite gene annotation, but maybe this 'finished' genome hasn't actually been annotated. Or maybe it has been annotated by four different gene finders and left in a state where the user has to decide which ones to trust. Maybe the researcher is interested in chromosome evolution and is surprised to find that the genome doesn't consist of chromosome sequences, just scaffolds. Maybe they find that there are two completely different versions of the same genome, assembled by different groups. Or maybe they find that the download link provided by the paper no longer works and they can't even find the genome in question.

The great biologist Sydney Brenner has often spoken of the need to meet the CAP criteria in efforts such as genome sequencing. What are these criteria?

  • C - Complete: i.e. if you're going to do it, do a thorough job so that someone doesn't have to come along later to redo it.
  • A - Accurate: this is kind of obvious, but there are so many published genomes out there that are far from accurate.
  • P - Permanent: do it once, and forever.

The last point is probably not something that is thought about as much as the first two criteria. It relates to where these genomes end up being stored and to the file formats that people use. But it also applies to other, subtler issues. For example, suppose that research group 'X' has sequenced a genome to an impressive depth but made a terrible assembly. As long as their raw reads remain available, someone else can (in theory) attempt a better assembly, or attempt to remake the exact same assembly (science should be reproducible, right?).

However, reproducibility is not always easy in bioinformatics. Even if all of the methodologies are carefully documented, the software involved may no longer be available, or it may only run on an architecture that no longer exists. If you are attempting to make a better genome assembly, you could face issues if some critical piece of information was missing from the SRA Experiment metadata. A potentially more problematic situation would be if the metadata was incorrect in some way (e.g. a wrong insert size was listed).
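
To make that last point concrete, here is a minimal sketch of how you might sanity-check a declared insert size against what the aligned read pairs actually show. It assumes you have mapped the reads back to the assembly in a BAM file and have the third-party pysam library installed; the file name and the 500 bp 'declared' value are invented for the example:

```python
import statistics

import pysam  # third-party library for reading BAM/SAM alignment files

DECLARED_INSERT_SIZE = 500  # hypothetical value from the SRA metadata

def observed_insert_sizes(bam_path: str, max_pairs: int = 100_000) -> list:
    """Collect observed template lengths from properly paired reads,
    counting each pair once (via its forward-oriented read)."""
    sizes = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if read.is_proper_pair and read.template_length > 0:
                sizes.append(read.template_length)
                if len(sizes) >= max_pairs:
                    break
    return sizes

sizes = observed_insert_sizes("reads_vs_assembly.bam")  # invented file name
median_size = statistics.median(sizes)
print(f"declared: {DECLARED_INSERT_SIZE} bp, observed median: {median_size} bp")
if abs(median_size - DECLARED_INSERT_SIZE) > 0.2 * DECLARED_INSERT_SIZE:
    print("Warning: declared insert size looks inconsistent with the reads")
```

If the observed median disagrees badly with the metadata, you know to distrust the declared value before feeding it to an assembler.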

In subsequent posts, I'll explore how different genomes hold up to these criteria. I will also suggest my own 'five levels of genome completeness' criteria (for genome sequences and annotations).

101 questions with a bioinformatician #3: Deanna Church

This post is part of a series that interviews some notable bioinformaticians to get their views on various aspects of bioinformatics research. Hopefully these answers will prove useful to others in the field, especially to those who are just starting their bioinformatics careers.


After 15 years working at the NCBI as a staff scientist, Deanna Church packed her bags and headed over to the West Coast (which some of us think of as the best coast) to join Personalis, a company that is 'pioneering genome guided medicine'. In her new role as Senior Director of Genomics and Content, Deanna is helping to improve their bioinformatics pipelines, which should in turn lead to improved analysis of human genome data. This work will also involve supporting the move to GRCh38.

If you don't know what GRCh38 is, then you've either been living under a rock or you have probably never worked with vertebrate genomes. The 'GRC' part of GRCh38 refers to the Genome Reference Consortium, an organization that Deanna was heavily involved with during her time at the NCBI. The GRC are the official 'gatekeepers of genomic light and truth' (a title which I may or may not have just invented)...the key point is that they ensure that the 'reference sequence' for the human genome, and for the genomes of other species, remains a trusted reference. They coordinate the incorporation of changes to the reference sequence, changes that need to be made based on the latest sequencing and genome variation data.

I think that Deanna's work in genomics can best be summarized using her very own words taken from her About.me page:

Deanna Church: making the genome a friendlier place

To find out more about Deanna, follow her on Twitter (@DeannaChurch). And now, on to the 101 questions...

 

 

001. What's something that you enjoy about current bioinformatics research?

In general, I really enjoy bioinformatics for the problem solving aspects. Most of the time, even the (seemingly) smallest problem will throw you unanticipated challenges. The thing I like most about the work I’m currently doing is that I feel like I’m part of a team that is really working on processes that will have a direct impact on people’s medical care. 

 

010. What's something that you *don't* enjoy about current bioinformatics research?

This could change on a day-to-day basis, but my current woe is managing sequence identifiers. This is a serious problem — while I understand the convenience of reporting results as either ‘chr1’ or ‘1’, these are not robust sequence identifiers. We should be managing and exchanging data using a more robust nomenclature (e.g. by using things like accession.version), as these identifiers provide a robust and traceable history of a sequence. The current standards make it too easy to make simple mistakes — I fear we may see a lot of this as folks transition from GRCh37 to GRCh38.
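
To make this concrete: the same colloquial name points to different sequences in different assemblies, whereas an accession.version identifies exactly one sequence. Below is a minimal illustrative sketch (mine, not Deanna's); the two RefSeq accessions really are chromosome 1 in GRCh37 and GRCh38, but the tiny helper function is purely hypothetical:

```python
# RefSeq accession.version identifiers for human chromosome 1.
# The accessions are real; the lookup helper is just for illustration.
CHR1_BY_ASSEMBLY = {
    "GRCh37": "NC_000001.10",
    "GRCh38": "NC_000001.11",
}

def resolve(name: str, assembly: str) -> str:
    """Translate a colloquial name like 'chr1' or '1' into an
    accession.version, which pins down exactly one sequence."""
    if name.removeprefix("chr") == "1":
        return CHR1_BY_ASSEMBLY[assembly]
    raise KeyError(f"no mapping for {name!r} in {assembly}")

# 'chr1' alone is ambiguous: it means a different sequence in each assembly
print(resolve("chr1", "GRCh37"))  # NC_000001.10
print(resolve("chr1", "GRCh38"))  # NC_000001.11
```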

 

011. If you could go back in time and visit yourself as an 18 year old, what single piece of advice would you give yourself to help your future bioinformatics career?

Can I give two? Keep taking the liberal arts classes as an undergrad, but work more computer programming and math into your schedule!

 

100. What's your all-time favorite piece of bioinformatics software, and why?

This is a little self-serving, but I really like the GeT-RM browser. I managed the development of this tool while I was at NCBI. It is not necessarily my favorite because of the usage or impact it has had in the community, but rather because of what I learned while we were doing this project. I learned a huge amount about gathering user requirements, writing specifications, agile development, and testing. Plus, we’ve gotten good feedback from users, so that is always a plus.

 

101. IUPAC describes a set of 18 single-character nucleotide codes that can represent a DNA base: which one best reflects your personality?

I think I might have to say ‘.’ for a couple of reasons. First, I’ve spent a huge amount of my career trying to fill the actual gaps in assemblies — especially the human and mouse assemblies. Second, on many of my projects I’ve been a metaphorical gap filler: project manager in some cases, backend developer in others, and even a couple of turns at web UI development. I’m not quite comfortable calling myself a jack of all trades, but I try not to be too afraid of taking on new roles. It is good to continually test yourself...and to fail every now and again.