Why I think buzzword phrases like 'big data' are a huge problem

There are many phrases in bioinformatics that I find annoying because they can be highly subjective:

  • Short-read mapping
  • Long-read technology
  • High-throughput sequencing

One person's definition of 'short', 'long', or 'high' may differ greatly from someone else's. Furthermore, our understanding of these phrases changes with the onwards march of technological innovation. Back in 2000, 'short' meant 16–20 bp, whereas in 2014 'short' can mean 100–200 bp.

The new kid on the block, which is not specific to bioinformatics, is 'big data'. Over the last week, I've been helping with an NIH grant application entitled 'Courses for Skills Development in Biomedical Big Data Science'. This grant mentions the phrase thirty-nine times, so it must be important. Why do I dislike the phrase so much? Three reasons:

  1. Even within a field like bioinformatics, it's a subjective term and may not mean the same thing to everyone.
  2. Just as the phrases 'next-generation' and 'second-generation' sequencing inspired a set of clumsy and ill-defined successors (e.g. '2.5th generation' and 'next-next-next generation'), I suspect that 'big data' will lead to similar language atrocities being committed. Will people start talking about 'really big data' or 'extremely large data' to distinguish themselves from one another?
  3. The term might be subjective within bioinformatics, but it is probably even more subjective when used across different scientific disciplines. In astronomy, space telescopes are producing petabytes of data. In particle physics, the Data Center at the Wigner Research Centre for Physics processes one petabyte of data per day. If you work for the NSA, then you may well have exabytes of data lying around.

I joked about the issue of 'big data' on Twitter:

My Genome Center colleague Joe Fass had a great comment in response to this:

This is an excellent point. When people talk about the challenges of working with 'big data', it really depends on how well your infrastructure is equipped to deal with such data. If your data is readily accessible and securely backed up, then you may only be working with 'data' and not 'big data'.

In another post, I will suggest that the issue for much of bioinformatics is not 'big data' per se but 'obese data', or even 'grotesquely obese data'. I will also suggest a sophisticated computational tool that I call Operational Heuristics for Management of Your Grotesquely Obese Data (OHMYGOD), but which you might know as rm -f.
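
If you want a preview of this cutting-edge software, here is a minimal sketch of how one might 'install' it as a shell alias (the alias is, of course, just my joke and not a real package):

  # Operational Heuristics for Management of Your Grotesquely Obese Data
  alias OHMYGOD='rm -f'

Needless to say, please apply the usual caution that comes with any data-management strategy built around permanently deleting files.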