Once upon a time, a former editor of WIRED, Chris Anderson, … envisaged how scientists would take the ever-expanding ocean of data, send a torrent of bits and bytes into a great hopper, then crank the handles of huge computers that run powerful statistical algorithms to discern patterns where science cannot.
In short, Anderson dreamt of the day when scientists no longer had to think.
Eight years later, the deluge is truly upon us. Some 90 percent of the data currently in the world was created in the last two years … and there are high hopes that big data will pave the way for a revolution in medicine.
But we need big thinking more than ever before.
Today’s data sets, though bigger than ever, still afford us an impoverished view of living things.
It takes a bewildering amount of data to capture the complexities of life.
The usual response is to put faith in machine learning, such as artificial neural networks. But no matter their ‘depth’ and sophistication, these methods merely fit curves to available data. We do not predict tomorrow’s weather by averaging historical records of that day’s weather.
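To make the curve-fitting point concrete, here is a minimal Python sketch (ours, not the article’s): a flexible curve fitted to noisy observations reproduces them well inside the range it has seen, but has nothing useful to say one step beyond it. The signal, the noise level, and the polynomial degree are all arbitrary illustrative choices.

```python
import numpy as np

# Toy illustration: fit a flexible curve to "historical" data,
# then ask it about a point it has never seen.
# All choices here (signal, noise, degree) are arbitrary, for demonstration.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)                       # the observations we have
y = np.sin(3 * x) + rng.normal(0, 0.05, x.size)  # a noisy "measured" signal

fit = np.poly1d(np.polyfit(x, y, deg=9))         # fit a degree-9 polynomial

# Inside the observed range, the curve tracks the signal closely...
print(fit(0.5), np.sin(1.5))   # both are close to 1.0
# ...but one step outside it, the same curve is far from the truth.
print(fit(2.0), np.sin(6.0))   # fit drifts off; the true value is about -0.28
```

This is the weather analogy in miniature: within the historical record the fit looks impressive, but asked about tomorrow, it has only its curve.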
… There are other limitations, not least that data are not always reliable (“most published research findings are false,” as John Ioannidis famously reported in PLOS Medicine). Bodies are dynamic and ever-changing, while datasets often give only snapshots, and are always retrospective.
Source: Wired