In a news article in Science, Jeffrey Mervis writes:
John Holdren, the president's science adviser, wasn't exaggerating when he said last week that “big data is indeed a big deal.” About 1.2 zettabytes (a zettabyte is 10^21 bytes) of electronic data are generated each year by everything from underground physics experiments and telescopes to retail transactions and Twitter posts...
Last week's event gave half a dozen agencies a chance to showcase what Holdren described as “$200 million in new commitments.”

The same issue has an editorial by Marie Davidian and Thomas A. Louis which states:
A dramatic increase in the number of statisticians is required to fill the nation's needs for expertise in data science. A 2011 report by a private consulting firm projected a necessary increase of nearly 200,000 professionals (a 50% increase) by 2018.

Of course these figures should be taken with a lot of caution, but they illustrate the huge volume of data that is beginning to flood into our computers and the huge job before society to try to make sense of it all. Does anyone doubt that part of the digital divide is related to the relative inability of poor people and poor countries to take advantage of the increasing technological ability to generate data, both because they lack technology infrastructure and because of the difficulties they will face in training and keeping experts capable of extracting meaning from the data and getting it disseminated and used?
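To get a rough sense of the scale of the 1.2-zettabyte figure, here is a back-of-the-envelope calculation, assuming a world population of about 7 billion at the time of the article:

$$\frac{1.2 \times 10^{21}\ \text{bytes/year}}{7 \times 10^{9}\ \text{people}} \approx 1.7 \times 10^{11}\ \text{bytes} \approx 170\ \text{GB per person per year}.$$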