Data Infrastructure, Education & Sustainability: Notes from the Symposium on the Interagency Strategic Plan for Big Data

Last week, the National Academies Board on Research Data and Information hosted a Symposium on the Interagency Strategic Plan for Big Data. Staff from the National Institutes of Health, the National Science Foundation, the U.S. Geological Survey and the National Institute of Standards and Technology presented on ongoing work to establish an interagency strategic plan for Big Data. In this short post I recap some of the points and issues raised in the presentations and discussion, and provide links to some of the projects and initiatives that I think will be of interest to readers of The Signal.

Vision and Priority Actions for National Big Data R&D

Slide with the vision for the interagency big data activity.

Part of the occasion for this event is the current “Request for Input (RFI)-National Big Data R&D Initiative.” Individuals and organizations have until November 14th to provide comments on “The National Big Data R&D Initiative: Vision and Actions to be Taken” (pdf). This short document is intended to inform research and development policy across various federal agencies. Of particular relevance to those working in digital stewardship and digital preservation, the draft focuses on the trustworthiness of data and the knowledge derived from it, on investing in both domain-specific and shared cyberinfrastructure to support research, on improving education and training in data analysis, and on “ensuring the long term sustainability” of data sets and data resources.

Sustainability as the Elephant in the Room

In the overview presentation about the interagency big data initiative, Allen Dearry from the National Institute of Environmental Health Sciences noted that sustainability and preservation infrastructure for data remains the “elephant in the room.” This comment resonated with several of the subsequent presenters and was referenced several times in their remarks. I was glad to see sustainability and long-term access getting this kind of attention. It is also good to see that “sustainability” is specifically mentioned in the draft document referenced above. With that noted, throughout the discussion and presentations it was clear that the challenges of long-term data management only grow more complex as ever-greater volumes of data are collected to support a widening range of research.

From “Data to Knowledge” as a Framework

The phrase “Data to Knowledge” was repeated in several of the presentations. The interagency team working in this space has often made use of it, for example, in relation to last year’s “Data to Knowledge to Action” event (pdf). From a stewardship/preservation perspective, it is invaluable to recognize that focusing on the knowledge and action that result from data places additional levels of required assurance on the range of activities involved in the stewardship of data. This is not simply a matter of maintaining data assets, but the more complex activity of keeping data accessible and interpretable in ways that support generating sound knowledge.

Some of the particular examples discussed under the heading of “data to knowledge” illustrate the significance of the concept to the work of data preservation and stewardship. One of the presenters mentioned the importance of publishing negative results and the analytic process of research. Another noted that open source platforms like the IPython Notebook are making it easier for scientists to work on and share their data, code and research. This discussion connected rather directly with many of the issues raised in the 2012 NDIIPP content summit Science@Risk: Toward a National Strategy for Preserving Online Science and in its final report (pdf). There is a whole range of seemingly ancillary material that makes data interpretable and meaningful. I was pleased to see one of those areas, software, receive recognition at the event.

Recognition of Software Preservation as Supporting Data to Knowledge

Sky Bristol from USGS presenting on sustainability issues related to big data to an audience at the National Academy of Sciences in Washington DC.

The event closed with presentations from two projects that won the National Academies Board on Research Data and Information’s Data and Information Challenge Awards. Adam Asare of the Immune Tolerance Network presented on “ITN Trial Share: Enabling True Clinical Trial Transparency” and Mahadev Satyanarayanan from the Olive Executable Archive presented on “Olive: Sustaining Executable Content Over Decades.” Both projects represent significant progress in sustaining access to scientific data.

I was particularly thrilled to see the issues around software preservation receiving this kind of national attention. As explained in much greater depth in the Preserving.exe report, arts, culture and scientific advancement are increasingly dependent on software. In this respect, I found it promising to see a project like Olive, which has considerable implications for the reproducibility of analysis and for providing long-term access to data and interpretations of data in their native formats and environments, receiving recognition at an event focused on data infrastructure. For those interested in the broader implications of this kind of work for science, this 2011 interview with the Olive project explores many of them.

Education and Training in Data Curation

Slide from presentation on approaches to analytical training for working with data for all learners.

Another subject I imagine readers of The Signal are tracking is education and training in support of data analysis and curation. Michelle Dunn from the National Institutes of Health presented on an approach NIH is taking to develop the kind of workforce that is necessary in this space. She mentioned a range of vectors for thinking about data science training, including traditional academic programs as well as the potential for developing open educational resources. For those interested in this topic, it’s worth reviewing the vision and goals outlined in the NIH Data Science “Education, Training, and Workforce Development” draft report (pdf). As libraries increasingly become involved in the curation and management of research data, and as library and information science programs increasingly focus on preparing students to work in support of data-intensive research, it will be critical to follow developments in this area.