Today’s guest post is from Madeline Goebel, a Digital Collections Specialist at the Library of Congress.
As a reader of the Signal, you may already be familiar with By the People, the Library of Congress's crowdsourcing program that invites volunteers to transcribe, review, and tag digitized pages from the Library's collections. You may also know that completed transcriptions are published on the Library's website, where they help make items more accessible and discoverable. But did you know that By the People also produces datasets of those completed transcriptions and releases them on the Library's website?
To date, there are 19 By the People datasets in the Selected Datasets Collection, and that number will continue to grow as volunteers complete more transcription campaigns. Each dataset package consists of a CSV file of data exported from Concordia (the software behind By the People) and a README describing the transcription campaign and the structure of the dataset. The CSV file includes all of the transcriptions and tags created by volunteers, opening up the possibility of computational research across collections with By the People transcriptions.
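Getting started with one of these CSV exports can be as simple as a few lines of Python. The sketch below is a minimal illustration, not code from the tutorial; the sample rows and column names (`item_id`, `asset`, `transcription`, `tags`) are hypothetical, since the actual column layout is documented in each dataset's README.

```python
import csv
import io

# Hypothetical sample mimicking a By the People CSV export.
# Real column names are described in each dataset's README.
sample = io.StringIO(
    "item_id,asset,transcription,tags\n"
    '1,page-001,"The right of citizens to vote",suffrage\n'
    '2,page-002,"Dear Miss Anthony,",correspondence\n'
)

# DictReader maps each row to a dict keyed by the header line.
rows = list(csv.DictReader(sample))
transcriptions = [row["transcription"] for row in rows]
print(f"Loaded {len(transcriptions)} transcribed pages")
```

In practice you would pass the downloaded dataset's filename to `open()` instead of the in-memory sample, and the `transcription` column (or its equivalent in that dataset) would feed directly into whatever text analysis comes next.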
If you are excited about these datasets but unsure where to start, we have just the resource for you! Library staff have created a Python tutorial that uses By the People datasets from four campaigns with materials related to the women's suffrage movement (the Susan B. Anthony Papers, Carrie Chapman Catt Papers, Elizabeth Cady Stanton Papers, and Mary Church Terrell Papers) to experiment with Natural Language Processing and create simple visualizations. The tutorial is organized as a series of Jupyter Notebooks (the notebooks themselves are available through GitHub) that use the spaCy Python library to break down and analyze the transcriptions with Natural Language Processing techniques. There are two visualizations: the first charts word frequency for each dataset, and the second charts word frequency for the speeches in the Susan B. Anthony Papers (see below).
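To give a flavor of the word-frequency step, here is a simplified, dependency-free sketch. The actual tutorial uses spaCy for tokenization and stop-word handling; this stand-in uses only the standard library, a tiny illustrative stop-word set, and made-up sample text rather than real transcription data.

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; spaCy ships a much larger one.
STOP_WORDS = {"the", "of", "to", "and", "a", "in", "is", "that"}

def word_frequencies(transcriptions, top_n=5):
    """Tokenize each transcription, drop stop words, and return
    the top_n most common remaining tokens with their counts."""
    counts = Counter()
    for text in transcriptions:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(t for t in tokens if t not in STOP_WORDS)
    return counts.most_common(top_n)

# Made-up sample pages standing in for transcription text.
pages = [
    "The right of citizens to vote shall not be denied",
    "the right to vote is the right of every citizen",
]
print(word_frequencies(pages, top_n=2))  # [('right', 3), ('vote', 2)]
```

From there, a list of `(word, count)` pairs like this is exactly the shape a plotting library such as matplotlib expects for a bar chart, which is how the tutorial's visualizations are built.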
The code uses these four datasets but could be applied to other By the People datasets as well. Additionally, the data processing techniques presented in this tutorial could serve as a starting point for other visualizations or analytical work. For further inspiration, you can look to students at the University of Michigan School of Information, who experimented with data from the Branch Rickey Papers. For a more advanced project, or just to interact with the Mary Church Terrell transcriptions in an engaging way, you should also check out the At the table with: Mary Church Terrell project, created by Library staff and interns. We look forward to seeing what you can learn and create!
Comments (2)
To what extent are these datasets available for or being used to train artificial intelligence?
Hi there – thanks for the question. From the guest author: “These datasets are publicly available, so they may be used for that purpose. However, we are not currently aware of any such projects.”