Stay “in the loop” with an LC Labs experiment combining crowdsourcing and machine learning

In 2020, LC Labs began the Humans in the Loop experiment to explore ways to responsibly combine crowdsourcing experiences and machine learning workflows.

As you may know from following along with LC Labs’ investigations into these methods, machine learning’s reliance on pattern recognition and on training decisions made by human annotators makes it very good at predicting past classifications. Complexities emerge, however, in accounting for human bias and error, especially given machine learning’s potential to replicate and even amplify bias and harmful effects. The practice therefore benefits from methodical treatment, even in cases where it can make massive corpora more searchable, as 2020 Library of Congress Innovator in Residence Ben Lee demonstrated in his Newspaper Navigator experiment. Meanwhile, data generated by crowdsourcing participants shows promise as training data, but only if participants are fully informed. That kind of engagement with participants also requires carefully designed workflows and communications strategies.

For Humans in the Loop, we are collaborating with data management solutions provider AVP as they develop a framework for incorporating crowdsourced human feedback into training data, and into the results that data drives, in ways that are ethical, engaging, and useful. The experiment aims to create an experience that is both engaging and educational for users. By providing scaffolding and contextualization, it will hopefully also generate training data, in ethical ways, that machine learning can use to enrich the collections. In upcoming experiments, AVP will prototype workflows for combining crowdsourced human expertise with machine learning. One workflow will use human-generated input as the data on which to train a machine learning model; i.e., the model learns what a ‘cat’ is based on what users have selected as ‘cats’ vs. ‘not-cats.’ Another prototype will incorporate human feedback into machine-generated results. This process is often called validation: users confirm or deny whether what the machine reads as a ‘cat’ is in fact a cat. That feedback is then used to retrain the algorithm and guide its future predictions. A minimal sketch of this validation-style workflow appears below.
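
To make the validation workflow concrete, here is a minimal, hypothetical sketch in Python: a model labels new items, a volunteer confirms or corrects each label, and the confirmed labels are folded back into the training data before the model is retrained. The tiny feature vectors, the ‘cat’/‘not-cat’ labels, and the ask_volunteer stub are illustrative assumptions for this post only, not the Library’s or AVP’s actual prototypes.

```python
# Hypothetical sketch of a human-in-the-loop "validation" workflow:
# the model proposes labels, a volunteer confirms or corrects them,
# and confirmed labels are added to the training set before retraining.
from sklearn.linear_model import LogisticRegression

def ask_volunteer(item_id: int, predicted_label: str) -> str:
    """Stand-in for the crowdsourcing interface: a volunteer confirms or
    corrects the machine's prediction. Here we simply accept it."""
    return predicted_label

# Seed training data: made-up feature vectors with human-supplied labels.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y_train = ["cat", "cat", "not-cat", "not-cat"]

model = LogisticRegression().fit(X_train, y_train)

# New, unlabeled items the machine attempts to classify.
X_new = [[0.85, 0.15], [0.15, 0.85]]

for item_id, features in enumerate(X_new):
    predicted = model.predict([features])[0]
    confirmed = ask_volunteer(item_id, predicted)   # human in the loop
    X_train.append(features)
    y_train.append(confirmed)

# Retrain on the expanded, human-validated training set.
model = LogisticRegression().fit(X_train, y_train)
```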

The Humans in the Loop experiment builds directly on LC Labs’ sustained exploration of machine learning in cultural heritage for tasks such as pre-processing, segmentation, classification, clustering, transcription, and extraction. One example is the Speech-to-Text Viewer experiment, designed by colleagues in the Library’s Office of the Chief Information Officer and the American Folklife Center (AFC), which tested the feasibility of using an out-of-the-box speech-to-text solution on digital spoken-word collections held by AFC. In 2019, the team partnered with the Project AIDA researchers on a series of demonstration projects applying machine learning to Library of Congress collections in different ways. Project results and Library-specific recommendations can be found in their Digital Libraries, Intelligent Data Analytics, and Augmented Description report and GitHub code repository.

Screenshot of findings from the Project AIDA team’s report: eight scanned manuscript pages with visual content identified by yellow and red markings.

In September 2019, LC Labs hosted the Machine Learning + Libraries Summit, convening over 75 cultural heritage practitioners and machine learning experts. The event coincided with the announcement of Ben Lee as one of the 2020 Innovators in Residence, alongside Brian Foo. Lee’s Newspaper Navigator project, released in 2020, used a machine learning algorithm to identify, segment, and make searchable all of the visual content in the Chronicling America database of historic newspapers. Innovator Brian Foo also used machine learning to identify, classify, and cluster samples of music from Library of Congress collections in his design of the Citizen DJ experiment. Finally, LC Labs commissioned Professor Ryan Cordell to conduct a comprehensive survey of the state of the field regarding machine learning and libraries. In his final report, Cordell built on some of the Project AIDA team’s recommendations and laid out steps for cultivating responsible ML in libraries.

Screenshot of the Newspaper Navigator algorithm identifying and categorizing visual content on the front page of a West Virginia newspaper: red boxes mark content labeled “comics” and a purple box marks a “photograph.”

Humans in the Loop is both an enactment of the Digital Strategy goal to throw open the treasure chest via computational means and a response to the recommendations made in the reports mentioned above. The University of Nebraska-Lincoln team’s top recommendations focused on developing “social and technical infrastructures” and investing in “intentional explorations and investigations of particular machine learning applications” (30). Humans in the Loop works to achieve both these goals.

Similar to the design principles that guided the development of By the People, the Library’s crowdsourced transcription program, the values guiding Humans in the Loop call for scrutinizing the decisions underlying ML technology in order to redress bias and mitigate risk. One desired outcome of the project is that increased exposure to machine learning algorithms at work will lead to greater literacy about the technology. The project team’s hope is that participating in the process will reveal to users the ways in which machine learning relies on human subjectivity and decision-making rather than on objective, or neutral, classification.

As Cordell writes, “one of the largest challenges facing library ML work is the labor required to create meaningful training data, and crowdsourcing efforts hold much potential for addressing that need” (18). The design of projects that combine the two thus merits careful thought and thorough investigation. When done well, the payoff can be remarkable. A great example is the Beyond Words experiment designed by Staff Innovator Tong Wang. The application was not only incredibly popular and fun for users who wanted to dig into WWI-era newspapers, but it also generated derivative data that was instrumental to both the demonstration prototypes from the University of Nebraska-Lincoln team and the Newspaper Navigator application. Without this wealth of crowd-created data, released into the public domain as it was created, neither of these projects would have been possible. Humans in the Loop pilots the creation of interfaces that intentionally combine crowdsourcing and machine learning in the same space.

We will share more information about the project, including the collections being used in the experiment and a call for user testing of prototypes, soon.

If you have questions in the meantime or would like to sign up to test these prototypes, get in touch by email at [email protected].

Comments (2)

  1. Could you please give three concrete examples of what was, or could be, found using these techniques, and which was actually useful to someone doing actual research?

    • Thank you for your comment, Thomas. We’re glad to share the final recommendations report from the Humans in the Loop (HITL) initiative with you here: https://labs.loc.gov/work/experiments/humans-loop/. This work emphasizes the imperative role of human expertise in responsible adoption of machine learning methods for making collections more discoverable and accessible. It also surfaces the potential of humans in the loop workflows for increasing public literacies around these technologies.

      You may also wish to learn more about the Computing Cultural Heritage in the Cloud initiative and the three researchers applying computational methods to Library collections data: https://labs.loc.gov/work/experiments/cchc/
