
Understanding User Generated Tags for Digital Collections: An Interview with Jennifer Golbeck


Jennifer Golbeck, Assistant Professor and Director of the Human-Computer Interaction Lab at the University of Maryland

This is a guest post by Jose “Ricky” Padilla, a HACU intern working with NDIIPP.

More and more cultural heritage organizations are inviting their users to tag collection items to help aggregate, sort and filter those items. If we better understood how and why users tag, and what they are tagging, we could better understand how to invite their participation. For this installment of the Insights series I interview Jennifer Golbeck, an assistant professor at the University of Maryland, Director of the Human-Computer Interaction Lab and a research fellow at the Web Science Research Initiative, about her ongoing studies of how users tag art objects.

Ricky: Could you tell us about your work and research on tagging behaviors?

Jennifer: I have studied tagging in a few ways. With respect to images of artworks, we have run two major studies. One looks at the types of tags people use. The other compares and contrasts tags generated by people in different cultures.

In the project on tag types, we used a variation of the categorization matrix developed by Panofsky and Shatford. This groups tags by whether they are about things (including people), events, or places, and also by whether they are general (like “dog”), specific (like “Rin Tin Tin”), or abstract (like “happiness”). We also included a category for tags about visual features like color and shape. We found that people most commonly used general terms to describe people and things. However, when tagging abstract works of art, they were much more likely to use tags about visual elements.
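To make the matrix concrete, here is a minimal Python sketch of the coding scheme Golbeck describes. The facet and level names follow the interview; the example tag assignments are purely illustrative and are not data from the study.

```python
from enum import Enum

class Facet(Enum):
    THING = "thing"    # includes people
    EVENT = "event"
    PLACE = "place"
    VISUAL = "visual"  # extra category for color, shape, etc.

class Level(Enum):
    GENERAL = "general"    # e.g. "dog"
    SPECIFIC = "specific"  # e.g. "Rin Tin Tin"
    ABSTRACT = "abstract"  # e.g. "happiness"

# Each tag is coded as a (facet, level) cell of the matrix;
# visual-feature tags carry no generality level here.
coded_tags = {
    "dog": (Facet.THING, Level.GENERAL),
    "Rin Tin Tin": (Facet.THING, Level.SPECIFIC),
    "happiness": (Facet.THING, Level.ABSTRACT),
    "blue": (Facet.VISUAL, None),
}

for tag, (facet, level) in coded_tags.items():
    print(f"{tag!r}: facet={facet.value}, level={level.value if level else 'n/a'}")
```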

My PhD student Irene Eleta led our other study. She asked American native English speakers and native Spanish speakers from Spain to tag the same images. She found differences in the tags they assigned, which were often culture-specific. For example, on Winslow Homer’s “The Cotton Pickers,” Americans used tags like “Civil War” and “South,” which Spanish taggers didn’t. This illustrates how translating tags can open up new types of access to people who use different languages and come from different cultures.

Example of the different kinds of tags for the same object from people who speak different languages and come from different cultures.
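As a rough illustration of this kind of cross-language comparison, the sketch below contrasts two tag sets through a small translation table. Apart from the two tags quoted in the interview, the tags and translations are invented placeholders, not data from Eleta’s study.

```python
# Hypothetical tag sets for one image from two language groups.
english_tags = {"civil war", "south", "cotton", "field", "women"}
spanish_tags = {"algodón", "campo", "mujeres", "trabajo"}

# A toy translation table mapping Spanish tags to English.
translations = {"algodón": "cotton", "campo": "field",
                "mujeres": "women", "trabajo": "labor"}

translated = {translations.get(t, t) for t in spanish_tags}

print("Shared concepts:", english_tags & translated)
print("Specific to English taggers:", english_tags - translated)
print("Specific to Spanish taggers:", translated - english_tags)
```

The tags left over after translation are the culturally specific ones, which is exactly the kind of signal the study surfaced.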

Ricky: Is there any of your research that you find would be particularly beneficial to those interested in digital stewardship?

Jennifer: Irene Eleta’s work on culture and language is very interesting. I think this is a relatively unexplored area, and there is so much that can be done by combining computational linguistics, other computing tools and metadata like tags to improve access.

Ricky: In your talk for the Digital Dialogues at the Maryland Institute for Technology in the Humanities you presented three research projects using tags on art. Could you give us some background on research that helped inform your work in this area?

Jennifer: I come from a computer science background, so I am far from an expert in this area. I read up a lot on metadata and some existing tools and standards like the Art & Architecture Thesaurus. We also worked with museum partners who brought the art and museum professional perspective, which was very helpful.

Ricky: You explained in the talk that understanding what people are tagging and why can help design better tagging systems. Could you elaborate on this idea?

Jennifer: Tags have been shown to provide a lot of new data beyond what a cataloger or museum professional will usually provide. However, to maximize the benefit of tags, it helps to understand how they will improve people’s access to the images; worthless tags do not help access. Our work is designed to understand what kinds of tags people are applying, which can help in a few ways. First, we can compare tags to the terms people are searching for: if search terms match tags, that indicates the tags are useful. Second, we can see if tags are applied more to one type of image than another. For example, I mentioned that people use a lot of color and shape tags for abstract images. This means that if someone searches for a color term, the results may be heavily biased toward abstract images. This has implications for tagging system design: we might build an interface that encourages people to use visual-element tags on all images, or we might use computer vision techniques to extract color and shape data. At the core, by understanding what people tag, we can think about how to encourage or change the tagging they are doing in order to improve access.
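As a rough sketch of the first analysis Golbeck mentions, the snippet below counts how often logged search terms match existing tags. The query log and tag index are invented placeholders standing in for real system data.

```python
from collections import Counter

# Hypothetical search queries and tag vocabulary.
search_log = ["blue", "portrait", "dog", "blue", "abstract", "sunset"]
tag_index = {"blue", "dog", "landscape", "geometric"}

# Count the searches that would have been answerable via tags.
hits = Counter(q for q in search_log if q in tag_index)
total = len(search_log)

print(f"{sum(hits.values())}/{total} searches matched a tag")
for term, n in hits.most_common():
    print(f"  {term!r} matched {n} time(s)")
```

A high match rate suggests the tags users contribute line up with how other users actually search, which is one way to argue the tags are earning their keep.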

Ricky: Has your research uncovered any ways to encourage tagging? If so, what are some of the factors that encourage and discourage tagging?

Jennifer: We haven’t made it to that point yet. We have uncovered a number of results that suggest how we can begin to design tagging systems and what we might want to encourage, but how to do this is still an open question.

Ricky: In a study you compared tags from native English speakers from the USA and native Spanish speakers from Spain. Could you tell us a little about the findings of this investigation and how cultural heritage institutions could benefit from this research?

Jennifer: (I described this work a bit above.) Cultural heritage institutions can benefit from this in a couple of ways. If they have groups who use different languages, they can provide bridges between these languages to allow monolingual speakers to benefit from the cultural insights shared in another language. This can be done by translating tags on the back end of the system. It also suggests that in order to open up their collections to other cultures, language tools will be important.
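Here is one minimal way such back-end bridging might look, assuming tags are stored per language and translated at query time. The item, tags, and translation table are hypothetical, and a production system would need real translation infrastructure rather than a hand-built lookup.

```python
# Tags stored in the language they were contributed in.
tag_store = {
    "homer_cotton_pickers": {
        "en": {"civil war", "south"},
        "es": {"algodón", "campo"},
    },
}

# Toy Spanish-to-English translation table.
es_to_en = {"algodón": "cotton", "campo": "field"}

def searchable_tags(item_id: str, lang: str = "en") -> set:
    """Return an item's tags for `lang`, folding in translations of tags
    contributed in the other language so monolingual searchers still match."""
    tags = tag_store[item_id]
    if lang == "en":
        return tags["en"] | {es_to_en.get(t, t) for t in tags["es"]}
    raise NotImplementedError("only English-side bridging is sketched here")

print(searchable_tags("homer_cotton_pickers"))
# Includes "cotton" and "field" alongside the English-contributed tags.
```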

Ricky: You mentioned that automatic translation could help improve the accessibility of digital collections, but that it is more complex than that. What are some of the pros and cons of automatic translation that you came across in your research?

Jennifer: I discussed some of the pros above. However, automated translation is a hard problem, especially when working with single words. Disambiguation is a classic example: if you see the tag “blues,” does it refer to the color or to the music? When there is surrounding text, a tool can rely on context, but that is much harder with isolated tags. If we want to rely on translation, we will have to do more work in this area.
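To see why isolated tags are hard to disambiguate, the snippet below uses NLTK’s WordNet interface simply to enumerate the candidate senses of a tag like “blues.” This is just a way to surface the ambiguity Golbeck describes, not the study’s method; it assumes nltk is installed and the wordnet corpus has been downloaded.

```python
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

tag = "blues"

# List every WordNet sense reachable from the bare tag.
for synset in wn.synsets(tag):
    print(synset.name(), "-", synset.definition())

# With running text, a word-sense disambiguation tool can lean on the
# neighboring words; an isolated tag offers no such context, which is
# why choosing the right translation is hard.
```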

Ricky: Is there any other work you would like to do with data from these studies, like the recordings of the eye-tracking sessions?

Jennifer: We have eye-tracking data for people tagging images and for people simply looking at images. We also have it for people who looked at an image for a while before tagging it and for people who began tagging immediately. It would be interesting to compare these to see how people look at art when they are given a task versus when they are simply asked to look at it. We can also compare how people tag when they are familiar with an image versus when they are seeing it for the first time.

 
