This is a guest post by Rachel Trent, Digital Collections and Automation Coordinator in the Geography and Map Division.
Every time you look at an online image of a historical map, what you’re viewing is really just a spreadsheet of numbers. Or more likely, three spreadsheets, one each for red, green, and blue (the technical way to describe this is as a “3-dimensional array”, but it’s ok to simply think of it as three spreadsheets). Each of the image’s pixels is represented by a number from the red spreadsheet, the green spreadsheet, and the blue spreadsheet. Your device simply visualizes that numerical data as a grid of colors.
Thousands of Library of Congress maps are imaged each month, allowing you not only to view them online but also to analyze and transform the images using relatively straightforward mathematical computation. This computation is the same approach used any time you apply a filter to a photo on your phone, increase its contrast, crop it, etc. Your device treats the images as arrays of numbers and runs quick calculations over them. With a bit of programming knowledge, it is surprisingly easy to replicate a wide range of basic image editing techniques.
Below is a Library of Congress map sheet that was imaged and made available online this year. It is the first in a set of U.S. Army Map Service maps covering Pennsylvania at a scale of 1:25,000. Compiled in 1953, this sheet shows the west side of Pittsburgh and surrounding areas. The set has over 300 sheets, which makes it a little larger than average amongst the 12,000+ sets in the Geography and Map Division’s Set Map collection.
When we use image editing software to alter images, it runs calculations across these numbers to make the edits. If we want a faster approach that gives us more control, we can use a programming language, such as Python, instead of image editing software.
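To make this concrete, here's a minimal sketch in Python using the OpenCV library (the filename is just a placeholder) that loads an image and shows that it really is an array of numbers:

```python
import cv2

# Load the image as a NumPy array. (One wrinkle: OpenCV orders the
# channels blue, green, red rather than red, green, blue.)
img = cv2.imread("map_sheet.jpg")

print(img.shape)   # (height, width, 3): one "spreadsheet" per color channel
print(img.dtype)   # uint8: each value is a whole number from 0 to 255
print(img[0, 0])   # the three numbers behind the top-left pixel
```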
Let’s say we’d like to crop each of this set’s sheets to the neatline, in order to remove the collar and leave only the actual map. Although there are effective machine learning approaches to this kind of task, for this demo we’ll stick to a more straightforward approach that relies on more intuitive image processing steps, using Python’s OpenCV library. (A few of these steps do employ machine learning techniques behind the scenes, but our overall process is mostly manually configured.) Such an approach often works well for simple maps like those in our Pennsylvania set, but would be less effective for visually complex, diverse, or larger sets.

First, we’ll convert our image from red, green, and blue (known as RGB) to grayscale. Instead of each pixel being defined by three numbers, now each will be defined by just one number (ranging from 0 for black to 255 for white).
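In OpenCV, that conversion is a single call (continuing from the sketch above):

```python
# Collapse the three color channels into one grayscale channel.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

print(gray.shape)  # (height, width): one number per pixel, 0 (black) to 255 (white)
```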

Next, we will convert the image from grayscale to black and white. Each pixel will now become either 0 (black) or 255 (white), with nothing in between. For this example, we will simply set the threshold at 200: anything from 0 to 200 we will round down to 0 (in other words, most grays will become black), and anything from 201 to 255 we will round up to 255 (light grays will become white). Only one of our nine example pixels becomes white.
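Here's what that thresholding step looks like in OpenCV, using the 200 cutoff we chose above:

```python
# Pixels at or below 200 become 0 (black); pixels above 200 become 255 (white).
_, bw = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
```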

Next, we will use a process called “closing” to remove noise in the white areas of the image. (This is what’s called a “morphological” process to “dilate” and then “erode” an image, and it comes pre-packaged in Python’s OpenCV.) This step will help to close potential holes in the white border around the map, so that it is easier to detect in the next step. In the example above, we’ve applied a very limited amount of closing.
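A sketch of the closing step; the 5×5 kernel size here is illustrative (a larger kernel would apply a more aggressive amount of closing):

```python
import numpy as np

# "Closing" dilates and then erodes; a small kernel keeps the effect limited.
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
```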

Next, we will use two more processes built into Python's OpenCV library, the first for finding "contours" (essentially, shapes) and the second for smoothing out any dents along the edges of those shapes (known as "contour approximation"). We really only want to find one contour (the one along the neatline around the inner map), but our result is over 19,000 contours . . . too many! In the image above, each contour is shown outlined in green.
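Those two processes look roughly like this; the 2% approximation tolerance is an illustrative value, not a universal one:

```python
# Find every contour in the cleaned-up black-and-white image.
contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours))  # far too many!

# Smooth each contour, collapsing dents smaller than ~2% of its perimeter.
approx = [cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True) for c in contours]
```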

We can apply a few filters to remove the contours we don't want, such as filters that remove contours whose area is too small or too large in proportion to the overall image. We can also filter out any shapes that aren't four-sided (because we know that the neatline is roughly four-sided). With some trial and error, it's possible to create a set of filters that reliably leaves us with just one contour along the neatline of each sheet in our example set.
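A sketch of such a filter; the area bounds here (between half and 95% of the sheet) are made-up starting points that would need the trial and error mentioned above:

```python
img_area = img.shape[0] * img.shape[1]

# Keep only contours that are roughly four-sided and cover a plausible
# share of the sheet -- the neatline should be large, but not the whole image.
candidates = [
    a for a in approx
    if len(a) == 4 and 0.5 * img_area < cv2.contourArea(a) < 0.95 * img_area
]
```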

Lastly, we can return to our original color image and cut out any pixels whose position falls outside the contour's corners. We can then save the result as a new, cropped image file.
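The final crop can be as simple as slicing the original array at the surviving contour's bounding box (the output filename is a placeholder):

```python
# Take the bounding rectangle of the remaining contour and slice it
# out of the original color image.
x, y, w, h = cv2.boundingRect(candidates[0])
cropped = img[y:y + h, x:x + w]
cv2.imwrite("map_sheet_cropped.jpg", cropped)
```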
Running this process over all the images in our example Pennsylvania set takes just a few minutes and gives us relatively reliable results. For more varied or visually complex sets of map sheets, it may be more effective to use a more nuanced machine learning approach or simply manually crop the images in image editing software. Regardless, if you peek under the hood of any of these three approaches, what you’ll find is a set of numbers and a lot of math.
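Putting the steps together, a sketch of the whole pipeline run over a folder of sheets might look like the following; the folder names, kernel size, and filter thresholds are all illustrative and would need tuning for a real set:

```python
from pathlib import Path

import cv2
import numpy as np

def crop_to_neatline(path, out_dir):
    img = cv2.imread(str(path))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                       # grayscale
    _, bw = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)           # black & white
    closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))               # closing
    contours, _ = cv2.findContours(closed, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)            # contours
    img_area = img.shape[0] * img.shape[1]
    for c in contours:
        a = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)   # smoothing
        if len(a) == 4 and 0.5 * img_area < cv2.contourArea(a) < 0.95 * img_area:
            x, y, w, h = cv2.boundingRect(a)                           # crop & save
            cv2.imwrite(str(out_dir / path.name), img[y:y + h, x:x + w])
            return

for sheet in Path("pennsylvania_set").glob("*.jpg"):
    crop_to_neatline(sheet, Path("cropped"))
```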