A Half Century of Library Computing

(The following is a guest post from Audrey Fischer, editor of the Library of Congress Magazine.)

Fifty years ago, the Library installed its first computer and began charting a course to bibliographic control and global shared access.

From left to right: George R. Perreault, head of the Data Processing Office, standing at the computer storage unit; Ernest Acosta Jr., digital computer programmer, working at the card reader unit; and Joseph B. Murphy, digital computer programmer, inserting a new tape in one of the tape units. Jan. 20, 1964.

On Jan. 15, 1964, the first components of a small-scale computer system were delivered to the Library of Congress and installed in the Library’s newly established Data Processing Office.

Provided for in the Legislative Branch Appropriation Act of 1964 (P.L. 88-248), the IBM 1401 was intended for use in payroll, budget control, card distribution billing, and accounting for book and periodical purchases, as well as for producing various statistical and management reports.

A week later, the Library announced the results of a multiyear study on the feasibility of automating its bibliographic functions. Sponsored by a $100,000 grant from the Council on Library Resources Inc., the 88-page report titled “Automation and the Library of Congress” concluded that automation in bibliographic processing, catalog searching and document retrieval was technically and economically feasible. But developmental work would be required for equipment—not yet in existence—and the conversion of bibliographic information to machine-readable format. The report also recommended that the Library of Congress, because of its central role in the nation’s library system, take the lead in the automation venture. Many of the report’s recommendations were implemented in the coming decades, while others, such as a plan for an integrated library system, would wait until the turn of the century.

Throughout the remainder of the 1960s, the Library attempted to contract out the development of a highly specialized bibliographic information system. It ultimately established its own in-house automated systems office (known today as the Information Technology Services Office) for system development. Over the past five decades, the Library has developed more than 250 enterprise systems and applications for use by Congress and by the library, legal and copyright communities, among others.

By the early 1970s, the machine-readable cataloging format known as MARC had become the national and international standard for creating bibliographic records that computers can process and libraries can share. The standard was developed at the Library of Congress by data processing pioneer Henriette Avram, working with various library associations and scientific standards groups.
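
For readers curious about what "machine-readable" means in practice, the short Python sketch below models a drastically simplified MARC-style record: each field carries a three-digit tag, two indicator characters and lettered subfields. The tags 100 (main author entry) and 245 (title statement) are genuine MARC 21 conventions, but the dictionary layout and the get_subfield helper are illustrative assumptions only, not the pymarc library or the binary MARC 21 exchange format.

```python
# A minimal, library-free sketch of the idea behind a MARC record:
# each field has a three-digit tag, two indicators, and subfields
# keyed by single-character codes. The dict layout is purely
# illustrative, not the binary MARC 21 exchange format.
record = {
    "leader": "00000nam a2200000 a 4500",  # fixed-length header describing the record
    "fields": [
        {"tag": "100", "ind1": "1", "ind2": " ",          # 100: main author entry
         "subfields": {"a": "Avram, Henriette D."}},
        {"tag": "245", "ind1": "1", "ind2": "0",          # 245: title statement
         "subfields": {"a": "MARC, its history and implications /",
                       "c": "Henriette D. Avram."}},
    ],
}

def get_subfield(rec, tag, code):
    """Return the first value of subfield `code` in field `tag`, or None."""
    for field in rec["fields"]:
        if field["tag"] == tag and code in field["subfields"]:
            return field["subfields"][code]
    return None

print(get_subfield(record, "245", "a"))  # -> MARC, its history and implications /
```

Because every library encodes its records against the same tags and subfield codes, a record created by one institution can be loaded, indexed and displayed by another without manual rekeying.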

MARC and the Anglo-American Cataloguing Rules (AACR), in their various iterations, served the library community for nearly 50 years as the tools for describing and organizing library collections. Released in 2010, RDA: Resource Description & Access, a new set of instructions suitable for use in a linked data environment, has succeeded AACR2. The following year, the Library of Congress launched the Bibliographic Framework Initiative to address the future bibliographic infrastructure needed to share data, both on the web and in the broader networked world. A major focus of the initiative is to continue the tradition of robust data exchange that has supported resource sharing and cataloging cost savings in recent decades, while addressing the needs of 21st-century libraries and information storehouses across the globe.
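
To give a flavor of what a "linked data environment" looks like in practice, here is a minimal Python sketch using the rdflib package to describe one work and one printed instance with BIBFRAME-style terms. The bf: namespace is the BIBFRAME 2.0 vocabulary the Library publishes at id.loc.gov; the example.org URIs, the sample title and the particular properties chosen are illustrative assumptions, not the output of any official conversion tool.

```python
# A small sketch of bibliographic description as linked data, using rdflib.
# The bf: namespace is the BIBFRAME 2.0 vocabulary at id.loc.gov; the
# example.org URIs and the title string below are hypothetical.
from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
g.bind("bf", BF)

# Hypothetical identifiers for one work and one printed instance of it.
work = URIRef("http://example.org/works/automation-and-the-library-of-congress")
instance = URIRef("http://example.org/instances/automation-1963-print")

# Describe the work and attach a structured title to it.
title = BNode()
g.add((work, RDF.type, BF.Work))
g.add((work, BF.title, title))
g.add((title, RDF.type, BF.Title))
g.add((title, BF.mainTitle, Literal("Automation and the Library of Congress")))

# Link the physical instance back to the work it embodies.
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))

print(g.serialize(format="turtle"))
```

Because the result is ordinary RDF, the same triples can be published on the web and combined with data described in other vocabularies, which is the kind of open exchange the initiative is meant to preserve.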

The Library’s foray into the digital era began in the mid-1980s with several pilot projects to digitize selected items from the Library’s print and non-print collections. Building on the success of the CD-ROM-based American Memory pilot project, Librarian of Congress James H. Billington vowed to make 5 million items accessible electronically to the nation by the year 2000, the Library’s bicentennial year. This goal was realized, bolstered by the advent of the World Wide Web in the intervening years. The Library’s website debuted in 1993. Today, the Library provides free global access to approximately 40 million online primary-source files.

The Library’s current information technology (IT) infrastructure includes five data centers in four building locations. These facilities support more than 650 physical servers, 400 virtual servers, 250 enterprise systems and applications, 7.1 petabytes of disk storage and 15.0 petabytes of backup and archive data on tape. The Library’s IT infrastructure also includes a wide-area network, a metropolitan-area network and local-area networks that comprise 350 network devices. The Library’s Information Technology Services Office also supports more than 8,600 voice connections, 14,700 network connections and 5,300 workstations.

MORE INFORMATION

Acquisitions and Bibliographic Access

Comments (4)

  1. What about MUMS and SCORPIO? These were large, successful endeavors, where many of the others, despite PR glitz, were not.

    These were major endeavors in bibliographic and congressional data, at a time when all other major library efforts were failing, from 1972 to the advent of the replacement systems in the 90’s.

  2. Future generations will be able to recreate our lives day after day.
    And our generation will last forever.

  3. Interesting article!

  4. Thanks for the reminder of the effort and work that went into this, including my own learning along the way.
