A digital collection called The General News on the Internet, a free archive of online-only news sites harvested from the web, is now available. The Library of Congress began preserving these sites in June 2014.
How are these news sites captured? The Library uses a hybrid approach: weekly captures of the websites, augmented with twice-daily captures of known RSS (Really Simple Syndication) feeds. This produces a more complete news archive. Given the dynamic nature of today's 24-hour news cycle, these archives are meant to capture as much of the news distribution as possible within current limits of technology and resources.
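To illustrate the general idea of RSS-augmented capture (this is a rough sketch, not the Library's actual crawling infrastructure), the example below polls an RSS feed to discover recently published article URLs between the broader weekly site crawls. The feed URL and the capture step are hypothetical placeholders.

```python
# Minimal sketch of RSS-augmented discovery, assuming a standard RSS 2.0 feed.
# The feed URL and capture() placeholder are illustrative only.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/news/rss.xml"  # hypothetical feed

def discover_article_urls(feed_url):
    """Fetch an RSS feed and return the article URLs listed in its <item> entries."""
    with urllib.request.urlopen(feed_url, timeout=30) as resp:
        tree = ET.parse(resp)
    # RSS 2.0 places each article in channel/item, with its URL in <link>.
    return [item.findtext("link") for item in tree.iter("item") if item.findtext("link")]

def capture(url):
    """Placeholder for handing a URL off to a web-archiving crawler."""
    print(f"queueing for capture: {url}")

if __name__ == "__main__":
    # A twice-daily scheduler (e.g. cron) would run this discovery pass,
    # complementing the weekly crawl of the full website.
    for url in discover_article_urls(FEED_URL):
        capture(url)
```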
You will see that we are not including major news sites and are focusing only on born-digital sites. Copyright restrictions play a major role in that decision, and we also wanted to capture sites that could be at risk of disappearing. For instance, the Christian Science Monitor ceased daily print publication in 2009, and we wanted to add its website to the archives to preserve its content for posterity.
Why do the archives currently stop at 2018? Everything in this archive is under a one-year embargo. As items come out of the embargo period, more recent captures will appear, and additional content will continue to become available as records are added.
More information on the web archiving program for researchers and site owners is available here.
Have you used this resource? Let us know in the comments! Questions about using the Web Archive? Contact the Web Archiving Team or Ask a Librarian, and follow The Signal blog for announcements as additional content is made available.