The perfect digital preservation system does not exist. It may someday, but I don’t expect to live to see it.
Instead, people and organizations are working on iterations of systems, and system components, that are gradually improving how we steward digital content over time. This concept of perpetual beta has been around for a while; Tim O’Reilly explained it lucidly in What Is Web 2.0 back in 2005.
I gave a presentation recently in which I expressed hope that prospective infrastructure developments for stewarding big data would benefit the work of libraries, archives, and museums in preserving digital content.
My intent was to convey that change should be iterative along a path toward the radical. In the spirit of avoiding bulleted presentation slides wherever possible, I searched for graphics that might help tell the story.
The one I ended up using was a picture from the Norfolk Record Office (UK) showing the delivery of a computer system some years ago. In its day, the Elliott computer was an advanced machine that cost the modern equivalent of nearly a million dollars. It read paper tape at 500 characters per second and had a CPU housed in a “cabinet about 66 inches long, 16 inches deep and 56 inches high.”
The picture got a good response from the audience, and I wondered if perhaps I should have used others from a later era, such as this one from Bell Labs in the late 1960s. This IBM mainframe was many iterations ahead of the Elliott, but any computer big enough to hide in surely needed to be delivered by truck as well.
These pictures are useful in illustrating a point that Clay Shirky and others made some time ago: the system should never be fully optimized. In other words, iteration and change should be embraced as a design principle. Any system can surely be improved, often radically, in the future. And, as time passes and successful migrations occur (our intent), the way we used to do things will inevitably seem quaint in retrospect.