
The is of the Digital Object and the is of the Artifact


Fixity is a key concept for digital preservation, a cornerstone even. As we’ve explained before, digital objects have a somewhat curious nature. Encoded in bits, you need to check to make sure that a given digital object is actually the same thing you started with. Thankfully, we have the ability to compute checksums, or cryptographic hashes: algorithms that generate strings of characters which serve as identifiers for digital objects. Under normal, non-tampered-with conditions, these hash values identify files more uniquely than DNA identifies individuals. When we generate these hashes to audit digital content, we want to know whether an object is the same as it was before. Is it still bit-for-bit the exact same thing? It is important to note that the “is” in that last sentence is only one tradition of saying that something is still the same.
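To make the mechanics concrete, here is a minimal sketch of a fixity check in Python, using only the standard library’s hashlib. The file name and the recorded digest are hypothetical placeholders, not a reference to any particular tool.

```python
# A minimal fixity-check sketch: compute a SHA-256 hash for a file and
# compare it to the value recorded when the object was first ingested.
# "manuscript.tiff" and the recorded digest are hypothetical placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

recorded_at_ingest = "9f2c..."          # digest stored in a manifest earlier
current = sha256_of("manuscript.tiff")  # digest recomputed during an audit

# If the digests match, the object is still bit-for-bit the "same" thing.
print("fixity intact" if current == recorded_at_ingest else "fixity FAILED")
```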

Recording of single magnetizations of bits on a 200 MB hard disk platter; Matesy GmbH

An analog corollary to this kind of fixity checking is helpful in unpacking the different ways we can say “this is the same thing.” To ensure the authenticity of copies of texts, scribes would count their way through a new copy to check that it had the same middle paragraph, the same middle word and the same middle letter as the original. It’s an analog fixity check: a technique for checking whether the encoded content of the copy is identical to the encoded content of the original (functionally, it is a rather poor fixity check, but a fixity check nonetheless). In this case, much the same as in computing, the two scrolls would have the same text on them, but they are actually two physically different objects, potentially created by different scribes and expressing unique characteristics, for example, each scribe’s handwriting. If you had two copies of the same ancient text and you told a manuscripts specialist they were identical, they might scoff at you. Clearly they are two different artifacts; they are two distinct material objects that have their own physical properties. If we looked into the chemical properties of the papyri each was encoded on, we might be able to date them and find out which one is older, or we might find that the materials of one came from one place and the materials of the other came from another. While the encoded text of the two objects could be identical, there is an infinite amount of contextual information that could exist in the materiality of the objects they are encoded on.
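For fun, the scribes’ procedure is easy to caricature in code. This toy sketch is my own illustration, not a real preservation technique: it compares only the middle word and the middle letter of two texts, which shows just how weak a check this is next to a cryptographic hash.

```python
# A toy version of the scribes' "analog fixity check": compare only the
# middle word and the middle letter of two texts. Very different texts
# can easily agree on both counts, which is why it is a poor check.
def scribal_check(original: str, copy: str) -> bool:
    """Return True if both texts share the same middle word and middle letter."""
    def middles(text: str):
        words = text.split()
        letters = [c for c in text if c.isalpha()]
        return words[len(words) // 2], letters[len(letters) // 2]
    return middles(original) == middles(copy)

print(scribal_check("in the beginning was the word",
                    "in the beginning was the word"))  # True: a faithful copy
print(scribal_check("in the beginning was the word",
                    "in the middle was the word"))     # False: middle letters differ
```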

The is of the Autograph and the is of the Allograph

Is means different things in different statements. This is Mary Shelley’s Frankenstein and this is the Mona Lisa. (Ok, so those are links to Frankenstein and the Mona Lisa.) However, the link to the Mona Lisa isn’t really a link to the Mona Lisa. The Mona Lisa is on the wall in the Louvre. That link just points to an image of the Mona Lisa. If you load up the link to Frankenstein and the image of the Mona Lisa, you can think through two of the different ways that something can be the same as something else. Most would agree that the former is Frankenstein, but that the latter is a copy of the Mona Lisa. Something is Frankenstein when it has the same text in it. In the art world, these kinds of works are referred to as allographic. You are actually looking at the piece of art when you see something that has the same spelling, the same encoded information. It is the same thing when it has the same encoded information in it. In the case of the Mona Lisa, we demand a different kind of is: the autographic is. There is only one Mona Lisa, and it’s on the wall in the Louvre.

These conceptions of something being the same as something else have corollaries in how Matt Kirschenbaum defines assertions that digital things are the same. In his vocabulary there is a formal sense, in which one object has the same bits as another (the same ones and zeros), and a forensic sense, in which we think about how those bits are physically encoded and inscribed on an individual artifact. All the bits we care about are inscribed on storage media. Interestingly, in the forensic sense, all digital objects are also analog objects. While we read bits off disks, each of those individual bits is on some level its own little unique snowflake. Each bit could conceptually be analyzed at the electron-microscope level as having a signature, a length and a width on the medium on which it is encoded. That said, there really aren’t many cases in which we care about the physical, material sense of the forensic bit. Sure, it is possible to use forensic techniques to recover earlier writes on a hard drive, but even in that case, what we care about is reading back layers of encoded information, not examining the qualities of the actual bits themselves.
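One way to see the formal/forensic split on a modern filesystem: two byte-identical copies hash the same (formally identical) while remaining distinct artifacts with their own inode numbers and timestamps. A small sketch, assuming a hypothetical file named original.bin already exists.

```python
# Two byte-identical files are formally the same (same bits, same hash)
# yet remain forensically distinct objects at the filesystem level.
# "original.bin" is a hypothetical file assumed to exist already.
import hashlib
import os
import shutil

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

shutil.copyfile("original.bin", "copy.bin")

# Formal sameness: the encoded bits are identical.
assert sha256_of("original.bin") == sha256_of("copy.bin")

# Artifactual difference: two filesystem objects with their own inode
# numbers and timestamps (on most filesystems).
s1, s2 = os.stat("original.bin"), os.stat("copy.bin")
print(s1.st_ino, s2.st_ino)      # usually different inodes
print(s1.st_mtime, s2.st_mtime)  # usually different modification times
```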

The Mutual Exclusivity of These Senses of Sameness

I find it interesting that these two different senses of sameness, the allographic and the autographic, are fundamentally mutually exclusive properties. Try this little thought experiment. Imagine someone came up with a way to compute a fixity check on people. It might look like a CT scanner or something. It would scan you and then generate a string of characters that more or less uniquely identified you. If you came back the next day, climbed into the machine again, and got your next reading, your numbers wouldn’t match. Our bodies are always changing: today I had a lot of coffee, so I have more caffeine in me; tonight I might go to spin class, and as a result tomorrow I would have burned some calories. This isn’t just the case for living things. Entropy (and its step-cousin in conservation science, inherent vice) tells us that all objects are in flux, slowly deteriorating toward the ultimate heat death of the universe.

Imagine we stuck some fantastic rare book in this device that checks the fixity of physical objects; how about the Library of Congress copy of Sidereus Nuncius (not these digital images but the actual physical book). Even here, if we came back the next day we would get a different string of characters. While conservators do their best, from day to day there are changes in things like the water content of the pages, or other minor fluctuations in the chemical composition of any artifact. I suppose if the device weren’t particularly sensitive it wouldn’t detect the difference, but even if it did say they were the same thing, we would know that was a lie; the device just wasn’t sensitive enough to pick up the subtle changes in the artifact. This is a key distinction between analog and digital objects. Digital objects are always encoded things; in this sense they (like the text of Frankenstein or the text transcribed by scribes) are allographic. Their essence is actually more allographic than those analog corollaries, as the encoding is much richer and leaves much less interesting information residing in the artifact itself. The medium on which a text is inscribed, and the autographic components of an individual scribe’s or printer’s work, actually carry a lot of interesting information. In contrast, a forensic disk image of a hard drive contains considerable information about the size and nature of the medium (the drive), and the additional information beyond the current bits is mostly older bits (computer forensics folks can recover previous writes to a disk by looking at the parts where the write bands overlap).

What is wild about digital objects is that extensive forensic, or artifactual, traces of the media they were stored on are encoded inside the formal digital object, like a disk image. That is, the formal object of a disk image records some of the forensic, the artifactual, the thingyness of the original disk media that object was stored on. The forensic disk image is allographic but retains autographic traces of the artifact.
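As a rough illustration of that last point, consider hashing at two levels: the raw disk image as a whole versus a manifest of the logical files exported from it. The image-level hash fixes slack space, deleted data, and filesystem structures that the file-level manifest never sees. This is a sketch under stated assumptions: disk.img and exported_files/ are hypothetical names, and the files are assumed to have already been exported from the image.

```python
# Hash the whole disk image as one artifact, then hash each exported
# logical file. Deleting and re-creating a file on the original disk
# could leave the file-level manifest unchanged while changing the
# image-level hash, because the image also captures slack space,
# deleted data, and filesystem structures.
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

image_hash = sha256_of("disk.img")  # the artifact-as-captured

manifest = {                        # only the allographic file contents
    os.path.join(root, name): sha256_of(os.path.join(root, name))
    for root, _, files in os.walk("exported_files")
    for name in files
}

print(image_hash, len(manifest))
```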

 

Comments (11)

  1. Thanks for this really interesting post Trevor. I think you’re hitting upon a really important distinction here, which is that objects in and of themselves are not allographic or autographic (nor digital, nor analog). The distinction is a matter of practice or perception. The read head of a hard drive is sensitive to magnetic poles of a platter in a way that is digital, but a phonographic stylus translates a spinning LP in a way that is analog. The fact that the Mona Lisa is autographical is merely a social convention and our decision to treat any object as allographical, autographical, digital, or analog suggests a great deal about the nature of socially meaningful material difference.

  2. Fascinating post, Trevor, and I’m well out of my depth here – which is why I’m grateful I learned something from reading your work here. My only thought, related to this difficulty of defining the boundaries of a digital object (and the blurring between allographic and autographic) using the terms and methodology of digital forensics, is that I wonder if the entire foundation of D.F. is so grounded in its analog roots that, like literary analysis, which similarly can never get away from its bookbound roots, it can’t ever properly account for digital objects on their own terms? I’m just talking abstractly here because I lack the hands-on experience that you clearly have.

  3. @Nathen: Good points. I completely agree that the allographic and autographic senses of “is” are present in all kinds of objects (possibly all material objects?). The default “is” in a given situation (a painting vs. a book) is a social convention, but I think the underlying distinction between the two kinds of “is” is itself fundamental. Glad you brought up the stylus of a record player! That’s an interesting case where the analog nature of the reader and the encoded media ends up making the playing of a record an autographic experience. Every record is a little different, each play changes the record, and that is evident in the playback of the record. In this sense, the record is much more like the manuscript copies than like copies of a recording on a CD.

    @Lori: It’s an interesting question. I think the forensic stuff is actually better at getting us to understand digital objects on their own terms. For example, it helps us escape the kind of screen essentialism that Matt Kirschenbaum & Nick Montfort have described. That said, I do think it is fair to note that this focus can get us thinking about the material things (the bits, the source code, the file formats) at the potential expense of what these things mean in a particular context.

    So ultimately, I think we get somewhere interesting by juxtaposing the different ways that a digital thing “is,” interrogating it from a range of perspectives. I really like how Ian Bogost does this in Alien Phenomenology (p. 17) with “E.T. the Extra-Terrestrial” for the Atari 2600. Based on his flat ontology, he suggests 11 different definitions of what “E.T. the Extra-Terrestrial” is, ranging from detailed descriptions of the material object, to descriptions of it as a commodity, to descriptions of it as a symbol of the video game crash of 1983. All 11 describe different valences of that particular game, and understanding and making claims about each requires different kinds of source material.

  4. Nice, Trevor, and I love that the Signal has space for these sorts of conversations. Disk images are enormously complex artifacts (far more so than the casual “bit stream” description would seem to imply) and it seems to me that even those of us who work with them on a regular basis lack both comprehensive best practices resources and a robust theoretical framework (I made a stab toward the latter in my SAA paper this summer: http://scriptalab.org/wp-content/uploads/2012/09/Kirschenbaum-SAA.docx). Your final observation is really the kicker for me: “the formal object of a disk image records some of the forensic, the artifactual, the thingyness of the original disk media that object was stored on.” That’s it exactly, and I wonder what precedents we could find in the history of technologies of representation.

    As for the biases of “digital forensics” toward older paradigms, I think there’s obviously a strong historical component to the field’s construction, particularly as regards connections to diplomatics; read Luciana Duranti on this (http://journals.sfu.ca/archivar/index.php/archivaria/article/viewArticle/13229). I remember the first time I opened a computer forensics textbook, and it all just seemed so familiar (“This is bibliography!” I said to myself). But I agree with you that digital forensics offers the most sophisticated community of practice I know for those of us who wish to understand the materiality of digital objects on their own terms. It’s very exciting to watch how quickly the convergence with digital archival practice is taking shape.

  5. This is a nice post. It resonates with some earlier favorite posts on The Signal, like Carl Fleischhauer’s Information or Artifact and Jefferson Bailey’s The Artifactual Elements of Born-Digital Records (to which you already linked). I agree with the sentiment, expressed across these pieces, that most every manifestation of an expression has both autographic (artifactual) and allographic (informational, or encoded symbolic) qualities.

    The challenge comes when we are forced to make choices about where on the spectrum the “is” lies, such as when describing objects, choosing how to represent them in encoded or derivative forms, or collecting supporting metadata — these choices are really guided by subjectivity and cultural conventions. So what to do? Rather than trying endlessly to make our choices more perfect, I think it’s important to be aware of their particular scope and context — which this post supports — and to permit other representations to coexist. Reminds me of this, from Clay Shirky’s Ontology is Overrated (which also talks a lot about what the “is” is): “…if we are, from a bunch of different points of view, applying some kind of sense to the world, then you don’t privilege one top level of sense-making over the other. What you do instead is you try to find ways that the individual sense-making can roll up to something which is of value in aggregate, but you do it without an ontological goal.”

  6. Love this post, Trevor, in part because your thought experiments (e.g. calculating a checksum or fixity check for a person) do a vastly better job than Goodman’s examples in communicating the distinction between autographic and allographic (I’m a huge fan of Languages of Art, but Goodman’s background in analytic philosophy often makes him impenetrable). FYI: There’s a classic riff on your Frankenstein/Mona Lisa example in the field of textual studies: “If the Mona Lisa is in the Louvre, then where is Hamlet?”

    I wrestled with the autographic/allographic conundrum in my dissertation and in my 2009 DHQ essay on conjectural criticism. The essay goes against the grain of materiality to make a case for the power and generativity of the allographic. (My forthcoming chapter in The Cambridge Companion to Textual Studies reprises these themes by showing how some of our most basic scholarly primitives [to borrow Unsworth’s term], such as comparison, become impossible in the absence of an allographic point of view.) The hazards of an overly zealous autographic mindset are brilliantly conveyed by Borges’ Funes the Memorious, who is “almost incapable of general, platonic ideas. It was not only difficult for him to understand that the generic term dog embraced so many unlike specimens of differing sizes and different forms; he was disturbed by the fact that a dog at three-fourteen (seen in profile) should have the same name as the dog at three-fifteen (seen from the front).” For Funes, there is no over-arching “dog” category to which we can assign multiple members; instead, each dog is sui generis, and even the same dog is a radically and ontologically distinct thing from one minute to the next. This is the rabbit-hole of the autographic into which we sometimes fall.

  7. @Aaron: I like this point a lot: “Rather than trying endlessly to make our choices more perfect, I think it’s important to be aware of their particular scope and context — which this post supports — and to permit other representations to coexist.” Particularly that last bit. In terms of description, I’ve recently been trying to work through some ideas about the extent to which objects act as indices of other objects. For example, all of the links that point to some other page on the web function as annotations and classifications. (For more in this line of thinking, Melanie Feinberg has a great essay called “Organization as expression: classification as digital media.” I might need to write a post about that at some point. 🙂)

    @Kari: Thanks for the kind words about my thought experiments! Also thanks for the further suggestions of more to read; I look forward to your forthcoming chapter. The more I think about this, the more I become convinced that the value of the autographic is that it might have hidden inside it future allographic knowledge. That is, the autographic object as a physical thing never fully reveals itself to us. It can never be fully known; it can only be used or understood for particular uses. (I’m thinking here along the lines of some of Graham Harman’s take on Heidegger’s tool analysis.)

    As an example: who would have known that DNA analysis would come about? Thankfully, we had preserved a lot of evidence that contained DNA, so that we could go back and, with a far better set of tools, find out whether someone did or didn’t commit a particular crime. The various pieces of physical evidence always had that DNA in them, but it wasn’t until we figured out how to read DNA as encoded information that it was of any use to us. Is there a general principle in this? When we use an object as evidence, are we always making it legible through some process of reading encoded information from it?

  8. That makes a lot of sense, Trevor: the power of the allographic lies with its ability to help us make distinctions between signal and noise, to determine which properties of an object are consequential and which are inconsequential. But these distinctions, as you note, aren’t absolutes; they’re situational, and thus we find ourselves oscillating between the allographic and autographic. If a cactus plant contains needles with a healing balm, then the size and color of the plant are irrelevant to me if I suffer from the disease for which the needles offer a cure. If the cactus plant is a brilliant orange, then the needles and shape are irrelevant to me if I’m an interior decorator with a client who has a penchant for all things orange. The challenge, though, is that in the realm of preservation, we appeal to the principle of “significant properties” to deal with the problem of lossy information. Sig props are designed to help us adopt preservation strategies that will ensure the longevity of some properties and not others. But if we concede that all properties are potentially significant within some contexts, at some time, for some audiences, then we are forced into a preservation stance that brooks no loss. What to do?

  9. Thanks to Trevor and the folks who commented: this is a terrific idea-thread to ponder. Like Trevor, I work in the Library of Congress, and I am reminded of the much and long debated development of FRBR: Functional Requirements for Bibliographic Records, an intellectual structure from our librarian-colleagues. FRBR always struck me as being as scholastic as Trevor’s blog and the various comments that follow (a _good_ thing if you have a cup of coffee or a beer at hand!). FRBR tries to tease apart entities called the work, the expression, the manifestation, and the item. The Wikipedia article illustrates the distinctions between these entities with the example of a Beethoven symphony, which can have a little of the character of Kari Kraus’s “Where is Hamlet?” All of this is fun to ponder, as I say, and I’ll hope that the outcome of our collective musings sheds a bit of light on the “significant properties” question that Kari also mentions. If we can’t preserve every property when we format-migrate (or even when we system-emulate), which properties ought we save? But I guess the answer to this question partly depends on whether you are referring to, um, a work, an expression, a manifestation, or an item.

  10. Thanks for the FRBR note, Carl. It reminds me of some of the Preserving Virtual Worlds folks’ attempts to FRBRize some video games in Digital Humanities Quarterly.

    As to how FRBR fits in, it would seem that we can map “item” to artifact. Items are individual physical objects with their own idiosyncratic autographic characteristics. Similarly, manifestation seems to map well onto the “same spelling” notion of the allographic.

    The expression and the work always tend to trip me up a bit. That said, expressions and works feel like even more removed notions of sameness. That is, two performances by two different musicians of Bach’s Goldberg Variations are expressions of the same work, but the performances didn’t necessarily sound the same and may have even involved people playing some different notes. What makes them the same is that they were playing the same song. (Anyone can feel free to correct me if I’m wrong on how this plays out, or if they think they have a better way to map FRBR onto this.)

    Getting back to Kari’s question about significant properties: yes, every property is potentially interesting to someone. With that said, I think our situation becomes one of weighing two competing goals with the resources one has at hand. In my read, the two competing goals are:

    1. Keep things as raw and authentic as possible: I see this as being tied up in some of the archival principles around original order and respect des fonds.

    2. Keep things in formats that are as easy to open and access in the future as possible, and minimize the required storage space to the extent that you can: this thread is tied up in concerns about dependencies and the sustainability of file formats.

    As far as what to do in this case, depending on the scale of what you are working with, I think Peter Van Garderen’s description of the Archivematica approach makes a lot of sense. Below is the quote where I think he does a good job talking through this.

    “Our design principles are driven by the problem of digital preservation. We have to figure out how to keep existing digital information objects accessible, usable and authentic so that they can be used at some undetermined point in the future on some yet to be created technology platform. So far our profession has come up with three strategies or variations on these: emulation, migration, normalization. Digital preservation is risk management. Since time travel is the only way to judge which of the strategies will be most successful, we have to hedge our bets. Therefore, Archivematica is being designed to implement all three strategies.”

    With that noted, the “hedge your bets” approach might get too costly for particularly large sets of material, at which point I think it becomes an issue of clearly articulating what matters about the objects and picking the approach that will ensure those properties are retained.

  11. What is the is of the allographic/autographic relationship? Is there a third, unnamed, quality? What can be said about other dichotomies such as analog/digital or print/screen or source/surrogate or entropic/inert? Is there a third quality emergent from such intersections?

    Perhaps such a third quality must be inherently nameless to avoid observational skew derived from binary cognition, but we should concede that the question is real enough. We can also hope that in the context of patrimony we can take actions to better assure reliable transmission.

    Photography once posed a novelty as latent and transient configurations of light were recorded. The recording did mimic a bionic transaction, but the extra-soma process left some evidence that differed from memories. Historical photographs continue to cascade such displacements. It doesn’t much matter if the photo technology is digital or analog; the displacements overlay each other in a near re-enactment of the dynamic of latent and transient configurations of light from which they derived and now echo.

    Just such displacements, in time and meaning, offer a clue to a more noetic nature of patrimony. Also suggestive are the transactional or encoding handoffs needed to render the return to bionic interpretation. Preservation action can focus on the experience of displacement, not simply with intent to fix meaning but with intent to record transactions of displacement using self-authenticating methods.
