Editorial, my new favourite app

A couple of weeks ago a friend pointed me to this comprehensive review of the new iPad app Editorial. Since buying my iPad this past April, I've found myself using it more frequently than my laptop. Between the high-resolution screen and the ease of portability, the iPad suits my working style well—I can read articles and books on it without the eye-strain I get when reading low-resolution screens at length, and I can easily relocate to the coffee shop or the library as required. Unfortunately, I'm writing my dissertation in HTML, and have struggled to find the best way to do so on the iPad. I've had moderate success composing drafts in Evernote and then marking them up in Diet Coda, but this workflow is disruptive and tedious, and opens up opportunities for errors to slip in.

I've often wanted an easier way to mark up my writing on the go, and was intrigued by markdown. I hadn't seen an app that integrated markdown in a way that would fit well with my workflow, however, and so I never really followed through. Editorial makes it easy to work with markdown, and allows me to compose long documents to be posted to the web without having to switch between multiple apps. In addition, Editorial has a number of features that improve writing on the iPad in general, such as an improved keyboard, a built-in web browser, and shortcuts for frequently typed phrases or formatting. While I won't go into everything here (read Federico Viticci's above-linked review for an in-depth exploration of the app), I wanted to touch on the features most important for my type of work, which will hopefully interest other scholars, especially digital humanists.

One of the most helpful features of Editorial is the fact that it works with markdown. For those who haven't encountered markdown before, it is a system for marking up a document that will ultimately be converted to an HTML file, but without having to work with all the awkward tags. Instead, markdown uses simple combinations of common characters to let you mark up your writing with minimal disruption as you compose. For example, rather than having to enclose text in <em> tags to have it appear italicized, we simply wrap our text in asterisks: *this*, when converted to HTML, will appear in italics. Markdown is a fairly robust system, and there is syntax for almost everything you will need. If you find markdown too limiting, you can also use MultiMarkdown, an expanded version of markdown that adds syntax to handle more complicated work. While Editorial provides a helpful live preview of your markdown, indicating what it will look like when published, markdown can be written in any text editor. Editorial also makes it easy to work with other applications by saving all your work to Dropbox, so you can get at your documents from any computer with Internet access. The ease and portability of Dropbox-synced markdown files make Editorial an easy app to integrate into your regular routine, as it requires little training and avoids locking your files into a specific app.
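The conversion isn't tied to Editorial, either. As a purely illustrative sketch (this assumes the third-party Python markdown package, not anything built into Editorial, which handles the conversion for you), the whole transformation fits in a few lines:

#convert a snippet of markdown to HTML using the third-party
#"markdown" package (pip install markdown); purely illustrative,
#since Editorial does this conversion internally
import markdown

source = "Wrap text in asterisks to make it *italic*."
print(markdown.markdown(source))
#prints: <p>Wrap text in asterisks to make it <em>italic</em>.</p>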

While there are a number of ways to work with markdown on the iPad, one of the features that makes Editorial such a singularly helpful app is that it allows you to build automated workflows. These workflows allow you to process text, distribute writing, analyse text, and perform other such tasks repeatedly and efficiently. The power of these workflows is such that almost all of the built-in commands in Editorial are programmed through workflows, and are available for users to investigate and modify.

The workflows themselves are relatively simple to build: each workflow is composed of a number of Lego-like blocks that plug into one another, much as you would do with Apple's Automator software or MIT's Scratch programming language. Blocks exist to let you select specific text, accept user input, pull text out of Evernote, create emails or tweets, modify content, and much more. If you find the existing blocks limiting, Editorial allows you to create blocks that run Python code. Python support lets users push their workflows well beyond anything a regular word processor would allow: I can easily imagine workflows built to generate and publish epoetry, to play interactive fiction, or to generate ASCII art.

My own efforts have been much more modest. As I am currently working on my dissertation, I've been looking for a handy way to organise my citations. While Zotero, Mendeley, and other options exist, they have never fit well with my workflow—I wanted to be able to simply dump citations into a file that could be sorted at my convenience. With Editorial, I was able to quickly compose a workflow that grabs the contents of a bibliography file whose entries are separated by blank lines, splits the entries apart using Python, sorts them, and then recomposes them into a completed bibliography. The workflow is available here for anyone interested either in using it themselves, or just in seeing how Editorial workflows work.
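For the curious, the heart of that workflow is just a few lines of Python. What follows is a rough sketch of the sorting step, not the Editorial workflow itself (which wraps this logic in input and output blocks), and the file name is hypothetical:

#split a bibliography on blank lines, sort the entries
#case-insensitively, and rejoin them; a sketch of the sorting
#step only, not the full Editorial workflow
with open('bibliography.txt') as f:
    entries = [e.strip() for e in f.read().split('\n\n') if e.strip()]

entries.sort(key=str.lower)

with open('bibliography.txt', 'w') as f:
    f.write('\n\n'.join(entries))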

One of the advantages of Editorial for digital humanists is that it provides tools for much of the work we do. Writing articles, blog posts, tweets, and research notes is simple and straightforward, but it is also just as simple to actually do research and text processing right within the app. Any form of statistical analysis that can be programmed using the many included Python modules (a list is available here) can be run without leaving Editorial, and the results can be effortlessly imported into your current research project.
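As a small example of the sort of thing I mean, a word-frequency count needs nothing beyond Python's standard library (the input file name here is hypothetical):

#count the ten most common words in a text file using only
#the standard library
import re
from collections import Counter

text = open('chapter.txt').read().lower()
words = re.findall(r"[a-z']+", text)
for word, count in Counter(words).most_common(10):
    print(word, count)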

While DH work is varied and demands a number of languages and styles, Python, with its powerful text-processing capabilities and extreme ease of use, is a valuable language to know, and can be adapted to perform many of the tasks required of scholars interested in text processing. Those who find Editorial's implementation of Python somewhat limiting will be interested to know that omz:software, the developer of Editorial, has also developed Pythonista, a dedicated Python app that expands on the capabilities of Editorial.

I've found Editorial to be an invaluable addition to my workflow, and imagine that I will use it to write the remainder of my dissertation, as well as many more projects in the years to come. iPads and other tablets have often been criticised as being developed entirely for consumption rather than production and creation. While these criticisms are, to a limited extent, true, they are rapidly becoming obsolete. Mobile app developers have been rising to the challenge, and have developed a number of powerful tools for creating a variety of content. These apps don't replace desktop tools, per se, but they provide a powerful supplement, and I can easily imagine using some of them, such as Editorial, as part of my primary workflow. I encourage you all to check Editorial out, and look forward to seeing how others use it. Feel free to share your stories and workflows—if I get enough interest, I'll post a follow-up.


Playing your cards right

Jeremy Antley brought this Kill Screen article on the game DRONE to my attention this morning. DRONE uses a regular pack of cards to simulate the now-pervasive use of drones to kill military targets, and the civilian deaths that these strikes so frequently cause.

I was struck immediately by the degree to which the materiality of this game contributes to the production of meaning, something I’ve been working with a lot lately in my dissertation. The game is meant to be played with a standard set of cards, which does much to play down any sense of exceptionality—it’s just one more game, no different from the rest. This seems to me to have a strong resonance with popular rhetoric surrounding drones. We’re told that they’re simply one more tool for the military to use, and that there’s nothing exceptional about them, an argument that ignores the fact that this type of persistent, impersonal, asymmetric warfare is radically different. The plain cards also resonate with the game-like control systems used by drone pilots, which, although highly elaborate, feature components familiar to most videogamers, and could well be found in many homes (albeit in less complex arrangements).

Another interesting wrinkle develops if players use one of the infamous decks of Most-Wanted Iraqi playing cards distributed to soldiers in the 2003 invasion of Iraq. Every one of these cards features the face of a member of Saddam Hussein’s government wanted by the US military, and as such, every card will appear to be a strike on an enemy, even though the majority of the cards, according to the rules of DRONE, are civilians. Combining the cards of Bush’s presidency with the DRONE of Obama’s reveals a curious reinforcement of logic, wherein all targets become military targets (even when they’re not), allowing us to ignore the human costs that DRONE’s rules struggle to bring to our attention.

Unfortunately, this game is somewhat outside of the area of my dissertation, and as such I likely won’t be able to go much further into it than these initial thoughts, but with luck people like Jeremy will be able to help dig further into this game. As always, I’d love to hear your thoughts on this, and look forward to a discussion in the comments.

Massaging McLuhan

I’ve made one of my goals for the year—call it a New Year’s resolution, if you’d like, but of the variety that happens in late-February/early-March—to learn how to program generative texts. I’m hoping that this will eventually lead to longer-form poetry, but I decided at first to start with small, simple projects, like Twitter-bots, as they provide a good opportunity to learn more about the basics of textual processing, while at the same time having a smaller scope than longer-form poetry.

My first Twitter-bot was @tonightiate, a bot that announces random nouns that it’s “eating.” While it works well, the coding was quite sloppy, and a lot of the functionality was “hard-wired” in, making it quite difficult to expand. With my new bot, I wanted to remedy this, making something whose scope could be expanded with ease.

Subject-matter-wise, I thought it might be interesting to create a bot that reworked the text of a famous author. I’ve been really enjoying Leonardo Flores’s series of essays on Twitter-bots, and wanted to create something along the lines of what I’ve seen there. I decided that the subject of my bot would be Marshall McLuhan. I’d actually meant to make a McLuhan bot a few years back, and created an account—Martial McLuhan—for that reason. Unfortunately, I… uh… just plain forgot how to log into that account. Whoops.

Since I couldn’t log into my old account, I created @massagemcluhan, a bot that would “massage” McLuhan’s quotes—work them over completely, as McLuhan would say. I’ve noticed McLuhan’s penchant for reworking and revisiting phrases (“the medium is the message” and “the medium is the massage” being the most famous), and thought it would be interesting to rework some of these phrases by substituting various nouns into them.

The Python code I developed is as follows (with all my Twitter info redacted):

import tweepy
from random import choice
import time

#set up the OAuth and twitter API
consumer_key = '[redacted]'
consumer_secret = '[redacted]'
key = '[redacted]'
secret = '[redacted]'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(key, secret)
api = tweepy.API(auth)

#load the noun and quote lists (one entry per line in each file)
words = [line.strip() for line in open('Nouns(5,449).txt')]
mcluhanquote = [line.strip() for line in open('mcluhanquotes.txt')]

#main loop
while True:
    #pick a random noun and a random quote template
    noun = choice(words)
    quote = choice(mcluhanquote)
    #if the quote starts with the %s variable, capitalise the noun
    if quote[0] == '%':
        twitstatus = quote % noun.capitalize()
    else:
        twitstatus = quote % noun
    #tweet the status, then wait ten minutes
    api.update_status(twitstatus)
    time.sleep(600)

As you can see, there’s actually very little that happens here—the majority of the code is just setting up the Twitter API. Aside from that, my program draws a random quote from one file and a random noun from another, then combines the two. The quotes each have one of their nouns replaced with a %s variable, and look like this:

Let us return to the %s.
The %s is the message.
The medium is the %s.
%s escapes attention as a communication medium just because it has no "content"
It is impossible to understand social and cultural changes without a knowledge of the workings of %s.
%s offers yesterday's answers to today's questions.

I’m hoping to expand this to allow for more complex substitutions/manipulations, but for the time being, this is working well, and I’m rather happy with the results.

In The Medium is the Massage, McLuhan (or Quentin Fiore, or Jerome Agel, or someone) writes “When two seemingly disparate elements are imaginatively poised, put in apposition in new and unique ways, startling discoveries often result.” This notion bears out in @massagemcluhan, where a number of genuinely thought-provoking utterances have emerged from the random process. Consider the following tweet:

When I was young, I remember visiting some student archaeologists on summer vacation (we were near an active dig site, and my parents, quite wisely, thought we’d find a visit interesting). One thing that stuck with me is the usefulness of garbage: nothing can teach you about a society and culture quite like what it throws away. A midden heap contains a wealth of information, and is most certainly a communication medium.

Another tweet that rings particularly true is this one:

What is data if not yesterday’s answers? And we so readily consult them, searching for the answers to today’s questions.

I look forward to working with this bot a bit more, as I think it has a lot of room to grow. It helps, too, that the results are so interesting, making the work well worth the while. I would love to hear everyone’s thoughts on this, especially comments on the bot itself—this is only my second one, so I have a lot to learn, and I appreciate all the advice I can get.

Citing HTML Articles

Recently I was discussing the citation of articles posted in HTML with my fellow PhD students Peter Buchanan and Michael Donnelly. Once upon a time, the MLA had us number each paragraph, but that practice has been dropped from the most recent citation standards. It makes good sense, too, as counting paragraphs in any work of substantial length is a painful task, and for the most part it’s easy to tap ctrl+F and find the reference straight away.

But sometimes this new system doesn’t work terribly well. For example, if you’re trying to find a common phrase in a long document, you may find yourself searching for some time. Likewise, a paraphrase can prove fiendishly difficult to track down, as it could be substantially transformed from the original expression.

The metadata that paragraph numbers provide can also help you situate arguments spatially within the text. Is this an argument made early in the article, or perhaps towards the conclusion? Are these two pieces of evidence presented close to one another in the article they’re cited from, or have they been brought together by the author who is citing them? Did this student I’m marking engage with the entire article, or did they pick all their citations from the introduction? With paragraph numbering, we can more easily answer some of these questions at a glance.

With this in mind, it occurred to me that the problem could be solved with a tool available in all modern web browsers: bookmarklets! Using a bookmarklet, we can automatically add numbers to each of the paragraphs in any given website, without that website’s author having to make the changes themselves. If we visit a site and click the bookmarklet, the paragraph numberings all appear automatically. If you cite paragraphs according to this numbering, any reader can visit the site and run the same bookmarklet to get the same paragraph numbers. This is an easy, minimally intrusive way to add the metadata we need to effectively and efficiently cite website content.
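The bookmarklet itself is a snippet of JavaScript, but the transformation it performs is simple enough to sketch in Python against a saved copy of a page. The file name and the bracketed numbering format below are just for illustration, not necessarily what the bookmarklet produces:

#insert a running citation number after every opening <p> tag
#in an HTML file; a Python sketch of what the bookmarklet does
#in the browser
import itertools
import re

html = open('article.html').read()
counter = itertools.count(1)
numbered = re.sub(r'<p\b[^>]*>',
                  lambda m: m.group(0) + '[%d] ' % next(counter),
                  html)
print(numbered)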

I encourage you all to try this yourselves. Simply drag the link below to the bookmark bar of your web browser. When you visit an HTML-based article, just click the bookmarklet in your bookmark bar and the numbering will appear on all the paragraphs in the article, available for easy citation.

Bookmarklet: MLA Paragraph Tagging

Please do let me know what you think, either in the comments or by tweeting me at @mattlaschneider.

The more you know

One of the most interesting things about working in the digital humanities is how much you can learn from even the smallest projects. I discovered a set of characters in an ebook I was reading that had been encoded not as capital Ks, but rather as capital Kappas (which I suspect to be the result of an OCR glitch, but haven’t had a chance to confirm yet). It seemed handy to run the whole ebook through a program that could extract each of the individual characters and sort them, so that I could discover any other similarly misencoded characters. Because the document was encoded as UTF-8, I knew that sorting the characters by value would separate out any other Greek characters, which have higher code points than their Roman look-alikes, allowing me to distinguish them quite easily. I put together a quick Python program and ran it on the text.

The results were not what I’d expected. Where I’d expected to see the character Kappa appear, I found nothing. I narrowed the text I was processing down to just known instances of Kappa, and again found nothing. Instead, it appeared that Kappa was being broken down into two characters, specifically “ö” and “Œ.” I had a hunch as to what was going wrong, but it still seemed strange to me: UTF-8 is so named because its basic units are 8 bits—one byte—in length, and it appeared that Kappa was actually a two-byte character, whose two bytes were being broken apart by the Python program and treated as two one-byte characters.

A little research revealed that I was correct: UTF-8 does indeed include a number of two-byte characters (and three-byte, and four-byte ones). The first byte in a multi-byte character signals to whatever is decoding the text that the following byte is to be read as part of the same character (and in three- or four-byte characters, the second and third bytes do the same). This allows the occasional multi-byte character to be integrated into an encoding system that otherwise works with single-byte characters, which saves a lot of space and processing power, as the most common characters carry no empty padding bytes.
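Python makes this easy to see in action (a quick illustration, not part of my original program):

#GREEK CAPITAL LETTER KAPPA takes two bytes in UTF-8: a leading byte
#(0xCE) announcing a two-byte sequence, then a continuation byte (0x9A)
kappa = u'\u039a'
encoded = kappa.encode('utf-8')
print(len(encoded))             #2
print(encoded.decode('utf-8'))  #decoded properly, it's one character again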

This system for encoding multi-byte characters into what is normally a single-byte encoding standard works well, for the most part, but can cause problems when it is not processed properly. This is what happened in the case of my Python program, and it also happens frequently in webpages, where characters such as smart quotes are regularly misinterpreted as “junk.” This is why one of the first things you learn when working with HTML is to encode smart quotes using named and numeric references, rather than copying and pasting them in from Word (or other such programs). Personally, I find it kind of odd that such commonly used characters are treated so strangely, and would love to learn more about why that might be the case (something I’ll have to research once I’ve cleared up some of my current research).

As usual, the digital humanities community was very helpful. After I posted my issues to Twitter, a number of people either responded or retweeted, and with the help of Ted Underwood and my friend Chris, I was able to get my program working properly (for those wondering, they pointed me to this helpful Python doc on unicode—a must-read for DHers interested in data mining who would like to avoid the troubles I had). Moments like this always make me realise how much digital humanists have to learn as we go. While most scholars will never have to think about the way their computer encodes text, it is almost impossible for a digital humanist to work with that text without knowing exactly how it’s encoded, lest we mangle the very text we’re trying to work with. My students last year encountered a similar problem when many of them uploaded a .doc file to TAPoR instead of the .txt files they were supposed to be working with, resulting in a mess of XML tagging making its way into their results, rather than the text they’d expected.
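For anyone who wants the gist of the fix: the key was decoding the file as UTF-8 up front, so that multi-byte characters arrive whole. A minimal version of the corrected character-sorting program might look like this (the file name is hypothetical):

#decode the ebook as UTF-8 up front, then extract and sort its
#distinct characters by code point, so stray Greek characters
#sort after their Roman look-alikes
import io

with io.open('ebook.txt', encoding='utf-8') as f:
    text = f.read()

for char in sorted(set(text)):
    print(char, hex(ord(char)))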

These incidents also help highlight the strong kinship DH work has with that of bibliographers. Anyone undertaking a bibliographical project will quickly find themselves immersed in the printing technology of the time period they’re studying. For example, the rise of word processing has meant that we never really have to deal with kerned type and the ligatures that prevent kerned sorts from fouling when they are forced to precede a sort with an ascender—something that would have been obvious to 18th-century printers, and would have informed all of their typesetting. As such, we find ourselves mentally (and for many bibliographers, physically) venturing into printing shops that reflect the periods we study, much as I found myself learning the fundamentals of UTF-8 encoding and decoding.

The more you know.

Stephen Marche and the Digital Revolution

So Stephen Marche has written an article on the digital humanities that has caused a bit of a stir in the Twittersphere. As one might expect from Marche’s pugnacious style, this article comes to bury the digital humanities, not to praise them. It’s a curious approach, one that seems to revel in some elements of the digital revolution (most notably EEBO and Google Books) while rejecting others outright, a tension explored at length by Holger Syme in a characteristically excellent piece on his blog. I won’t really get into that here (as there’s nothing I can add to what Holger’s said); instead, I want to look more deeply into the specifics of the digital techniques Marche recoils at.

Marche dislikes the way that the digital humanities has turned literature into data and robbed it of its context. And this makes sense, to a point, as context is an important part of humanistic inquiry. Likewise, Marche feels uneasy when confronted with what he calls the fascism of algorithms: “Algorithms are inherently fascistic, because they give the comforting illusion of an alterity to human affairs.” These two criticisms bear out, but they are also criticisms that DH has explored, and will likely continue to explore for years to come. As I note in my review of Stephen Ramsay’s excellent book Reading Machines, Ramsay argues that a humanistic use of algorithms will always require a return to the original text to see whether the results of our algorithms stay true to it, and to avoid pinning our arguments to what may be a distorted reflection of the text (something he gently criticises Franco Moretti for not paying enough attention to). Ramsay refers to these algorithmic transformations as “deformances,” a term that harkens back to the work of Jerome McGann and Lisa Samuels, who make an argument for deforming literary works so as to better understand how they function. And this is the same technique digital humanists use: we deform texts to exaggerate features and see how they function when taken to the extreme, and then we return to the original to explore in finer detail anything the deformances may have brought to light. This is, at its heart, the very same type of work that Dominican monks undertook in 1270 when they developed the first biblical concordance, which catalogued all the words in the Bible as well as some major themes and motifs, allowing them to see relationships between different parts of the text that would otherwise be obfuscated by the massive size of the Bible. It’s all about new ways of seeing a text.

The fascism of algorithms has also been explored by a number of digital humanists, and to great effect. David Golumbia’s book The Cultural Logic of Computation explores the way that computational ways of thinking have begun to impose themselves on the non-computational. He looks, for example, at the way that Chomsky’s approach to linguistics has naturalised the use of computers to process language, and notes the ways that algorithms lack the dexterity to properly describe language without constant tweaking and reworking. We can also turn to someone like Lawrence Lessig, who has commented that with changes in copyright law, code has become law, and that perfectly legal uses of ebooks, for example, can be made illegal by the overriding enshrinement of (what is known in Canada as) technological protection measures. Katherine Hayles, an early advocate of digital humanities, has also noted the pervasiveness of computationalism, the subjugation of analogue processes to the digital, which leads to rigid ways of thinking that exclude possibilities that do not fit into quantised, defined, preselected options. So Marche is right in some of his objections to the large-scale effects of digitisation and computationalism, but he seems unaware that these are issues digital humanists have long been keeping an eye on, and that we will continue to explore well into the future.

Marche also seems irritated by the idea that digital humanists are mistaking data for literature, and losing the aesthetic and poetic value of these texts in the process. This is a somewhat strange argument to my mind, as literature and data need not stand in such strict opposition; indeed, data can have its own aesthetics. Moby-Dick begins not with the famous line “Call me Ishmael,” but rather with a collection of literary quotations about whales. These pages of quotations plucked from their literary context work to establish the unknowability of the whale that the novel explores in all its detailed descriptions of whales and whaling. Nick Cave’s novel And the Ass Saw the Angel contains a number of lists and charts, all of which map the decline of Ukulore Valley and its residents. Milorad Pavic’s The Dictionary of the Khazars draws much of its strength not just from Pavic’s excellent writing, but from the way the book is divided into three dictionaries—the Judaic, Christian, and Islamic—and juxtaposes these three reference books to find a story that takes place between all three. Ultimately, data can’t be distinguished from literature as cleanly as Marche suggests; literature is as much interpretation as anything. Just as art survived Duchamp’s readymades, literature will survive data.

I must say that I’m disappointed by Marche’s assessment of the digital humanities not just because it’s a rather poor critique of something I hold dear, but also because Marche himself has made a valuable contribution to digital literature, and seems to have forgotten everything he learned in making it. Stephen Marche’s interactive novel Lucy Hardin’s Missing Period is a compelling work that follows the titular Lucy’s life in the days after a one-night stand, on the anniversary of her father’s death. As we read this work, we are given a number of options that shape the way the story plays out. With each choice we make, new possibilities open before us while others recede into the distance. I was TAing in a class last year that Marche came to visit, and while he was there, he shared a number of insights about the making of Lucy Hardin’s Missing Period. The one that has stuck with me the most since then was his argument that interactive works like Lucy Hardin are actually better suited to realist narratives, as they better represent all the different paths our lives can take than a traditional, monolinear work that allows the reader only one possibility. By presenting readers with options that vanish with each choice, Marche felt that he was better representing the reality of our day-to-day existence. This Marche is much more thoughtful, more considered than the one who wrote this rather spotty article for the Los Angeles Review of Books. Admittedly, it’s not a Marche who has embraced data the way that other authors may have, but it is certainly a Marche who sees that even in digital form we are able to create “mushy” truths (as he says). We can create spaces where multiple outcomes hang in the air beside each other, and collapse as we make choices. This incompleteness, an incompleteness Marche feels is lacking in our work with data, is completely native to digital realms. While the Marche of the article decries the failings of the digital, offering critique but not much else, the Marche of Lucy Hardin is very much in the business of pushing boundaries and trying things out, of finding new ways of seeing the world and embracing the unique elements the digital has to offer. I do hope we see more of this latter Marche.

The Sinking of the Titanic

Towards the end of his life, Marconi became convinced that sounds once generated never die, they simply become fainter and fainter until we can no longer perceive them. Curiously enough, one of the rescue ships, the Birma, received radio signals from the Titanic 1 hour and 28 minutes after the Titanic had finally gone beneath the waves. To hear these past, faint sounds we need, according to Marconi, to develop sufficiently sensitive equipment, and one supposes filters, to pick up these sounds. Ultimately he hoped to be able to hear Christ delivering the Sermon on the Mount.

Gavin Bryars.

The Fuzzy Humanities

It seems that as of late the Digital Humanities has been ruffling some feathers. It’s not so much anything that any digital humanists have done, but rather that the term has become perhaps too current, and as such has come to grate on the nerves. As a fan of the digital humanities, I find this is one of the last things I would want. So in an attempt to help smooth things out, I whipped up a little bookmarklet that should act as a soothing balm when applied to any webpage that is suffering from an outbreak of the DH. Simply drag the little grey “Fuzzy Humanities” box below up to your bookmark bar, where it will wait until the next time you find yourself overwhelmed by the DH-word. When that happens, click it and then sit back in relief as every appearance of “digital humanities” is replaced with the one thing people want most from the internet: fuzzy kittens.
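(For the technically curious: the actual bookmarklet is a line of JavaScript, and may be fancier than this, but the substitution it performs amounts to something like the following Python sketch, run here against a hypothetical saved page.)

#replace every appearance of "digital humanities" with fuzzy
#kittens; a Python sketch of the bookmarklet's substitution
import re

page = open('page.html').read()
print(re.sub(r'[Dd]igital [Hh]umanities', 'fuzzy kittens', page))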

Enjoy: Fuzzy Humanities.

Skeuomorph Descending a Staircase, No. 2

I’d like to thank Zolani Stewart for pointing me to this thoughtful article by Tom Hobbs. While I think that Tom Hobbs would rather Apple just simplify or abandon the analogue reel-to-reel design than explore the latent-yet-untapped possibilities possessed by the skeuomorphic design (as I suggest), he has some wise words for us all to consider. Perhaps the best summation of his advice is as follows:

[T]here’s an opportunity to delight by incorporating elements that are there purely to serve emotive purposes. This is why the philosophy “just enough is more” is rather more important than just simply “less is more.” It is about scrutinizing everything, so there is a clear, purposeful rationale for every element. This means that all the elements and their layout support the primary objectives of the device and/or application. To do this effectively, it is not possible to achieve success without thoughtfully considering the ways we interact and use products in the physical, analog world. Otherwise designs are just far too cognitively taxing. However, this doesn’t mean just digitally re-creating or simulating analog models for the sake of familiarity–we all need to be constantly checking our metaphors to make sure they’re making sense. We need to be cognizant of how much of the pre-internet world is now completely obsolete and unrecognizable to any one under 20.

I think I would reformulate this by saying that in any good design there needs to be a conversation between the old way of doing things and the new. As Marshall McLuhan writes in Understanding Media, “A new medium is never an addition to an old one, nor does it leave the old one in peace. It never ceases to oppress the older media until it finds new shapes and positions for them” (174). In McLuhan’s formulation, this is a violent exchange; I, personally, would like to see a more respectful negotiation between the two media, but McLuhan probably more accurately reflects the reality. Nevertheless, there is an exchange going on here as the introduction of new media disrupts the ecology of the old. Each medium shapes the other at this point of contact, leading to many different outcomes: we might see the new medium shaped temporarily by the conventions of the old while users transition, or the new medium might bear the marks of the old for a long time to come; an old medium might vanish quickly, replaced by a new medium that completely obviates it, while other old media may find new vitality, with aspects heretofore overlooked or taken for granted suddenly jumping to our attention—as Derrida puts it: “[the rise of computers] has resacralized everything connected with the book (its time, its space, its rhythm, starting from the ways it is handled, the ways it is legitimated, even the body, the eyes and the hands bent around it)” (“The Book to Come”).

I think it is important that we be aware of the way media new and old make us rethink the way we engage with them. eBooks, for example, are fantastic things, and for most purposes they greatly exceed their print counterparts. But at the same time, there are a number of properties print books have that either have not yet been or simply cannot be reproduced in eBooks. I like, for example, being able to stick my fingers (or pencils, or cards, or, on those grimmest of days, other, smaller books) in the pages, allowing me to switch between parts of the book far faster than any eBook bookmarking interface yet allows. And indeed, those physical objects can sometimes offer their own benefits: I sometimes stick photocopies of related articles into the pages of a book to serve both as a bookmark and as a way to keep related materials together. I have no doubt that this functionality can be transferred over to eBooks, but at the moment (so far as I know), it remains exclusively within the domain of print media. If we switch over exclusively to eBooks without considering the functionality we’ve lost, we will find ourselves poorer.

But neither should we feel obligated to maintain all the functionality of print media if we find that electronic media can do it better. Look at the bookshelf paradigm Hobbs highlights, for example: it’s rather restrictive, and while it can help some users transition, most users have grown used to the more malleable and adaptive interfaces and organisational structures offered by computers. Here it would make sense to abandon an artificially limited way of engaging with our eBooks, as it offers little and does much to limit us—eBooks, for example, always seem to appear faced (cover out), which means that we see only a fraction of the number of books we’d see on an actual bookshelf, where books are normally shelved with their spines facing out.

Ultimately, I think that this will involve a great deal of experimentation. Some things will prove useful, while others will not. I like the page curl simulation offered by iBooks, for example, as I often find myself lazily curling the top-right corner of the recto page down as I finish the final sentence of the page, allowing me to steal a glance at the content to come before leaving the current page. But it’s not a deal-breaker. If the page curl were to vanish, I’d get by. And I can easily imagine a new way to interact with eBooks that would prove even more useful than this page curl, or that might offer far more interesting aesthetic possibilities. The point is that we should bring the new medium into conversation with the old, so as to see what each might offer the other. I see little reason to do otherwise, especially in a digital environment, where experimentation is often less costly and less wasteful than it is in a physical one.

To return to Tom Hobbs: “This is why the philosophy ‘just enough is more’ is rather more important than just simply ‘less is more.’ It is about scrutinizing everything, so there is a clear, purposeful rationale for every element.” We should explore old UIs and see if they did anything better (or at the very least more interestingly) than what we’re doing right now. Likewise, we should never hesitate to see if a new take on an existing idea might not improve things greatly. We shouldn’t thumb our noses at Apple just for their (weirdly slavish) devotion to the design of the Braun TG 60, but rather tap them on the shoulder and say “Well, that’s lovely, but don’t you think you could make better use of some of these elements?” Because at the end of the day, most of the interface works well, and looks good while working. There’s really little reason to change 3/4 of the interface. But at the same time, we wouldn’t be doing ourselves any favours if we were to blindly accept that final 1/4 of the UI as is. We should see if we can do more with it, and, if it ultimately proves more trouble than it’s worth, we should leave it behind, replacing it with something different, but not before then.