Craig Mod considers the iPad as a reading platform, concluding that designers must do better.

What, then, is the problem?
It's not the screen — I've happily read several novels on my iPhone.
It's not the weight — it feels fine when resting on a table or my knee.
The problem is much simpler: iBooks and its fellow reading apps are incompetent e-readers. They get in the way of the reading experience and treat digital books like poorly typeset PDFs.
We can do better. (We have to do better.)

Simple design choices, such as typeface, navigational icons, and a poorly executed three-dimensional book metaphor, are hindering the iPad’s ability to be the superb eBook reader it wants to be.

But Mod also offers a few simple solutions: adding hyphenation, improving margins, setting text ragged-right, and enabling copy and paste are all small fixes that would help iPad eReaders realize their potential.

Perhaps out of anxiety over the digitization of the page, I’ve noticed an increasing veneration of the physical page. The idea of carrying a paper notebook with you now feels romantic, and what you have to say feels more important when it’s handwritten in a leather-bound codex.

Di Mezzo Il Mare focuses on the visual appeal of the written page. In addition to poetry and other reader-generated content, the new online journal offers images of handwriting and page-art ranging from chicken-scratched notes to doodles in the margin to the elaborate notebooks of talented artists.

There is a body of literature on the relation of handwriting to psychological state and to identification with the self, and perhaps people fear losing that individuality. Whatever the reason, I find myself in a tug-of-war between wanting to write beautiful things in a beautiful book and frustration that the apparatus cannot capture my thoughts as quickly as I can think them.

Many writers champion the small local bookstore. They relish the atmosphere, they cherish the support of their local bookseller, they pray that their store will somehow stay open. Sadly, a cozy atmosphere alone doesn’t seem to be enough to keep customers coming. But contrary to the gloom surrounding print publishing, there are still independent bookstores that are doing well. How can this be?

Rachel Cooke reminds us that “sometimes you don’t know what you want until you see it.” Very few people go to Amazon to browse for books; usually you know what you want before you get there, which limits how much you’re going to leave with.

Amazon does not set the synapses crackling the way the sight of a pristine shelf of books does: it does not surprise you, nor does it fuel book hunger. You click on what you came for, and then you leave. This, then, is where the independent store, with its carefully edited collection, comes in. Lutyens & Rubinstein has been open just seven weeks but things are going twice as well as its owners expected. "We are a local shop," says Rubinstein. "But we are also one with deep expertise and good taste."

Some writers strive to keep their private lives separate from their public work. Yet when Poe or Dickinson are taught in English classes, details of their character and personal lives creep into discussion of their writing. Sam Schulman examines whether we can and should separate writers’ moral character from their work.

Still the question remains. What does it matter that Larkin sneered in his letters and conversation (fearfully and fretfully, it seems to me) about foreigners and women, that Naipaul made selfish use of people from the beginning of his life, and no doubt continues to do so now? What does it matter that Dickens knew what it was like to be dependent and abandoned as a boy, but made sure that his wife would suffer the same fate? It is this. The weakness of character of Dickens, Larkin and Naipaul comes from the same source that drives their art.


Over at, a discussion is taking off on whether writers should worry about originality. After all, tropes are tools, and every story has already been told. Right?


The Digital Fiction International Network is a group of scholars dedicated to a more consistent approach to the study of digital fiction. The group hopes to define digital fiction in terms that are “accessible to the wider research community (e.g. stylistics, narratology, literary theory, media studies).”

Chief Investigator Alice Bell writes,

The co-investigator, Astrid Ensslin, and myself noted that the study of digital fiction had undergone a significant paradigm shift from a 'first-wave' of theoretical debate to a 'second-wave' of stylistic, narratological and semiotic analysis. While there was some important work going on across the world, we felt that there was a need to draw people together. Principally, therefore, we wanted to foster a collaborative international network of academics working on similar projects. Secondly, we had a methodological motivation. We wanted to define an area in digital fiction research which was devoted to methodological transparency. Each member of the DFIN takes a slightly different approach to digital fiction, but we all analyse digital fiction using a transparent and replicable methodology.

Members of the DFIN have recently published a manifesto in the Electronic Book Review, though they are reluctant to call it that:

A manifesto is political; a mission statement is corporate. We deleted these concepts from our discussion. We were not etching a scroll of aesthetic edicts for the digital ages. We were not carving a credo in the sense of monologic truth or certainty, of liberation, of democratization, of salvation, of renewal. […]
What emerged was a creed for the screen. In a word, a [s]creed.

Their [s]creed addresses their approach to research and analysis, promising a “body of exemplary analyses of digital fiction.”


Katherine Hayles sits down for a discussion on electronic literature, the problem of archiving, the trend toward collective authorship, and her thoughts on the state of the publishing world.

Roger Ebert’s essay, Video Games Can Never Be Art, has become a hot topic in gaming circles. I respect Mr. Ebert’s opinion, but I have to disagree.

Are games art today? That’s debatable. Most are not. But there are some edge cases, and those are the ones in which we’re interested.

The games Santiago cited were not the best examples, and Ebert is correct that her definition of art is weak. It seems nearly impossible to slap an objective, all-encompassing definition on art, because (as Ebert discusses) art is personal. Art can be beautiful or grotesque, but is above all emotional. It stirs something within us. It explores the essence of what makes us human, and so it affects each person differently.

Ebert argues that games cannot be art because the object of a game is to win. There has been a shift in video games in recent years to view “winning” at a video game as an ending to a story more than actual victory. Now that I think about it, the term “beating a game” has almost completely left my vocabulary in favor of “finishing a game.” Movies, plays, books, and songs all end. The “victory” in a game has become the conventional ending that signals the close of a narrative.

But it’s not the narrative alone that makes a game art. Nor is it the music, the visual art, the expertly-crafted AI, or the multiplicity of endings. It’s the way games emotionally affect the player, and by their nature, this is not something that can be predicted or observed from the outside, any more than we can judge a novel by looking at paintings of its crucial scene. To know a game, you have to play it.

Playing a game, manipulating it and experiencing it as it responds to your inputs, is the key to understanding. The truly moving experience comes from how the game makes us feel, which is something that only the player can experience. The famous airport scene in Modern Warfare 2 or the cafeteria scene in Super Columbine Massacre are excellent examples of how playing a game changes the meaning of a scene; without the controller in your hand, you wouldn’t feel the same hesitation, guilt, and powerlessness. Without playing a game, we cannot understand it.

Ebert is right, however, when he says that measuring a game’s artistic success by its commercial success is foolish. The point that Santiago makes, that the game culture is “growing up” and expecting more from games, is surely true. Video games are still very young, and video games that aspire to some artistic significance have only been around for a few years.

Mark Bernstein notes that games haven’t yet touched on some of the basics of human existence, like sexuality or age. I say give them time. Game makers remain wary of anything audiences might consider indecent, and the medium is still young.

So can video games be art? Wholeheartedly, yes. Are there games now that qualify as such? They’re certainly getting there.


Susan Gibb reports back from the Tunxis Art Marathon where, bleary-eyed and high on pancakes, artists worked for 24 hours straight. Gibb notes that by the end of her stay she was “feeling a bit grumpy,” but she was able to finish a new hypertext fiction, On the Very Last Day, He Imploded.

Steve Ersinghaus also writes on the ordeal, revealing how these sprints can be beneficial, despite placing writers outside of their comfort zone.

I really had to squint, but, interestingly, I didn’t solve the real problems until 3:30 AM, when I started to drag and say, “Oh my god it’s only 3:30.” It was at this time that I should have gone into the new media lab and cranked up HL2 on the PS3 system and chilled some. Next year I’ll be better prepared.


John August remembers the joys of writing an antagonist and the challenges of writing about monsters or other-worldly forces. A good villain often has a fuller backstory than the hero, even if it is only hinted at, and the villain’s motivations are often just one decision away from being heroic.

But should you make your monster human?

A balance must be struck between giving your villain depth and rationalizing away the villain’s impact. Besides, there’s something very unsettling about a villain we don’t fully understand. Monstrosity often comes down to Otherness, and there is nothing more Other than actions we can’t begin to rationalize, whether from sociopathy or perceived insanity.

Of course, the best way to establish that an object of desire is lovable is also to give him or her no hint of an inner life. No matter how hard we work or how wonderful we are, we cannot understand, predict, or control the beloved.

Alice for iPad

It hasn’t taken long for interesting reading formats to reach the iPad, though I must admit I wasn’t expecting anything beyond eBooks and perhaps Vooks for a little while. However, Alice for the iPad has landed, and, falling somewhere between Voyager’s electronic edition of Martin Gardner’s The Annotated Alice and a children’s game, the work offers something different.

Interactive illustration with a small element of puzzle-solving is a cute way to make literature more engaging, and it certainly does seem particularly well-suited to children’s literature. It also attracts media coverage, since it’s easy to grasp and you don’t need to read the book in order to write a story about it.

One could certainly envision an original work that takes these ideas and expands them, with the puzzles and interactive illustrations actually influencing the outcome of the narrative. The trick is to make the narrative immersive and engaging without the interactivity feeling gimmicky—and this may be harder than it sounds.

In light of the insightful views on casual gaming he put forth in A Casual Revolution, it’s no surprise that Jesper Juul found Bejeweled to be one of the most important games of the last decade:

Viewed strictly as a game design, this probably isn't the most enjoyable game of the decade. Neither is it the most innovative, being rather an incremental development based on a number of existing designs. What makes Bejeweled the game of the decade is its central role in the casual revolution: This game was instrumental in creating the first video game distribution channel aimed at an older and predominantly female audience (downloadable casual games), hence redefining our ideas of what a video game could be and who could play video games. Furthermore, its basic gameplay of swapping tiles to make colored matches has taken on a life of its own, now playable on cell phones and aeroplanes; as relaxed game sessions without any time pressure; packaged as a role-playing game set in a fantasy world (Puzzle Quest); as a one-minute intensive game for competing against friends (Bejeweled Blitz). That is the importance of Bejeweled: showing us how many different things video games can be, showing us that there are many ways to play, use, and enjoy video games.

Louis Gerbarg offers one of the most level-headed and informative approaches to the Apple/Flash debate that I’ve seen. He makes some points that might not be immediately clear from some other sources’ posts:

  • He doesn’t think that Flash is the only target.
  • The language of the new license could potentially also apply to game developers, though it seems unlikely that Apple would pursue that course.
  • Apple doesn’t hate third-party runtimes because they are power-hungry; Apple is concerned with OS development cycles and compatibility issues.
  • The desire for Flash might be quenched a bit if we had a light development tool (something like HyperCard Touch).

Though the article leans a bit toward Apple’s side, it’s a fair analysis of the situation and of how the dispute might eventually be resolved.

Whitney Trettien reflects on her digital thesis to ask, “What does digital humanities really mean?”

So what was I doing? My born-digital thesis was not a scholarly resource: I wasn't and never intended to present or curate a collection of digital artifacts for others to browse. My work was critical and individualistic, conscious of its methodology and historical moment. It strove for self-awareness. In this respect, it had more in common with the essays on Kairos than with the work of NINES; yet it never emerged from the disciplines of rhetoric and composition. I was more interested in challenging notions of "old media" literacies, or even "literacy" itself, than exploring those of "new media."
I was positioning my work as Digital Humanities, but Digital Humanities didn't really want to claim it.

I have noticed a tendency in digital humanities to focus on the archive. This fits nicely with disciplinary convention, but digital content delivery may change scholarship profoundly. At times, the humanities need a reminder that born-digital works have a place in their digital collections.


Roger Ebert wrote a fascinating report on a very long and meticulous viewing of Aguirre, the Wrath of God, which involved pausing after each shot to discuss its discursive elements. Altogether, the entire viewing took eight hours split over the course of several sessions.

This detailed approach to film prompted if:book’s Dan Visel to observe that this type of “reading” seems “luxurious,” and he invites us to become more luxe readers. To be fair, much of the content that we quickly consume and discard is designed to be disposable; news bulletins, weblog posts, Tweets, and internet memes are all products of their temporal context. And I think that most of today’s truly great works are being pored over. After all, isn’t that what scholars do?

But Visel might be on to something with this idea of luxury: many of my favorite books, plays, games, and films have taken on a whole new majesty after I spent hours dissecting them and writing about them. Perhaps it’s worth reminding people to stop and smell the roses.

In The Times Literary Supplement, Mary Beard asks, "Can women write reviews?" There are relatively few women writing book reviews, perhaps because (in the words of Mary-Kay Wilmers, editor of the London Review of Books) they have “a tendency to be either a bit jargony, or a bit breathless.”

(Thanks, Diane Greco!)

@BettyDraper has over 25,000 followers. She tweets about her favorite recipes and blogs faithfully. But she’s not a real person; she’s a character from TV’s Mad Men.

Helen Klein Ross is the person behind @BettyDraper. She has 20 years of marketing experience and knows how to capture an audience.

As Ross explains, @BettyDraper was part of a campaign launched to keep viewers engaged with Mad Men between seasons. The campaign used Twitter and the blogosphere to draw in potential viewers and to invite fans to help build an online story, a deep and engaging collective narrative project that lies somewhere between fan fiction and marketing.

Ross presents her insights from the campaign in a fascinating series of slides that shows how companies can use collaborative fiction environments similar to ARGs to bring consumers closer to products.

In “Digital Bibliography,” Ryan Trauman ponders the shift toward abstraction in computing and writing technologies. Inspired by a discussion of “cloud computing” (a term Trauman acknowledges has many meanings), he discusses the ways that our thoughts drifted from the tangible book to the abstract screen to the even more abstract Web.

What I’m talking about is our ontological relationship to texts. The move from static to dynamic. The move from actual to virtual. 20,000 pages of text used to mean a full bookshelf. Two years ago, it might have meant a small USB drive. We can’t possibly understand the modes of a textual “existence” in the ways we used to. It just doesn’t make any sense. First, and maybe most important, is the fact that now, the FACT of a book as paper-ink-binding is now remarkable (as in “worthy of remark”). In fact, almost any mode within which a book is instantiated is worthy of note. It becomes part of the text’s rhetoric. This is the first note I wanted to make. Digital texts tend to bring the “mode of delivery” back into any analysis of a text. There IS no longer any default.
The second ontological consequence of digital textuality has to do with the material existence of a text. To transition from the physical space occupied by a shelf of books to the tiny usb drive is a radical experience. Most people, I think, tend to experience it as a transition from actual to virtual. And while this model has some merit, it’s much more apt to stick with the idea of a shift in size and material. The books got smaller. Not just by shrinking but through reorganization, too. But they still exist in the material world. (Matthew Kirschenbaum probably makes the best case for this.) We still need to “store” them someplace. We still “send” them from place to place. We cannot make them appear from the ether. Screens. Hard Disks. Processors. Random Access Memory. They are “inside” our computers and portable drives.
But this cloud thing is different, right? Now it really is like our files exist out there in the ether.

These transitions have interesting implications for our perception of writing. Once thought permanent and static, writing is now considered dynamic and, to some extent, infinitely sprawling and connected. We expect the Web, and therefore what we read, to be ever-changing and impermanent.

The computer is always reread, an unseen beam of light behind the electronic screen replacing itself with itself at thirty cycles a second. Print stays itself–I have said repeatedly–electronic text replaces itself. – Michael Joyce

Dale Dougherty believes the iPad lacks the ability to create imaginative, interactive content and declares that “The iPad needs its HyperCard.” (Amid the usual comment dross, there are some thoughtful observations in the comments from Tantek Celik and Dave Drucker, among others.)

It seems that many people have been looking back to predict the future of eBooks: the transition from radio to television, from horses to cars. But one place we haven’t gone is back to the year 2000, when eBooks were predicted to take off. Why didn’t they flourish then, and why, ten years later, are we back to the same place?

In a recent article, Michael Mace explains why eBooks didn’t take off then, and why they won’t take off as quickly as some predict.

  1. There were not enough eBooks to make it worth buying a reader.
  2. eBooks were too expensive, especially given that many readers value content differently than the publishers expect.
  3. The hardware form factor was wrong.
  4. Periodicals weren’t ready to make the digital leap.
  5. Poor marketing: marketers overvalued aspects of products that readers simply didn’t care about.

Mace notes that a lot of these problems still exist in some ways today, including the central issue of price. And since books are not “broken,” he suspects that they will not vanish soon. With the rise of tablet computing, areas of opportunity do exist in places like the short-story and periodical sectors, places that seem better-suited to digital form.

Recently, MIT hosted a Purple Blurb event which showcased interactive fiction writers Jeremy Freese and Emily Short.

Freese read from Violet, an interesting interactive fiction told in the voice of the protagonist’s girlfriend. She entreats you to write a thousand words of your dissertation, overcoming obstacles of procrastination that seem to keep popping up.

Short read from Alabaster, a work that experiments with a collaborative process of IF creation. You are the huntsman. You are traveling into the forest with Snow White. You intend to kill her. Is she as innocent as she seems, or is there more to the story than we know? The player interacts with Snow White, asking her questions to glean bits of information.

I had never been to an IF reading, and I must say that the experience is very different from reading the work at home, or even watching it played. The readers read the text while an “interactor” manipulated the software, typing in commands to ensure that no time was wasted. Thus, the audience did not get to experience the pleasure of solving the puzzles, but instead was privy to easter eggs and areas of the text that they might otherwise have missed. The format also allowed for showing lots of the text in a brief session.

David McCandless pointed us to Stefanie Posavec’s work with information visualization. Her “Literary Organism” is a beautifully illustrated visualization of Jack Kerouac’s On The Road. McCandless explains, “Here the lines divide into chapters, bloom into paragraphs, sprout sentences, and spread out into words. All are colour-coded according to the key themes.”

McCandless also offers a shot of the original marked-up text and several visualizations that were created along the way.

Flash War

With the release of the iPad, A List Apart’s Dan Mall predicts a looming battle that he calls the “Cold War of the Web.”

The arguments run wide, strong, and legitimate on both sides. Apple CEO Steve Jobs calls Flash Player buggy. John Gruber says that Apple wants to maintain their own ecosystem. On the other end, Adobe CTO Kevin Lynch argues that Flash is a great content delivery vehicle, and Adobe’s Mike Chambers expresses concerns over closed platforms. Interactive developer Grant Skinner reflects on the advantages of Flash.
However, the issue is larger than which one is better. It’s about preference and politics. It’s an arms race. This is the Cold War of the Web.

So if this is the Cold War of the Web, where does electronic literature fit in? For example, much of the ELO’s eLit anthology depends on Flash. Is this another occasion for archiving and adaptability?

Margaret Atwood recently posted a delightful piece for the New York Review of Books Blog on her introduction to Twitter. In addition to being clever and insightful, the article is peppered with the lovely little phrases that only she could assemble:

Anyway, there I was, back in 2009, building the site, with the aid of the jolly retainers over at Scott Thornley + Company. They were plying me with oatmeal cookies, showing me wonderful pictures, and telling me what to do. “You have to have a Twitter feed on your Web site,” they said. “A what?” I said, innocent as an egg unboiled.

The iPad has been heralded as the savior of newspaper and magazine publishing, but much of the hype has been focused on consumption of media, not creation. Indeed, the input methods for generating substantial quantities of text on the go seem cumbersome.

Tablet computing pioneer Gene Golovchinsky examines the tablet computers of 1999, which were heavily dependent on a stylus, and compares them with the iPad’s touch controls. He concludes that the stylus is superior for applications that simulate digital ink, while the fingers are great for manipulating data once it exists.

For writing on the go, having keyboard input (external and on-screen), stylus input, and touch input would be ideal. I can imagine a word processor that would let the user type in text with the keyboard, edit with a stylus, and highlight, cut, or move text, as well as insert links and other media, with the fingers.

The closest I have seen to this ideal is the demo of SketchNotes that Golovchinsky references, but as he notes, the motor control over the touch-input markup is noticeably inferior to that of the stylus.

Responding to a recent Seton Hill initiative to give every student an iPad, he also reports that the iPad is not capable of multi-tasking—like running reading and writing programs at the same time—so it may prove to be less desirable than a laptop for many students. (Of course, most students will have a laptop, too.)

Noah Pedrini finds Post-It notes and turns them into digital art. His recent work Anthroposts demonstrates how these Post-Its represent small traces of our hurried daily lives and parallel the brief fragments of text we have grown accustomed to through digital communication.

Though Pedrini mentions the beauty of the handwriting itself and the commentary on digital life that the project offers, the real intrigue for me is the suggestion of narrative inherent in these notes.

For example, we have some directions to Park St in Boston. Why did this person need to go to Park Street? The fact that she didn’t know where it was indicates that she probably hasn’t been in Boston very long; is she visiting? Why?

These slender clues always suggest a narrative. I can instantly think of several scenarios that would call someone to Boston and would require that person to need to go to Park Street: shopping on vacation, a job interview, dinner with an old friend, catching a long-distance lover in the act of cheating over dinner. And that’s the interesting part. I’m projecting the narrative. These notes, perhaps without meaning to, have invited me to create dozens of little stories to justify each one.

One of the issues brought up during the IF panels at PAX East was that IF games, though popular as hobby, academic, and amateur projects, have not been commercially successful since the fall of Infocom in the late ’80s.

Don Woods asked during the panel, “Could the problem be that we still think of them as games instead of literature?”

I think this is a core problem, and newer IF games are addressing it: with the rich stories of novels and console games available, it’s no longer sufficient to tell us we need to go into the cave to kill the dragon. Oh – and solve these puzzles and navigate this maze on the way. Never mind graphics; interactive fiction needs to compete with both the commercial game and the novel in narrative depth and complexity.

Thinking of Interactive Fiction in literary terms certainly seems like a step in the right direction. And the genre has been moving in this direction. English departments seem to be fostering the study of IF.

But many of the IF developers in an earlier panel spoke of IF in relation to commercial games rather than in relation to the novel. Perhaps this was a matter of setting; the convention was a gaming conference, after all. But since IF is really a form between literature and games, perhaps it’s time to lean a little more heavily on the side of literature. Making the puzzles and (God forbid) mazes part of the narrative is certainly a step in the right direction.

Remind me: who said we need puzzles at all?