Alan Kirby declares once and for all that “Postmodernism is Dead.” Philosophers and popular culture—everyone, it seems, but English professors—have moved on. The postmodern canon is all decades old; it's time for academia to stop calling postmodernism "contemporary."
Most of the undergraduates who will take ‘Postmodern Fictions’ this year will have been born in 1985 or after, and all but one of the module’s primary texts were written before their lifetime. Far from being ‘contemporary’, these texts were published in another world, before the students were born: The French Lieutenant’s Woman, Nights at the Circus, If on a Winter’s Night a Traveller, Do Androids Dream of Electric Sheep? (and Blade Runner), White Noise: this is Mum and Dad’s culture.
So if contemporary works aren’t postmodern, what are they? Some would argue for the New Aesthetic, and while Kirby doesn’t cite James Bridle, his assessment places heavy emphasis on interactivity, participatory culture, ephemerality of the material text—ideas which complement the New Aesthetic’s fetishization of digital influence (even in analog arts and technologies). Kirby uses the (intentionally?) problematic term “pseudo-modernism” to refer to the products of today’s culture.
A pseudo-modern text lasts an exceptionally brief time. Unlike, say, Fawlty Towers, reality TV programmes cannot be repeated in their original form, since the phone-ins cannot be reproduced, and without the possibility of phoning-in they become a different and far less attractive entity […] Radio phone-ins, computer games – their shelf-life is short, they are very soon obsolete. A culture based on these things can have no memory – certainly not the burdensome sense of a preceding cultural inheritance which informed modernism and postmodernism. Non-reproducible and evanescent, pseudo-modernism is thus also amnesiac: these are cultural actions in the present moment with no sense of either past or future.
What Kirby calls “banal” and indicative of “puerile primitivism” are merely the products of an oral culture. He further weakens his argument by comparing only trite instances of today’s popular culture with postmodernism’s highest art objects. It would be fairer to compare the postmodern authors with serious hypertext authors, or to compare postmodern film with the more serious video games. Comparing Francis Ford Coppola to “Call of War 17: Heroes of Killing Stuff” for Xbox is unfair, especially when Journey or even BioShock are sitting above it on the bestseller list.
The agency and control that computers have afforded us have undoubtedly changed our approach to all other media. Moreover, our awareness of this fact has in turn changed our relationship to the digital. However, that doesn’t mean that these new forms can’t produce permanent cultural artifacts, or that we will have no memory of our participatory experiences. My cartridge of Super Mario Bros is easily as permanent as my VHS copy of A Clockwork Orange, and surely Shigeru Miyamoto gets as much credit for authorship as Kubrick does, even if the game is more participatory.
I recently read Galloway and Thacker’s chapter on “Nodes” from The Exploit: A Theory of Networks, on the systems of power and control within network structures. Borrowing terms from graph theory, “nodes” correspond to what we would call lexia, “edges” connect the nodes, and networks can be described in terms of their “order” (how many nodes they contain) or their “size” (how many edges connect them). Networks can also be described in terms of their connectivity (the interconnectedness of the nodes) and their topology (how centralized or decentralized the structure is, based on which nodes are linked to which).
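These graph-theoretic terms are easy to make concrete. Here is a minimal sketch (the four-node network is an invented toy example, not one from Galloway and Thacker) computing a network’s order, size, and average connectivity from an edge list:

```python
# A toy network as an edge list. The nodes and links are invented examples.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]

# The node set is everything the edges touch.
nodes = {n for edge in edges for n in edge}

order = len(nodes)   # "order": how many nodes the network contains
size = len(edges)    # "size": how many edges connect them

# Connectivity: the average number of links touching each node.
degree = {n: 0 for n in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
avg_degree = sum(degree.values()) / order

print(order, size, avg_degree)  # → 4 4 2.0
```

Topology would then be a question of how unevenly that degree is distributed: a centralized network concentrates edges on a few hub nodes, a decentralized one spreads them out.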
A huge portion of the essay is spent on the idea of “protocol” in both the familiar, computer science sense, but more importantly as a broader concept that encompasses that understanding—the protocol is the set of rules and standards that governs the network. The protocol exists in a tension between functioning as an emergent governing force, and one that can be externally imposed by, say, a system administrator or by the design of the network itself.
Galloway and Thacker have an interesting idea of control—influenced by Deleuze—through modulation. They argue that in node structures, control is no longer dictated by a central figure (and even the protocol control structure is not an all-encompassing power), but rather “emerge[s] through the complex relationships between autonomous, interconnected agents” (29).
Within the hypertext literary community, there has been much debate over the idea of “authorship” within interactive work, the argument being that when you give the reader agency to maneuver within the story, she becomes a co-author of the work. This argument has met resistance from authors who insist that in building the framework of the piece, they are, in fact, allowing the reader only the impression of agency, and thus maintain full authorial control. Viewing hypertext literature as a series of nodes, and the author’s embedded link structure as the piece’s protocol, it becomes easy to see how “control is not simply manipulation, but rather modulation” (33). The author may not be forcing the reader into a specific choice (manipulation), but she is directing the reader through a series of diegetic choices and constraints (modulation).
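As an illustration of that last point, a hypertext piece can be modeled as a directed graph whose link structure acts as the protocol: the reader chooses her own path, but only among the links the author embedded. This is a hypothetical sketch; the lexia names are invented:

```python
# A hypertext piece as a directed graph: each lexia (node) maps to the
# lexia the author has linked it to. Lexia names are invented examples.
links = {
    "opening": ["garden", "letter"],
    "garden":  ["letter", "ending"],
    "letter":  ["garden", "ending"],
    "ending":  [],
}

def read(choices, start="opening"):
    """Follow the reader's choices through the link structure.

    The embedded links (the 'protocol') modulate the path: the reader
    picks freely, but any choice outside the author's links is refused.
    """
    path = [here := start]
    for choice in choices:
        if choice not in links[here]:
            raise ValueError(f"no link from {here!r} to {choice!r}")
        path.append(choice)
        here = choice
    return path

print(read(["garden", "letter", "ending"]))
# → ['opening', 'garden', 'letter', 'ending']
```

The reader is never forced into a particular lexia (manipulation), but every path she can take was shaped in advance by the author’s link structure (modulation).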
Institutions and departments should develop written guidelines so that faculty members who create, study, and teach with digital objects; engage in collaborative work; or use technology for pedagogy can be adequately and fairly evaluated and rewarded. The written guidelines should provide clear directions for appointment, reappointment, merit increases, tenure, and promotion and should take into consideration the growing number of resources for evaluating digital scholarship and the creation of born-digital objects.
The fact that this even needs to be stated is ridiculous, but it might be a step toward bridging some of the gaps between how the humanities and the sciences evaluate scholarship (different approaches to co-authorship, the weight of conferences versus journals, and so on).
Ian Bogost reviews Journey, the latest game from art-game studio thatgamecompany, arguing that the game reveals the maturation of an artist. Though the studio’s previous artistic successes, Flow and Flower, carried the conversation about art games forward, they exhibit a certain immaturity that only becomes apparent in hindsight.
Bogost’s review doesn’t even indulge the possibility that games are not art. We no longer need to argue this, but many reviews of “art games” still nod to the debate. Journey is certainly a beautiful work of art, and the review is superbly written, like a thoughtful book review. If only more game reviews were like this.
Hall, who invented a forerunner to the world wide web, said the problem of a scarcity of girls studying computer science was "getting worse" despite huge efforts from the scientific community to address the issue.
Hall, the dean of the faculty of physical and applied sciences at the University of Southampton, told the Guardian that girls still perceive computing to be "for geeks" and that this has proved to be a "cultural" obstacle, so far impossible to overcome.
Hall is right to worry about the lack of women, and to look toward cultural factors that might be contributing. But the perception of nerd culture is a real problem, and it’s more nuanced than “women don’t want to be geeks.”
Sometime in the last couple of decades, it became cool to be a nerd (which is different from a geek though the two are related). When I was young, my favorite caper films involved some kind of “hacking into a mainframe” that I found fascinating. Young Lex Murphy, the girl in Jurassic Park (who was roughly my age at the time) could hack into things. And then there were video games, which were also really cool. I knew that programmers made those, and I wanted to make them too. So if tech is cool, and women are using just as many gadgets as men (if not more), where is the disconnect?
If you look at the young men in the average computer science department, you will find that most of them self-identify somewhere on the “nerd” spectrum. Keep in mind that they do not see “nerd” as a derogatory term, simply as a cultural identifier and useful shorthand for people with similar interests and personality traits. That said, I would guess that many young people come to computer science with an interest in making computer games, or from a more deeply entrenched identity within nerd culture, so examining these cultures is a good first step toward understanding the lack of women in computer science.
Neither gaming culture nor nerd culture is particularly welcoming toward women, and many women looking in from the outside—even those who share the same interests as nerd guys—do not want to enter an environment in which (they think) mouth-breathing basement dwellers will view them as sex objects. This stereotype, though not representative of everyone in the culture, is accurate enough that it will probably be confirmed as a woman enters the culture, whether she’s told “there are no girls on the internet” or sexually harassed in a web forum or video game. Many women within the culture have found that men (and even other women) assume they don’t belong, or are feigning interest to be more attractive to men (or to sell to them). And then there’s the problem of many girls not wanting to be in such a small minority, which in turn compounds and perpetuates the previous assumptions about the environment. The more women there are in the club, the more women looking in feel that it’s safe to be a woman in this environment. It’s not that girls are scared of being unpopular; many just don’t want to interact with what they see as a hostile culture.
So the culture that feeds into computer science classrooms isn’t particularly female-friendly, but surely the atmosphere in the classroom is better? Unfortunately, the lack of female role models means that girls often feel out of place or avoid asking questions for fear of confirming stereotypes.
We need to create an environment where girls feel safe and comfortable, an environment where it’s okay to ask questions, where girls won’t feel judged for their sex. And when we’ve done a reasonable job of that, we need to make sure the larger community knows that computer science is welcoming to women. Hopefully once women see that they won’t be alone to fend for themselves in a classroom full of troglodytes, more women will be willing to join the club.
The issue of misogyny in video game development and gaming culture is not exactly new, but within the academy we tend to treat it as a solved problem. So, when I read Nicole Leffel’s incisive look at the culture of dismissal that blankets the topic, I wanted to hug her. She puts into words a vague feeling I couldn’t express clearly at the last few “women and gaming” panels and events I’ve attended. Recognizing that women play games does not absolve the sexism of these games.
Feminism takes many forms which can often seem to contradict each other, and it can be difficult to reconcile what the “correct” approach to feminist problems should be. One school of feminism suggests that treating women with special reverence is a form of charity that subverts the idea of equality, that we should ignore gender differences altogether. Another school suggests that by subverting gender differences we are suggesting that femininity is something which should be suppressed, and we should therefore celebrate differences. One school wants to be freed of sexual objectification, another wants to embrace the power of female sexuality. How do we please everyone?
The last “women in gaming” panel I attended was at PAX East in Boston, where the women commented that they weren’t offended by women being sexy; they were offended by the lack of characterization, or by the inability to play a female character at all. This took me aback a little; surely these women weren’t saying it was okay for Lara Croft’s gigantic breasts to grow with each game while her outfits get smaller and smaller. But they were. They praised Dead or Alive: Xtreme Beach Volleyball for having a huge roster of female characters, excusing the fact that every one of those characters wears a skimpy bikini and has animated breasts. At least they had a selection of female characters. It was as though the entire panel had set out to prove that they weren’t jealous of the female characters—a claim often thrown around to dismiss accusations of objectification—so that you could trust the rest of the feminist things they were going to say.
Objectification is a point on which feminists could easily find common ground. Games are fantasies of control, and there is a difference between being sexy and being objectified. When role playing, people like their avatar to be ideal: attractive, powerful, cool. If the character is too sexually focused, the character is no longer realistic enough for the player to attach consciousness to it. The problem is not that women in games are sexy, it’s that the sexiness is not styled in a way that women are meant to relate to.
In most games, excessive sexuality is either offensive or comedic. Either way, women are the butt of the joke. It’s not hard to make a character attractive without making her grotesque. These are not instances of women actively using their sexuality for empowerment. Players are able to manipulate oversexualized characters into submissive positions or subvert their agency in other ways.
Laura Mulvey’s idea of the subjective male gaze posits a power difference, inherent in film, that derives from the styling of the female, by males, as an object for male pleasure. In games, the equivalent is male developers creating a half-naked female character with grotesque proportions and physics-defying jiggle mechanics. But the difference between a sexist character and a non-sexist one is more complicated than how naked she is; the issue is why she is half-naked.
Characters like Kratos or Conan run around shirtless, but their bodies are not framed, or able to be manipulated, in a way designed to make them the object of sexual fantasy. Their purpose, rather, is to enact male power fantasies. They are the character a male player wants to identify with. Naked female characters, on the other hand, are generally not there for women to identify with their sexual power; they are there to pleasure and serve the (heterosexual) male audience. Overly stylized sexuality prevents any audience—male or female—from being able to relate to the character.
Stay tuned for more on agency and identity in female characters.
We weren’t entirely sure what to expect, and initially tried to model the event closely on E-LitCamp. However, the group was very active and participatory, and the space was very good for fostering informal discussion. Influenced by Alan Dix’s description of Tiree Tech Wave, we soon realized that enforcing a session structure would only stifle productive conversations.
The weekend opened with Bill Bly’s excellent presentation of We Descend Volume 2, which set the tone for interesting discussion that continued through the weekend. Bly’s demo confirmed that hypertext literature has indeed come a long way from the StorySpace works of old, while still embodying the very essence of hypertext literature. The text, like the first volume, is very much exploratory, and the spatial relationships of the interface encouraged an excellent discussion of authorial process—should form follow content or vice versa?—that became a recurrent theme throughout the weekend. It also raised questions of what should be shown to the reader, which bits of text are reserved for the author’s personal notes; if hypertext affords some overlap, how does it shape the work?
MIT’s Angela Chang demoed an interactive iPad narrative for children that encourages parent-child reading and interaction. The work met much adoration from the group, and spawned an interesting discussion of how interactive narratives and interfaces encourage different reading and thought patterns, particularly in young children. Chang explained that children were able to recognize a relationship between text and meaning from a much younger age while engaging with the work.
Jonathan Brandl and Nick Apostolides starred in a dramatic reading of Mark Bernstein’s hyperdrama The Trojan Girls, an interesting spin on The Trojan Women that takes place in the not-so-distant future during the second American Civil War. The work argues that hypertextual recombination of dramatic dialogue can yield identical plots while changing other facets of the text (adding subtext, changing inter-character relations, etc). A fruitful discussion followed the reading, which examined the nature of reordering plot events and the relationship of constraints and narrative building.
Remix aesthetics and cross-media adaptation were recurring themes of discussion, seen in informal presentations of Meanwhile for iOS, Steve Meilleux’s 100 Days work, and The Trojan Girls. A reader takes a unique pleasure in recognizing familiar elements in a remixed work—discovering and recognizing implicit links—and there was much discussion over how much an understanding of context adds to a work. Other recurring themes included publishing models, sources for new work, the role of the institution in fostering creation, the value of criticism, and anticipating reader experience in the writing process.
Though clearly written in a pre-9/11 bubble of optimism, David Brooks’s idea of The Organization Kid has been particularly resonant in a recent revival of generation wars. So what happened to the hyperachieving students, those energetic meritocratic elite? Many of them became disillusioned 20-somethings. And somewhere underneath this tired, we-have-it-worse generation war, there’s a question of the role of the institution.
American universities are good at churning out walking resumes, but isn’t the role of academe to teach students how to question, how to think critically, how to analyze? Should universities be expected to undertake such issues of character building? Or is this the natural byproduct of letting everyone into the sparkling pearly gates of College with promises of middle-class security on the other side?
This overlooks Jay David Bolter’s Writing Machines, George P. Landow’s Hypertext, Michael Joyce’s Of Two Minds, Silvio Gaggi’s From Text To Hypertext, Jane Yellowlees Douglas’s The End of Books, and I shudder to think what I’m forgetting.
Of course books on computer-mediated literary works – especially those on hypertext – existed before Loss Glazier’s Digital Poetics. However, what did not exist until the founding of the Electronic Literature Organization in 1999 (thanks to Scott Rettberg, Robert Coover, and Jeff Ballowe) is a name, a concept, even a brand with which a remarkably diverse range of digital writing practices could identify: electronic literature. Moreover, it’s not simply that writers had something by which to bind them together and identify with but it’s also that increasingly e-literature became known as something of a coherent field with a wide, yet still bounded spectrum of means by which critics, teachers, students, scholars could talk about their work. In other words, e-literature became something much more than just hypertext, as valuable as that particular mode of writing may be.
Emerson is probably right that the branding of the “electronic literature” field widened the scope from the perceived limitation of “hypertext” and made it more attractive to digital poets, animators, and game developers.
However, I don’t believe anyone would argue that hypertext literature doesn’t at least fall under the umbrella of “electronic literature,” even if she doesn’t believe the two terms are synonyms. To defend Digital Poetics—influential as it was—as the first book on electronic literature implies either that (1) hypertext literature isn’t eLit or that (2) eLit didn’t exist before we gave it that particular name and brand.
In response to this criticism, Emerson added some comments on institutionalization:
That said, I do think there’s a lively discussion to be had about the potential drawbacks to institutionalization – about how e-literature is in the unusual position of coming into being at the exact moment that critics, all of whom are contemporaneous to the writers themselves, are attempting to define and delineate the field. There must be something to the fact that we, critics, may be over-determining the field at the same time as we’re helping to support and give shape to it.
A lively discussion indeed; not only does this institutionalization make the field more intensely political and personal than it already was, it shapes the scope of research and creative endeavors to fit within this political framework. In this environment we, as a community, must be vigilant in making sure that history is recorded correctly and that politics don’t overshadow the work.
Ford Madox Brown, Work (detail), Manchester Art Gallery
Lori Emerson tweets, “Hypertext is the new realism,” a reflection that deserves some thought. Hypertext has long been associated with postmodernism, and occasionally with modernism, but realism has never really crossed my mind. It’s an interesting claim, one that probably needs clarification in a field of mixed ontologies and contested vocabularies.
First, what do we mean by “hypertext”? HTLit (and our sponsor Eastgate Systems) takes a broad view of “hypertext,” encompassing any kind of linked structure or interactive work. Links don’t have to be underlined blue bits of text; they are evident through a reaction within the work triggered not just by clicking, but by any interaction the author specifies should cause something to happen. However, the idea of “hypertext fiction” put forth by N. Katherine Hayles in Electronic Literature is probably more widely used within the field. She breaks electronic literature into “genres,” distinguishing “hypertext fiction”—for its lack of sound, image, and video—as the “first generation” or “classical” works (echoing Coover’s Golden Age), thereby limiting the scope of what “hypertext literature” actually includes.
By “realism” I assume we’re talking about the 19th century artistic movement that focused on capturing an objective truth. But how is hypertext (either as a subset of eLit or as an encompassing entity) realism or something like it?
If we limit the scope of hypertext to Hayles’ definition, many of these pieces were direct in their representation of the structure of the work. Readers could see the entirety of a work’s structure more clearly than is possible in interactive fiction, for example. However, this is not true for all of them, especially once we started viewing hypertext work on the Web. Likewise, if we’re interested in the visual aesthetics of hypertext pages and their stripped-down appearance, surely IF is a closer fit, especially works like Dan Shiovitz’s Bad Machine, though arguably their awareness of this fact puts them more in line with modernism or postmodernism.
In visual art, the practice of realism was influenced by the camera, a revolutionary technology affording new artistic possibilities and in turn influencing aesthetic sensibilities. Cameras could produce images that seemed to capture the objective truth. Perhaps like the camera, hypertext has not only acted as a new technology that allows for innovative forms of electronic literature, but has also changed our sense of what makes a work of eLit good. Surely, though, every artistic movement has faced similar changes in critical practice, and technology has always shaped art.
If we limit the scope of “realism” to literature, the search for the objective truth manifested as a focus on gritty, realistic depictions of life and hardship. Hypertext is perhaps analogous, as it reflects intricacies of connectivity that have become more and more foregrounded in 21st-century life. If modernism’s stream of consciousness is a romantic impression of the mind at work, perhaps hypertext’s fragmented and sporadic paths and its dead ends are the reality of the contemporary experience, a gritty representation to overtake that romantic ideal.
We might have come to a new objective truth here, except that many of these works actually deemphasize the idea of an objective reality. Instead, they highlight the fact that truth, meaning, and interpretation of experience are created by context; the reader imposes them onto the artistic vessel. Although hypertext may seek to capture a “realistic” view of connectivity and intertwingledness, its heart still lies with postmodernist relativism.
HTLit spent the last couple of weeks in a whirlwind of travel and conferences, but we have finally returned with much to report. There’s a lot of great work going on out there, and the conferences led to many new ideas and fruitful discussions.
Traveling directly from Hypertext 2011 in Eindhoven to Web Science ’11 in Koblenz, though demanding, provided a solid couple of weeks of stimulating ideas. There are several good writeups of the conferences: Clare Hooper gives an interesting impression of the overlaps between the two conferences, David de Roure gives a good introduction to Web Science as a discipline, and Jean-Rémy Duboc offers great observations as well.
The main thing we realized at both conferences is that there’s a lot of interest in computers and narrative (whatever we’re calling it). Several groups need to be talking to each other but are only peripherally aware of the others’ existence. People want to make things happen but aren’t sure where to start. Over two weeks I heard murmurs of no fewer than 4 ideas for future meet-ups, unconferences, workshops, etc. that would focus more on discourse and collective creation than presentation. People want more discussion.
Blogs and Twitter will never replace a conversation over a coffee or glass of wine, but we, as a community, could be doing more to foster discussion in online environments. I’m not talking about building another directory or repository for work. Even just linking to each other, discussing each other’s ideas, and using the familiar hashtags to have better conversations would be a start.
With the buzz of tweets from the Electronic Poetry Festival (#epf11), the #elit hashtag has seen more attention today than it has in several weeks. Much of today’s discussion has focused on the representation of women within the field of eLit, with some prominent women arguing that women are not recognized sufficiently within the field.
The problem here, I think, is a limitation of shared language. Saying “eLit underrepresents women” is not necessarily true if your definition of eLit is narrative-based digital works. If, however, you expand your definition to include other fields that the narrativist is not considering, it might be. Women are certainly underrepresented in the computer sciences in general, but let’s not be too quick to make a claim without proper qualification.
On the other side of the debate, being able to list several names and even to say that the top writer or researcher in the field is female doesn’t necessarily make the field gender-balanced. There can still be an underrepresentation of women even if the women you have are extraordinary.
The danger with a claim like this is that, without proper qualification, a generalization makes its way into the collective consciousness of a group. Perception is reality; what someone perceives to be true is necessarily the truth to that person (or group). This becomes dangerous, as women who are aware of a stereotype might feel immense pressure that only results from that consciousness. From personal experience, I found myself afraid to ask questions and participate in male-heavy programming classes lest I reinforce a stereotype that women don’t understand programming. We don’t want to instill the same anxieties in our young female eLit writers.
However, if there are inequalities, ignoring them will not make them go away. In this case, we must properly identify if and where the problem exists. The most productive way to do this seems to be to identify a benchmark and assess how we as a field are measuring up to it. We need to figure out what we’re trying to achieve before we can discuss whether we have or have not achieved it.
I understand this from personal experience. As a computer engineering student, I was one of a handful of girls in an auditorium of a couple hundred. I never asked questions. I felt that I needed to prove that I belonged there. My sense that the male teacher and students believed I wasn’t as good as they were was reinforced by a particularly embarrassing session with an impatient male teaching assistant, the first and only time I sought after-class help. Asking questions was a sign of weakness that I wasn’t willing to show in what seemed like hostile territory. I would have felt much more comfortable asking questions of a female professor or TA.
These findings don’t seem like rocket science to me. Of course the five female students surrounded by male students and teachers probably feel out of place. The same is probably true of all the minorities in the class. Of course they would feel better if they didn’t feel judged by the people in charge as well as their peers. Has nobody ever thought to ask them?
The Online Education Database recently published a list of the 50 best blogs for humanities scholarship. Most of the literature blogs listed are, in fact, the literature sections of online newspapers and magazines. However, these do serve as good springboards to other sources, and there are a few independent literary blogs listed.
There are also several very interesting weblogs for other disciplines, including the philosophy blog Think Tonk, Games With Words – a linguistics blog concerned with the connection between words and cognition – and the edgy art blog Juxtapoz.
The Association for Computers and the Humanities asks “What’s the difference between Digital Humanities and New Media?” It’s an interesting question, but one that perhaps cannot be answered. Not only do academics and professionals use the same terminology to mean wildly different things, but even the academic disciplines can’t seem to agree on what constitutes “new media” or, for that matter, “literature.”
We may be witnessing the process of disciplinary formation, but perhaps we’ve merely stumbled upon a turf war. An interesting discussion is unfolding in the comments, and the limitations of our nomenclature are a problem we should not ignore.
This might appear to be just another expression of vaguely Marxist sentimentality so common in criticism that’s intended to impress without meaning much, but (as usual) Moulthrop’s doing real work here.
The central argument calls for a shift from simple transmission (writing) toward a literacy of potentiality (programming), and for grasping the difference between content (which Moulthrop assumes to require withholding) and data (which becomes itself by being given out to your colleagues). Perhaps for content we might instead read “melodrama” – the canned narratives of Hollywood that keep us amused, contrasted to the things we create ourselves.
Jill Walker posts an interesting YouTube video that crosses several levels of diegesis. The beginning of the narrative features something of a CYOA-style interaction, but after a choice is made, the character breaks the fourth wall and the video allows the viewer to type in how she would like the action to continue.
I had never seen this sort of prompt in an interactive video, presumably because this sort of interaction is expensive. I was very impressed to find that not only did the parser recognize my request (for the hunter to tickle the bear!), but that a short clip had been prepared for just such a request. Still, because the parser is not perfect and lacks a “we didn’t understand your input” response, the viewer will often see puzzling, unrelated clips.
Nonetheless, the video is an impressive bit of interactive narrative.
It’s no secret that kids are practically born with iPhones in their hands and are tweeting by the age of 2. My 15-year-old sister can text message at twice my speed.
Talking with Greg Ulmer about teaching electronic literature, I asked what he thought was the most important thing to learn before starting to read new media. We’re used to thinking of technology as the source of novelty and the obstacle to comprehension, but understanding technology might not be for them the most pressing issue. His response was interesting:
“Technology is only one piece of what is a complex or compound aesthetic informing hypermedia… Makers of hypermedia works may come from a variety of different backgrounds, that within literacy were isolated from one another: literature, fine arts, design, computer science, among others…In addition there is the context of multimedia, with the convergence of narrative, photography, music, performance in cinema. The digital convergence of media and convergence of forms and convention have not yet been matched by convergence of study in education.”
Having studied under Ulmer, I realized how true this is. We had talked in the past about the need for student participation, but how were they to understand the form if they didn’t understand the transition to that form from other media? The “Internet Literature” course I took focused on graphic design as a supplement to literary aesthetics and principles. The graphic design knowledge added another dimension of understanding that the narratological approach other new media courses had offered didn’t cover.
Tunxis Community College presents its New Media Communication program, an interdisciplinary program that awards an Associate in Science degree. Students use hands-on new media and hypertext creation practices to engage three potential research areas: digital storytelling and interactive narrative; games and simulations; social media and new media culture.
Rutgers Assistant Professor Christina Dunbar-Hester shares the syllabus for her PhD-level course on technology and new media. The course offers an introduction to the idea of technology and how it shapes society. Posting her syllabus online and explaining how each reading and assignment relates to the next gives the course a narrative framework and allows other instructors to learn from the way she structures the course.
Janneke Adema offers a trip report from a recent roundtable meeting at Kingston University about the Future of Electronic Literature, focusing on a keynote talk by Jay David Bolter.
For a new media theorist, Adema seems strangely fond of the ghastly “snapshots” link annotations with their obnoxious pop-up thumbnails. A reader who is interested in “literature in the era of social and locative media” already knows what Wikipedia looks like.
“What unites creative practitioners and researchers,” Adema argues, “is their exploration of the word and the abstract character of language and its materiality in different media in an experimental practice. The main question remains: why isn’t this work part of a more mainstream platform?”
The question answers itself: experimental practice in different media is, by definition, outside the main stream. If it were part of a more mainstream platform — the stuff that Pepsi does, the stuff my 13-year-old niece does — it wouldn’t unite creative practitioners and researchers in an experimental practice.
Bolter’s key concern here, in Adema’s account, is the centrality of literature in the humanities. At the Future of Digital Studies last Winter, he memorably asked whether, in the future University, the English department will decline to the place Art History occupies today.
Wojcicki cites an Internet analyst who offers the opinion that the students “…are just wrong. … just plain wrong. They don’t know because they can’t even conceptualize what is coming.” The implication is that these devices will in fact revolutionize the textbook market, but the students are not able to understand that.
Students do understand their own work practices and can offer a better understanding of annotation practices. This understanding should lead to better reading and annotating software. However, the unnamed internet analyst is probably right too. Golovchinsky writes,
Electronic reading devices will only be successful if their designers pay attention to what students do with textbooks, and design tools to support and augment their work practices. If that is done well enough, then the analyst’s prediction will be correct; if, however, the hardware and software combination fails to support active reading well, then students will continue to reject the medium as inadequate to their task.
It’s important to remember that learning doesn’t only come from the text. Several of my courses had online or software textbooks. These “textbooks” were not only texts, but also practice quizzes, linked references, and platforms to connect with other students. If these activities are incorporated into textbooks for platforms like the iPad (perhaps as apps, perhaps as part of a larger program for handling works like this), the analyst’s prediction will be true, regardless of the annotation capabilities of the software.
Of course, this isn’t an excuse for the lack of decent active reading programs available, but being able to highlight or mark up a page is not the only thing keeping students from accepting eBooks. The real issue is that eBooks are still trying to limit themselves to imitating paper.
Kirschenbaum argues that requiring students to learn programming provides the same benefits as learning a foreign language: the ability to analyze and question existing translations without being dependent upon a text which is one level removed from its author’s creation. Especially for scholars of digital texts, understanding the mechanisms that drive the art surely provides a deeper understanding and closer analysis.
Such an education is essential if we are to cultivate critically informed citizens — not just because computers offer new worlds to explore, but because they offer endless vistas in which to see our own world reflected.
In preparation for the 2010 Computers and Writing Conference, Ryan Trauman put together some interesting thoughts on mentorship as it relates to scholarship.
There are plenty of aspects of working productively within our discipline that have nothing to do with scholarship. How do you know when it’s time to take a break? What do you do when you’re at the end of your rope? What are the dangers of dating someone within the discipline? Are there any shortcuts on the way to tenure? … And these sorts of questions are really just the general questions. Anyone might be interested in these conversations, and they really are most productively engaged with a mentor who is honest and trusts whoever might be listening.
Trauman also examines the role of intimacy in mentorship and concludes that perhaps it's a combination of obscurity and privacy—even when the conversations are not particularly secret—that is important to fostering a mentorship.
Still the question remains. What does it matter that Larkin sneered in his letters and conversation (fearfully and fretfully, it seems to me) about foreigners and women, that Naipaul made selfish use of people from the beginning of his life, and no doubt continues to do so now? What does it matter that Dickens knew what it was like to be dependent and abandoned as a boy, but made sure that his wife would suffer the same fate? It is this. The weakness of character of Dickens, Larkin and Naipaul comes from the same source that drives their art.
The Digital Fiction International Network is a group of scholars dedicated to a more consistent approach to the study of digital fiction. The group hopes to define digital fiction in terms that are “accessible to the wider research community (e.g. stylistics, narratology, literary theory, media studies).”
Chief Investigator Alice Bell writes,
The co-investigator, Astrid Ensslin, and myself noted that the study of digital fiction had undergone a significant paradigm shift from a 'first-wave' of theoretical debate to a 'second-wave' of stylistic, narratological and semiotic analysis. While there was some important work going on across the world, we felt that there was a need to draw people together. Principally, therefore, we wanted to foster a collaborative international network of academics working on similar projects. Secondly, we had a methodological motivation. We wanted to define an area in digital fiction research which was devoted to methodological transparency. Each member of the DFIN takes a slightly different approach to digital fiction, but we all analyse digital fiction using a transparent and replicable methodology.
Members of the DFIN have recently published a manifesto in the Electronic Book Review, though they are reluctant to call it that:
A manifesto is political; a mission statement is corporate. We deleted these concepts from our discussion. We were not etching a scroll of aesthetic edicts for the digital ages. We were not carving a credo in the sense of monologic truth or certainty, of liberation, of democratization, of salvation, of renewal. […]
What emerged was a creed for the screen. In a word, a [s]creed.
Their [s]creed addresses their approach to research and analysis, promising a “body of exemplary analyses of digital fiction.”
So what was I doing? My born-digital thesis was not a scholarly resource: I wasn't and never intended to present or curate a collection of digital artifacts for others to browse. My work was critical and individualistic, conscious of its methodology and historical moment. It strove for self-awareness. In this respect, it had more in common with the essays on Kairos than with the work of NINES; yet it never emerged from the disciplines of rhetoric and composition. I was more interested in challenging notions of "old media" literacies, or even "literacy" itself, than exploring those of "new media."
I was positioning my work as Digital Humanities, but Digital Humanities didn't really want to claim it.
I have noticed a tendency in digital humanities to focus on the archive. This fits nicely with disciplinary convention, but digital content delivery may change scholarship profoundly. At times, the humanities need a reminder that born-digital works have a place in their digital collections.
The iPad has been heralded as the savior of newspaper and magazine publishing, but much of the hype has been focused on consumption of media, not creation. Indeed, the input methods for generating substantial quantities of text on the go seem cumbersome.
Tablet computing pioneer Gene Golovchinsky examines the tablet computers of 1999, which were heavily dependent on a stylus, and compares them with the iPad’s touch controls. He concludes that the stylus is superior for applications that simulate digital ink, while the fingers are great for manipulating data once it exists.
For writing on the go, having keyboard input (external and on-screen), stylus input, and touch input would be ideal. I can imagine a word processor that would let the user type in text with the keyboard, edit with a stylus, and highlight, cut, or move text (as well as insert links and other media) with the fingers.
The closest I have seen to this ideal is the demo of SketchNotes that Golovchinsky references, but as he notes, the motor control over the touch-input markup is noticeably inferior to that of the stylus.
Responding to a recent Seton Hill initiative to give every student an iPad, he also reports that the iPad is not capable of multi-tasking—like running reading and writing programs at the same time—so the iPad may prove to be less desirable than a laptop for many students. (Of course, most students will have a laptop too.)
HTLit is back at home from the Future of Digital Studies Conference at the University of Florida. In addition to an impressive list of invited speakers, the organizers were able to bring together several of the field’s most prominent scholars through teleconferencing.
This final session was fraught with technical failure, but as several conference attendees pointed out, if there was a crowd in all of academe that could appreciate and analyze this failure, it was digital humanists. Mark Bernstein tweeted to ask whether this was failure or just “excessively ergodic” interaction. After the session, I had a lovely talk with Brian Greenspan discussing how Rita Raley’s digital disfiguration was like an uncanny bit of art—her face blurred beyond the point of being humanly identifiable, leaving only the clear image of her eyes floating above the pixelated canvas where her face should be.
Though the video sessions certainly had their difficulties, the malfunctions were more of a launching point for interesting discussion than actual failures. As with the rest of the conference, there were many interesting ideas introduced and discussed.
Over the next few days, HTLit will be reporting from Future of Digital Studies 2010 at the University of Florida. Mauro Carassai, a graduate student at UF, has organized an event which brings together an impressively strong program, including some of hypertext’s most esteemed authors and critics.
Mark Bernstein mentions his upcoming talk on NeoVictorian New Media and the problems with criticism and promises more information to come.
Norwegian PhD fellowships are renowned for paying as well as a normal job rather than exploiting graduate students: The fellowships are 100% positions with standard Norwegian health, social security and pension benefits (including, say, parental leave, a topic near to my heart these days) and they pay 355,400 kroner (US $55,000/€40,000) a year. You’re an employee, not a student, which gives you far better rights than a student has. You’ll have some travel/research funding assigned to you automatically - I think about 20,000 kroner ($3000/€2200) a year - and the opportunity to apply for more. These are three-year fellowships, where you do about one semester’s worth of coursework (attending conferences and seminars and writing a paper or two) and the rest of the time is reserved for dissertation research and writing. They’re open to applicants from anywhere in the world. You are required to have an MA in a relevant discipline, with a final grade of A (preferred) or B (acceptable if your dissertation proposal is excellent), or equivalent.
The deadline for applications is January 31. Prof. Walker’s recent lecture for Wikipedia Academy Bergen, “Has Wikipedia Grown Up”, is online.
The collection also includes Larsen’s many early computers. MITH will be opening the collection to scholars on a limited basis. Researchers interested in visiting Maryland to work with the Larsen materials on site should write to firstname.lastname@example.org.
In my first undergraduate semester at UF, a business professor asked a packed auditorium of students to look to their left and right, laughing that one of those students would drop out or fail that year, and that likely neither would graduate. He was probably right.
The Chronicle of Higher Education consulted a “panel of experts” on whether too many students are attending college. The consensus seemed to be, “Yes.” This oversimplifies things a bit, but there are certainly good arguments to be made on both sides of the debate. On one hand, the number of students applying to college right out of high school is very high, and many of those students will fail or drop out. This wastes both the students’ time and money and deprives the work force of capable resources while perhaps soaking up tax dollars in the process. On the other hand, American culture greatly values the idea of not restricting opportunity based on socioeconomic class, and not encouraging students to attend college limits the opportunities of the under-privileged.
Whichever side of the debate one takes, the panel seems to focus entirely on return on investment, a metric of which many of the article’s commenters disapprove.
We’re working on extending these wiki resources. We'd love help! Send your suggestions to email@example.com. And if you'd like to contribute a sentence or two to point out interesting or unique hypertextual features of any of the sources already listed, that would be terrific.
Want to help paint the fence? We’re restricting write access to keep out the cranks and spammers, but send us an email and we’ll tell you more.
My path through college was neither straight nor narrow. I switched majors a couple of times—a practice strongly discouraged at the University of Florida—from accounting to computer engineering to English. Each switch felt like an upgrade. I didn’t take my first business class on a campus until my third semester, since classes were mostly online or broadcast via cable. I never met any of my professors in person until I switched to computer engineering, but even then all of my classes were in an auditorium of hundreds.
Then I switched to English.
In English, classes held no more than 30 students. I even got into one department seminar which was limited to ten. I was shocked during my first semester as an English major when I was slow to pack up my stuff after class one day, and the professor started asking me questions about what I had thought about the lecture and even asked questions about myself. As a person! Professors knew my name, and could remember it a couple of semesters after I took their courses. This was what college was supposed to be.
At the time, I thought smaller classes and incredible professors were just a testament to the English department’s ability to schedule a sensible number of classes per student. Perhaps it was also because discussion is more important to the humanities than to calculus or financial accounting. However, as I started noticing the same students in most of my classes, I realized that there were simply fewer of us than there were Business majors.
William M. Chace calls this “The Decline of the English Department” and explains why there are so many business students and so few liberal arts students. He recalls the boom of the humanities in the 1950s:
“Finding pleasure in such reading, and indeed in majoring in English, was a declaration at the time that education was not at all about getting a job or securing one’s future. In comparison with the pre-professional ambitions that dominate the lives of American undergraduates today, the psychological condition of students of the time was defined by self-reflection, innocence, and a casual irresponsibility about what was coming next.”
Chace believes that the cost of education, the comparative youth of English as a discipline, the lack of external grants and sources of income compared to the sciences, and the lack of definition as a discipline have all contributed to the fall of the English department.
And, indeed, there is a prejudice against “soft” degrees. My parents were furious when I decided to abandon a stable future as a programmer to pursue English. Luckily, that programming background has served me well in the pursuit of electronic literature, and these days I’m proud that I ended up with an English degree.
Sometimes, new findings appear that make one wonder how anyone could have believed otherwise. Research on hypertext fiction has revealed students’ anxiety and apprehensiveness, but Hans K. Rustad of Hedmark University College in Norway believes that this aversion demonstrates only that they are unfamiliar with the form, not that the form itself is flawed. His essay argues for an understanding of hypertext from four different reading approaches: semantic orientation of reading, gaining experience, self-reflection, and absorption.
His essay could prove to be key in unlocking some of the mysteries of teaching hypertext, and it refutes the belief that students inherently find hypertext reading difficult and overwhelming. Indeed, how heavily can we rely, when thinking about the future of literature, on observations of subjects who have never before read much literary hypertext?
Golovchinsky mentions the ever-present copyright issues and the durability of the files, but his main concern is the ways in which eBooks are inferior to print books: they cannot, for example, show color or images. He disputes the notion that the book is an obsolete tool.
One reason eBooks lose the drag race with paper is that they don’t yet take much advantage of hypertextuality. The ebook simulates paper. It is produced in advance, linear and rigidly structured, and one organization must fit everyone. It must be read in a particular order, and it can’t be added to once it’s complete. Perhaps this comforts the completionists, but it’s not the way information works.
“Whether the Google books settlement passes muster with the U.S. District Court and the Justice Department, Google's book search is clearly on track to becoming the world's largest digital library. No less important, it is also almost certain to be the last one.”
The article points out errors in the scanned books’ metadata including category listings, publication dates and even titles. Google has already faced criticism for the legal and ethical aspects of creating a digital library, but this is the first article I’ve seen that addresses the quality of the project’s execution. Thanks, TiltFactor.
Mark Sample of George Mason University speculates at netpoetic.com about “Teaching Electronic Literature as a Foreign Land”. (An earlier version of the essay appeared on his blog .) The essay asks, “would the same process by which a stranger in a strange land grows accustomed to foreignness and even appreciates and incorporates cultural difference into his or her own life — could that process apply to e-lit?”
The essay discusses how the six-stage model of intercultural sensitivity, designed by Milton J. Bennett, seems to apply to his students’ reactions to their first encounter with electronic literature.
Student reactions to specific hypertexts are often surprising. In Reading Hypertext, Michael Joyce mentions that students often seem to dislike Mary-Kim Arnold’s masterful “Lust”, though Rich Higgason’s study of “Lust” certainly demonstrated that students have plenty of opinions on the work.
Roger C. Eddy is a practicing psychoanalyst/psychiatrist who teaches psychotherapy to M.D.s using Tinderbox. In his recent post to the Tinderbox forum, Eddy describes how he uses the map view to teach theories introduced in Mind Over Machine by Hubert and Stuart Dreyfus. He writes:
“the visual guide allows a quicker grasp for the students of the rather complex arguments of the book. In other words they get the "gestalt" and then can nibble away at the author's examples.”
Eddy also uses the map view to create hypertexts with his students, building the map together over the course of several lectures. He believes that “the students have found it useful.”
Exploring the idea of students learning by creating hypertexts, I asked my former professor Greg Ulmer how he came up with the idea of the assignment that he calls the “learning screen” in which students collaborate to understand hypertexts and then create a website to share their ideas. He explained:
The learning screen is for networked classrooms what the research paper is for book classrooms. The context of my pedagogy is my research goal of helping to invent “electracy.” Electracy is to digital technology what literacy is to alphabetic technology: an apparatus that includes not only equipment but institution formation and related practices, and identity experience, individual and collective. The methodologies used in the research paper were invented by the Classical Greeks, as part of their creation of a new institution, school. Plato’s Academy is the first school as we know it. Rhetoric, logic, poetics came into being in the context of figuring out what to do with writing. The equivalent today is to figure out what to do with hypermedia. In fact, the practices of electracy are being invented within Entertainment as an institution. We are in a period of transition, from one apparatus to another.
The challenge for school is how to bootstrap into electracy, what sort of relation we are going to have with Entertainment. Hopefully it will be more productive than was the long struggle between religion and science (science being the practice created within school and the alphabetic apparatus in general). When students come to my class they are familiar with both School and Entertainment. They have internalized both the skillset of literacy and the attitude of electracy. The difficulty is that the skillset of electracy has not yet been developed, neither at the level of equipment nor of logic. The learning screen is an assignment in the spirit of bootstrapping, transitional towards electracy.
I developed a method of invention called “heuretics,” a term that is in the O.E.D. but listed as “obsolete” or “rare.” It is derived from the same root as “Eureka” and “heuristics,” and is paired with “hermeneutics.” “Hermeneutics” is the use of theory to interpret existing works. “Heuretics” is the use of theory to invent new forms and practices. One of the heuristic devices of heuretics is to invent by contrast. The research paper is the Contrast for my pedagogy.
The character of our hypermedia practice is already outlined in principle as the opposite of the research paper. The very name of the pedagogy is generated in this way. We do not do “research,” but still “learn.” We work not with “paper” but “screen.” This way of generating the practice continues by noting some of the key features of a “paper,” listed in the handbooks used to teach literacy. “Papers” are supposed to be objective, third person, persuasive to others and the like. “Screens” are subjective, first person, persuasive to the author (reflexive). Part of the learning experience of the learning screen is due to the fact that students are familiar with the paper. Composing a learning screen should have the added value of making salient the differences between the two styles of education: learning screens are not better than research papers, but are specific to the apparatus of electracy.
So in order for students to exist in the world of hypertext, they must be able to not only read it, but write it as well. For this reason, many professors are assigning hypertext projects to students. Steve Ersinghaus writes,
At Tunxis Community College we do a lot with hypertext. At the moment we have two active courses, New Media Perspectives and Digital Narrative. Linking techniques are important to these courses and we use Tinderbox as the key tool. In NMP, students acquire basic understanding of Tinderbox and link techniques and aesthetics and in DN, they produce a more extensive project with the software. In both courses students read and study existing web hypertexts. In our New Media Communication program, students will use Tinderbox for note taking, project organization, and for producing complex hypertexts and spatial structures, with code manipulation and attribute play.
In researching the ways in which hypertext is taught, I noticed that although more and more programs are offering new media courses, many instructors tend to focus on a limited number of fictional pieces, especially Patchwork Girl by Shelley Jackson, and afternoon, a story by Michael Joyce. Curious as to how teachers were selecting their pieces, I started asking around. Greg Ulmer of the University of Florida explained how he arrived at focusing his curriculum on the Electronic Literature Collection Vol. One:
When I first started teaching Internet Literature my approach to choosing the examples was to let the students create an anthology collaboratively. They were to browse the Internet, using some starting points I provided… The results were unsatisfactory, because students insisted on selecting Google, Amazon, eBay, YouTube (not individual videos but the site as a whole) encyclopedic information sites, various production tools or widgets for drawing, and the like. These were literature majors, and they agreed that print dictionaries, how-to books, catalogs and the like were not literature. But when similar supports or services were delivered digitally, students considered them to be "art" or at least representations of "new media". The next step was to use a list recommended by Christy Dena.
This list served as the "measure" of what was acceptable, but I still allowed browsing, with the idea that there was no "canon" of E-lit. When Hayles et al published the E-lit anthology, I welcomed it as a short-cut solution. The anthology is recognizable as being the equivalent of print anthologies with which students are familiar. It is convenient in providing a representative sample of a manageable size.
by Mark Wernham. Machine #69 recalls Ryman’s 253, and especially Bob Arellano’s Sunshine ’69 both in its embrace of arbitrary connection and its fond nostalgia for the era when cheap booze, good drugs, fast cars and hot guns seemed to offer everything worth wanting and when nothing was worth wanting very much.
A new hyperromance for the Web. Sparsely linked, La Farge’s new hypertext nods at Stephanie Strickland’s design and to Michael Joyce’s direct address to the reader, but brings a new voice and sensibility to Web fiction.
Multimedia notes from underground, where a traumatized girl furnishes a cozy space in an underground tunnel. Script by Lynda Williams, music and code by Andy Campbell and Matthew Wright. A web work that’s especially nice on the iPad. (The floor lamp is a nice allusion. Get it?)