Wednesday, July 4, 2012

“Dear lost companions of my tuneful art”: Gamer Culture and A Life on Video (Part II-At Home and Abroad)

WYSIWYG
One of the ways Nintendo was able to rebuild the viability of home video games after Atari imploded and took the whole industry with it in 1983 was to reconceptualize video games as children's toys. Looking back from our vantage point, this may seem a bizarre and radical shift, but it was a very deliberate and tangible change in direction, and one with lasting repercussions. Nintendo had been expressly warned that launching a new game console in the wake of Atari guaranteed failure, as Atari's collapse had tainted the entire industry. Therefore, Nintendo consciously marketed the NES in the US as a hot new toy for kids in an attempt to avoid comparisons with the failed Atari 2600 and with the personal computers that were starting to dominate the consumer electronics market. Afterward, once it was a reasonable assumption that the majority of youngsters had access to an NES, it became very easy to build a huge merchandising empire on top of that fact. There was Nintendo breakfast cereal, Nintendo bedsheets, Nintendo clothes, Nintendo action figures, Nintendo stuffed animals, Nintendo sports equipment (*yes*) and even Nintendo Saturday Morning Cartoon shows (they were all crap, in case you're wondering, but this really isn't the place to talk about Saturday Morning Cartoon shows in any detail). But that's far from all that was going on with NOA and the NES; the clever idea was this: once parents bought their kids Nintendo consoles and games for the holidays, Nintendo could subtly start showing off the machine's true potential and capabilities. It worked, and worked amazingly well.

The conflation of games with toys that really started with the NES may be where some of the stigma gamers claim to experience originated, as it could be said they never “outgrew” their “toys”, but it's telling that no similar strategy was used in Japan, where the machine was simply dubbed the Family Computer and even had a floppy disk drive and a planned rudimentary Internet-like system. It's also crucial to note that Shigeru Miyamoto's early arcade games, Donkey Kong and Mario Bros., were expressly not designed to be solely kids' fare, meant as they were to be played in bars. But of course we know better: NOA's little scheme to sneak the NES into households under the radar was absolute genius, and it was a common occurrence in many homes for parents to put their kids to bed after an evening of watching them play Mario and then, after the kids were asleep, to creep back downstairs and spend the rest of the night playing Nintendo themselves. Many grandparents were enthralled by the new games too and would discuss them just as earnestly and fervently with each other as their grandchildren did.

This was of course Nintendo's goal from the start: To make games anyone could enjoy, appreciate and share. This is an intellectual tradition of theirs that dates back to Donkey Kong, the Game and Watch series and their skeet shooting ranges. I surely need not remind my readers their first console was called the Family Computer. This is why I'm so continually baffled by people who bemoan the Wii and its abandonment of the “hardcore” to court the “casual, social crowd”. As if Nintendo had ever catered exclusively to the “hardcore gamers” (a term of self-definition I personally find incredibly intellectually and historiographically lacking). Nintendo's games have always been for anyone who possessed a child's unconditional love of life, no matter their biological or calendar age. No-one is entitled to a monopoly on them.

In any case, given the Nintendo saturation that was prevalent in the late '80s and early '90s, and even after Nintendo of America made a concerted effort to make video games synonymous with kids' stuff, which the medium absolutely had never been before 1985 or so, I can say from my experience at least that neither I nor anyone I knew was ever persecuted or bullied for liking video games. It was just something kids did: You talked about Nintendo on the playground, went home, ran around outside for a bit, played some video games and watched TV. If it was the summer, you'd split your time between traipsing around forests building forts, playing kick-the-can and eating lunch with the Nintendo. To be perfectly honest and fair, I was homeschooled for a lot of my childhood, so interaction with friends and the use of the playground as a social centre were often limited for me personally, though I did witness it and had comparable experiences with the friends I did make and members of my own family. In a previous post I've already recounted my memory of playing Super Mario Bros. at my cousins' house: Playing video games together with them was a regular occurrence, either at our place or theirs, usually after coming in from a day of adventures outdoors and before dinner (and sometimes after as well). Those evenings spent playing local multiplayer on the NES or passing a Game Boy back and forth with my cousins are some of my most treasured memories.

As I got older and travelled elsewhere, home video game consoles continued to be a source of social bonding for me. For a time in my teenage years I attended a school where Olympic-bound athletes would take their lessons during their training period. The campus during that time was a weird confluence of extremely determined professional athletes, serious academic-minded scholars, artists and conservationists. It was an utterly unique and valuable experience and I even got to train with the local snowboard team for a bit while I was there. Video games were a big part of my social life there too, as the other trainees and I would often hang out in the lounge after school taking turns on the communal NES and various SEGA consoles (there really were quite a lot of those, weren't there? And all out at the same time too). One of the at once coolest and strangest things I remember happening was hanging around the lounge alone after hours one evening playing Super Mario Bros. and having my science teacher, a charming young lady fresh out of college who split her time between teaching at the school and working as a tour guide and ecologist on the summit of the tallest mountain in the state, walk in and, seeing what I was playing, excitedly sit down next to me and jump into the game with me. She even taught me some of her signature tricks for navigating the underground levels. It was awesome.

In every new place I've gone, games have traditionally been a kind of glue holding my various social connections together. From joining my whole dorm in an impromptu Halo: Combat Evolved tournament on the lounge Xbox, to throwing LAN parties with my suite-mates over a local network we jury-rigged together for us and us alone, to late-night Super Smash Bros. jam sessions with the guys from the common house, video games have continued to accompany some of the best memories I've shared with my friends and family. And, just as always, everybody played: Girls, boys, athletes, alpine tundra ecologists, creative writing students, artists, frat guys, humanities scholars and astronomers. There was never anything strange or unusual about that: Video games were a universally shared cultural experience, just like any other kind of media, hobby or activity. I'd never thought of them otherwise.

Because of this, I've been presented with quite a few large and confounding philosophical problems these last few years. Things like “When did video games ever fall out of the 'mainstream'?”, “Where did the self-professed gamers come from, and why are they apparently so horribly persecuted?”, “Why does playing video games constitute a subcultural lifestyle any more than watching movies, reading books or listening to music does?” and perhaps most annoyingly “Where are all these big mean jocks who are supposed to bully me for playing video games? I mean, I went to a jock school for chrissakes: If anything qualifies for that label it's a school for Olympic athletes in training. The only jocks I knew were the ones playing Nintendo with me in the lounge. Does that make me a big mean jock too? Do gamers just watch too many John Hughes movies?”.

In spite of the attitude my above glibness might imply, I'm not intending any of this to belittle anyone's personal experiences with video games as a medium or in any way claim their positionality is imagined or fallacious or that mine is more valid. What I am saying is that my positionality is radically different from theirs, to the point of complete incongruity, and I don't understand why. There certainly does seem to have been a shift in the way video games were generally perceived and a clear-cut turn to a presumption that the medium has always been underground and fighting for legitimacy; I'm just not sure when and why that happened or where it came from. I certainly remember some half-baked moral panics in the mid-1990s to early 2000s over games like Mortal Kombat, Grand Theft Auto III and some of id Software's early games, and that some especially obnoxious politicians were complaining games were corrupting the youth, but I honestly remember that never being something anybody really took too seriously (the Columbine school shooting aside, of course, but even that seemed to resolve itself rather quickly) and the sort of thing that happened to all young media, like rock music and witchcraft. There is of course the omnipresent existential nightmare of how video games handle narrative and how they compare to other forms of media in terms of artistic expression and value (goodness knows I've written enough on that already and I'm far from through yet), but that's a problem of self-reflection the industry brought on itself, not one imposed on it from the outside.

I'm also not saying bullying and bigotry aren't a problem for far too many people: In fact, the older I get the more convinced I am they're the defining aspect of Western society. I was picked on as a kid too, though not nearly to the same extent as others and nowhere near as much as I would have been had I not been an extremely private person, and only insofar as my elementary school was full of egomaniacal bastards who were horrible to everybody. In other words, of all the things I was made fun of for growing up, playing video games was not one of them. And while I never experienced any video game-driven harassment as a kid, or at any other age for that matter, this does segue nicely into my final point, which is this: if video games were never a harsh, exclusive, unwelcoming environment in the past, they bloody well are NOW, and most of the vitriol is coming from within the so-called “gamer” community and is being perpetuated by the industry itself.

While I am of course going to avoid generalizations and stereotypes, and know full well not all self-professed gamers are angry, bigoted people, there's no delicate way to put this: The games industry as it stands today is a damn scary place as far as I'm concerned and honestly makes me ashamed to be involved with it at times. The treatment of women is particularly unforgivable: Just this past year we've had competitors at a Capcom-sponsored Street Fighter tournament sexually harass a female player to the point she was forced to drop out, a sizable army of cyberbullies send incalculable amounts of appalling hate speech and threats of violence to a female producer running a crowdfunding campaign for a Web TV show about the Male Gaze in video games, and not one but TWO high-profile games journalists get fired for verbally abusing female personalities over the Internet in clearly sexualized ways. I'd go into these cases, and scores of others to boot, in more detail, but I'll instead link to this article, which summarises them all nicely.

It's not just from the gamers: Our industry thrives on sexualized violence, and why not if that's what its consumers want? I briefly mentioned the male domination subtext in the blockbuster hit Batman: Arkham City in the past and the questionable gender politics inherent in franchises like Call of Duty and Gears of War should hopefully require no elucidation. This year we even got a lovely trailer for the new Hitman game, Absolution, which took great pains to show protagonist Agent 47 brutally slaughtering a group of supermodel female assassins dressed in fetish nun gear. However, for my money there's no better example than the games proudly shown off to legions of jubilant games journalists at E3 2012. The Far Cry 3 demo began with a minigame where the player gets to grope a submissive female NPC's blocky breasts and proceeded into a horrific bloodbath of explicit and brutal violence. Assassin's Creed 3, the latest entry in a franchise already about creative ways to murder people, clearly revels in the new elegant ways to dispatch the many representations of living things contained within. New entries in the Splinter Cell and Tomb Raider series were no different, the latter even being an exciting reboot where pioneering video game leading lady Lara Croft gets a new violent and depraved backstory and gets threatened with rape at numerous points in the game to make her feel more “vulnerable” and “realistic” so players would want to “protect” her. Hell, even Watch Dogs, arguably the most interesting reveal at the show, has one or two fairly disturbing moments.

I'm not one to defend censorship or claim violence has no place in fiction, but there's a difference in the way the violence was treated here. If you must use overly violent content in your work, it ought to have a purpose, usually to underscore how serious and disturbing the setting has become. None of these games do any of that, and it seems more and more like gamers don't care. This violence is sexualized and celebrated in a way that alarms me (especially given this industry's obvious problems with women in general), and both the producers and critics seem on the whole fine with this. This editorial from GamesRadar adequately relates the sickening feeling and sense of impending dread I got sitting through this year's E3, and the problem is so pervasive and evident that both Warren Spector and Shigeru Miyamoto have publicly come out to condemn it. What this year has ultimately shown me is how out-of-touch the games industry has drifted from reality and I from it. Simply put, this medium is unrecognizable to me, and both the latest crop of games and the sentiment I'm getting from the “gamer culture” these past few years are most assuredly not why I fell in love with video games. Should the industry continue down this path, I'll eventually, and sooner rather than later, run out of positive things to say about it or really anything to say about it at all.

It seems to me, to quickly and unfairly psychoanalyse a ludicrously large swath of people, that these most vocal aspects of “gamer culture” seem terrified their Old Boys Club might actually be split open to allow other people entry, even (gasp) women. This is positively ludicrous. As I've spent the last obscenely long tract arguing, video games are, and always have been, for everybody. They are, at least in my experience, a unique and intimate way to connect people separated by distance or worldview, or even (and especially) people sharing the same room. If “gamers” continue to complain about being unfairly marginalized and this is the best they can come up with to argue their case, well then frankly they ought to be ignored. Any creative outlet or culture this insular and hateful has no right to any kind of voice at a public forum or really even to exist. I still may not know who these gamers are and where they came from, but to be perfectly honest if this is how they represent themselves I don't want to know, nor do I want anything to do with them, and I'll be damned if I let them selfishly hoard our shared cultural traditions and experiences. What I'd rather learn is what happened to the kind of communal spirit and attitude towards video games I remember from my childhood and I know for a fact existed thanks to the historical record. More to the point, I'd like to know if there's any way to ever get it back or if it too is forever doomed to be an artefact of history alongside my NES, the Pong cabinet at my local pizza parlour or the video arcades that used to be on every kid's street corner.

Monday, July 2, 2012

“Dear lost companions of my tuneful art”: Gamer Culture and A Life on Video (Part I-Arcade Memories)

Pictures of the past...
“Social game” is not a genre. It is a redundancy. All video games are, by definition, social. All video games are, in some form or another, fundamentally about interaction between two or more human agents, even single player games: The experience is simply shared between the player and the designers, not multiple players. As I have mentioned before, in terms of media, video games are most similar to theater and music, both of which also rely very heavily on aspects of performativity and agency. As this begins to segue into territory I'd like to cover sometime in the future, let's for now just say this is due to the nature of certain kinds of media and the way they utilise narrative (or the lack thereof). A more sweepingly general and relevant statement for the time being is that all art is fundamentally a social thing. I mean, why wouldn't it be, given that its entire purpose is to facilitate humans sharing very human ideas and experiences? No art, no game, exists in a vacuum and neither does any one player, artist or patron of the arts.

Anyone who has been paying the slightest bit of attention to this blog ought not to be surprised by my arriving at this conclusion, but I bring it up because of some observations I've made about video games, the industry that makes them and the kinds of people who play them. Some of them are very recent, some are trends that I have seen linger and evolve over the course of several decades, and not all of them are all that inspirational to talk about. This is going to be a slightly different series than is the norm here; a bit less academic and a bit more personal. I've always been unapologetically subjective on this site and in my writings in general, but this topic necessitates a bit more self-examination and reflection than usual, as explaining my positionality is rather crucial to the argument I'm going to try to make. Also because, frankly, I cannot begin to understand the positionality of some of the people involved in this industry; I would like to, and I tend to feel the best way to start coming to that kind of understanding is to articulate one's own perspective as meticulously and as clearly as possible. It is also more than likely going to come out a little tetchy and bitter. With that disclaimer, please allow me the rare luxury of diving into my life story, as it were.

It may come as a surprise to my readers, but one of the most confounding and baffling terms I've ever encountered in my travels is “gamer culture”. I mean, theoretically I understand what it means: a group of people with games, presumably (given the context) video games, as a commonality. As someone with a background in both sociocultural anthropology and the sociology of scientific knowledge (SSK) I'll be glad to debate the meaning of the term “culture” with anybody. But see, to me, a culture has to have more than an affinity for a specific kind of media or artistic expression in common. I have a hard time seeing how video games alone can provide the foundation of an entire societal structure. But I'm being willfully thick and obstinate: Of course “gamer culture” means more than just video games. Let's play along, to torture a metaphor, and try to discern a quick-and-dirty general conception of “gamers” given different cultural patterns and stereotypes I've been able to pick up through years of working in, studying and observing the games industry, and give a horrible name to my entire field by trying to cram this into the introductory section of a blog post instead of dedicating a 700-page ethnography to the subject:

“Gamer culture”, in its loosest and most superficial terms, can be defined as a group of socially marginalized (though curiously still, by and large, white, middle-class, straight and male) individuals who have been brought together in solidarity due to their communal appreciation of video games, Japanese animation, horror movies (especially super-gory slasher films), professional wrestling, heavy metal music, tabletop RPGs, computer science, camp cinema and an overwhelming dislike of physical activity, especially athletics. Another thing near and dear to the hearts of many gamers is a feeling of constant persecution and a strong desire to be validated by others, often explained by a lifetime, and especially a childhood, of being bullied or ignored for their interests. They claim their interests in general, and video games in particular, have never gained mainstream approval and that they have been forever shunned because of it. Those who claim to be a part of it also have a tendency to be socially awkward and express a dislike of unnecessary social interaction. I hasten to add I'm not trying to be intentionally mean, sarcastic or snarky here: This is exactly the way the vast majority of self-professed gamers I've met or read in my life describe themselves, and these shared truisms form the basis of a large amount of reflexive, self-deprecating humour.

Clearly I've simplified and stereotyped the situation quite a lot here, but I maintain there's a kind of truth in that last paragraph. Hang around enough gamers or read enough of what they write and I have a feeling you'll find similar motifs, undercurrents and trends. But this is the thing; the inevitable conclusion of the above train of thought: I play video games, many video games. They have indisputably changed and shaped my life. I write about them constantly and follow the industry incessantly. And I relate to absolutely nothing in what I just wrote in the paragraph above. I am not a “gamer”, nor have I ever been one.

My history and experience with the video game medium (and I clearly have had one of some kind) has been so radically different from that of those who call themselves gamers that I can't even really talk to them intelligently about it. What's more, “gamer culture” seems to be a relatively new phenomenon from my perspective, appearing on the scene for the first time in the last decade or so (although I freely posit the possibility I just never met any proper gamers before then). One of the biggest cognitive dissonances I seem to have with gamers is over the conception of “social games”. At the moment it's a relatively hot topic in gamer circles, though not nearly as much as it was a few years ago. According to the typical account, “social games” are a new subgenre of video games brought on by the popularity of the Nintendo Wii and its me-too console motion control imitators, as well as smartphones and social media like Facebook, designed to be more physical, more based on local human interaction, and to provide simpler, more accessible (though shallower) experiences than “traditional” games. This is a very large debate, with some gamers claiming it's good to get the medium more “mainstream” exposure and others worrying it's diluting and cheapening video games and that mainstream exposure isn't worth it if this is how the industry is going to go about getting it.

We should all know the answer to this debate by now: There is none, because the debate is pointless. Games have always been social; it's in the fundamental structure of the medium. I would take this statement even further, however, and claim the essential social structure of video games goes beyond the medium's inherent performativity, player-developer interaction and the basic fact that all art has a necessary human component. No, video games have always been social because from the very dawn of the medium they fostered social interaction and powerful bonds of friendship and communal solidarity. And here, at long last, is where I come in.

When I was young, I don't remember video games ever being a marginalized and taboo thing. I remember video games being for everyone; consciously, intentionally and always. The Magnavox Odyssey, the first home video game console, was expressly marketed as a mass-market consumer electronics device. When Pong kickstarted the Golden Age of the Arcade of the 1970s and 1980s, the new machine debuted in bars and pubs and later found a place in dedicated arcades. Far from the current conception of gamers as being primarily adolescent (or with adolescent interests and mindsets) shut-ins, these early arcade games seem to me to have been made for adults and installed in places where adults would congregate. Very quickly Pong, and most importantly its descendants, earned its place alongside the jukeboxes, pinball machines, dancefloors, bar counters and wall-mounted televisions broadcasting football games as iconic aspects of the world's eating and drinking establishments. As the '70s blurred into the '80s, video game-centric amusement arcades began to spring up and became respected community fixtures in their own right, but the original and natural home of the arcade game remained the public house. I can still remember walking into my local establishment, grabbing some pizza for lunch and then my friends taking me into the bar to fire up a game of pinball or whatever Midway or Atari cabinet the place had in the back corner. It was as natural and expected a thing as popping a quarter in the jukebox or on the bar for a drink.

Bars and arcades are social places. They're businesses people go to with the express intent of meeting and spending time with actual, flesh-and-blood human friends. What's even better is that they're places anyone can go. People from all walks of life go to bars: men, women, people of all different cultures and creeds. I would never be so reductive as to declare any cultural artefact universal across all societies, but if any one of them had any sort of claim to that stupidly overreaching title it would be the bar. And guess what? All those people used to play video games too. A game of Ms. Pac-Man, Defender, Donkey Kong or any one of those numerous light gun games (I seem to remember at least one for every blockbuster movie that came out, though the only ones that immediately jump to my mind were the Jurassic Park and Terminator 2: Judgment Day ones, not to mention any number of the ones not based on movie licenses) was a great way to bond and make memories with your friends. If you were a kid and too young to drink, you could patronize any of the dedicated video arcades, and there were a ton of them throughout the Long '80s. They were a kind of community centre and a favourite hangout for any kid after school, and even for adults who were perhaps more interested in beating their personal high score than drinking. Many an afternoon, or evening rapidly turning into night, was dedicated to friends challenging each others' high scores and teasing each other in the process. Heading to the corner arcade or bar to play video games used to be just as accepted a way to socialize as going to a mall or coffee shop with friends. At least, that's how I always used to feel.

Near the end of the Long 1980s and into the early part of the 1990s, the omnipresence of arcade games in bars and the dedicated video arcade itself seemed to go into a decline, most likely as a result of the rise of home video game consoles (the Atari 2600, it must be noted, was originally just as much a consumer electronics product as the Odyssey, with all that entails). Before they disappeared completely, however, they made one last stab at greatness with the legendary Super Street Fighter II, Capcom's Hail Mary that completely redefined the fighting game genre and gave the video arcade one last blast of brilliance (well, in the US at least; it's well-known there are still many dedicated arcades in Japan. As great as that is though, this article is about me and my story, and I've never been to a Japanese arcade). Now I don't care how iconic the Super Nintendo version of the game is, to me Super Street Fighter II is an arcade game born and bred and the purest way to experience it is at an actual cabinet. I mean, at the very least they were complementary experiences; I have fond memories of both, for example practicing at my friends' houses and then trying to take on the arcade. But Super Street Fighter II was originally an arcade game and it's in the arcades that it had its biggest impact and where its most powerful and resonant legacy lies.

I have very vivid memories of jockeying for position amongst the swarms of people hovering around the fight stick just to get in one quick match with Chun-Li, my favourite fighter, against someone from that frenzied ball of humanity. I sucked at the game and lost horribly all the time (I'm still not amazing at Street Fighter) but I didn't care: It was unbelievably fun. Not just the game, but the whole atmosphere and the experience. There's no feeling quite like the compounded communal enthusiasm of people enjoying each others' company in a shared activity: The air simply crackled with energy. Super Street Fighter II was a landmark in the medium as a work, but it was also the arcade's Indian Summer and that's how I'll always remember it.

All my nostalgia for the bars and arcades that helped shape my view of video games and their place in my life is not meant to marginalize home consoles or claim that their games are not as social as arcade titles. On the contrary: they were just as reliant on communal bonding, just in a different way. The Atari 2600 originally built its name on making home versions of popular arcade titles for families to play together in the living room, or for bachelors and bachelorettes to hone their skills away from the bars and socialize one-on-one. It was a rousing success and paved the way for a whole new market in video games that were targeted just as much at people who stayed at home as they were at people who went out every night. However, after a series of blindingly poor business decisions at Atari and parent company Warner Brothers directly resulted in the infamous game industry crash of 1983, home video games no longer seemed like a profitable investment. But nobody counted on an unknown toy company from Japan entering the market bewilderingly late and changing the game, so to speak, forever.

Tuesday, June 12, 2012

“Seemed Like The Way Of The Future”: VR Mk. II and Video Game Narrative

Note: So clearly what I'm talking about here is the Oculus Rift. Equally clearly, I had absolutely no idea that's what it was when I wrote this piece: I seem to be under the impression it's of John Carmack's design rather than Palmer Luckey's, and unaware that it comes from an independent company (which it does). To be fair to me, I was far from the only journalist to make this mistake and the coverage of the Rift at E3 2012 was spotty at best, leading people to make posts like this. I'll leave this up as my larger points on VR are ones I still stand by, despite getting the details of the machine itself pitifully wrong. I hope to have a full review of the Oculus Rift once the consumer version ships, as well as a follow-up on the promise and potential of VR sometime later in the year (2013).

Header video is, of course, from The Angry Video Game Nerd, one of my favourite shows ever. If you're in the small minority who still aren't familiar with James Rolfe and Mike Matei and would like to see more from them, please check out their site here.



Well, at this rate my rule about “not covering current events unless particularly relevant” is starting to sound downright farcical...

So E3 2012 was this past week as of this writing. I wasn't physically there thanks to prohibitive travel costs and lack of an official press certification, but I was able to follow the breaking news in real time via a massive amount of streaming content, on-floor video, LiveTweets, instant recaps and my own personal array of spy satellites controlled from the Moon Base where I secretly plot to take over the world. Oh wait, you weren't supposed to know about that last part. Anyway, despite this year's expo being an overwhelmingly tepid affair that showed the industry in a creatively bankrupt holding pattern, there were a few interesting reveals of note. First was Ubisoft's ambitious-sounding new IP Watch Dogs, which I'm sure you'll find is the darling of the games media right now and the talk of the Internet, and which hints it might tackle some intriguing, sophisticated and timely themes about the world's socio-technoscientific zeitgeist. It certainly deserves a closer examination, and I'll be happy to rise to the challenge when it's actually a thing.

The second, and more important, reveal in my book is one that has bafflingly received very little attention, even on the show floor itself. I find this inconceivable because for me it was the absolute highlight of the entire weeklong expo. Tucked away in a little alcove and separate from the main conference hustle and bustle, John Carmack, whom I've expressed quite a fondness for on this blog before, was demonstrating a side project he's been working on that showed off some truly astonishing technology and potential. With a funky-looking pair of goggles literally held together with a duct-tape head strap, Carmack hopes to do no less than bring back that electronic pipedream of the early 1990s, Virtual Reality, and damn well do it properly this time. What's even more unbelievable, if my informants from Los Angeles can be trusted, is that he's actually managed to do it. Describing how he's pulled it off requires some rather muddling digital computer lingo that I don't entirely understand, so let's let the man himself explain in his own words here and here.

What I find so exceptionally laudable about what Carmack says is that, from the way he tells it, this new kind of Virtual Reality system finally does what everyone who was taken in by VR in the 1990s wanted it to do: Fully immerse the player in a realistic-feeling digital world. Even better than that is Carmack's drive to make the materials to build the headset freely available to actual players as a kit that can be purchased for a price that's frankly scarily affordable. This is a conscious, targeted return to the hacker and ham radio roots of computer hobbyism that makes up a not-insignificant part of the video game community: A breathtakingly forward-thinking approach to an intellectual tradition that dates back to the Altair 8800 and the BBC Micro. The contrast between Carmack's spirited little demo and the vapid sludge I saw on display at the EA, Sony and Nintendo press conferences was night and day. John Carmack is truly the real deal and seems to have the healthy future of both the industry and the medium square in mind. Frankly, where he goes I'll follow.
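Since I've just admitted the technical details are over my head, here is my layperson's sketch of the general idea as I understand it: the headset continuously reports a head orientation, and the game renders the scene twice per frame, once for each eye, from two slightly offset viewpoints derived from that orientation. This is emphatically not Carmack's actual code, just a toy illustration in Python with entirely assumed numbers (the 64 mm eye separation, for instance).

    # Toy sketch: turn one tracked head pose into two per-eye view matrices.
    # All numbers are illustrative assumptions, not the real device's parameters.
    import numpy as np

    IPD = 0.064  # assumed interpupillary distance in metres

    def quat_to_matrix(q):
        """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    def eye_view_matrices(head_position, head_orientation):
        """Return 4x4 view matrices for the left and right eyes."""
        rot = quat_to_matrix(head_orientation)
        views = []
        for side in (-1.0, +1.0):  # left eye, then right eye
            # Offset each eye by half the IPD along the head's local x-axis.
            eye_pos = head_position + rot @ np.array([side * IPD / 2, 0.0, 0.0])
            # A view matrix is the inverse of the eye's world transform.
            view = np.eye(4)
            view[:3, :3] = rot.T
            view[:3, 3] = -rot.T @ eye_pos
            views.append(view)
        return views

    # Example frame: head 1.7 metres off the floor, turned 30 degrees to the left.
    angle = np.radians(30)
    head_quat = np.array([np.cos(angle / 2), 0.0, np.sin(angle / 2), 0.0])
    left, right = eye_view_matrices(np.array([0.0, 1.7, 0.0]), head_quat)
    print(left.round(3), right.round(3), sep="\n\n")

Even a sketch this crude makes it obvious why latency matters so much here: if those two matrices lag behind your actual head movement by even a few frames, the illusion collapses.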

As excited as I am for the hacker-positive aspects of Carmack's new project and as much as its ramifications and what it reveals about the culture of video game fans deserves careful attention and analysis, what I find most promising about it is what it illustrates about the way video games handle narrative. I had my suspicions before, but this new project has made me certain: John Carmack is not just a whiz kid techie with a bottom-up attitude, he's the first developer since Shigeru Miyamoto to effortlessly grasp what makes video games unique amongst creative media and I'm overjoyed he now works for Bethesda. To explain a little bit more about what I mean and why I'm making this claim, I need to bring up some things Sony showed off at E3 this year.

Sony's press conference was essentially one great big party thrown for the so-called “hardcore gaming crowd” and the loyal PlayStation faithful. It seemed to go over rather well (and it damn well had to, given Sony's fortunes right now), especially coming off of the sleazy EA show, the fever dream of a Ubisoft presser and the supposedly disappointing Microsoft showing. I was personally entirely unmoved, but my reasons why are part of a larger attitude I see present in gamer culture and are better saved for another day. One thing both my colleagues and I were in complete agreement on, however, was that Wonderbook for the PlayStation 3 is an absolutely nonsensical and pointless waste of time. In case you are unfamiliar with the madness that is Wonderbook, allow me to enlighten you: Wonderbook is basically Sony's stab at an Augmented Reality programme where players stand in front of the PlayStation Eye, hold a book-like tablet and wave the PlayStation Move controller over it to make things appear on screen. This was demonstrated by a Harry Potter-licensed game where aspiring wizards use the Move controller like a magic wand to turn pages and interact with words to do things like cast spells and trigger minigames. Sony claims this will revolutionize the act of reading and allow books to “transport you to other worlds”.
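To make clear just how thin that interaction actually is, here's a toy sketch of the loop as I've just described it, purely my own illustration and not anything Sony has shown: the camera tracks where the wand is hovering over the physical book, and the software simply looks up which region of the current page that corresponds to and plays whatever canned response is attached to it. The spell names and coordinates below are invented for the example.

    # Toy illustration only: reduce a tracked wand position to a page "hotspot"
    # lookup. Region coordinates and spell names are made up for the example.
    from dataclasses import dataclass

    @dataclass
    class Hotspot:
        name: str      # e.g. a word printed on the page
        x0: float      # hotspot region in normalized page coordinates (0..1)
        y0: float
        x1: float
        y1: float
        action: str    # the canned response the game plays

    PAGE_ONE = [
        Hotspot("levitation spell", 0.10, 0.20, 0.45, 0.30, "start levitation minigame"),
        Hotspot("light spell",      0.55, 0.60, 0.90, 0.70, "light up the on-screen scene"),
        Hotspot("page corner",      0.85, 0.85, 1.00, 1.00, "turn the page"),
    ]

    def resolve_wand(x, y, hotspots):
        """Map a tracked wand position to whatever canned action sits under it."""
        for spot in hotspots:
            if spot.x0 <= x <= spot.x1 and spot.y0 <= y <= spot.y1:
                return spot.action
        return "idle sparkle effect"

    # Simulated wand positions, as if reported by the camera each frame:
    for frame, (x, y) in enumerate([(0.30, 0.25), (0.70, 0.65), (0.95, 0.95)]):
        print(f"frame {frame}: {resolve_wand(x, y, PAGE_ONE)}")

Laid out like that, the “magic” is a lookup table with an animation attached to each entry, which is rather my point.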

Let's leave aside for the moment the fact that books are already designed to trigger the imagination in their own way and require neither motion control nor Augmented Reality to do this, as well as the troubling notion that all of the Wonderbook titles seem to be written for kindergartners. What I instead want to focus on is Sony's notion, a very problematic one by my lights, that video games and books somehow need one another. The existence of Wonderbook tells me that Sony thinks books would be improved by slathering on some token rote interactivity and, conversely, that video games are improved by being connected more intimately to books. By implying the mediums are in some way codependent, and logically in some way related, this cheapens them both, because in my view books and games couldn't be more different in the way they interact with readers. Books, while dependent to some extent on readers using their imagination to flesh out the look-and-feel of the world, are generally passive media that rely on a linear form of narrative built around character development that is slowly and methodically revealed to the reader. The story has already been told, in other words, and the reader just has to discover how it turns out. Video games, by contrast, as I've argued, are designed so players can generatively build the story in tandem with the developers (when they even have a story, which is not a prerequisite). Given this logic, what is the purpose of Wonderbook? If the book's story is open to player agency, it's not much of a book, and if the story is already mostly written, the player's role is largely irrelevant. It's a bad game and a bad book all rolled into one: like one of the potions in The Elder Scrolls V: Skyrim that gives you an error message and disappears when you mix incompatible ingredients, it's much less than the sum of its parts.

There's a bothersome habit amongst game developers and journalists, born of the medium's troubled history with moral guardians and its general struggle for legitimacy, of attempting to justify its existence by comparing games to books and movies. If they could somehow show games can tell as good a story and provide as good an experience as the best novels and films, they reckon, this would finally prove video games are a valid and respectable form of art and entertainment. This logic is, simply put, wrong. Video games are most certainly a valid and respectable form of art and entertainment, but not for these reasons at all. When developers operate from this mentality we get calculated misfires like Wonderbook and, more to the point, Quantic Dream.

For those unfamiliar with Quantic Dream, they are a positively vainglorious development studio headed by the unflappable and controversial David Cage, who has spent the majority of his career trying to beat Hollywood at its own game. Responsible for such massively hyped hits as Indigo Prophecy and Heavy Rain, Cage's group has embodied better than any other studio the trend of making “cinematic” games. By hiring A-list actors and jockeying to remain on the cutting edge of realistic motion capture technology, Quantic Dream consistently aims to provide the most realistic games with the most nuanced and sophisticated narrative. Cage often boasts the stories in his games are as good as or better than those of the most dramatic movies and are singularly responsible for proving games are art, an argument that might hold water if any of his games were written well. If my bitterness and apathy haven't already betrayed my positionality, Cage's is not a viewpoint I am especially fond of, and I've found both of his previous efforts to be deeply unpleasant experiences. Both Indigo Prophecy and especially Heavy Rain are built around making decisions at crucial moments to activate different paths on a ludicrously branching narrative. It's definitely interactivity, and the player is certainly a crucial component of the game's story, but not in any meaningful way in my view. Mostly these games play out like really fancy, dolled-up choose-your-own-adventure novels to me, and any investment to be gained from them comes from the themes discussed by the characters during the constantly unfolding narrative, which is so carefully constructed that I feel distant from it. It's all the reasons I gave for not liking Mass Effect, taken to their logical limit.

The reason I bring up David Cage and Quantic Dream is that they were another developer to unveil a new project at Sony's E3 conference. Dubbed Beyond: Two Souls (or 3EYOND, I'm actually not sure which), it will apparently chronicle the story of Jodie Holmes, played by Academy Award-nominated actor Ellen Page, and deal with Cage's own ruminations on death, mortality and the afterlife. The trailer revealed at E3 had the commendable gonads to run several minutes long, rendered entirely in the in-game engine, while showing off no interactive gameplay whatsoever to the packed auditorium of veteran games journalists. Never have I seen a finer demonstration of David Cage's opinion of players, interactivity and the basic fundamentals of the video game medium.

Both Wonderbook and Beyond: Two Souls are symptomatic of a big problem the games industry has in my opinion. This constant, insecure pursuit of comparisons to books and movies misses the core value and purpose of video games as a medium, which is, as I've argued, the ability to effortlessly share experiences and build a shared world through which an infinite number of singular stories can be told. This is something neither books nor movies can do in my view because they are passive media. Without that crucial element of player agency, there's no tangible connection to the fictional world (this is of course not to say one cannot get invested in a book or movie; there's a difference between relating to something and experiencing it). Books and movies are very good at telling stories other people have written, and provocative themes can be explored through them both textually and metatextually, but they are at heart still based around linear narratives readers can look at and critique from afar. Video games are something completely different, and to conflate the three of them is spectacularly wrongheaded in my opinion. Consider the Uncharted series, which is designed from the ground up to be like an action movie. Players assume the role of an Indiana Jones analogue and move from setpiece to setpiece and magnificent cinematic cutscene to magnificent cinematic cutscene. Even the mechanics themselves betray the action movie influence: Players do not “die”, they “throw the take” and have to “redo the scene”. Everything about these games gives the impression that developer Naughty Dog has a very specific story to tell, and that if the player deviates from the literal script it makes things inconvenient and annoying for them. Player agency has been compromised in favour of linear narrative, and I don't think there's anything more self-destructive than that for a game. I dread a world where every game is like Heavy Rain or Uncharted.

This brings us back to John Carmack's big idea. Carmack, and his partners at id and Bethesda, understand the importance of agency in a way Sony, Naughty Dog and Quantic Dream don't seem to be able to. It's telling Carmack and id made their name via seminal first-person shooters like Wolfenstein 3D, Doom and Quake, which were all groundbreaking early steps towards immersing the player in a game world in a way that had never been attempted before, at least not with any real success. The big revelation these games ushered in is right in the name of the genre they helped codify: “First-Person Shooter”. For the first time, there were games that put the player literally in the shoes of their character, making the proverbial link between them and the game even more direct and intuitive. It's difficult to overstate the effect this shift had on the way players interacted with games: No matter how seamless and intimate Super Mario Bros. was, there's truly something to be said for the ability to actually see through the eyes of a character. It simply adds a layer to the experience that hadn't been there before. What id's games did for the first time was change the role of the player from puppeteer to active agent within the game world itself (Miyamoto eventually found a way to change the paradigm here too, but that's another story). Now, with his new experiments with head-mounted VR displays, John Carmack may very well be taking this core concept to the next stage.

If video games are, as I've argued, fundamentally about sharing or creating a generative experience, and the point of player agency is to blur the boundaries between the players, developers and game world, then a viable first-person perspective is a logical way to facilitate this process. Likewise, competent and effective motion control and well-implemented stereoscopic 3D are further technological tools the medium of games can invoke to make provocative statements (the latter is also important for visualizing 3D space and kinesthesia, as I have of course argued before). Setting the accuracy of the statements aside for the moment, the intimate, pure sense of “I found this” or “I saw this” or “I made this happen”, which dates back to at least Super Mario Bros., is integral to the way video games convey themes, and I feel Virtual Reality could be the singularity at which technology will finally be able to demonstrate this in its purest form. In my opinion the more streamlined we make interfacing with our games, the easier it will become to connect and resonate with them, because it will be easier for us to feel like the experiences onscreen are actually happening to us. This is exactly what Shigeru Miyamoto wanted to evoke through Super Mario Bros. and is the core difference between active and passive media, and why trying to shoehorn a linear narrative onto a video game is such a poor choice in my opinion. The way I see it, games like Uncharted, Heavy Rain and the Mass Effect series would all be far more evocative and powerful titles if, rather than pausing the game every few seconds to play a lovingly rendered CGI cutscene with professional actors angsting about choices the player has or has not made, players were confronted with those choices and the themes of the work at a visceral, personal level, because they had been made to feel an intimate part of the world or narrative.

In the not-too-distant future I'll take a closer look at a game that shows how this alternative can work brilliantly, but for now I want to concentrate on the potential Virtual Reality has to be a perfect fit for this oft-overlooked model of critique and, frankly, revel in the knowledge I live in a world where it can not only exist, but exist in an open, generative, bottom-up form that's true to a pioneering ideal of the medium in a way unlike anything else I've seen in recent memory. There may very well come a time when this too is subsumed by the soulless corporate lowest-common-denominator chasers or misguided hipster artists that have the run of the industry today, but for right now let me bask in the afterglow of an unexpected victory for not only the video game spirit that caused people like me to fall in love with the medium a long time ago, but also the playful underclass that helped light the spark of that ideal in the first place. If this is where one of the medium's leading lights is pushing it to go, then I for one am behind him totally and completely and hope he gets all the support and resources he needs. I'd adore for this to be our future and can't wait to see how inspired developers utilize the potential of this new technology to make the most evocative artistic video game works to date. Just this once, I hope a so-called “innovation” lives up to its hype instead of becoming a last-minute cash grab in the dying throes of a generation, desperately tossed out by publishers who know they can no longer compete and destined to be abandoned as quickly as it was introduced.

Please, oh please let me be right about this one.

Friday, May 11, 2012

“Wow! You really are Mr. Hero!”: World-Building and Role-Playing in The Elder Scrolls


Note: Being a writer I'm eternally critical of my older work. However, I feel especially hostile towards this piece, as there's quite possibly no single work of fiction I've misread as catastrophically and unjustly as The Elder Scrolls series, in particular the two games I highlight in this post, Morrowind and Skyrim. Suffice it to say my opinions on both games have changed dramatically since I originally wrote this, and if ever there was a time I wish I could completely retract something I wrote, this would be it. If you haven't already read this post, I'd first humbly ask you *not* to, but if you must I suggest you skip the entire section where I discuss them. There still may be one or two vaguely valid points I bring up in regards to gender roles and the story of Mjoll the Lioness, but mostly I talk absolute bollocks and get everything about the series' core philosophical tenets laughably, provably and factually wrong. As of this writing I'm in the middle of planning a major update that I hope will give The Elder Scrolls a much, *much* fairer and more appropriate evaluation. The bits about Tolkien and RPGs I stand by though.

 I'm leaving this post up as a historical record of my own stupidity, but please keep in mind I no longer feel comfortable defending it at great length.

***********************************************************************************


I'm going to make a confession. After I do so, you are free to revoke my Nerd Card (I haven't taken great care of it; haven't used it in years and the photo's out-of-date), though I do ask you at least bear with me for the remainder of this essay. That settled? Alright then.

I do not like J.R.R. Tolkien's The Lord of the Rings.

Now, when I say this I mean I am not especially enthralled by its merits as a novel; Tolkien *did* accomplish something truly incredible and I have a huge amount of respect for that. The Lord of the Rings does something very, very well, it's just that its strength is not in its ability to tell a story. What Tolkien did is craft an entire world from scratch, populating it with its own unique cultures, languages (complete with alphabets and primary sources) and thousands of years of intricate historical record. Granted, many of the rote building blocks of Middle-Earth were cobbled together from various Norse and other Northern European mythologies, but Tolkien's savvy move was to arrange them in such a way as to make the whole thing streamlined, cohesive and elegant to follow. As no-one had blended these literary traditions in this way before and enough of Tolkien's own imagination shone through, The Lord of the Rings still felt uniquely singular and remains so today. What The Lord of the Rings reads like, more than anything else, is a three volume set of historical records about a meticulously detailed fantasy world, and it's difficult to argue that isn't a remarkable achievement.

However, the other side of this is that it's a rare person who actually relishes the thought of settling down in front of a fireplace and eagerly digging into a history textbook. I will freely admit to not being that sort of person. History textbooks are notoriously dry, prosaic and full of questionable appeals to authority and objectivity; in other words, pretty much how The Lord of the Rings reads to a non-fan, or at least to me. More relevantly though, history textbooks do not tell stories: They're focused on crafting a linear, deterministic model of history and not the story arc of one or a handful of protagonists. The Lord of the Rings chronicles the stories of many different characters (Frodo's quest to return the Ring, thus fulfilling his destiny and embodying the spirit of hope and perseverance; Aragorn's quest to assume the Throne of Gondor and rally the Alliance of Men; Arwen grappling with the repercussions of her immortality) but no one character can be claimed to be the central focus of the whole saga. This has both positive and negative ramifications: On the negative end it's potentially laborious for a reader to get invested in a text more geared toward relating the historical record of a fictional world than chronicling the evolution of one particular character. On the positive, by contrast, it makes the world of Middle-Earth feel far more nuanced and alive than many other fictional settings, owing to the staggering amount of lush detail that went into creating and documenting it.

If The Lord of the Rings falls down as a novel, though, there's one genre it fits into very comfortably as a trailblazing member, one whose impact makes it arguably far ahead of its time: the tabletop Role-Playing Game, or RPG. In his overview of the FASA Doctor Who RPG from the mid-1980s, the meritorious Phil Sandifer compares that game to one based on The Lord of the Rings, making the claim that the latter property is “in many ways a world in search of a story. (Or, actually, a language in search of a world in search of a story. Which is even better, really.)”. This is, to me, a perfect reading of the series and a great summation of why it makes a great RPG and something like Doctor Who doesn't: As Sandifer argues, because the framework of Doctor Who relies to an extent upon The Doctor being the central figure and, in one way or another, upsetting the establishment into which he's cast, it's difficult to map out a cohesive, three-dimensional world for the franchise to be set in. In other words, and the way I tend to read it, Doctor Who is about inserting The Doctor into other people's stories so he can give them a little kickstart. Since he is simultaneously the one irreducible part of the series and an ability to transgress narrative boundaries is tacitly written into his character, it's essentially impossible to build a universe around him.

The Lord of the Rings is different, however: Where Doctor Who is a framework for reshaping existing stories, Middle-Earth is first and foremost a world; a setting. It's a place where stories happen, not an agent for creating stories or a story in and of itself. This is what makes it such great fodder for RPGs, because an RPG depends fundamentally on taking a pre-built setting and finding new ways to tell engaging, lore-friendly stories with original characters that take place there. Because the source material has a big enough scope and the focus was never on the characters (no matter how iconic they may be), it's very easy to roll up an Elven battlemage in a Lord of the Rings game, where it might be tough to convince a Doctor Who fan to play a game in which they aren't allowed to play The Doctor or one of his cool companions.

All of this is a roundabout way of getting at my main point, which is to analyse and re-examine the legacy and influence of the RPG as a framework in contemporary fantasy video games. I mean at first glance the footprint is pretty clear: A great many video game fans are also fans of pen-and-paper RPGs and a great many more game designers got their start crafting intricate and unique fantasy realms through traditional RPG rulebooks. We even have a genre of games called “RPGs”, which immediately implies that these titles are drawing on a very long and very specific intellectual heritage. It should almost go without saying: video game fantasy owes a tremendous debt to pen-and-paper RPGs. Our medium's very oldest fantasy epics, games like the Ultima series, were overtly based on their creators' specific flavours of Dungeons and Dragons. Richard Garriott's Britannia is literally his old D&D map and Ultima was his exercise in realising that world in a new medium. Games like Neverwinter Nights are explicitly designed around translating the rules and game mechanics of D&D to video games. Dungeons and Dragons has been with us from the very beginning and its presence is still heavily felt today, as even recent AAA blockbusters like BioWare's Dragon Age games can trace an easily noticeable lineage back to D&D. Pen-and-paper RPGs are in the very blood of video games.

I personally feel there are other models of narrative structure video games can (and actually should) take advantage of more often that aren't descended from RPGs (anything by Shigeru Miyamoto and his school, for the most obvious example), but that's a treatise for another day. Right now I want to look at the RPG model itself and the very beneficial effect it can have, and has had, on game design via my favourite series to invoke it: Bethesda's The Elder Scrolls. As in Ultima, the world of The Elder Scrolls, Tamriel, is based on the original designers' custom D&D map. Where it differs from its predecessor, however, is that while Ultima followed the story of one hero who, through a series of epic quests and tribulations, becomes the Avatar of the Eight Virtues of Britannia tasked with bringing spiritual enlightenment to the people, The Elder Scrolls is expressly the story of Tamriel itself, with each of its (as of this writing) five main games (Arena, Daggerfall, Morrowind, Oblivion and Skyrim) chronicling the events of a specific era in Tamriel's history with an emphasis, more often than not, on a specific region. The choice to write the games this way is nontrivial: While each installment can (and does) feature a different, specific hero, the focus remains squarely on the world itself and the state it finds itself in at the time any given title begins. This focus on the state of Tamriel itself, coupled with Bethesda's famous penchant for obsessively meticulous attention to detail (each game has hundreds of fully written books on millennia of Tamrielic history that even account for authorial biases and misinformation; every character, player or otherwise, has a specific backstory and several branching character arcs, for starters) means the world of The Elder Scrolls comes alive in a way very few other video game settings manage.

It's at this stage I'm careful to avoid too much dialogue about “immersion”, “escapism” and “suspension of disbelief”, however, because these can be very loaded terms. A strong argument, and one I'm not entirely inclined to contest, can be made that the goal of fictional storytelling should not be these things, but instead an emphasis on the fact that fiction is not reality but merely resembles it, and on showing how that resemblance can be used to explore specific themes and get readers invested. I'll freely admit this is an effective model for explaining how stories work. That said, here's the real rub: I don't feel either RPGs or video games should be, at least primarily, about telling stories. I've previously argued that the purpose of video games is to serve as an outlet for a designer to share an experience with players, and explicitly so owing to the irreducible factor of player agency. In this sense, and in terms of comparisons with other expressive media, I liken video games most closely not to books, movies and television, but to music and theater (an analogy that warrants further study at a later date). RPGs, by contrast, I am currently arguing, are designed to build a detailed, cohesive world in which stories can take place. This is what makes the two genres a natural fit for one another: From RPGs, a game gets the ability to craft an expansive, intricately detailed canvas on which to tell a story and the power to write any kind of story. From other video games, it gets the ability to intimately link authors and readers through shared experience. When put together, a good high fantasy video game can give players the power to explore and craft their own story in tandem with developers, hence making each playthrough of a game such as those in The Elder Scrolls series potentially unique. And this is what Bethesda does really well from my perspective: Effortlessly fusing the two styles to create games that clearly reflect themes the designers wish to explore whilst at the same time providing players a level of freedom to run around and help shape a shared world that's almost unprecedented.

If we take as read that the goal of The Elder Scrolls is not to tell a story, but instead to serve as the generative vehicle, framework and setting through which stories can evolve, what consequences might this have for the way the narrative of any given entry plays out? There are, I argue, some peculiarities of this arrangement that ought to be addressed. The most important of these is that in order for The Elder Scrolls to successfully be a video game based around RPG-style world-building, the one kind of story it absolutely cannot entertain is, unfortunately, the kind that the main quest of nearly every single Elder Scrolls title has been thus far: Epic poetry about a chosen hero destined to deliver the world from evil. The protagonist of Arena traverses the world to retrieve the broken pieces of the Staff of Chaos and reunite a war-torn Tamriel, the Emperor himself chooses the PC in Daggerfall to soothe the restless spirit of a ghost terrorizing the region's capital and navigate the rapidly deteriorating political situation, and Oblivion's main character becomes the beloved Champion of Cyrodiil by closing the gates to a chaotic Otherworld and preventing all of Tamriel from falling under the rule of the very embodiment of destruction and conquest. This becomes especially egregious in Skyrim (cribbing from testosterone-laden Norse mythology as it does): Not only do players assume the role of the prophesied “Dovahkiin”, a hero born with the soul of a Dragon and destined to prevent the Tamrielic equivalent of Ragnarok, they also conveniently become the destined heroic saviour of any in-game faction they choose to join. The game is arguably a modern classic, but the amount of beardy macho bravado its plot exudes actually makes me physically gag a bit sometimes.

This is even harder to credit if you happen to come to it off of Morrowind, which very cleverly deconstructed the Chosen One archetype. Throughout the game, the player character is referred to as the “Nerevarine”, an alleged reincarnation of a literal God who will lead the Dunmer people to salvation. However, as the game progresses, players get conflicting information that casts doubt as to whether or not they truly are the Nerevarine, before it's ultimately revealed that the title is essentially meaningless and is used primarily so external forces can use the player as a political pawn. In an absolutely delicious bit of irony, the character who would traditionally be held up as the most powerful and given the highest accolades and adulation winds up potentially the most marginal and easily exploited. Morrowind leaves players with the underlying theme, especially evident in its final moments, that destiny, fate and the future are ultimately only what each individual person chooses to make of them. No-one, it would seem, is predestined to be a legendary hero or given a divine right at birth that would set them above everyone else. In other words, the Chosen One archetype is so ancient that The Elder Scrolls itself had already picked it apart to a satisfying extent almost a decade before Skyrim even came along, and it most certainly wasn't the first to do so.

I'm not going to continue going on about how dull and hackneyed this storytelling framework is, dearly loved by hack fantasy writers since the days of Beowulf though it might be, because it actually has different ramifications for this kind of video game that are a bit more interesting to talk about. Put simply, an intellectual framework that sets world-building as its primary concern and a story where one character, even if it is the player character, is the centre of the universe are completely incongruous. The latter runs completely contrary to the very thing that keeps The Lord of the Rings, and by extension RPGs, afloat: Namely, that the worlds of these stories are places where heroes live and heroic deeds are done, not places where one character is the most important entity in existence. An argument could be made that Lord of the Rings falls into this trap more often than strictly necessary, given how all of Middle-Earth seems at times to hinge on whether or not Frodo makes it to Mount Doom, but there's enough else going on and the world is developed enough that it diverts our attention from the painfully pedestrian nature of the basic plot. In the case of Skyrim, however, if the Dovahkiin is the most important being in the lives of everyone in the game world at this particular time, this dehumanizes the other characters and ultimately reduces the world the game is trying to build to little more than a series of levels and checkpoints the player passes through.

I've seen quite a few fans come out to defend the decision to write The Elder Scrolls stories this way: A common argument seems to be that it is “empowering” to be given such “special” treatment and that it's a good way to get players to feel like their actions have consequences. This is often used as a way of defending such “massively single player games” and contrasting them with the “massively multiplayer online games” (MMOs) from which they derive their epithet. Now, it is true that some MMOs have serious issues in terms of handling individual players, but this is the thing: A game series like The Elder Scrolls should not be about making the player feel special. Turning the player into a legendary, unstoppable, exalted demigod is absolutely the last approach to take to this kind of scenario, for it does nothing but cheapen the world and the stories the game is supposed to be fundamentally about building, because in RPGs nobody is special. What makes RPGs unique is that the stories of countless heroes who walk the game world and interact with it in entropic ways help shape and mould the world and allow it to come to life. When the world itself becomes subservient to one hero, it ceases to feel singular and cohesive. As Skyrim is explicitly the story of the Dovahkiin's heroic deeds and destiny and there are no other characters cast at a remotely comparable level, the effectiveness of the game's setting as an entity unto itself suffers mightily.

The other side of this issue, which we can find if we approach it from a strict video game perspective in addition to an RPG one, is equally troubling. This justifying language of “empowerment”, “player autonomy” and “massively single player” is unfortunately intrinsically linked to some truly disquieting patriarchal power structures. The plain, simple and sad truth is that video games, as a business rather than an art form, are still targeted fairly heavily towards adolescent males, and the vast majority of our designers are either also straight males or tend to write from that perspective for commercial reasons. As a result, an uncomfortably large proportion of games are meticulously designed to play into and feed adolescent male power fantasies. An obvious example would be the borderline hilarious levels of manly bravado in military shooters like the Call of Duty series or the depressingly well documented male domination subtext and rampant Male Gaze in Batman: Arkham City, but sticking to the case study of this article, let's take a look at Skyrim. One of the most popular features in the game is its “Follower” mechanic, wherein a friendly NPC will happily set aside all her (and the vast majority of these characters are female) goals, aspirations and responsibilities to travel all over Skyrim with the Dovahkiin and “carry” the player's “burdens”. Followers can even be married, at which point they can be ordered (yes, ordered) to literally stay at home, cook dinners, watch the house and open up a shop to manage the couple's finances. This should without doubt already be making my readers uncomfortable, but let's take an even more specific example to illustrate how this can go wrong.

My follower for the first playthrough I undertook of Skyrim was a character named Mjoll, a retired explorer who dedicated her life to fighting crime and corruption in the city of Riften after she was nearly killed in combat. However, she gladly set all that aside (and by extension all the people in Riften counting on her to help them) to go travelling with me after I found the trusty sword she lost in an ancient underground city and returned it to her. Now, my Dovahkiin is a woman, because Bethesda most appreciatively allows players to customize every aspect of their characters, from race to appearance to, most welcome of all, gender. The subtext of the characters' relationship thus becomes one of rapport and camaraderie: One can assume that two female friends who share a love of travel and adventure might bond over similar experiences and thus get on quite well and relate to one another. Imagine, however, for a moment, that I'd chosen to play a male Dovahkiin. That, by contrast, would dredge up centuries of patriarchal power structures and lend the entire relationship a very different, and altogether more awkward and distasteful, implication: The story then ceases to be one about sisterly bonding and becomes one about a heroic, powerful, dominant male coming in, sweeping Mjoll off her feet and carrying her away from the wicked streets of Riften. In one reading, Mjoll and the Dovahkiin are relative equals. In the other, that becomes far more difficult to claim.

For another, perhaps more vivid, example, look at Aela, a wild and keen huntress who helps lead the Companions, Skyrim's warriors' guild. Should the Dovahkiin choose to join them, Aela will happily step aside and let the player character take command of the group at the end of the quest line, because the game casts the Dovahkiin as the destined saviour of the Companions who will restore the group's honour and greatness. Additionally, Aela can also become a follower and a candidate for marriage after the player completes this particular story. As before, this is merely a puzzling state of affairs for a female Dovahkiin and seriously problematic for a male one: The former simply raises the question of why a relative newcomer would be entrusted with such power and responsibility, her status as Dragonborn notwithstanding, while in the latter case it seems awfully like Aela is submitting to male authority and retreating towards a more “proper” supporting role in the epic tale of the virile male lead.

This is the fundamental problem with doing a Chosen One story in an RPG video game. Because of the medium's historical connection to adolescent male power fantasy, not only is it counter-intuitive to tell this kind of a story with this framework, it's downright self-destructive, running the risk of scuttling the entire game in a mire of tangled patriarchal power structures and traditional associations. No amount of “escapism” or “empowerment” protestations and defenses will extirpate this particular narrative archetype from its web of problematic implications. It's perfectly possible to tell a different kind of story with this setup; I'd simply point to the nigh-infinite variety to be had in traditional pen-and-paper RPGs. In fact, the genre is essentially designed for it. Perhaps one solution is to bring other players back into the equation (as I mentioned in my Mass Effect piece I hate talking about current events, but I couldn't help noticing this got announced as I was writing this essay), but MMOs come with their own suite of problems I'm not prepared to discuss right now and I would rather hate to see single-player RPG-influenced games completely disappear. But as the clever subversion of Morrowind shows, there's certainly mileage to be had in crafting an exquisite single-player RPG experience that doesn't play into generations of male dominance and oppression. I certainly don't have all the answers (perhaps they're to be found written in The Elder Scrolls themselves), but I do know that if our medium is ever going to continue to evolve, mature and sustain itself, it absolutely must shed the childish and antiquated power structures it mires itself in with frustrating frequency. This, then, must become its quest.

Friday, May 4, 2012

“Filled was the air with a dreamy and magical light”: Mí Bhealtaine and the Dream-Child

"A cat may look at a king..."
A pet hobby of mine has always been studying the often-uncanny recurring patterns that surround different facets of history, mythology, folklore and language. Call it the result of synchronicity, “name games”, the emanation of a mysterious Fortean psycho-social “Twilight Language” phenomenon or simply the human brain attempting to make sense out of randomness, but such patterns seem omnipresent and have always caught my interest. One of the key premises of this school of thought is that certain names, dates, places and events, via coincidence and written historical narrative, become “vortexes” for an unusually high concentration of sociological artefacts and are transformed into, in a sense, pseudo-sentient forces that inspire mimicry and happenings.

Take May 4 for example: This date has borne witness to a number of intriguing historical events and movements over the years, particularly those involving politics and power structures. A cursory look at Wikipedia reveals that on this date in 1493, Pope Alexander VI split the Western Hemisphere along the Line of Demarcation, a key moment in the development of colonialism; in 1776 Rhode Island rejected the authority of King George III, the first American colony to do so; Napoleon began his exile on Elba on this date in 1814; and key battles in the Wars of the Roses, the Fourth Anglo-Mysore War, the American Civil War and World War II also all took place on May 4. In addition, Margaret Thatcher first took office as Prime Minister of the United Kingdom on a May 4, a date which, perhaps ironically, had in years past also seen two infamous civil uprisings: The Haymarket Square Riot in 1886, in which a bomb was thrown at police trying to break up a labour protest in Chicago, and the Kent State Massacre in 1970, in which peaceful Vietnam War protesters were gunned down by the Ohio National Guard at Kent State University.

Perhaps the most important happenings on this date, at least the two most relevant to the purposes of this article, were two events a world apart geographically and decades apart temporally. On May 4, 1919, students gathered en masse in Tiananmen Square, Beijing to protest the Chinese government's response to the Treaty of Versailles. By this point China was governed by a fragmented group of warlords, the Qing Dynasty having been overthrown during the Xinhai Revolution in 1911. The fractured government was easily taken advantage of by influential Western powers, who attempted to divide China into spheres of influence. The tipping point came when the German concessions in the province of Shandong, seized by Japan during World War I, were formally transferred to Imperial Japan as a result of the signing of the Treaty of Versailles, resulting in large-scale student revolt. This protest, dubbed the May Fourth Movement, could be seen as the political climax of a larger cultural movement known as the New Culture Movement, typically seen as lasting from 1911 to 1921. The New Culture Movement was initially born out of young scholars' reaction against traditional Chinese values and society, seen as having failed after the fall of the Qing Dynasty, and was guided by a desire to explore and embrace concepts such as democracy, globalisation, cosmopolitanism, individual liberty and sexual equality.

Meanwhile, thousands of miles away and decades earlier, a young girl named Alice Pleasance Liddell was born on May 4, 1852, a date which a mathematician friend of hers who went by Lewis Carroll would later use as the birthday for another Alice, and as the setting for one of the enduring classics of literature.

Speaking more broadly, the first week of May has traditionally held a very unique power over people, dating back to some of the earliest historical records we have. The Celtic peoples of Ireland, Scotland and the Isle of Man referred to May 1 (another date with a history of political turbulence) as Beltane-Technically Lá Bealtaine, literally “The Day of Beltane”, as opposed to Mí Bhealtaine, or “The Month of Beltane”. As it marks the halfway point between the Vernal Equinox and Summer Solstice, this was considered the start of summer and one of the most important times of the year: A time for purification, transition and reverence. Beltane is in many ways a mirror image of Samhain on October 31, the day now known as Halloween: Both are liminal dates between seasons set aside for festivals and introspection and times at which the boundaries between the physical and spiritual realms are particularly loose and permeable.

The Celtic people believed that, in addition to the domain of the living, there existed a separate plane where the spirits dwelt: Sometimes described as lying across the sea on uncharted islands to the west, underground within the barrows that dotted the landscape, or simply in a related but separate dimension alongside the physical world (or perhaps some combination of all of the above), the Otherworld was the domain of the aos sí. Comparable perhaps to modern conceptions of fairies and elves, and indeed the basis for many of these beliefs, the aos sí were mysterious, unpredictable nature spirits, deities and deified ancestors. Sometimes they were said to be the descendants of the Tuatha Dé Danann, the original inhabitants of Ireland who, after being forced underground by invaders, mastered spiritual and arcane arts and became beings of mysticism. Beltane and Samhain were two of the dates on which it was said the aos sí could walk freely among the mortals, and the mortals could walk freely among them, owing to their liminal position on the calendar. Liminality on the calendar, it would seem, translates to liminality in our lives.

It is perhaps worth noting that modern astronomical research actually places the precise date for the solar midpoint between the Vernal Equinox and the Summer Solstice closer to May 5 or May 7 (the former of course being celebrated today as Cinco de Mayo, the commemorative date of the unprecedented Mexican victory over occupying French forces in the state of Puebla, Mexico in 1862), but this can of course shift slightly from year to year.
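
For the curious, a rough back-of-the-envelope check bears this out. The sketch below is a minimal illustration rather than a proper astronomical calculation: it assumes approximate 2012 UTC times for the March equinox and June solstice (those specific figures are my assumption, and they drift a little each year) and simply splits the difference between the two, which lands in the first week of May.

```python
from datetime import datetime

# Approximate 2012 instants (UTC) for the March equinox and June solstice.
# These figures are assumptions for illustration; the real times shift slightly each year.
equinox = datetime(2012, 3, 20, 5, 14)
solstice = datetime(2012, 6, 20, 23, 9)

# Simple calendar midpoint between the two instants (the stricter definition,
# 45 degrees of solar longitude, falls a day or two later still).
midpoint = equinox + (solstice - equinox) / 2
print(midpoint.strftime("%B %d"))  # prints "May 05"
```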

The start of summer then, the time of Mí Bhealtaine, would appear to be in general a time of year marked by transition, change, upheaval and liminality. It is also a deeply spiritual, mystical time according to the ancient Celts, the melding of the human realm with the Otherworld offering a profound, if temporary, restructuring of philosophy and cosmology. It's perhaps fitting then that summer was the time of year most beloved by Lewis Carroll, whose fond memories of summers gone by inspired his greatest work, the diptych Alice's Adventures in Wonderland and Through the Looking-Glass and What Alice Found There. Wistful, nostalgic yearning for the enchantment and spirit of the season is the dominant theme of the prefatory and closing poems of both Alice books and plays a significant role in the closing words of the first book, and, in his 1876 “Easter Greeting” to his readers, Carroll encourages them to remember the dreamlike atmosphere of a summer morning and to hold on to the knowledge that, no matter how dark life may get, we will all “one day see a brighter dawn than this”.

It's clear that summer was a special time for Lewis Carroll. He seems to have associated it with childlike innocence and strength, two virtues he treasured and valued almost above all else, an association certainly helped by his summer boat trips with Alice Liddell and her sisters, during which he would tell them stories, one of which became the first draft of Alice's Adventures in Wonderland and provided the inspiration for the poems book-ending both works. For Carroll then, summer represented a time when we could let go of our vice-grip on the dull, pragmatic realities of this world and allow our imaginations to view the universe with a rapturous, if sometimes fleeting, sense of wonderment. Perhaps curiously, both parts of the story are tied to these liminal turning points of the year: The first book is set on May 4, while the second is set on November 4, a key moment in the development of that book's theme of mirrored reversals: Alice leaves her house in early winter through the Looking-Glass and comes out the other side to find a sunny summer day. November is, of course, also the start of summer in the southern hemisphere.

Despite this, however, the most engaging connection between Carroll and the season may actually come from the pseudo-sentient life force his own text has acquired, and, in particular, that of its heroine. Perhaps contrary to popular belief, it's quite clear the Alice in Alice's Adventures in Wonderland and Through the Looking-Glass and What Alice Found There is not meant to be anything so dull as an insert avatar of Alice Liddell. Although Carroll initially told an early version of the story to Alice Liddell and her sisters, and the books' protagonist does indeed share her name and undoubtedly owes much else of herself to her, he went out of his way to remove or bury personal signifiers, identifiers and in-jokes in the versions that made it to print. John Tenniel's illustrations of Alice look nothing like Alice Liddell (apocryphally she's supposedly based on Mary Badcock, another friend of Carroll's, but this has never been fully confirmed). Indeed, an oft-overlooked fact about Through the Looking-Glass is that many of its core themes and motifs, most notably those of mirrors, opposites and inverses, were inspired not by Alice Liddell at all, but by discussions Carroll had with another Alice: His distant cousin Alice Raikes.

No, the Alice in Alice's Adventures in Wonderland and Through the Looking-Glass is something far more subtle and fascinating. She is, as often described by Carroll himself, the “dream-child”: She represents the idealized pinnacle of Carroll's worldview and embodies the purest form of the ideals he saw in his friends and wished to cultivate in all of us. As Carroll himself said, in the dramatis personae for Alice on the Stage:



"What wert thou, dream-Alice in thy foster-father's eyes? How shall he picture thee? Loving first, loving and gentle: loving as a dog (forgive the prosaic simile, but I know no earthly love so pure and perfect) and gentle as a fawn: then courteous—courteous to all, high or low, grand or grotesque, King or Caterpillar, even as though she were herself a King's daughter, and her clothing wrought of gold: then trustful, ready to accept the wildest impossibilities with all that utter trust that only dreamers know; and lastly, curious—wildly curious, and with the eager enjoyment of Life that comes only in the happy hours of childhood, when all is new and fair, and when Sin and Sorrow are but names — empty words signifying nothing!"


Alice is the dream-child. The dream-child, through an open mind, strength of character, curiosity, love and retention of the wide-eyed inspiration of a child, becomes privy to the myriad worlds and possibilities that exist alongside and around ours, but that are conceivable only to the mind of a dreamer. As she states in a famous (and often misinterpreted) quote from Through the Looking-Glass, Alice “ca'n't believe impossible things”. Logically, however, this means what she sees around her are possibilities. She might not always accept them at face value (Carroll once said that trying too hard to believe things that couldn't be true wore one's mind out), but she won't write them off as “impossible” either because impossible things can never be true and are therefore not worthy of consideration. As Alice is considerate to and of everyone and everything, she naturally has no room in her intellectual toolbox for impossibilities: The ultimate declaration and fusion of both childlike trust and mature open-mindedness.

The dream-child is, by necessity, liminal. Because Alice can conceive of and accept the possibility of anything, if not believe in the utter certainty of anything, she is the only character in any of the Wonderland works able to freely travel between worlds. The most obvious example of this is in the beginning and end of the books themselves, where Alice enters and leaves Wonderland. The famous “all just a dream” reading of the climax of Adventures in Wonderland is in fact loaded and has the potential to be dangerously misleading: Alice wakes up at the end of her adventure, yes, but what's overlooked about the scene is that it seems to be meant as a telling bit of role reversal. After retelling her escapades in Wonderland, Alice happily scampers off thinking of them all the way, and her older sister falls asleep and begins dreaming herself. However, her sister dreams of Alice's dream and, critically, remains conscious all throughout. She “half-imagines” herself in Wonderland and can appreciate the beauty and magic of this Otherworld, but only on a theoretical level, because she remains grounded in pragmatism and knows as soon as she opens her eyes it will all be revealed for the make-believe it truly is. The implication, meanwhile, is that Alice is off having more adventures even while awake, while her sister can only conceive of the lands she visits with her eyes closed. This is the difference between Alice and her sister: As the dream-child, Alice can effortlessly transgress the boundaries between worlds at will, an ability her sister has lost, having used maturity as an excuse to allow herself to be subsumed by the mundane.


"Remember, remember..."
Alice's liminality, and her ultimate destiny as the dream-child, are major plot points in Through the Looking-Glass and What Alice Found There. Once she crosses the boundary into the mirror world, the story soon becomes (put most basically) about a chess game played out with all the characters representing specific chess pieces, and about Alice's desire to become a Queen. This is most commonly read as Carroll's way of dealing with his estrangement from Alice Liddell and the inevitability of her growing up and losing her innocence, especially when taken in the context of the rather dark and funereal poems he placed at the beginning and end of the book. However, if we take it that Alice in Carroll's story is the dream-child and not simply a rote stand-in for Alice Liddell (not to mention the recorded fact that part of the book's impetus came from a different Alice entirely), we can approach this part of the narrative from a fresh perspective: I posit that the reason Alice is trying to become Queen is not because she wants to grow up and leave her wandering days behind, but because she knows the rules of chess. Alice explicitly starts the game as a White Pawn (she's told as much by the Red Queen) and knows, just as the clever mathematician and chess expert Carroll knew, that a Pawn which reaches the last rank of the board can be promoted to any piece the player chooses (a King excepted). Naturally, the obvious (dare I say logical) choice would be the Queen: The piece with the most freedom and range of movement, and thus the most powerful. This, I argue, is what Alice desires: Not the stuff and fussiness of mundane adulthood (it's telling that once she returns to her house at the end she resumes, and with abandon, the curious self-addressing nonsense logic that's been creeping into her vocabulary over the course of both books), but the freedom to control her own destiny, to transgress against and explore both worlds (thus also mastering them both) and to keep her mind open to possibilities.

It may also be worth noting that Through the Looking-Glass takes place on November 4, the day before Guy Fawkes Night in England. This date, commemorating a failed anarcho-terrorist plot to destroy the House of Lords (while it was in session), seems of particular interest to Alice, who asks her cat if she knows “what to-morrow is”. It would seem unusual to me that someone who is drawn to that kind of history and who has, in the course of two short stories, overthrown two major monarchies (for one of which she essentially spearheaded a revolutionary coup d'état), is gravely concerned with the plight of a working-class wasp, has frequent discussions with herself about ego death, carries on stimulating two-way conversations with herself and her cats, worries about breaking rules she just invented and enjoys pretending to be a hyaena and that her nanny is a fresh kill would be particularly concerned about emulating royalty and fitting in with the social conventions of the time. It would seem, if anything, that dream-Alice is an almost willfully marginal, anarchic character. She's surely not going to grow up to fit in with the Victorian elite, and she doesn't seem to completely fit in with the denizens of Wonderland either, as she is in frequent conflict with them (although this becomes more tempered and academic as of Through the Looking-Glass). She is truly liminal then; someone who can transgress and explore both worlds and, by remaining marginal, master them both as well.

This would most likely please her foster-father because, whether it be owing to his dualistic life as simultaneous children's book writer, friend to young girls, esteemed mathematician and Oxford don, or the somewhat awkward way he apparently carried himself, Lewis Carroll was most certainly a marginal figure himself who didn't quite fit in with the world of the Victorian elite he surrounded himself with. This becomes most clear knowing a great deal of both Alice books is actually little more than thinly veiled political satire skewering Victorian cultural norms and the elite, and that John Tenniel, whom he hired to illustrate them, was famous as a political cartoonist at the time. More evidence can be found in the fact Carroll found the courage to troll Queen Victoria herself: After reading a fan letter of hers in which she praised Alice's Adventures in Wonderland and requested a signed copy of his next book, Carroll proudly sent her majesty a special print of a mathematical proof. This becomes arguably textually overt during the infamous Mad Tea Party chapter where, in one scene, The Mad Hatter criticizes Alice for wearing her hair too long. This is a critique that would never be leveled against a young girl like Alice Liddell, but it is most certainly one that would be leveled against a man of status like Lewis Carroll, who was known to keep his hair longer and shaggier than was customary. Knowing all this, it becomes even more difficult for me to read dream-Alice as a straight representation of Alice Liddell and much easier in fact for me to read her as the liminal, anarcho-feminist dream-child who bears as much influence from early postmodern ideals and Carroll himself as she does from Carroll's most famous friend.

Liminality, transition, change and upheaval. These are themes and motifs the Celts believed surrounded the time of Mí Bhealtaine, and they seem to have been the trend of history as well. They are also ideals Lewis Carroll's dream-Alice seems to embody enthusiastically and triumphantly. For both him and her, summer was a magical time indeed. Another of the most important festivals for the Celts also took place during the summer: During Midsummer, the summer solstice, it was all but guaranteed the boundaries between our world and the Otherworld would finally dissolve and the aos sí would be free to roam uninhibited, their celebrations being plainly visible from atop the hills and barrows. In Forteana, it is said an unusual uptick in sightings of mysterious or unexplained phenomena coalesces around this date. For Lewis Carroll, everything that made life worth living was on display at this time of year.

It is perhaps very appropriate that the liminal, anarchic Alice is intrinsically linked to Beltane; the summer's spirited and emotionally charged beginning. Her birthday is May 4, the same as Alice Liddell's to be sure, but also a day of extreme significance to both history and the texts at hand. May 4 is also the date on which Adventures in Wonderland takes place, as we can discern from Alice's dialogue with the Mad Hatter. That same Adventures in Wonderland that begins with Alice entering a doorway in a hillside and falling out of this world into another one. That same Adventures in Wonderland that ends with Alice maintaining the ability to walk between realms. That same Adventures in Wonderland that was originally titled, among other things, Alice's Adventures Under Ground, Alice Among the Elves and Alice's Doings in Elf-Land. A remarkable coincidence, to be sure, or perhaps not, depending on one's perspective. Deeper and deeper still goes the rabbit hole of Twilight Language.

The dream-child's influence on subsequent fiction has been substantial, but perhaps nowhere has it had a more dramatic effect than on the medium of video games. The easiest example to cite would be American McGee's 2011 offering Alice: Madness Returns. McGee, a veteran designer on such groundbreaking works as the Doom and Quake series for id Software, set about rebooting Carroll's oeuvre by drawing in elements of Gothic psychological horror. The result was the cult favourite American McGee's Alice in 2000, which reconceptualizes Alice as a young mental patient institutionalized after her family burned alive in a house fire when she was a child. The game centres around Alice's attempts to rebuild Wonderland, here cast as her mental “happy place”, and thus cure her illness. Madness Returns is a direct sequel, adding in the twist that [spoilers to follow] Alice is not mentally ill at all, but is being treated that way and manipulated into believing it by opportunistic forces in Victorian society who have it in their best interests to keep her marginal.

While not necessarily my favourite interpretation of the character, Madness Returns does echo a few themes and concepts I feel are intrinsic to her: McGee casts Alice as explicitly liminal here, a number of times having his supporting cast point out she's easy to silence because she is an orphan, a loner, an outspoken rebel, a woman and someone who has been labeled insane. A major plot point in the game involves Alice coming to terms with this fact and learning how to take decisive action to use this liminality to her advantage and turn it around against her oppressors. In the end, she overthrows an authoritarian, controlling power and takes charge of her destiny, much as she did in Adventures in Wonderland and Through the Looking-Glass.

Perhaps more relevant to our purposes, though, are the following: Firstly, Alice: Madness Returns launched on June 14, 16 and 17, 2011, depending on the region. While none of these dates are Beltane or Midsummer, the season and month of June are perhaps worth noting (June having previously been mentioned in Walt Disney's 1951 film adaptation); Lewis Carroll would certainly approve, at least. Secondly, there is Madness Returns' curious focus on Eastern spirituality: An entire chapter of the game takes place in a surrealistic world based on Buddhist parables and dynastic Chinese history. Buddhism in fact becomes a major theme in the plot, as the Caterpillar takes on the role of a Tibetan monk who sets Alice on a path to a kind of Nirvana: Literally, freedom from her mental suffering brought on by her subservience to Victoriana. Recalling Alice's interesting fixation on identity and ego-death in the original books (she in turn forgets and renounces her name and famously says “I'm not myself, you see?”), it seems obvious McGee is taking this thread to its logical conclusion.

While the actual level contains elements from Indian and Japanese culture as well (most likely as a comment on and indictment of the crass Orientalism of the time in which the story is set, or perhaps of the kind McGee still observes in our society today; it's literally titled “Mysterious East”), the underlying spiritualist metaphors remain fairly clear to me. This becomes especially noteworthy when taken in the context of Alice's previous flirtation with Chinese culture and philosophy: Recall both Alice Liddell and the dream-child have May 4 as their birthday, the date famously associated with the New Culture Movement. The very movement that sought to re-examine or tear down the existing dynastic order and replace it with an emphasis on cosmopolitanism, material social progress and gender equality. Most deliciously of all, one of the leading lights of the New Culture Movement was Liang Shuming, a Buddhist scholar (and a noted critic of both Marxism and Westernism, which he believed were misguided and doomed to failure), and another, Cai Yuanpei, was an anarchist. I can't think of synchronicity more beautifully elegant than this: American McGee using the language of Buddhism to tell a story casting dream-Alice as an anarchist revolutionary, one that invokes the New Culture Movement and is steeped in a linguistic, intellectual and historical tradition of upheaval, change and transgression.

As intriguing as all this metatextual synchronicity might be, I feel the single most important thing Alice: Madness Returns contributes to the discussion actually comes by virtue of its in-jokes. Peppered throughout are subtle references and shout-outs to other groundbreaking games: There are three levels in which Alice enters a two-dimensional world on a canvas by leaping into it and must jump from platform to platform, at once invoking the original platforming adventure game Super Mario Bros. in its gameplay and its sequel Yoshi's Island in its visual style, evocative of hand-painted tapestries. Additionally, the act of Alice leaping into a painting is clearly reminiscent of Super Mario 64, in which it is a core game mechanic. Alice has a dodge move which involves her turning into a flock of butterflies and traversing great distances very quickly, much as protagonist Amaterasu does in Ōkami. Certain bonus stages involving manoeuvring a spheroid through a twisting, multi-tiered maze evoke both the arcade classic Marble Madness and certain parts of Sonic the Hedgehog, as well as SEGA's own ball-rolling puzzler Super Monkey Ball. Finally, in the bottom depths of Queensland, the ruins of the Queen of Hearts' domain in Madness Returns, Alice can find a skeletal effigy of a small figure wearing large goggles and apparently poised as if he were focusing his mind on something. Fans of the game Psychonauts, by former LucasArts visionary Tim Schafer and his studio Double Fine, will immediately recognize this as Razputin, that game's protagonist. What makes this especially clever is that Psychonauts itself features a scene where Raz follows a rabbit down a hole into a universe his mind creates.

What makes these endearing asides interesting, aside from being affectionate nods to games and designers American McGee likes and was inspired by? I think a very compelling case could be made that acknowledging these specific works, and with such clear focus and intent, is actually part of a larger phenomenon that reinforces the legacy of the dream-child in modern fiction, particularly video games. In pulling together this reference pool of beloved and hugely influential games, what McGee has essentially done is firmly place dream-Alice, via his own personal interpretation of her, into a tradition of writing and game design expressly drawn from the raw creative energy of liminality, imagination and dreams. There is a sense that not only are we bettered by our visions, but that our visions and worldviews are what give us strength and allow us to interact with all planes in a healthy and engaging manner. And furthermore, that despite the physical, emotional and psychological battering she takes in McGee's games, dream-Alice is still a role model to strive for, because only by channeling her can we unlock the true potential of ourselves, our minds, and the worlds around us. Layers and layers of Twilight Language take us back to the New Culture Movement, to Lewis Carroll and that sacred Celtic summer, to eternally revolutionary ideals such as feminism, equality, liminality, cosmopolitanism, open-mindedness and the desire and power to bring down corrupt institutions and power structures through logic, sense, love and dreams.

It makes a great deal of sense, and is very fitting, that American McGee should reference Shigeru Miyamoto's games to the extent he does and cite him as a major influence. For where video games are concerned, Miyamoto is perhaps the earliest, and certainly the most appropriate, luminary to trace this particular intellectual lineage back to. I've mentioned it before in passing and it's a matter of public knowledge, but now it's time to seriously analyse it: Miyamoto was strongly inspired by Lewis Carroll, and in particular by Alice's Adventures in Wonderland and Through the Looking-Glass. Knowing this, it's easy to spot that influence everywhere in his games. The Legend of Zelda: Ocarina of Time, for example, deals overtly with Link travelling between two worlds; one the domain of childhood and one of adulthood. The climactic scene, after all, involves Link being sent back in time to experience the childhood Zelda feels has been robbed from him by his obligations as Hero of Time. This is very reminiscent of at least a common reading of Through the Looking-Glass, but more overtly of Carroll's admiration for the strength and wisdom he found in youth. Many Zelda games also begin with Link “falling out” of one world and into another. This is not as overt as in Alice, as for him it usually involves leaving his isolated, insular village and discovering the expanse of Hyrule (evidence of Miyamoto's adoration of exploration and discovery), but the metaphor can still be read, I argue.

Despite some clear Carrollian influences in Zelda, it's still Super Mario Bros. that for me most vividly reflects Miyamoto's debt to the dream-child. The homages are everywhere: From the obvious (mushrooms making one shrink or grow depending on which one is eaten) to the slightly more subtle (Mario and Luigi travel underground and overthrow a tyrannical despot). What's more, the basic plot is almost a complete reiteration, as Mario and Luigi enter a strange new world and traverse its length as part of their quest. In fact, Super Mario Bros. can be argued to have a double invocation of the dream-child: Mario and Luigi to be sure, but more importantly the *players themselves*, whom Miyamoto is trying to reach and share his experience with. It's only logical: it makes perfect sense that someone whose most treasured memories involve exploring the forests and tunnels in and around his house and imagining them as epic adventures would be drawn to dream-Alice and her planewalking powers. It's clear that Miyamoto believes, just as Carroll did, that a childlike zeal and enthusiasm for the world, tempered with travel, wisdom and above all else a healthy respect for the power of dreams and visions, is the key to unlocking the secrets and mysteries of life and attaining enlightenment. A kind of Nirvana, if you will.


The influence Shigeru Miyamoto and Super Mario Bros. had on the development of video games as an artistic medium has been incalculable, and the motifs and concepts introduced with it have become so ingrained in the nature of what games mean to us that we oftentimes take them for granted. But if we pick up the ball American McGee seems to have started rolling and take this in the context of a larger metafictional, ontological force, it becomes clear that it most certainly does not end with Miyamoto as its genesis, because he, just like McGee, has positioned himself on a wondrous, branching path that stretches out from them both in all directions. The dream-child walks this path because everyone else on it invokes her in some form or another, but it doesn't even truly end with Alice either. Lewis Carroll gives us the language and the intellectual framework to have this discussion, but only because his dreams and experiences gave him the inspiration to codify them. A warm, mystical haze surrounds this path, reminding us that there is magic all around if we know where to look: Over the hills, across the sea and within the barrows, perhaps, but dreamy summer mornings, Golden Afternoons and Twilight Language may forever allow us to visit them, so long as there is life left in the virtues and ideals they embody.

For me, it's times like this that cause me to reflect upon my own happy summer memories, where each day was filled with the warmth and freedom that seemed to assure anything was possible. Many a day did I spend doing my best to channel the spirit of dream-Alice, effortlessly immersed in the worlds to be discovered and the ones I could create on my own. Perhaps my very early exposure to Carroll's words has coloured the way my worldview has developed. The estimable Phil Sandifer uses “rabbit hole” as a metaphor for the transfixing hold certain works can have over us and how they can draw us in, seemingly forever. If that's the case, I've no shame in being a “rabbit hole” hipster and claiming the original as my own. As a child during those treasured summers, whenever I admired an especially sunny day or observed the magic in the wind rustling in the trees, I thought of Alice. Whenever I travelled far and away by myself to be alone with my thoughts and imaginations, which, as an only child living in the rural countryside, I frequently did, I was Alice. Whenever I looked up and everything I saw was more than what I saw, it was through Alice's eyes I saw it, and to this day I still promise myself to make the time to think of her and keep her in my heart and soul. After all, one ca'n't remain a child forever, but the moment one loses a sense of wonder and hope, there's really not much use going on living at all. That's the lesson I choose to take away from the dream-child of summer: Fairies are real, magic is everywhere, love is for everything, change will happen and possibilities are there to be believed in. And, should darkness and pain find you, a brighter day than this will dawn on you someday.

Still she haunts me, phantomwise

Alice moving under skies

Never seen by waking eyes

"How do you get to Wonderland?/Over the hills and here at hand..."