FIVE
To learn what the video-game industry at large thought of itself and where it believed it was going, I went to Las Vegas, a city to which I had moved two years earlier for a ten-month writing fellowship. I had not expected to enjoy my time in Vegas but, to my surprise, I did. I liked the corporate diligence with which upper-tier prostitutes worked the casino bars and the recklessness with which the Bellagio’s fountains blasted the city’s most precious resource into the air a dozen times a day, often to the chorus of “Proud to Be an American.” Some days I sat on my veranda and watched the jets float in steady and low over the city’s east side, bringing in the ice-encased sushi and the Muscovite millionaires and the husky midwesterners and the collapsed-star celebrities booked for a week at the Mirage. I even liked the sense I had while living in Las Vegas that what separated me from a variety of apocalyptic ruins was nothing more than a few unwise decisions.
Las Vegas itself is as ultimately doomed as a colony of sea monkeys. One vexation is water, of which it is rapidly running out. Another is money, of which it needs around-the-clock transfusions. The city’s murder-suicide pact with its environment and itself is in-built, congenital. Constructed too shoddily, governed too erratically, enjoyed and abused by too many, Las Vegas was the world’s whore, and whores do not change. Whores collapse.
Collapsing was what Las Vegas in the winter of 2009 seemed to be doing. The first signs were small. From my rental car I noticed that a favorite restaurant had a sign that read RECESSION LUNCH SPECIAL. Laundromats, meanwhile, promised FREE SOAP. More ominously, one of Vegas’s biggest grocery store chains had gone out of business, resulting in several massive, boarded-up complexes in the middle of stadium-sized parking lots, as indelible as the funerary temples of a fallen civilization. Entire office parks had been abandoned down to their electrical outlets. Hand-lettered SAVE YOUR HOUSE signs marked every other intersection, while other signs, just below them, offered FORECLOSURE TOURS. At one stoplight a GARAGE SALE BEHIND YOU notice turned me around. I found a nervous middle-aged white woman selling her wedding dress ($100) and a small pile of individual bookcase shelves ($1). She smiled hopelessly as I considered her wall brackets ($.15) and cracked flowerpots ($.10), all set out on an old card table ($5).
This was not the Vegas I remembered, but then most of my time there was spent playing video games. A game I played only because I lived in Vegas was Ubisoft’s shooter Rainbow Six Vegas 2, one of many iterations of a series licensed out in the name of the old scribbling warhorse Tom Clancy. Rainbow Six Vegas 2 is mostly forgettable, though it is fun to fight your way through the Las Vegas Convention Center and take cover behind a bank of Las Vegas Hilton slot machines. It was also fascinating to see the latest drops of conceit wrung from Rainbow Six’s stirringly improbable vision of Mexican terrorists operating with citywide impunity upon the American mainland. The game’s story is set in 2010. While no one will be getting flash-banged in the lobby of Mandalay Bay anytime soon, driving around 2009 Las Vegas made the game’s casino gunfights and the taking of UNLV seem slightly less unimaginable.
Out at Vegas’s distant Red Rock hotel and casino, the Academy of Interactive Arts & Sciences was throwing its annual summit, known as DICE (Design Innovate Communicate Entertain), which gathers together—for the purpose of panels, networking, an awards show, and general self-celebration—the most powerful people in the video-game industry. With the Dow torpedoed, layoffs occurring in numbers that recalled mass-starvation casualties, and newspapers and magazines closing by the hour (including the game-industry stalwart Electronic Gaming Monthly), DICE held out the reassurance of mingling with the dukes and (rather more infrequently) duchesses of a relatively stable kingdom—though it, too, had been bloodied. Electronic Arts, the biggest video-game publisher in the world, lost something like three-quarters of a billion dollars in 2008. Midway, creator of the sanguinary classic fighting game Mortal Kombat and one of the few surviving game developers that began in the antiquity of the Arcade Age, had been recently sold for quite a bit less than a three-bedroom home on Lake Superior. One of the still-unreleased games Midway (with Surreal Software) had spent tens of millions of dollars in recent years developing is called This Is Vegas, an open-world game in the Grand Theft Auto mode that, according to some promotional material, pits the player against “a powerful businessman” who wants to turn Vegas “into a family-friendly tourist trap.” The player, in turn, must fight, race, gamble, and party his or her “way to the top.” In today’s Las Vegas, the only thing one could hope to party his way to the top of is the unemployment line, and the game’s specter of a “family-friendly” city seemed suddenly, even cruelly, obtuse.
Upon check-in every DICE attendee received a cache of swag that included a resplendent laptop carrying case, an IGN.com water bottle (instructively labeled HANG OVER RELIEF), a handsome reading light–cum–bookmark, the latest issue of the industry trade magazine Develop (President Obama somehow made its cover, too), and a paperback copy of a self-help business book titled Super Crunchers. Shortly after receiving my gift bag, I ran into a young DICE staffer named Al, who responded to my joke about the Obama cover (“Yes Wii Can”) by reminding me of the Wii that currently occupied the White House rec room. “By 2020,” Al told me, “there is a very good chance that the president will be someone who played Super Mario Bros. on the NES.” I had to admit that this was pretty generationally stirring. The question, I said, was whether the 2020 president-elect would still be playing games. Maybe he would. The “spectacle” of games, Al told me, was on its way out. Increasingly important, he said, was “message.”
Many have wondered why a turn toward maturity has taken the video game so long. But has it? Visual mediums almost always begin in exuberant, often violent spectacle. A glance at some of the first, most popular film titles suggests how willing film’s original audiences were to delight in the containment of anarchy: The Great Train Robbery, The Escaped Lunatic, Automobile Thieves. Needless to say, a film made in 1905 was nothing like a film made twenty years later. Vulturously still cameras had given way to editing, and actors, who at the dawn of film were not considered proper actors at all, had developed an entirely new, medium-appropriate method of feigned existence. Above all, films made in the 1920s were responding to other films—their blanknesses and stillnesses and hesitations. While films became more formally interesting, video games became more viscerally interesting. They gave you what they gave you before, only more of it, bigger and better and more prettily rendered. The generation of game designers currently at work is the first to have a comprehensive growth chart of the already accomplished. No longer content with putting better muscles on digital skeletons, game designers have a new imperative—to make gamers feel something beyond excitement.
One designer told me that the idea of designing a game with any lasting emotional power was unimaginable to him only a decade ago: “We didn’t have the ability to render characters, we didn’t know how to direct the voice acting—all these things that Hollywood does on a regular basis—because we were too busy figuring out how to make a rocket launcher.” After decades of shooting sprees, the video game has shaved, combed its hair, and made itself as culturally presentable as possible. The sorts of fundamental questions posed by Aristotle (what is dramatic motivation? what is character? what does story mean?) may have come to the video game as a kind of reverse novelty, but at least they had finally come.
At DICE, one did not look at a room inhabited by video-game luminaries and think, Artists. One did not even necessarily think, Creative types. They looked nothing like gathered musicians or writers or filmmakers, who, having freshly carved from their tender hides another album or book or movie, move woundedly about the room. But what does a “game developer” even look like? I had no idea. The “game moguls” I believed I could recognize, but only because moguls tend to resemble other moguls: human stallions of groomed, striding calm. A number of DICE’s hungrier attendees wore plush velvet dinner jackets over Warcraft T-shirts, looking like youthful businessmen employed by some disreputably edgy company. There was a lot of vaguely embarrassing sartorial showboating going on, but it was hard to begrudge anyone that. Most of these people sit in cubicle hives for months, if not years, staring at their computer screens, their medium’s governing language—with its “engines” and “builds” and “patches”—more akin to the terminology of auto manufacture than to that of any product with flashy cultural cachet. (In actual fact, the auto and game industries have quite a bit in common. Both were the unintended result of technological breakthroughs, both made a product with unforeseen military applications, and both have been viewed as a public safety hazard.)
There was another kind of DICE attendee, however, and he was older, grayer, and ponytailed—a living reminder of the video game’s homely origins, a man made phantom by decades of cultural indifference. An industry launched by burrito-fueled grad school dropouts with wallets of maxed-out credit cards now had groupies and hemispherical influence and commanded at least fiduciary respect. Was this man relieved his medium’s day had come or sad that it had come now, so distant from the blossom of his youth? It was surely a bitter pill: The thing to which he had dedicated his life was, at long last, cool, though he himself was not, and never would be.
Like any complicated thing, however, video games are “cool” only in sum. Again and again at DICE, I struck up a conversation with someone, learned what game they had worked on, told them I loved that game, asked what specifically they had done, and was told something along the lines of, “I did the smoke for Call of Duty: World at War.” Statements such as this tended to freeze my conversational motor about as definitively as, “I was a concentration camp guard.”
Make no mistake: Individuals do not make games; guilds make games. Technology literally means “knowledge of a skill,” and a forbidding number of such skills are required in modern game design. An average game today is likely to have as much writing as it does sculpture, as much probability analysis as it does resource management, as much architecture as it does music, as much physics as it does cinematography. The more technical aspects of game design are frequently done by smaller, specialist companies: I shook hands with the CEO of the company that did the lighting in Mass Effect and chatted with another man responsible for the facial animation in Grand Theft Auto IV.
“Games have gotten a lot more glamorous in the last twenty years,” one elder statesman told me ruefully. Older industry expos, he said, usually involved four hundred men, all of whom took turns unsuccessfully propositioning the one woman. At DICE there were quite a few women, all of whom, mirabile dictu, appeared fully engaged with rampant game talk. At the bar I heard the following: Man: “It’s not your typical World War II game. It’s not storming the beaches.” Woman: “Is it a stealth game, then?” Man: “More of a run-and-gun game.” Woman: “There’s stealth elements?” The industry’s woes often came up. When one man mentioned to another a mutual friend who had recently lost his job, his compeer looked down into his Pinot Noir. “Lot of movement this year,” he said grimly. Fallen comrades, imploded studios, and gobbled developers were invoked with a kind of there-but-for-the-power-up-of-God-go-we sadness.
Many had harsh words for the games press. “They don’t review for anyone but themselves,” one man told me. “Game reviewers have a huge responsibility, and they abuse it.” This man designed what are called “casual games,” which are typically released for handheld systems such as the Nintendo DS or PSP. In many cases developer royalties are attached to their reviewer-dependent Metacritic scores, and because game journalists can be generally relied upon to overpraise the industry’s attention-hoarding AAA titles (shooters, RPGs, fighting games, and everything else aimed at the eighteen-to-thirty-four male demographic—a lot of which games I myself admire), the anger from developers who worked on smaller games was understandable. Another man introduced himself to tell me that, in four months, his company would release its first game on Xbox Live Arcade, the online service that allows Xbox 360 owners access to a growing library of digitally downloaded titles. This, he argued, is the best and most sustainable model for the industry: small games, developed by a small group of people, that have a lot of replay value, and, above all, are fun. According to him, pouring tens of millions into developing AAA retail titles is part of the reason why the EAs of the world are bleeding profits. The concentration on hideously expensive titles, he said, was “wrong for the industry.” (For one brief moment I thought I had wandered into a book publishing party.)
Eventually I found myself beside Nick Ahrens, a choirboy-faced editor for Game Informer, which is one of the sharpest and most cogent magazines covering the industry. “These guys,” Ahrens said, motioning around the room, “are using their childhoods to create a business.” The strip-mining of childhood had taken video games surprisingly far, but childhood, like every natural resource, is exhaustible.
DICE’s first panel addressed the tricksy matter of “Believable Characters in Games.” As someone whose palm frequently seeks his forehead whenever video-game characters have conversations longer than eight seconds, I eagerly took my seat in the Red Rock’s Pavilion Ballroom long before the room had reached even 10 percent occupancy. The night before there had been a poker tournament, after which a good number of DICE attendees had carousingly traversed Vegas’s great indoors. Two of my three morning conversations had been like standing at the mouth of a cave filled with a three-hundred-year-old stash of whiskey, boar meat, and cigarettes.
“Believable characters” was an admirable goal for this industry to discuss publicly. It was also problematical. For one thing, the topic presupposed that “believability” was quantifiable. I wondered what, in the mind of the average game designer, believability actually amounted to. Oskar Schindler? Chewbacca? Bugs Bunny? Because video-game characters are still largely incapable of actorly nuance, they frequently resemble cartoon characters. Both are designed, animated, and artisanal—the exact sum of their many parts. But games, while often cartoonish, are not cartoons. In a cartoon, realism is not the problem because it is not the goal. In a game, frequently, the opposite is true. In a cartoon, a character is brought to life independent of the viewer. The viewer may judge it, but he or she cannot affect it. In a game, a character is more golemlike, brought to life first with the incantation of code and then by the gamer him-or herself. Unlike a cartoon character, a video-game character does not inhabit closed space; a video-game character inhabits open situations. For the situations to remain compelling, some strain of realism—however stylized, however qualified—must be in evidence. The modern video game has generally elected to submit such evidence in the form of graphical photorealism, which is a method rather than a guarantee. By mistaking realism for believability, video games have given us an interesting paradox: the so-called Uncanny Valley Problem, wherein the more lifelike nonliving things appear to be, the more cognitively unsettling they become.
The panel opened with a short presentation by Greg Short, the co-founder of Electronic Entertainment Design and Research. What EEDAR does is track industry trends, and according to Short he and his team have spent the last three years researching video games. (At this, a man sitting next to me turned to his colleague and muttered, “This can’t be a good thing.”) Short’s researchers identified fifteen thousand attributes for around eight thousand different video-game titles, a task that made the lot of Tantalus sound comparatively paradisaical. Short’s first PowerPoint slide listed the lead personas, as delineated by species. “The majority of video games,” Short said soberly, “deal with human lead characters.” (Other popular leads included “robot,” “mythical creature,” and “animal.”) In addition, the vast majority of leading characters are between the ages of eighteen and thirty-four. Not a single game EEDAR researched provided an elderly lead character, with the exception of those games that allowed variable age as part of in-game character customization, which in any event accounted for 12 percent of researched games. Short went on to explain the meaning of all this, but his point was made: (a) People like playing as people, and (b) They like playing as people that almost precisely resemble themselves. I was reminded of Anthony Burgess’s joke about his ideal reader as “a lapsed Catholic and failed musician, short-sighted, color-blind, auditorily biased, who has read the books that I have read.” Burgess was kidding. Mr. Short was not, and his presentation left something ozonically scorched in the air. I thought of all the games I had played in which I had run some twenty-something masculine nonentity through his paces. Apparently I had even more such experiences to look forward to, all thanks to EEDAR’s findings. Never in my life had I felt more depressed about the democracy of garbage that games were at their worst.
The panel moderator, Chris Kohler, from Wired magazine, introduced himself next. His goal was to walk the audience through the evolution of the video-game character, from the australopithecine attempts (Pong’s roving rectangle, Tank’s tank) to the always-interesting Pac-Man, who, in Kohler’s words, was “an abstraction between a human and a symbol.” Pac-Man, Kohler explained, “had a life. He had a wife. He had children.” Pac-Man’s titular Namco game also boasted some of the medium’s first cut scenes, which by the time of the game’s sequel, Ms. Pac-Man, had become more elaborate by inches, showing, among other things, how Mr. and Ms. Pac-Man met. “It was not a narrative,” Kohler pointed out, “but it was giving life to these characters.” Then came Nintendo’s Donkey Kong. While there was no character development to speak of in Donkey Kong (“It’s not Mario’s journey of personal discovery”), it became a prototype of the modern video-game narrative. In short, someone wanted something, he would go through a lot to get it, and his attempts would take place within chapters or levels. By taking that conceit and bottlenecking it with the complications of “story,” the modern video-game narrative was born.
How exactly this happened, in Kohler’s admitted simplification, concerns the split between Japanese and American gaming in the 1980s. American gaming went to the personal computer, while Japanese gaming retreated largely to the console. Suddenly there were all sorts of games: platformers, flight simulators, text-based adventures, role-playing games. The last two were supreme early examples of games that, as Kohler put it, have “human drama in which a character goes through experiences and comes out different in the end.” The Japanese made story a focus in their increasingly elaborate RPGs by expanding the length and moment of the in-game cut scene. American games used story more literarily, particularly in what became known as “point-and-click” games, such as Sierra Entertainment’s King’s Quest and Leisure Suit Larry, which are “played” by moving the cursor to various points around the screen and clicking to the result of story-furthering text. These were separate attempts to provide games with a narrative foundation, and because narratives do not work without characters, a hitherto incidental focus of the video game gradually became a primary focus. With Square’s RPG–cum–soap opera Final Fantasy VII in 1997, the American and Japanese styles began to converge. A smash in both countries, Final Fantasy VII awoke American gaming to the possibilities of narrative dynamism and the importance of relatively developed characters—no small inspiration to take from a series whose beautifully androgynous male characters often appear to be some kind of heterosexual stress test.
With that, Kohler introduced the panel’s “creative visionaries”: Henry LaBounta, the director of art for Electronic Arts; Michael Boon, the lead artist of Infinity Ward, creators of the Call of Duty games; Patrick Murphy, lead character artist for Sony Computer Entertainment, creators of the God of War series; and Steve Preeg, an artist at Digital Domain, a Hollywood computer animation studio. The game industry is still popularly imagined as a People’s Republic of Nerds, but these men were visual representations of its diversity. LaBounta could have been (and probably was) a suburban dad. The T-shirted Boon could have passed as the bassist for Fall Out Boy. Murphy had the horn-rimmed, ineradicably disgruntled presence of a graduate student in comparative literature. As for the interloping Preeg, he would look more incandescent four nights later while accepting an Academy Award for his work on the reverse-aging drama The Curious Case of Benjamin Button.
LaBounta immediately admitted that “realistic humans” are “one of the most difficult things” for game designers to create. “A real challenge,” he said, “is hair.” Aside from convincing coifs, two things video-game characters generally need are what he called “model fidelity” (do they resemble real people?) and “motion fidelity” (do they move like real people?). Neither, he said, necessarily corresponded to straight realism. Sesame Street’s Bert and Ernie, for example, had relatively poor model fidelity but highly convincing motion fidelity. As for the Uncanny Valley Problem, LaBounta said, “just adding polygons makes it worse….The Holy Grail in video games is having a character move like an actual actor would move. We’re not quite there yet.” Getting there would be a matter of “putting a brain in the character of some intelligence.” I was about to stand up and applaud—until he went on. One thing that routinely frustrated him, LaBounta said, was when a video-game character walks into a wall and persists, stupidly, in walking. Allowing the character to react to the wall would be the result of a “recognition mechanic,” whereby the character is able to sense his surroundings with no input from the player. Of course, this would not be intelligence but awareness. The overall lack of video-game character awareness does lead to some singularly odd moments, such as when your character stands unfazed before the flaming remains of the jeep into which he has just launched a grenade. What that has to do with character, I was not sure. If “personality is an unbroken series of successful gestures,” as Nick Carraway says in The Great Gatsby, the whole question of believable characters may be beyond the capacity of what most video games can or ever will be able to do—just as it was for James Gatz.
In Boon’s view, when talking about believable characters, one had to specify the term. Were you talking about the character the gamer controls or other, nonplayable characters within the game? A great example of the former, Boon believed, was the bearded and bespectacled Gordon Freeman from Valve’s first-person shooter Half-Life. Part of the game’s genius, Boon said, is how Gordon is perceived. In the game’s opening chapters “everyone treats you as unreliable, and you feel unreliable yourself. By the end, people treat you differently, and you feel different.” Gordon’s journey thus becomes your own. (Also, throughout the game, Gordon does not say a word.) As for believable nonplayable characters, Boon brought up two of the most memorable: Andrew Ryan from BioShock and GLaDOS from Valve’s Portal. Both games are shooters, or neoshooters, in that BioShock has certain RPG elements and the “gun” one fires in Portal is not actually a weapon; both characters are villains. While the villains in most shooters exist only to serve as bullet magnets, Ryan (a sinister utopian dreamer) and GLaDOS (an evil computer) are of a different magnitude of invention. The gamer is denied the catharsis of shooting either; both characters, in fact, though in different ways, destroy themselves. For the vast majority of both games, Ryan is present only as a presiding force and GLaDOS only as a voice. These are characters that essentially control the world through which the gamer moves while raining down taunts upon him. In GLaDOS’s case, this is done with no small amount of wit. In her affectless, robotic voice, GLaDOS attempts, whenever possible, to destroy the gamer’s self-esteem and subvert all hope of survival. “GLaDOS is so entertaining,” Boon said, “I enjoy spending time with her—but I also want to kill her.” The death of Andrew Ryan, on the other hand, is one of the most shocking, unsettling moments in video-game history. It has such weird, dramatic richness not because of how well Andrew Ryan’s hair has been rendered (not very) but because of what he is saying while he dies, which manages to take the game’s themes of control and manipulation and throw them back into the gamer’s face. These two characters have something else in common, which Boon did not mention: They are written well. They are funny, strange, cruel, and alive. It is also surely significant that the controlled characters in BioShock and Portal are both nameless ciphers of whom almost nothing is learned. They are, instead, means of exploration.
Patrick Murphy jumped in here to say, “It’s not whether the character is realistic or stylized; it’s that he’s authentic.” In illustration he brought up Kratos of his company’s own God of War series. Kratos is a former Spartan captain who, after being slain in combat by Ares, manages to escape Hades and declare war on the gods. Among the most amoral and brutal video-game protagonists of all time, Kratos, in Murphy’s words, “doesn’t just stab someone; he tears him in half. That helps sell him. Veins bulge out when he grabs things. It gives him an animal feeling that’s really necessary.” The narrative of the God of War games is set on what game designers refer to as “rails,” meaning that Kratos’s story is fixed and the narrative world is closed. The gamer fights through various levels, with occasional bursts of delivered narrative to indicate that the story has been furthered. It probably goes without saying that no one plays the God of War games to marvel at the subtlety of their storytelling, which is pitched no higher than that of a fantasy film. It is a game that one plays to feel oneself absorbed into a malignant cell of virtual savagery. Kratos’s believability is served by the design and effect of the gameplay rather than the story. In short, he has to look great, which provides a fizzy sort of believability. If Kratos does not look great in purely creaturely ways, the negligible story will be dumped into the emotional equivalent of a dead-letter office. This is one of the most suspect things about the game form: A game with an involving story and poor gameplay cannot be considered a successful game, whereas a game with superb gameplay and a laughable story can see its spine bend from the weight of many accolades—and those who praise the latter game will not be wrong.
Steve Preeg, by now wearing a slightly worried expression, opened by admitting that he was not a gamer and professed to know very little about games. But he knew a bit about believability and character. To explain the difficulty he had with animation in The Curious Case of Benjamin Button, he showed us “draft” shots from the digital process by which Brad Pitt’s character was aged. In the earliest attempts Pitt looked undead—utterly terrifying. Just shifting the width of his eyes a tiny bit, Preeg said, made the difference between “psycho killer” and “a little boy who just got home.” He showed us how he did this, and the difference was indeed apparent. Preeg then turned philosophical. In Hollywood, he said, “we have very clear goals.” He worked under a director, for instance, had a clear idea of the script, and knew whether sad or happy music would be playing under the scenes he was required to digitally augment. Every eye-widening and face-aging task he was given as an animator had a compelling dramatic context attached to it, which he used to guide his animation decisions. His art was always guided. “Your characters,” he said, turning to the panel, “have to be compelling in very different ways, depending on what the audience wants to do.” Preeg was silent for a moment. Then he said, “You guys are going to have a very, very difficult time.”
After the panel, I sought out the man who, during its EEDAR portion, turned to his colleague and said, “This can’t be a good thing.” His name was John Hight, and he was the director of product development for Sony Computer Entertainment, Santa Monica Studios. One of the projects he was currently overseeing was God of War III, a game whose budget was in the tens of millions of dollars. Yet he was no pontiff of the AAA title. Hight had also greenlit and helped fund thatgamecompany’s downloadable PlayStation 3 title Flower, a beautiful and innovative game—a stoner classic, really—in which the player assumes control of a windblown petal and floats around, touching other flowers and gathering their petals and eventually growing into a peaceful whirling versicolor maelstrom. (When faced with releasing the tranquilizingly mellow Flower, no one at Sony could think of an apt category under which to market it. Hight called it a “Zen” game, and that was how it was shipped. Only later did anyone realize that the category was Hight’s invention.) Hight, who was in his forties, had worked on dozens of games over the course of his career, from RPGs to shooters to flight simulators to action-adventure games. When I asked him why he had scoffed during the EEDAR presentation, he said, “The scary thing is that someone is going to enlist that data to find the ideal game that hits all the proper points, and they’re going to convince themselves—and a lot of bean counters—that this is a surefire way to make money.” When I asked if he could imagine any circumstances in which such data would be useful, he said, “I’m sure that our marketing people will at some point be interested, and if it helps them have the courage of my convictions, that’s okay.”
Hight’s first title was a video-game version of the old Milton Bradley tabletop classic Battleship for the Philips CD-i (a doomed early attempt at an all-purpose home-entertainment center that was launched in 1991 and discontinued seven years later). As the game’s producer, coder, animator, and writer, Hight had no legal claim to develop Battleship when he began his work on it. When the time came, he simply crashed a toy expo, walked up to the Milton Bradley booth, and, after a short demonstration, strolled away with the rights. The entire game cost him $50,000. (“Very exciting,” he said.) He had also been around long enough to remember an argument he had in 1994 with a colleague about whether this new genre known as the shooter was “here to stay.” Hight told me, “Doom II had come out and done pretty well, but there really weren’t many companies doing first-person shooters. I think it was seen as a novelty.” Hight’s colleague had insisted to him, “What more can you do? You’re sort of just pointing a gun and shooting it.” Hight was able to convince the man otherwise and proceed with his shooter. It became Studio 3DO’s Killing Time, an innovative shooter for its day in terms of its gameplay (it was among the first shooters that allowed the player to crouch), setting (the 1930s), and relatively knowledgeable employment of an outside mythology (namely, Egyptology).
“When I first got into the industry,” Hight said, “there were a lot of really hardcore gamers, and we were basically making games for us. We weren’t making games for an audience. It was for us. And we got so specialized and so stuck in our thinking.” These men’s minds were typically scattered with the detritus of Tolkien, Star Wars, Dungeons & Dragons, Dune—and that was if they had any taste. Many of the first relatively developed video-game narratives were like something dreamed up by an imaginative child (a portal to Hell…on Mars! Hitler…as a cyborg!), with additions by an adult of more malign preoccupations. The writing in such games was an afterthought. For Killing Time, Hight told me, he “was literally writing the dialogue the day” of the recording session with the actors, one of whom approached Hight after the session was over and asked, “Do you guys ever just write a script and give it to the actors ahead of time?” A decade and a half later, Hight was still abashedly shaking his head. “Back in the day, most designers insisted writers really couldn’t understand how to develop good, interactive fiction. So there was this designer–writer divide that the game industry sort of started out with.”
When I asked how it could be that a panel putatively devoted to believable characters did not manage to discuss writing even once, Hight gently averred: “That panel was mostly composed of technical artists. The roots of games are in technical people. My background is computer science. I was a programmer for ten years. That’s kind of how we approached it. How can we make this thing run faster? How much more can we put into the game? How can we make the characters look better?”
With its origins in the low-ceilinged monasteries of computer programming, video-game design is, in many ways, an inherently conservative medium. The first game designers had to work with a medium whose limits were preset and virtually ineradicable. There were innumerable things games simply could not do. In this sense, it is little wonder that the people who were first drawn to computers were also drawn to science-fiction and fantasy literature. As Benjamin Nugent notes in his cultural history American Nerd, sci-fi and fantasy literature is almost always focused “on the mechanics of the situation. A large part of the fun of reading a sci-fi series is about inputting a particular set of variables (dragon-on-dragon without magic) into a model (the Napoleonic Wars) and seeing what output you get.” A video game is first and foremost a piece of software (which is why many magazines do not dignify games with italicized titles). The video-game critic Chris Dahlen, who by his own admission comes out of both a software and an “artsy-fartsy” background, argues that games “don’t pose arguments, they present systems with which to interact.” In this view, games are not and cannot be stories or narratives. Rather, some games choose to enable the narrative content of their system while others do not. A disproportionate number of game designers at work today come out of a systems, programming, or engineering background, which has in turn helped to shape their personalities and interests. One result of this is that it forces designers to imagine games from the outside in: What variables do I inject into the system to create an interesting effect?
For any artist who does not sail beneath the Jolly Roger of genre, this is an alien way to work. As someone who attempts to write what is politely known as literary fiction, I am confident in this assertion. For me, stories break the surface in the form of image or character or situation. I start with the variables, not the system. This is intended neither to ennoble my way of working nor to denigrate that of the game designer; it is to acknowledge the very different formal constraints game designers have to struggle with. While I may wonder if a certain story idea will “work,” this would be a differently approached and much, much less subjective question if I were a game designer. A game that does not work will, literally, not function. (There is, it should be said, another side to the game-designer mind-set: No matter how famous or well known, most designers are happy to talk about how their games failed in certain areas, and they will even explain why. Not once in my life have I encountered a writer with a blood-alcohol content below .2 willing to make a similar admission.)
When I asked Hight about these systemic origins of game design, he added that the governing systems of design have, as time has passed, become less literal and more emotional. “I think,” he said, “the system’s there because too many developers have failed miserably in the chaotic pursuit of something new. They have this fear of failure, so it’s like, ‘Okay, let’s fairly quickly figure out this system.’ We typically call it the ‘game mechanic,’ or the ‘pillars’ of the game. Those are our constraints, and from that we’ll build around it. It’s tens of millions of dollars for people to make a game. You might be the person who wasted twenty million dollars, and this is the end of your career! So you do something that’s based on a proven design or proven gameplay. Why do we have so many first-person military shooters? Because it’s proven those things can sell.”
Video games, I told Hight, are indisputably richer than they have ever been in terms of character and narrative and emotional impact, and anyone who says otherwise has not been playing many games. Unfortunately, they began in a place of minus efficacy in all of the above, and anyone who says otherwise has probably never done anything but play games. After a kleptomaniacal decade of stealing storytelling cues from Hollywood (many games are pitched to developers in the form of so-called rip-o-matics: spliced-together film scenes that offer a rough representation of what the game’s action will feel and look like), games have only begun to figure out what it is they do and how exactly they do it. Hearteningly, there seems to be some industry awareness that writing has a place in game design: One DICE presentation listed the things the industry needed to do, among them the “deeper involvement of virtual designers (and writers) into the game creative process.” Alack, this banishment of the writer to the parenthetical said perhaps too much about game-industry priorities. As to whether developers could put aside their traditional indifference when it came to writing, I told Hight that I had my doubts. At nearly every DICE presentation, matters of narrative, writing, and story were discussed as though by a robot with a PhD in art semiotics from Brown. Perhaps, though, this was being too hard on the industry, which began as an engineering culture, transformed into a business, and now, like a bright millionaire turning toward poetry, had confident but uncertain aspirations toward art. The part of me that loves video games wants to forgive; the part of me that values art cannot.
Hight agreed that the audience “won’t be forgiving forever,” allowed that the dialogue in many games was “pretty tedious,” and admitted that almost all games’ artificial intelligence mechanisms delivered only half of what that term promised. But what game designers were trying to do was, he reminded me, incredibly difficult and possibly without parallel in the history of entertainment. The “weird artificial setups” of video-game narrative would begin to fade as AI improved, and already he was seeing “more emphasis” put upon writing in games. “But at the same time,” Hight said finally, “our audience is saying, ‘All right, what else? We’re getting bored.’”
A few nights later, at DICE’s twelfth annual Interactive Achievement Awards, which are the closest equivalent the industry has to the Oscars, several interesting things happened. The first was watching the stars of game design subject themselves to a red-carpet walk, most of them looking as blinkingly baffled as Zelig in the glare of the assembled press corps’s klieg lights. The second was the surprisingly funny performance turned in by the show’s host, Jay Mohr (“There are a lot of horny millionaire men not used to the company of women here. If you’re a woman and you can’t get laid tonight, hang up your vagina and apologize”). The third was the fact that Media Molecule’s LittleBigPlanet, a game aimed largely at children, the big selling point of which is its inventive in-game tools that allow gamers to design playable levels and share them with the world, and which has no real narrative to speak of, won nearly every award it was up for, including, to the audible shock of many in the audience, Outstanding Character Performance.
The character in question is a toylike calico gremlin known as Sack Boy. There is no question that Sack Boy is adorable and that LittleBigPlanet is a magnificent achievement—weird and funny, with some of the most ingeniously designed levels you will find in any game—but it was also indefatigably familiar in terms of its gameplay, the most interesting feature of which is the application of real-world physics to a world inhabited by wooden giraffes, doll-like banditos, and goofily unscary ghosts. LittleBigPlanet’s Mongolian domination of the awards became so absurd that, by show’s end, Alex Evans, Media Molecule’s co-founder, needed a retinue of trophy-shlepping Sherpas to hasten his exit from the stage.
The titles it bested for Console Game of the Year—Fallout 3, Metal Gear Solid 4, Gears of War 2, and Grand Theft Auto IV—were warheads of thematic grandiosity. The bewitching but more modest LittleBigPlanet’s surfeit of awards felt like an intraindustry rebuke of everything games had spent the last decade trying to do and be—and a foreclosure of everything I wanted them to become. The video game, it suddenly felt like, had been searching for a grail that was so hard to find because it did not actually exist.