The days of the PC-only game developer are about as alive as the days of the dinosaurs -- if you're a PC-only game developer these days, the chances are good that you're making casual games, not full-on triple-A shooter or RPG titles. Even RTS games, the genre with devotees that cling doggedly to the "PC is the only viable platform for RTS!" mantra, have transformed with the recent release of Halo Wars by Ensemble.
And that brings us to today's Good UI Principles:
Principle 1: It is far easier and cheaper to develop a multi-platform UI for the console version first and use it as the basis for the PC version than it is to develop the UI for the PC version and back-develop it for the console version.
Principle 2: In the era of multi-platform (and thus multi-input-device) games, UI coding language and algorithms need to be changed from language that uses words like "mouseover" and "rollover" to "on-actions," "on-exits," and "on-selects."
Let's talk about the first principle, well, first. Over the course of my career I've worked on three titles that were multi-platform with different platform foci: Quake 4, whose pedigree was an incredibly hardcore PC shooter fan base but which would be an Xbox 360 launch title; Marvel: Ultimate Alliance, a superhero comic title with a decided focus on the console version with a PC port; and Raven's currently-in-development project Wolfenstein, in which the console version would be the emphasis but the PC version would be no slouch of a port.
Quake 4 was a wild UI ride in which we learned, like green soldiers on a battlefield, just how different developing UI for a console would be. We naively assumed that we could easily make one UI and simply have switches for the different versions, making any necessary art swaps for things like the buttons of the 360 controller and pushing things out or in on the screen to handle the safe area if we were running on the console instead of the PC. Wow, were we wrong.
It turned out that developing the UI for Quake 4's 360 version would be such a monumentally different task from the PC version that we ended up having to devote one person to the PC UI full-time and one person to the 360 UI full-time, with the latter person being your humble author. There were whole swathes of the 360 UI that needed to be created from whole cloth that would never appear in the PC version, such as the Xbox Live matchmaking functionality. We knew we didn't want the PC version to "look too consoley," since the primary audience would be the hardcore fans of the Quake series, so we had to create entirely different art and navigational flow for the 360 version. The result was that we branched the versions at a mid-point in development, and it turned out to be a pretty good solution.
The drawback was that any core game functionality that was added that affected UI had to be duplicated by hand in both UI versions. Additionally, branching the UI had the psychological effect of making us think of the two versions as separate projects, so we began developing art assets independently until any section the two UIs might have shared, had we kept them together, no longer actually looked the same.
I look back on our work on Quake 4 with a lot of pride because, with Raven having been a PC-only developer to this point, I felt like we handled our first console title -- a significant launch title, to boot -- really well and amassed an incredible amount of knowledge about cross-platform UI in the process that we could use for future titles.
I tried to apply those lessons to the work I did on Wolfenstein (and just for reference, that title has been in development long enough since I left that my work is probably no longer visible in it, so applying anything from here to the title when it's released probably isn't going to translate). This time, I was the only UI designer on the project and wanted to keep the amount of work I'd have to do to maintain cross-platform UIs to a minimum.
One of the first lessons I tried to apply was to get rid of the notion of a "consoley UI" versus a "PC-looking UI." My feeling is that good UI is good design, and good design is universal. Good design transcends these outdated notions we have of UI needing to look a certain way simply because you have a controller in your hand and not a mouse. (How the UI actually functions, on the other hand, will become part of Principle 2 momentarily.) If the UI has good aesthetic design and functions smoothly, the player isn't going to think about whether it looks like a console UI or a PC UI.
The second lesson I learned was that the game's design is directly proportional to how miserable or pleasant the UI designer's job is going to be. (I'm pretty sure there's an equation that can be derived here, but I'll leave that as an exercise for the reader.) What's the limiting reagent in a game design as it relates to the UI? The answer is the input mechanism -- if your primary input mechanism has eight buttons and two thumbsticks, that's all you have to work with, and your game will be designed to fit that. If your primary input mechanism is a mouse and keyboard, you have whole vistas open to you.
So a corollary to our Principle needs to be stated:
In any title being developed simultaneously for multiple platforms, your limiting input reagent must be the input device with the fewest buttons.
You may want to think of your game as being played via mouse and keyboard first -- maybe because that's your preferred method, or maybe because as a developer you've always worked in the PC medium -- but you can't, even if your main SKU is the PC version. If you do, the game will be designed to use more buttons and input mechanisms than your limiting-reagent input device has. And once that's happened, you'll be forced down one of two roads:
1. Spend time and money you don't have to modify your game design so it plays differently on two different platforms.
2. Force the UI to do weird and strange things it's not designed to do to handle the fact that you have more input requirements than you have buttons to support.
The first one is just going to lead to complaints by players that one platform is clearly superior to the other and that you rushed out a cheap, shoddy port to make a quick buck, even if that's not what you did. The second one is going to break a cardinal rule I have in the Good UI Principles list, which is that you can never, ever get good UI design or good game design by forcing the UI design to make up for bad decisions in the game design. Both the UI and the game design will suffer for it. The shorter and sweeter version is this: UI is not a gameplay Band-Aid. (This is where the aforementioned equation comes in.)
The end result of this was that by the time I'd left Raven for something new I'd managed to create a single UI for both PC and console platforms that could switch easily depending on which platform you were running, with few areas that needed special treatment just for a specific platform. But we began running into another issue, one that highlighted the second part of today's Good UI Principle.
We realized that the development language our UI was being coded in was filled with references to mouseovers, mouseouts, escape key presses, and the like. Every time a button in the UI was created we had to do a mental switch to flip between the association of, say, the press of the D-pad down to a mouse rollover on a button. Handling the concept of "going back" was an issue -- does that always equate to hitting Escape, which we had a function for, or can you ever use Escape in a different context? Is the function mapping of the "go back a screen/widget" always one to one between the action and the Escape handler, or can Escape ever be mapped to multiple functions? This was just one example of many we came across.
The language wasn't initially mis-designed; it was simply a product of the lengthy transition that game technology took between PC-only titles and PC-plus-console titles. UI had always been the forgotten child of game development, often getting pushed to the last moment and handed off to whichever artist and junior programmer were free at the time. That mouse-centric language was simply the only development tech we had.
But even today I see the same mouse-centric language being used in the latest version of ActionScript for use in Flash, and in the GUI tech of non-early-FPS games. It surprises me, since even Flash-based games these days are appearing on console and cellphone platforms. It seems to me that one of the best ways to develop good cross-platform UI with good design that successfully transcends the notion of a platform's aesthetic is to create a UI development tool that removes any association with an actual input device and works in the higher-level abstract terms of actions, selections, and navigation trees. Once you've put those types of functions into the hands of a UI developer they stop thinking of rollovers and mouseouts and start thinking, "what has to happen visually when we want to select this element?" -- not how the player is actually selecting it. And those design decisions can begin to transcend the notion of platform and input device, leading to good, tight, solid UI design that's not only accessible for the player, but infinitely cheaper, easier, and faster for the UI designer to work on.
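To make the idea concrete, here's a minimal sketch of what device-agnostic UI events might look like. All of the names here (UIAction, UIElement, the event strings) are hypothetical illustrations, not any particular engine's API: the point is simply that widgets register handlers against abstract actions, and each platform supplies one mapping from raw device events to those actions.

```typescript
// Abstract actions the UI layer understands -- no mention of mice,
// keys, or gamepad buttons anywhere in widget code.
type UIAction = "select" | "back" | "navigate-up" | "navigate-down";

// One mapping per platform from raw device events to abstract actions.
// The UI code never sees the raw event names.
const mouseKeyboardMap: Record<string, UIAction> = {
  "mouse-click": "select",
  "key-escape": "back",
  "key-up": "navigate-up",
  "key-down": "navigate-down",
};

const gamepadMap: Record<string, UIAction> = {
  "button-a": "select",
  "button-b": "back",
  "dpad-up": "navigate-up",
  "dpad-down": "navigate-down",
};

// Widgets register handlers against actions, not devices.
class UIElement {
  private handlers = new Map<UIAction, () => void>();

  on(action: UIAction, handler: () => void): void {
    this.handlers.set(action, handler);
  }

  // The input layer translates a raw event via the active device map
  // and dispatches the abstract action; returns whether it was handled.
  dispatch(rawEvent: string, deviceMap: Record<string, UIAction>): boolean {
    const action = deviceMap[rawEvent];
    const handler = action ? this.handlers.get(action) : undefined;
    if (handler) {
      handler();
      return true;
    }
    return false;
  }
}
```

With this shape, answering "does Escape always mean go back?" becomes a question about one line in one device map, rather than a search through every screen's event handlers.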
Okay, I admit that I've only just started these essays and I've already blatantly lied to you. Not only did I not post my first one over the weekend like I promised, I'm also doing a bait and switch on the topic. Instead of talking about the development path between console and PC UI first, there's something that came up in conversation today that I'd rather devote the first entry to, and its position on my list is 2b (because it kind of dovetails into a couple of other points I had made earlier on my list). Here's the tenet:
Giving the player all information at all times is not only not advantageous, but actively damaging to the game experience.
It's a subpoint on my list because it dovetails into a larger point, which is that a UI's job is to present only the information you need when you actually need it, and that information shouldn't crowd your screen at any other time.
Gears of War, both one and two, by Epic Games serve as a fantastic example of this point. In terms of UI the screen has practically nothing on it -- and very frequently has actually nothing on it -- and elements only fade in when you need the information. Even your weapon and ammo information will fade out if you're walking through an area of the world where you won't be experiencing any combat. There's no minimap at all. (And thank God for that -- I'll save my minimap rant for another essay.) The UI for Gears is so minimal that you'd probably forget it even had one, giving you an unfettered view to the actual game and helping to break down that barrier to full immersion that a UI can so often be.
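The contextual-visibility idea can be sketched in a few lines. This is not how Gears of War actually implements it -- I have no insight into Epic's code -- just an illustration of the pattern: a HUD element's visibility is a pure function of game state, shown only while its information is relevant, plus a short linger window so changes register with the player. The state fields and the three-second default are my own assumptions.

```typescript
// Illustrative game state for deciding HUD visibility.
interface GameState {
  inCombat: boolean;                // is the player currently fighting?
  secondsSinceAmmoChange: number;   // time since the ammo count last changed
}

// Show the ammo readout while fighting, or briefly after the count
// changes; otherwise keep the screen clear of it entirely.
function ammoHudVisible(state: GameState, lingerSeconds = 3): boolean {
  return state.inCombat || state.secondsSinceAmmoChange < lingerSeconds;
}
```

The render loop then fades the element in or out based on this boolean, so the rule about *when* information appears lives in one place instead of being scattered across the HUD.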
At the other end of the spectrum is a game I once worked on (a past project that will remain anonymous so as to cover my ass) in which the designers and programmers insisted on putting everything you could possibly want to know in the game's HUD, all visible at once. The weapon information was required to be up at all times, along with your secondary weapon information. A request was even made of me to put five concentric directional indicators around the crosshair icon right in the center of the screen, with large timers for the bombs you had to disarm. These joined the three grenade indicators -- added despite my protests -- that already rotated around the central crosshair, rendering the entire central area of the screen (a rather important area in a shooter) completely unusable for seeing the actual game. There was a minimap chock full of ten or more different icons, one of which even told you which direction a person you could sneak up on was facing, so that if you were around the corner and couldn't actually see them you wouldn't accidentally sneak up on them while they were facing you. (Now tell me, what's the point of a stealth mechanic if you're just going to give the player all of the information they need to bypass the actual stealthing?)
Too. Much. Information.
As a UI designer I frequently encounter heavy resistance when I try to strip a UI down and make it lean and mean and contextually visible, and this is because there's a switch that tends to happen in these game designers when they go from players to designers -- they forget how to look at a game through a player's eyes and instead view it through the eyes of a designer. As designers, they want to give the player all the tools they think the player will need to enjoy playing the game, and they erroneously think that giving them lots of information at all times is helpful when in fact it can create confusion and clutter, rendering the experience far less enjoyable. Who wants to view the game world through a visible area the size of a postage stamp?
The wonderful side effect of going minimal and contextually visible is that when you hide the vast majority of your UI and you show only what the player needs when they need it, you change the level of importance of that information drastically -- what would have been lost in a sea of other UI elements on the screen (or what would have had to be made artificially larger, brighter, and thus more annoying) now becomes the singular focus, the most important thing the player needs to know right now. You automatically draw attention to something without having to do any work to set it apart from ten other pieces of information vying for the player's attention. And after all, which would you rather do: make one item visible on the screen, or have to add multiple layers of attention-grabbing art that takes up space and requires you to have to shuffle the rest of your UI jigsaw pieces just to fit it into your "640 by 480 minus the safe area and oh yeah how's it gonna look on the PC" UI?
You're likely saying to yourself, "it's all well and good to say that Gears did this, but they're a simple shooter." It's a fair point, but while I don't work for Epic and don't have any insight into their UI and game design process, I'm pretty sure that the simplicity of Gears' gameplay is not the reason for the simplicity of its UI -- it's because they made smart decisions about the flow of their gameplay and how many things would be thrown at the player at one time.
Take the lack of a minimap, for example. Minimaps, while very useful and often necessary for some games and genres (I recently had to admit that a minimap has to be in Demigod, the game I'm currently working on, and since we put it in I've found it invaluable), are sometimes thrown into games, in my opinion, as a lazy design solution. Instead of structuring the gameplay so that the player isn't forced to sort out which allies are where, and how much ammo he's got, and who's stealing what on the playfield all at once, a minimap is thrown in as a way to just sort of vomit all of the information onto the screen at once (sorry about that metaphor) and let the player sort it all out.
Instead of a minimap to tell you where your allies are and where your threats are, Gears does two things: it lets you bring up a very, very simple directional indicator with the press of a button that shows you where your fellow Gears are, and it pops up a simple, small Y button with an eye icon that you can press -- only if you choose to -- to drag your camera to the single most important piece of information you need to know right now, which is sometimes a very large and recent threat or a door you need to blow up. Again, they give you an indicator of only the most important thing you need to know right now, and they don't have to do anything special to make it stand out against the noise of a UI because it's the only thing on the screen. The player doesn't need to know more than that because they're going to actually look at the game world for their information, not a UI that's blocking most of it.
And that's really the key: the game world needs to provide the majority of game information, not the UI. The UI is merely a helper for the few things we can't convey via gameplay. When I've tried on past projects to do something so simple as to hide the weapon information when you don't need it I've been told, "but I might need to know how many clips I have left before I get into my next firefight." Apparently you don't need to in Gears, and that wasn't a simple UI decision: observance of the gameplay flow shows that they carefully structure the gameplay so that you're never caught in a situation in which you don't have that information when you need it.
It can be done, and it should be done. We don't play UI, we play games. The more sophisticated our games and our technology to make them gets, the more we can let the game do the talking rather than the UI wall we're forced to construct in the meantime. And until the day a game can be made with no UI, the good UI designer -- and the game designers he or she works with -- should always be thinking, "how can this UI be reduced even further?"
I've spent the last five years of my game industry career making user interfaces for games of many types -- PC games, console games, games that ship on PC and multiple console platforms, first-person shooters, third-person action games, and strategy action games. And I spent the five years prior to that working in areas of game development that touched heavily on the user interface experience -- web design for some of the largest game web sites on the net, marketing materials for games, and more.
I've learned a lot over those many years, and about a year ago I decided it would be kind of cool to keep track of the things I've learned about UI design and the user experience in games in some kind of list. The list has grown to about twenty items so far, and as long as I work in UI I can only assume it will continue to grow. I've taken to calling it "Good UI Principles," and I've decided it would be a great idea to start expanding on each of the items in a collection of blog essays, because discussion about good UI design is surprisingly hard to find. Whether or not I'll actually contribute anything to that discussion remains to be seen, but hey, what's the harm in trying?
I was about to start off with a disclaimer saying that I don't profess to be a UI expert, but I'm pretty sure the people I work with would call me out on that, given my repeated arrogance. I do tend to profess expertise, but that doesn't mean I fully believe it. I don't have a degree in UI or graphic design (my degree, for those that don't know, is in astrophysics); all of my knowledge comes from sheer hands-on experience. And it's precisely because of that that I want to begin these essays -- not because I want to claim to be an expert, but because I'm hoping it'll generate some discussion about good UI design, in what appears to be a vacuum, from people who are a lot smarter about the topic than I am. (In other words, I'm just doing this so I can steal your smarts with the thin ruse of "sharing knowledge." You see what I did there?)
So over the course of the next few months I plan to choose a topic from my list and devote a blog entry to it. As you can tell from the last dated entry I'm a fairly irregular blog writer these days, so I'm not promising any regularity in updates. But I'd love to hear back from people, and I'd especially love to get additions to the list. So if you keep your own, throw your suggestions into the comments.
There's no importance to the numbering in my list, so I won't necessarily be going in any kind of order. They're simply added as I think of them. And the first one I'd like to expand on is the following (and it happens to be number 11 in my list):
You cannot simply port a PC game's UI over to a console -- it must be redesigned from the ground up. Conversely, if you design UI for a console game from the start, porting a PC version or maintaining a PC version alongside it is much easier.
Since I want to devote an entire entry to this and not just cram it in with the intro to my stunning and sure-to-be-award-winning new series, I'll leave it for the next blog entry, which I hope to bring you over the weekend.
I've been designing user interfaces for games for about five years now, and one of the things I decided to do about a year ago was to keep a list of "good UI principles," those things that people in other positions in the game industry often don't understand or realize about game UI development. This week I discovered one, what felt like a very important and revelatory one, and realized that it deserved more than just a two-line blurb in my text document.
For the past few months I've been working on the UI for Demigod (and by the way, we released the first official trailer today) and we reached a point a couple of weeks ago in which the UI needed a complete overhaul. The other leads on the project were worried about how I'd take this because they knew they were essentially telling me, "we're sorry, but you have to completely redo this, it doesn't fit with the game now." After discussing what changes we wanted to make one of the leads asked me worriedly, "are you okay with this? I mean, we're basically redoing everything." And when I said yes, explaining that this always happens in UI because there's a point you reach at which it just sort of...well, happens, he said, "it does? So what are we doing wrong, then?" Speaking in the broad, industry-wide sense of the word we.
Which was such a great question. No one had ever asked me that in game development before, and instead of having the snappy, sarcastic, and cynical comeback that I normally would reserve for questions like this about the often rancorous and mismanaged method by which many games are developed across this entire industry, I realized that the answer is this: nothing. This is actually supposed to happen. I saw at that moment that my answer of, "this always happens" deserved more than a shrug -- it was an identifiable pattern that I felt really nailed how game UI actually develops given a well-run project (which I believe our title to be). The heavens parted, angels descended with trumpets blaring, and a blinding light enveloped me. (I don't know, I guess God's a UI designer. Think about it.)
For most games, the UI should be developed in a set of stages that resembles this: a 0.1 prototyping pure whitebox phase in which you're merely prototyping functionality with raw, early gameplay; a 0.2 prototype phase in which you refine the whitebox and functionality; a 1.0 alpha prototype phase in which you now give the whitebox its art style. At this point you should be about halfway through your game's development phase, and you along with your designers and art lead will be pretty sure that your 1.0 is going to be the UI you ship with after you tweak and polish the art style a bit.
But you would be wrong. At this point your design team is still refining and tweaking gameplay, and things are probably not fully settled. In fact, it's at this stage that most of the core gameplay really gets iterated on and refined (or should, for a decently-run project). Your whole team will be happily using the 1.0 UI you created while you happily work on some art tweaks here and there, refine your UI's visual style, add some dialogs the designers need, and maybe add in things like your front-end movie and the like.
This is when UI entropy begins. I brought all of this up to Matt, who was also a game UI designer for a few years, and he was the one who used the term "entropy." UI entropy begins to affect your UI because as design elements change through the normal course of game development and gameplay design iteration they'll change in small enough ways individually that they likely won't cause anyone to say that the UI needs a revision, but they'll gather like a snowball, accruing more and more momentum as they go down the design hill.
And that's when they hit the tipping point -- the point, roughly two-thirds of the way through the game's development cycle, where it's clear to everyone that the 1.0 alpha UI you built just doesn't work for your game anymore. The first few times this happened to me I felt disheartened. The process is always the same: you're moving along in your own UI world, at this point likely mostly separated from the design team simply because there's no need to involve you much at this stage (I mean, you're just polishing art style at this point, right?), and then you get called into a meeting with the leads on the project, who hand you a sketch out of the blue outlining their new vision for the UI.
It can be debilitating. It feels like they're telling you that the work you did up to this point was for nothing, or that they're dissatisfied with the work you've done and are here to tell you how to do it the right way. But if you can get past all of that (it's hard, I know, I had to do it) you can see how this meeting is about the UI Tipping Point and what it actually means. Everyone to this point has been playtesting what has likely been a mostly solid and unchanging UI; it's given them time to really cement just what doesn't actually work in your HUD, and on the design side they've probably finally figured out what the actual focus is of the game and how the UI doesn't currently reflect it (which is a whole separate game development discussion -- sure, we're all supposed to know what our games set out to do right from the start, but even the best-run project goes through enough changes over its development life to change its focus a bit). So now they're telling you that they really feel that the weapon switching element is too small and made too unimportant over there in the corner of the screen, or that the game is about leveling up and your skill point indicators in the upper right just don't show up well enough, and this big element over here in the corner really needs to be a much smaller element, and these buttons here in the lower right are really important and should be front and center...
And it's okay. That's supposed to happen. The work you did to this point wasn't for naught; it was just another phase of prototyping. And I've found that once you've reached this tipping point, you have some early-phase stylization going on in the art passes you've done that still nags you as being just not quite right, and when you combine that with a completely new pass on the UI taking into account the things the design team has been able to nail down as most important, wham -- you comp it all up in Photoshop (or Flash, as was the case for me on Space Siege since we used Scaleform) and you can see before you even start scripting it all into the game that it just works. The whole thing gels, the art style and the functionality. You've gone from a UI that might have felt on the surface like it worked but had slowly started nagging you as needing some kind of refinement or change you couldn't quite pinpoint to a lean, mean, user-interfacing machine.
And now you'll spend a few more months working that into the game, hopefully having had your time actually budgeted to handle this tipping point and subsequent redevelopment. Smart leads will learn to budget for this process, and smart UI designers will learn that it's just how UI development seems to work.
What do women want in a game? What they don't want, Magnus Bergsson of EVE Online developer CCP says, is to be spaceships. They want to be people, he said in a recent interview.
If you're an atypical gaming female like me, it's easy to let a knee-jerk reaction get the best of you and make you want to call him a misogynist or sexist, but if you have that reaction then you need to remember one thing: Magnus Bergsson isn't talking about you. He's talking about your average woman, and you know what? He's right.
Granted, his rather unscientific method of showing his game to "lots of girls" and developing a hypothesis from that won't hold up as a demographic study any time soon, but we know the truth, ladies, and we can't deny it: those of us who like to shoot things are a minority within our own gender, and there's no sense in jumping to the shaky conclusion that when someone says you don't want to be a spaceship they're actually saying you're somehow inferior.
Bergsson's statement might initially come off as a sweeping sexist generalization, but it's a generalization that has some basis in reality, and the immediate assumption that it's sexist says more about our own projections of what we deem "girly" and "manly" in video games. Women and men are different creatures, and on average, excepting the odd few like myself, most women prefer their games to be centered around one or more of the following gameplay mechanics: exploration, simulation, socialization, and evolution.
Of these mechanics, only socialization has an inherent gender bias. When we think of "girly" games we tend toward a visual representation of the term: cute art, pastel colors. Evolution, simulation, and exploration, however, are all mechanics that are commonly enjoyed by players of both genders. The difference is that women on average tend to place more of their enjoyment on these mechanics as the main gameplay, while men tend to see these as a means to an end that usually involves other goals: visceral action-oriented gameplay, strategy, or stats-centric roleplaying.
So the question isn't, "is this generalization true?" The question is, "why does this generalization ring true?" I'd like to put forth three possible answers to that question.
1. It isn't the gameplay that prevents women from enjoying hardcore games, it's the art and presentation.
Pick any game off the shelf and you can reduce it to an abstract set of gameplay rules in a generic environment. Shooters aren't shooters because they're bloody and violent; they're shooters because some kind of projectile weapon is fired at a target to eliminate it. Nerf Arena Blast is just as much of a shooter as Soldier of Fortune; the former is a far more abstract form of the genre than the bloody and more violent latter.
The most common reaction that women have when asked if they want to play a game like Soldier of Fortune is distaste for the level of violence, the gore, and the clearly testosterone-driven tone of the art. If, however, the shooter were made more abstract, its violent imagery removed, would women then be able to get past the barrier the art style put in front of them and enjoy the game for its adrenaline-driven, hunter-and-prey gameplay?
2. Women are not introduced to these types of games at the early age that boys typically are.
The most common method by which women are introduced to video games, it seems, is via a college boyfriend or a husband. By then the concept of the video game as a strictly male pastime has been ingrained; if they were introduced to games at all, it wasn't until they met boys who owned them. Girls are rarely seen by their parents as a primary user of video games, only ever coming to them as a secondary user through a brother or a male friend. Kids' video games offer a wide range of styles and gameplay that, for a young girl, could provide a better basis for an open mind about experiencing the types of gameplay she's been told fall into the "boys' game" category.
3. Women are simply hardwired for social- and exploration-based entertainment, and men are hardwired for visceral, adrenaline-based entertainment.
As much as modern society tries to rank the sexes as the same in every category, let's face it: men and women are different. Our brains approach problems and puzzles in different ways; why not accept, then, that women and men tend to enjoy different types of gameplay experiences with different reward paths?
This last question points to the heart of the issue. Too often we resort to trying to figure out what a "girly" game is, and both men and women have an immediate negative reaction to the label "girly game" -- both genders view it as separate and unequal.
But the types of games that women on the whole tend to enjoy don't have gameplay that is inherently inferior; it's merely different. Again, women tend to enjoy games that involve exploration, socialization, simulation, and evolution. This is a perfectly valid set of gameplay characteristics with no inherent gender bias; we've simply labeled the games that have them (like The Sims) as "girly."
The industry is finally moving from a position in which women weren't worth catering to at all to one in which it's at least interested in finding out what makes us tick as gamers. So if you're a woman, it's a good time to be a gamer. But let's hope the industry deepens its focus and starts looking into the whys as well as the hows.
I'll talk about this anyway, even though Bill O'Reilly doesn't need any help from me in exposing his screeching, shrill Chicken Little ignorance -- he's certainly done it quite well on his own in saying that video games are going to be the death of society as we know it. He believes that "a large portion" of people "under the age of 45" have no grasp on reality and no skillset to acquire a job, and that the launch of the Playstation 3 is going to change this country into some kind of unrecognizable monstrosity.
O'Reilly's comments focus a lot on the erroneous belief that so many anti-game people hold: that people who play video games lose the ability to socialize. They don't know their neighbors, he says. Nothing could be further from the truth. Video games have, especially in the last few years, become a foundation for vast social interactivity. Hardly anyone sits in their room anymore playing a game in the dark by themselves; they're adding people to their Friends Lists and playing games together, either against each other or cooperatively. Or they're playing World of Warcraft with a thousand other people and making friends with them. Or they're going to game conventions or E3 and meeting people in person whom they've only ever known as a voice on the other side of the Xbox headset. Video gamers have grown up and are having children, and now parents spend time playing video games with their kids. As a video game developer, I can say that the social interactivity of games and the new console systems is now nearly as important as the game itself.
O'Reilly talks about gamers as if his sole exposure to them comes from a poorly written TV stereotype. I'd wager he's never actually met one. A video game console in a home is about as common a sight today as a toaster. Video games are so commonplace that MTV, that shining beacon of social trend prediction, now has its own video game show. (Which bears the unfortunate name of The G-Hole...but we'll save that for another entry.) There are socially inept and withdrawn people who play video games, and there are socially inept and withdrawn people who have no idea what an Xbox or a Playstation is. There are surgeons who have been shown to have improved motor skills and coordination because they play video games. There are soldiers in O'Reilly's beloved Iraq War who are better UAV pilots because they play video games.
There have always been people like O'Reilly -- people who are fearful of some new technology or new fad and think that it will be the death of civilization as we know it. It was said that comics would ruin society (they didn't), and the same was said about television and radio, the very media that O'Reilly relies on for his paycheck and yet conveniently considers benign.
He goes on at length about how impossible it is to hold a conversation with a computer geek, and that you can't really talk to them. The moment he said this, O'Reilly might as well have hung a big neon sign around his neck that read, I'M IRRELEVANT. While everyone else in the world moves on with new advancements and new technology, the dinosaurs of our society, the people who fear change, fear advancement, and fear their own relevancy and ability to keep up, wail and gnash their teeth about how society is dying and no one else can see it but them. And society doesn't crumble beneath our feet.
If O'Reilly can't hold a conversation with a person who plays video games, uses a computer, or uses an iPod (another device he complains about), all of them considered these days to be as basic as a calculator, it says far more about his own inability to keep up with the rest of society. Perhaps he should be more concerned about his imminent irrelevancy and less concerned about people who don't fear technology and are out talking to their friends and their coworkers about it.
The title is a little extreme, but bear with me.
There was a time in games, back before Alyx in Half-Life 2, before Jade from Beyond Good & Evil, when female characters in games came in only one variety: overboobified, underwaistified, barely-dressed stripper. For a female gamer like me, it was an easy time championing the fight to include less sexed-up female characters with more clothes on their bodies and a few more brain cells in their heads. All we had to do was hold up any ad from any gaming magazine that had a prominent female character and point to it. Or just say, "come on...just look at Lara Croft, for God's sake."
Then we were rewarded with the Alyxes and the Jades, and we female gamers were the happier for it. Finally we had some realistic female characters, ones who were smart, ones who wore real clothes and didn't exist solely to serve the male fantasy. We had characters we wanted to be.
So we should be happy, right? We won our fight, didn't we?
I had an interesting discussion with friends and fellow game developers today, and it made me realize that we're in danger of thinking we've won the war when we've only really won the battle. The discussion started with video from Gears of War that I can't link to because it's been pulled for copyright reasons, but the video had a shot of a female character who helps the player through the game -- her name is Anya and she's an intelligence officer whom you mostly hear but never see. The video, however, gave us a sneak peek of her, and one of my friends raised an interesting point: while Anya certainly isn't scantily clad by any means, in the video she's standing with a fully-armored soldier, having just landed in a helicopter. My friend wondered why the one female character had no armor on while the male soldiers had full protection.
I'm not going to pick on Gears of War specifically -- Anya is a great character. She's exactly what a great female character should be: intelligent, approachable, and not oversexualized. I think the way Epic designed her is terrific. But the more I thought about it, the more I felt that she's part of a trend that I worry quietly reinforces the very thing we're trying to get video games to stop doing: perpetuating the idea that women are an unusual commodity unfit for ordinary places in these fantasy worlds.
In an effort to raise video games out of the depths of sexism, more game companies are designing female characters that aren't oversexualized, that are approachable, and that play a pivotal role in the game. This is commendable, certainly. But what do Alyx and Anya have in common? (Or seem to -- I'll certainly be quite happy to retract any of this once GoW is released if I'm wrong about the role Anya plays.) They're helper characters. And they're almost always the only female character in the game.
It's as if the designers have figured out that they need to have a woman in the game in order to help work out this whole strippers-in-games problem, but they didn't broaden their scope enough to go beyond a band-aid solution. So they put in a female character who's certainly important enough to be a key part of the game, but it's at the expense of the rest of the game world. (I should note that I'm mostly limiting this to shooters and other "hardcore" genre games -- RPGs generally do a pretty good job of having a lot of characters of both genders in their game worlds.)
Women make up 51% of the population. Women are construction workers, CEOs of large corporations, soldiers, some of whom serve in combat now. Why, when we're creating futuristic worlds, are we not including them among the background characters of our game world? Why is it that when you're walking around a military base in a futuristic world and encountering one generic soldier after another, not one of them is female?
I'm not absolving my own company (Raven Software) of this. We made Quake 4 and we absolutely should have had a good female marine in the game. We have a few women at Raven and all of us told the team that we really wanted a female marine. One of our artists even concepted a very good character, but due to memory and time constraints, we were told, there was no way to put her in. It's unfortunate, and it's something we ought to remedy in future projects.
But the remedy isn't making one very special female character, one carefully crafted to please the eye yet not offend the sensibilities, charged with carrying the burden of representing the entire female gender in the entire game. The remedy is actually a lot more plain than that: just add female characters to your game world. Add them in ordinary positions doing ordinary things. Got a game that has a scene in a futuristic space dock? Make one out of four of your generic dock worker models a woman. Got a game that takes place on futuristic battlefields with squads of soldiers, most of whom are there to add a living element to the game? Make one of them a woman. And if you can't spare the memory for the extra female model plus her voice files, put a couple of generic female voice files on one of your armored-up helmeted soldiers. After all, do men and women soldiers in heavy armor really look that different? (This idea was suggested by my friend who originally asked about Anya's armor, and I think it's a great idea. Okay, the model animations should definitely look different between a man and a woman, but hey, I'm willing to sacrifice that for something so much more obviously female as a voice file.)
Making special female characters is great, but in doing so we shift the focus from wondering why there are only strippers for female characters in a game to wondering why there is only one woman in the game. That just shifts the problem; it doesn't solve it. We're an industry that's grown up a lot over the last couple of decades, and we're really making strides. Many of the guys who make games today have gone on to become family men, yet we're still seen as socially awkward, pimply-faced teens with completely unrealistic ideas about women in games and art. Your wife, your girlfriend, or your daughter lives in a world in which she can do anything she wants and be represented; then look at the game world that you're helping to create and re-evaluate why you might be leaving her out of it.
Special female characters are great, and they show how far we've come in our short history. But when the day comes that the inclusion of female characters in games is as ordinary and unexciting as the inclusion of male characters already is...that's when we've really nailed it.
And by the way, I want to add a postscript here. Some of you may say, "but Caryn, what about the Hellchick model? That's supposed to represent you, and she's the height of oversexualized, underdressed game characters!"
I want to make it clear that I have no problem with these kinds of characters existing, especially in game worlds that are suited for them. I like the expression of sexuality in games; I just don't want to see it as the only representation of women. Did anyone play Heavy Metal: F.A.K.K. 2? I did, and the main character was modeled on Julie Strain: a stripper heroine with only a few scarves around her that could barely be called clothes. And I loved her, because she fit the world she was in, just like the Hellchick model fits the world she lives in: a succubus creature designed specifically to be shown off in that way.
It's great to have both kinds of characters. We just need a little bit more of the ordinary ones to catch up with the extraordinary ones.
As my friend Fargo put it, "it's pretty sad when a comedy show has to be the voice of reason."
If you didn't catch the segment on The Daily Show last night about the Congressional hearings on video game violence currently wasting your hard-earned tax dollars, your homework assignment is to watch it right here.
The first two thirds of the segment mostly just uncovers the complete out-of-touchness of the people we hire to represent us in our government. You have some gems in there, such as Congressman Fred Upton (R-Michigan) proudly proclaiming that he's a gamer because he's an expert at Pong. I'm going to assume that right after he said that, he got into his Model T to go home and play the latest wax cylinder musical recording on his brand-new gramophone.
Mr. Upton, if the last game you played was Pong, you haven't played a game in about 25 years, and there's generally a statute of limitations on the use of the word "gamer" when claiming that you are one. Things have advanced now. We have computers, there's a space station in orbit, and we have these things called lattes. You might consider checking them out.
But the best part is that while Jon Stewart is doing what he does best -- publicly making fun of those who deserve it -- he slips in some absolute gems of blazing ignorance. Take this quote from Congressman Joseph Pitts (R-Pennsylvania):
"It's safe to say that a wealthy kid from the suburbs can play Grand Theft Auto or similar games without turning to a life of crime, but a poor kid who lives in a neighborhood where people really do steal cars or deal drugs or shoot cops might not be so fortunate."
I want you to read that passage again, folks, because those are his exact words, unaltered. And then I want you to wonder how someone so astoundingly ignorant could have had enough sense to put on pants that morning.
So let's analyze this, shall we? According to Rep. Pitts, crime did not exist before video games. People didn't steal cars, do drugs, or shoot cops before Grand Theft Auto told them how. That's right, folks: Rep. Pitts actually stated the cause of poor kids turning to crime -- that of living in an environment where crime is commonplace -- and in the same sentence proceeded to blame it on a medium that is approximately 1/1,000,000th as old as crime.
And in case it wasn't clear to you, Rep. Pitts wants to assure you that this is why suburban kids never steal cars, do drugs, or shoot cops. He wants to make sure you know that suburban kids can obviously -- without any fear whatsoever of being influenced by them -- play the most violent, gory video games and never, ever commit the crimes they see in them.
Wait...why are we having these hearings again? I thought they just said that kids are all influenced by the video games they play?
This is ridiculous. These people are asinine, and Stewart's quote that Congress is filled with insane jackasses is absolutely dead-on. These people have no right to say one word against a medium of which they so proudly display an astounding ignorance.
Not only that, but they only seem to have one game they like to hold up as the example for all video games that have ever been made; to them, every game is Grand Theft Auto or Postal. This would be analogous to saying that every movie is Faces of Death, and because that movie is out there we need to have a Congressional hearing about movies and how our young people aren't being protected from them. There is a world of video games out there of which the Grand Theft Autos make up a very small portion. But they refuse to get off of the Grand Theft Auto train because they are crusaders on a mission that lacks all common sense.
If you're reading this and you live in the states that these Congressmen represent, you owe it to your own intelligence to write to them and tell them to get out of office and let someone with at least one living brain cell do their job.
These are the people trying to tell you what's best for you and your kids. These are the people who say that you as a parent aren't capable of making your own informed decisions on what your child should and shouldn't be exposed to. These are the people who are making our laws.
Don't let these ignorant jackasses tell you what you're supposed to think, especially when they don't even know anything about what it is they're trying to legislate.
There's this great page on PBS.org called "Eight Myths About Video Games Debunked". If you haven't read it, take a look -- you might be surprised at what you read if you're not a gamer. It's nice to see an outlet as widely recognized for being intelligent and educational as PBS stand up for the industry I work in when so many people want to tear it down. The article debunks the theories that playing games increases youth aggression, violence, and desensitization to violence, and even the myth that video games are aimed mainly at children (they haven't been for a while, because those of us in our late 20's to 30's have grown up with gaming and are the primary audience, the one that's introducing the younger generation to games).
CBSNews.org recently ran an article titled "Where Is Our Frank Zappa?" The bulk of the article is really just the transcript of Zappa's 1980's Congressional hearings regarding the PMRC (Parents Music Resource Center). It's definitely worth reading, since many of the issues can indeed be applied to games. But the article never answers the question it poses.
And it's a really good question. Why don't we have a Frank Zappa for the games industry? Someone in a forum I read speculated that the reason is the age difference between music and video games. Zappa was in his 40s when he went before Congress. He was a seasoned veteran of the music industry, with history and foundation to work from. But video games are only a couple of decades old, and that's if you include the harmless roots of video games like Pac-Man and Pong. The real history of video games as controversial mechanisms for fostering youth violence started with DOOM in the mid-1990's. And this forum poster speculated that if you normalize the two histories, our Frank Zappa is probably a 26-year-old without any clout to wield before Congress yet.
It's a good theory and probably has merit. But I have another theory. I think it's because we don't yet have a diverse enough gaming portfolio to use in defending games as a valid medium of expression. Music is art, and no one at this point can really argue otherwise. And controversy in art is somewhat easily defendable because it's part of what makes art what it is. But are games art? Not everyone thinks so, and game developers themselves don't completely agree on the answer.
Within genres like rock and rap in music you can often find great examples to hold up that make the Moral Minority quiet down when you tell them, look, see? There's no reason to throw out the baby with the bathwater. Movies have the same defense strategy -- if you want to defend the issue of violence having a rightful place in all movies, then when people complain about Pulp Fiction you hold up Saving Private Ryan. It's the defense strategy that says something controversial is valid within an entire medium as long as the medium has plenty of "valid" uses of it. But games lack that when it comes to the pivotal issue of violence. It's hard to defend a game like Grand Theft Auto when we don't have anything else to hold up alongside it. We have no Saving Private Ryan in video games, no game that uses violence to show how horrible realistic violence really is.
I wish such a game existed. I wish a game developer would make a game that made a point of using what games do that no other medium does -- forcing the player to make decisions and experience the consequences -- and tie it into the issue of violence in such a way that we could make the player feel the negative consequences of actions that result in the realistic violence that we're capable of rendering in today's game engines. I don't wish this game existed because I have a problem with violence in games and want to attach morality to it -- I don't. (Hey, I'd be lying if I said I didn't enjoy running around in Grand Theft Auto beating up innocent people and stealing their cars. It isn't like I'm ever going to do that in real life, so why not enjoy doing it in a game?)
I wish it existed because it would force a discussion about sweeping legislation of games that currently ignores any evaluation of the game content itself. We're on the verge of government legislating against any video game that contains material considered "violent" or "adult" based solely on the "M" rating on the box, and legislating in ways that the movie and music industries have never seen, in ways that completely sidestep any discussion of whether the content in question provides discussion or educational value. Why? Because we currently don't have any games in which the violent or adult content actually does provide any value for discussion or education.
And we're doing all of this legislating because people are afraid of what they don't know anything about. Parents, the ESRB rating on the game box gives you just as much information about the game, if not more, than the movie rating gives you about the movie in its preview. If you're capable of making an informed decision about movies based on their rating without the government's help, then why aren't you capable of making the same decision about the games your child plays?
Before I go into anything else here, I want to take a minute to say that Refracted Mandog has posted its first screenshot of Psycho Saucers, the first game we're currently working on. We don't plan to inundate viewers with daily screenshots, but since you can finally actually see something of the game in it, we wanted to give a little peek.
And speaking of Mandog, I was talking to Gabe "CodeDog" Kruger today about Activision, the company I primarily work for. I was remembering the old patches that Activision used to give out for getting certain high scores in their old Atari 2600 and ColecoVision games. What was funny was that when I said "patches", Gabe said, "you mean you could get patches for those games? Where would you download them?" Which made me laugh because I was talking about actual patches -- you know, the cloth things you'd sew on your clothes.
I had so many Activision games, and I got a few of the patches. I was 8, maybe 9 years old, and this was before it was uncool to be so geeky. Barnstorming and Chopper Command were games I distinctly remember getting patches for. I had a little denim jacket and I begged my mother to sew on the patches. God, I was a geek. Things like that today would surely be met with name-calling and some good beatings from my peers, most likely.
And I was reflecting on how I can so clearly remember these patches and the games and the Activision logo, and it seems such a strange thing now that 20+ years later, I work for the company that has given me some of the most vivid and fun memories of my childhood when I really never set out to do that at all. I certainly never set out to work in video games. It was the love of my childhood, sure, but it was always something I did for fun, and astronomy was the thing I was going to do for a living. Funny how things turn out. If you'd told me years ago when I was playing Wolfenstein 3D on my future brother-in-law's computer during my first years as a science student that I would be working closely with the company that made that game years later, I'd have laughed at you.
So, new yarn. Len's sister Betty and her husband Gary came out from Phoenix to visit and we went yarn store visiting. She's a knitter, too (in fact, she taught me twelve or so years ago), and I took her to Velona Needlecraft up in Anaheim Hills. I found this beautiful slate blue, breezy soft cotton that I just couldn't put back down. It was the perfect color for me, so I think I'm going to make a double V-neck cap-sleeved sweater for the summer. I've already started it in a 2x2 twisted rib stitch. I've found that I like this kind of rib the best if I can do it because it keeps away that sloppy second stitch that happens so often when you do ribbing.
I came across a batch of really neat statistical data recently that is very important to what I do for a living, which is get paid to know a lot about online gamers, especially the hardcore gamers who play mostly first-person shooters.
This survey was massive and I won't go into it in great detail, but suffice it to say that it was packed full of really interesting data about the habits of FPS gamers, both PC and console. But I had one glaring problem with it. The first thing the survey said (crap, here comes Richard Dawson) was that it polled X male gamers.
Now I want to say right off the bat that I am not calling "gender bias!!" At least not accusatorially (Merriam-Webster is failing to tell me if this is actually a word or not). It's a well-known fact that most women just don't play first-person shooters.
Or is it?
See, this is the problem I have with this survey. That "fact" is the first thing to pop into my head as justification for surveying only male gamers. But is it a fact? The truth is, I can't remember when I last saw a survey that polled gamers, male and female, and showed the breakdown of male and female FPS gamers. It's a commonly accepted belief in the gaming industry that we don't play them (with a few exceptions, of course, like myself), but when has the industry last proved this to be true?
This survey went into disgustingly beautiful detail about the breakdown and demographics of FPS gamers, telling me everything but how often they go to the bathroom each day. And yet right from the starting line, it failed to start at the broadest scale it could possibly have started with by including female gamers.
This bothered the statistician in me. It was like hearing someone say, "I'm going to learn about what cows like to eat. But since everyone knows that only brown cows eat, I'll only ask brown cows the questions." Admittedly, the claim that few women play FPS games is far better established than the claim that non-brown cows don't eat, but the principle is the same. In recent years, it seems that more and more women are getting into FPS games. Halo seems to have done the most when it comes to attracting female gamers; so many guys talk about how it's one of the few games their girlfriends enjoy playing with them in co-op mode. That doesn't mean that the numbers have risen significantly, but when was the last time we actually saw those numbers? Who has done a survey on it? I haven't seen one, and I feel pretty sure I would have had it been done.
This survey could easily have set out to show whether that basic assumption was true, and thus set itself on the proper scientific path toward breaking down a demographic. They might even have found interesting analysis in the data, but even if they weren't interested in that, the survey's data would have been far more grounded to me if I'd seen it start with "X gamers polled -> 98% male, 2% female -> Of the 98% male..."
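The reporting order I'm describing -- tabulate the broadest breakdown first, then narrow to your subgroup of interest -- is simple enough to sketch in a few lines of code. The respondent counts and field names below are entirely invented for illustration; they're not from the survey in question:

```python
from collections import Counter

# Hypothetical raw survey records as (gender, plays_fps) pairs.
# All numbers here are made up purely to demonstrate the reporting order.
respondents = (
    [("male", True)] * 490 + [("male", False)] * 10 +
    [("female", True)] * 12 + [("female", False)] * 488
)

# Step 1: report the broadest breakdown first, before any filtering.
gender_counts = Counter(gender for gender, _ in respondents)
total = len(respondents)
for gender, n in sorted(gender_counts.items()):
    print(f"{gender}: {n} ({100 * n / total:.1f}%)")

# Step 2: only then narrow to the subgroup of interest (FPS players),
# so the filtering assumption is visible in the data rather than baked in.
fps_players = [(g, p) for g, p in respondents if p]
fps_by_gender = Counter(g for g, _ in fps_players)
print("FPS players by gender:", dict(sorted(fps_by_gender.items())))
```

With this order, the "most gamers surveyed are male" claim becomes an observable result of step 1 instead of an unstated assumption made before the survey even began.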
So in the end, I'm not really bothered by the obvious bias against female gamers -- even though I haven't seen a survey collecting the data I want to see, I'd bet money that the numbers aren't significant enough to have affected this survey much at all. But it bothered the scientist in me. Scientists do employ first-order assumptions; it's common in astronomy and astrophysics to make an assumption when building models, for instance, in order to simplify your first approximations toward a solution. Then you refine your model from there to get rid of the simplified assumptions and get closer to a real-world situation.
But this is a case where this first approximation just wasn't needed. I would have loved to have seen the data on how many female gamers participated in the survey had they included them in it. So next time, survey people, no assumptions. Start with the cleanest knowledge slate you can and go from there.