Tyranny of the Masses

Why developers should be wary of tracking player behavior.

“Big Brother is watching; not in an effort to control you but rather to learn from you. You’re not just playing videogames anymore. You’re actively providing feedback about what parts you like and which you don’t. How you play could ultimately help shape the future of videogame design.”

BioWare is just one of numerous development studios and publishers that have begun collecting anonymous player data. No identifying information is tied to the information harvested, so you don’t have to worry about things being traced back to you. You’re just a data point amongst millions.

-Erik Budvig

Were you worried that Bioware was not being influenced enough by the thousands of reviews, forum posts, emails, and tweets it receives for every single one of its games? Are you looking for a more impersonal way of communicating your gaming experiences to developers? Are you too lazy to write them an email with your comments and complaints?

If you are any one of these unfortunate people, then worry not, my friends, for now you too can HELP SHAPE THE FUTURE OF GAME DESIGN. If, on the other hand, you’re one of those party poopers who still cares about antiquated 20th-century concepts like ‘privacy,’ then you may still rest easy knowing that “you’re just a data point amongst millions.” In other words, Bioware’s data-gathering efforts will somehow manage to give you unprecedented power while simultaneously making you an insignificant statistic. Makes perfect sense (statistically speaking, of course).

As you can probably tell, I’m more than a little baffled by the utopian declarations that have accompanied news of Bioware’s efforts to collect anonymous player data from Mass Effect 2. Gamers already influence the design decisions of mainstream developers in a variety of ways, so it is simply absurd to imply that player tracking will somehow give “voice” to a previously disenfranchised demographic.
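(A quick aside on what “anonymous player data” actually looks like in practice. Below is a minimal sketch, written in Python, of the general shape of an anonymized tracking event: a random per-session token in place of any account or hardware identifier, plus the gameplay facts being recorded. The field names and values are my own inventions for illustration; this is not Bioware’s actual telemetry format. Aggregated across millions of sessions, records like this become the tidy percentages that executives eventually read.)

    import json
    import uuid

    # A hypothetical anonymized player-tracking event (illustrative only;
    # field names and values are invented, not Bioware's real telemetry schema).
    event = {
        "session_id": uuid.uuid4().hex,  # random per-session token; no account or hardware ID
        "event": "class_selected",
        "payload": {"class": "soldier", "gender": "male", "imported_save": False},
    }

    # A real pipeline would send this to a collection endpoint;
    # here we just serialize it to show the payload.
    print(json.dumps(event))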

The question we should be asking is whether player tracking is good for Bioware and, consequently, good for people who want to continue playing ‘Bioware games.’ To be sure, I  absolutely get the appeal of collecting tracking data. Indeed, tracking a modest bit.ly link is pretty fun in itself (‘Ooooh look! I gots me 10 new clicks from Malaysia…clearly,  those guys know good Mario fan-art when they see it’), so I can’t even begin to imagine how great it must feel to be able to track one’s audience after years of working on a project as large and complex as ME2. But alas, the road to development hell is paved with good intentions, and I can’t help but worry that Bioware’s understandable desire to quantify player experiences might eventually backfire.

To understand the potential dangers of player tracking, we need to ask ourselves at least two questions: First, who gets to interpret the collected data? Second, how will this data influence the decision-making process of the interpreter?

The biggest brother

I don’t think it is too elitist to suggest that, as a general rule, artists do not produce their best work by worrying too much about the public’s (alleged) expectations. It was this overriding concern with fan service that made the completely arbitrary appearance of R2-D2 in The Phantom Menace seem like less than a terrible idea. This deference to “the audience” is also the reason that Nintendo decided to follow up the Link of Wind Waker with a more “mature,” conventional, and far less interesting version of Link in Twilight Princess.

So even if Budvig’s claim that “Big Brother is watching” referred only to Bioware, that would be reason enough to worry. But alas, Bioware is not the only one watching, and they will not have final say on how the data should be interpreted. For Bioware depends on a bigger, badder brother: a passive-aggressive patron that has never made a secret of its desire to take your lunch money and leave you in tears. I’m talking about the biggest brother of all, which is to say Bioware’s publisher and parent company, EA.

If you’re reading this, chances are you already know a thing or two about EA’s history. When Trip Hawkins founded the company in 1982, his idea was to create a “different” kind of videogame publisher. His goals were incredibly idealistic for the time: EA would foster creativity, approach videogames as an “artform,” and treat game developers with the ‘respect’ that ‘artists’ deserve. Even the name “Electronic Arts” was a self-conscious attempt to convey these founding principles. As Hawkins explained in a 2007 Gamasutra article about the company:

“The original name had been Amazin’ Software. But I wanted to recognize software as an art form….So, in October of 1982 I called a meeting of our first twelve employees and our outside marketing agency and we brainstormed and decided to change it to Electronic Arts.”

But then Hawkins left the company in 1991, leaving a former Johnson & Johnson executive in charge. This is the point at which Electronic Arts began to transform itself into a “serious” publisher, abandoning its founding principles in the process.

Eric-Jon Rossel Waugh continues the story:

No sooner was Hawkins out the door than the acquisitions (and Madden milking) began.

[...]

The pattern to these acquisitions, if not universal, is infamous: find a company that made a really popular game, acquire the company and its properties; then set the team on churning out sequel after sequel to the game in question. Sometimes, likely not by design, the staff leaves or burns out, or one of the products sells poorly; the studio is closed or subsumed. Of EA’s acquisitions, only Maxis is known for retaining its autonomy and culture within the EA corporate structure, the jewel in EA’s crown.

EA seemed to have abandoned all of its founding principles in favor of rapid growth whatever the long-term cost, thereby setting a poor example for the rest of the industry.

And thus, the iconoclastic developer formerly known as Electronic Arts had completed its transformation into a faceless entity known simply as EA. Incidentally, EA was not the only multinational corporation to reduce its formerly descriptive name to an ambiguous acronym. According to Wikipedia, 1991 (the year of Hawkins’s departure) was also the year when Kentucky Fried Chicken changed its name to “KFC.” Conspiracy theorists claimed that the name-change came about because its genetically altered meat could no longer be considered “chicken.” Using that same conspiratorial logic, we may also say that Electronic Arts became “EA” once gamers realized that the company was no longer interested in promoting anything that could reasonably be called “art.”

Of course, my “theory” about EA’s name-change is a complete fabrication and, just like the KFC conspiracy theory, it is almost certainly false. But that’s entirely beside the point. The real problem for both of these companies is that they put themselves in a position that makes such rumors seem plausible to begin with. If KFC’s chicken still looked unmistakably like chicken, then no one would’ve developed a conspiracy theory about its switch to “KFC.” Likewise, if EA hadn’t lost its way, perhaps there wouldn’t be a need for anyone to begin an article about the company by explaining “what the word ‘EA’ in ‘EA Games’ stands for.” (Notice how ‘EA’ is described as a “word,” not an acronym.)

(What’s the moral of this somewhat obtuse and certainly gratuitous analogy? Perhaps that you shouldn’t try to use KFC in analogies. They tend to drag on for a bit, as you may or may not have noticed while reading the previous two paragraphs. I certainly learned my lesson, so let’s move on to the present day, shall we?)

It must be said that things have gotten better at EA since John Riccitiello was made CEO in 2007. A recent profile of the company in the October issue of Edge details much of what has gone right under Riccitiello’s reign. First, he acknowledged that the company had grown too big and was releasing too many titles. He laid off staff and trimmed the release schedule. EA COO John Schappert is quoted saying that “A couple of years ago we shipped 67 titles; this year we’ll ship 36. Our goal is: ‘Let’s make sure the titles we’re going to make are great.’”

Under Riccitiello, EA expanded the EA Partners program and allegedly made efforts to improve the creative environment for the company’s in-house developers. By the end of 2007, EA also bought Bioware and made Bioware co-founder Ray Muzyka a Senior Vice President of EA’s RPG division. The long-term effects of this acquisition remain to be seen, however: it could improve the quality of EA’s overall RPG output, but it could just as easily result in a less focused and creative environment for Bioware’s own designers. Still, from EA’s point of view, this was a good step towards creating a more developer-friendly environment within the company (at least as far as RPGs go).

But in spite of these efforts, EA still has a lot to prove to gamers if it wants to become a respectable publisher once more. And make no mistake: “respectability” is the most that a publisher its size can ever hope to achieve. A publicly traded publisher like EA cannot hope to be “loved” or admired by gamers or developers. Such adulation is reserved for studios (and the occasional first-party developer, e.g. Nintendo). This is because gamers recognize that publishers are not in the business of creating games; they’re in the business of making money. As such, their loyalties ultimately reside with shareholders, not gamers. Sure, EA wants gamers to be happy–and it spends quite a bit of money trying to figure out what gamers want–but gamers are simply a means to achieving the return-on-investment that shareholders expect from the company. Riccitiello, after all, was not brought to EA in order to rekindle its creative spirit, but rather to sell more games and help the company regain its once dominant position in the industry (the company was putting out mediocre titles long before its sales began to flounder, that’s for sure).

To that end, Riccitiello has sought to diversify the company even as he tries to improve the quality of its “core” games. EA has made serious inroads into the casual games market and continues to experiment with “free-to-play” titles like Battlefield Heroes. These initiatives may very well be necessary for EA to keep pace with changes in the industry brought on by the internet, but they also serve EA’s long-term financial interests for slightly different reasons: namely, they will make the company less dependent on the sort of “core” gamer who takes games seriously, pays attention to reviews, and complains loudly when a game fails to meet her expectations. By targeting the lowest common denominator, EA (and most other major publishers) is building a mass audience that doesn’t know much about videogames, doesn’t read game reviews and, most importantly, doesn’t expect their games to be more than a 10-minute distraction to help them pass the time at airports and doctors’ offices. Unlike traditional gamers, these people are not asking for a five-course meal, and they certainly won’t get critical if you overcook the meat. All they want is a bite-sized piece of digital chocolate to get them through the day, and that’s a much easier business to manage.

Allow me to illustrate this point with one last quote from Edge’s recent profile of EA:

Battlefield Heroes attracted mediocre reviews and was heavily criticised earlier this year when the payment model was changed, making it almost impossible to progress through the game without paying for new weapons. “The perception was:  ‘Oh, EA has fucked this up, we’re never going to play again,’” says Patrick Soderlund, SVP and group general manager of EA Games’ FPS and driving titles. “But funnily enough, when we changed the way you pay, we had more players, and the game is now profitable.”

Funnily enough indeed. Hilarious, in fact. But like many great jokes, there is a sad truth beneath the laughter. The truth is that EA did fuck up. It released a pretty bad game, then made it less of a game by charging real-world money for the privilege of getting better at it. Somehow, this change for the worse attracted an audience and now the game is profitable, which is all they cared about to begin with. Hence the dark, pathetic laughter over at EA. Note to Battlefield Heroes players: EA is not laughing with you, they’re laughing at you.

Not that there is anything intrinsically wrong with this attitude. After all, it’s their job, indeed their ‘responsibility,’ to make a profit for shareholders, and to do it as efficiently and painlessly as possible. And like any cunning politician, EA knows that the best way to achieve this is to alienate as few people as possible. This is the sort of corporate attitude that led to EA’s recent decision to eliminate playable Taliban characters in Medal of Honor (a cowardly move that is problematic for several reasons, which Ian Bogost already analyzed quite brilliantly in this essay). As a general rule, then, this strategy requires videogame publishers to pander to the lowest common denominator while simultaneously pretending to care about “taking the medium to another level,” in order to make the game seem less threatening to casual gamers without alienating the traditional World War II/‘modern warfare’ shooter audience.

Again, my point here is not to single out EA or vilify videogame publishers in general. I simply wish to note the basic fact (obvious to everyone in the music or film industries, but surprisingly absent from many gaming discussions) that videogame publishers are fundamentally different from both videogame developers and serious videogamers. Whereas we believe that a videogame is an end in itself, publishers use it as a means to profitability. This makes them a completely different animal: their priorities are different, their expectations of games are different, and their definition of “success” will also be different most of the time. Not evil, just different in ways that often run counter to the interests of the medium.

This brings us, finally, to the issue of player tracking. As Bioware’s publisher and parent company, EA’s interpretation of the data collected for Mass Effect 2 is ultimately the only one that matters. So how will EA read this data, and to what end? How will its interpretation differ from Bioware’s?

 

‘We deal with numbers’

Did you know that the videogame industry already has a system in place that mathematically determines the quality of every new release? It’s called “Metacritic,” perhaps you’ve heard of it? Like player tracking, Metacritic reduces incredibly complex subjective experiences into numerical values. But unlike player tracking, Metacritic does not derive its numbers from isolated in-game “events.” In fact, Metacritic scores are not really “derived” from anywhere – they are assigned by the site’s staff, whose  impressive qualifications include reading “a lot of reviews.”

You know how it works: the site monitors a wide variety of gaming publications (including sources as diverse as the New York Times, Eurogamer Italy, Eurogamer Spain, and Eurogamer Plain), reads their reviews for you, and then provides you with a convenient numerical summary of what each reviewer thought, on a scale of 0 to 100. It doesn’t matter if you grade games on a 100-point scale, or even if you don’t grade games at all: as long as you call it a ‘review,’ Metacritic will attempt to assign a numerical value to it. Yes, Metacritic will literally convert another publication’s words into a 100-point scale, even if the publication makes a conscious decision to exclude grades from its reviews. It then computes a “weighted” average of these scores in order to produce a “metascore,” a number that purports to reflect the overall ‘quality’ of a work without resorting to silly “qualitative concepts like art and emotion.”
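(For the curious, here is a minimal sketch of the arithmetic behind a weighted metascore, written in Python. Metacritic does not publish its weights, so the outlet names, scores, and weights below are invented for illustration; this shows the general idea of a weighted average, not Metacritic’s actual formula.)

    # Illustrative weighted "metascore" calculation.
    # Outlet names, scores, and weights are invented; Metacritic's real
    # weighting scheme is not public.
    reviews = [
        {"outlet": "Big Gaming Site", "score": 90, "weight": 1.5},  # a "more respected" outlet
        {"outlet": "Mid-sized Blog", "score": 70, "weight": 1.0},
        {"outlet": "Tiny Fanzine", "score": 40, "weight": 0.5},
    ]

    def metascore(reviews):
        """Weighted average of review scores already normalized to a 0-100 scale."""
        total_weight = sum(r["weight"] for r in reviews)
        weighted_sum = sum(r["score"] * r["weight"] for r in reviews)
        return round(weighted_sum / total_weight)

    print(metascore(reviews))  # prints 75

Three subjective reviews go in, one tidy number comes out. That conversion is precisely what the rest of this section is about.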

“If only all of life were like that!” says the Metacritic website, surely echoing the sentiments of socially awkward statisticians the world over. Luckily, life isn’t ‘like that’ – it’s just too rich and complex to be reduced to a number. The same goes for videogames – but try telling that to publishers like EA, who now rely on the site to evaluate the output of their development teams. As a developer told Michael Abbott recently, a particularly low Metacritic score means “people lose their jobs.”

The problems with using Metacritic as some kind of arbiter between publishers and developers are well-known by now, so I won’t bore you with more details. The bigger and more complex question is the one first posed by Stephen Totilo in 2008: namely, “why would a development studio ever tolerate publishers setting up deals like that?” Why indeed. Why would a developer ever agree to risk their very livelihood on Metacritic’s subjective impression of a ‘critical consensus’ which, by definition, consists of nothing more than an aggregate of the various individual subjectivities monitored on the site?

While it is true that most smaller studios have no choice in the matter, part of the blame must be placed on the development community itself, for acquiescing to such a draconian system in the first place. Indeed, many developers direct their anger at individual critics (for depressing their metascore) while remaining deeply ambivalent towards the system that gave those critics such power in the first place. Others, like designer Soren Johnson (incidentally, a very talented guy who writes the excellent Designer Notes blog), seem to regard Metacritic as a necessary evil of sorts:

What should executives do if they want to objectively raise the quality bar at their companies? They certainly don’t have enough time to play and judge their games for themselves. Even if they did, they would invariably overvalue their own tastes and opinions. Should they instead rely on their own internal play-testers? Trust the word of the developers? Simply listen to the market? I’ve been in the industry for ten years now, and when I started, the only objective measuring stick we had for “quality” was sales. Is that really what we want to return to?

 

 

I’ve argued before that it is impossible to objectively determine (much less raise) the quality of a product that can only be experienced subjectively. Even Johnson seems to recognize this when he notes that publishers who play their own games would “invariably overvalue their own tastes and opinions.” Well, duh: of course our own tastes and opinions will be central to any activity that requires us to taste and opine – that’s just common sense. Still, why exactly would this be a problem? Publishers are the ones financing the product after all; why, then, shouldn’t it reflect their taste? By Johnson’s logic, we should also worry about a restaurant owner who hires a chef “just because” he enjoys the person’s cooking.

(His last point – that sales are not a good way to measure quality – is more compelling, and I’m certainly not one to argue against it. But sales data does have one thing going for it: actual objectivity. True, sales may not be a reliable measure of a game’s quality, but at least they’re an objective measure of something, which is more than we can say for metascores.)

 

I don’t want to keep harping on about the role that developers have played in allowing such a flawed system to be used against them because, as I mentioned earlier, most simply don’t have a choice. If forced to choose between placing their royalties at the mercy of Metacritic or not making the game at all, most  studios will understandably go with the former option. In this respect, they’re like the proverbial starving musician who signs a record deal without reading the contract only to discover, years later, that she was screwed by the label.

So perhaps we should have addressed Totilo’s question to publishers instead. Why do publishers rely so much on Metacritic anyway? What is it about the site that publishers find so attractive and useful when dealing with studios? Here, Johnson gives us a hint when he notes that publishers “don’t have enough time to play and judge their games for themselves.” But it’s not just that they don’t have the time, it’s that they don’t have the skill or the know-how to play and pass judgment on games. See, executives are numbers people. They like and respect numbers, and have very little patience for the ambiguities of art, language, and criticism (this is also why many business leaders are so contemptuous of “fancy political rhetoric”).

Thus, Metacritic seems like an ideal solution to many publishers: it reduces numerous qualitative opinions into a single number and, since that number is determined by an outside party with its own (secretive) methodology, it confers the illusion of objectivity upon the final figure. But don’t take my word for it; here’s game industry marketer Bruce Everiss (italics mine):

I have used [Game Rankings] countless times as a tool to help in my work. Most notably to prove to the directors of Codemasters that their game quality was slipping in comparison to their direct competitors.

Then in 2001 Metacritic came along and changed the world. Firstly they convert all the review scores into percentages, then they average them to come up with one figure. (They also weight the average so more respected reviewers have more influence.) This single figure to represent a game is a very powerful thing and everybody in the industry is far more aware now of game Metacritics than they ever were of individual review scores, they have become the standard benchmark for the industry.

Of course, the only way that a site like Game Rankings or Metacritic can actually “prove” anything is if we buy into their methodologies, and since Metacritic keeps its methodology (including its “weight” system) a secret, it seems odd that Everiss would use those numbers to prove anything. But we’ve already been over that. The point is that those numbers are being used as if they were the final word on a game’s quality, even though their reliability is questionable at best. In short, Metacritic scores have empowered publishers to make decisions over “quality” in spite of knowing that they lack the gaming knowledge to do so, and they do this by drawing specific conclusions from numbers that at best provide us with nothing more than one person’s impressionistic assessment of a critical “consensus.”

Seeing as most developers actually take the time to read individual reviews of their games–and are therefore better suited to put Metacritic scores in their proper context–it is hard to see how this publisher-dominated metascore system benefits anyone other than the publishers themselves. At the very least, it has empowered publishers by giving them a greater say over areas of development that typically belong to developers.

My fear is that player tracking will eventually create the same situation: i.e., statistics that were meant to aid developers when starting work on their next game may be taken out of context by publishers, who would then use these numbers as “proof” of what audiences want from future games. The result will be even less risk-taking than we currently see in the games industry. The era of focus-group-tested games will slowly give way to an age of mathematically tailored experiences, targeting the lowest common denominator with unprecedented precision.

Personally, that’s not a future I want to see – but then again, I’m not a numbers guy.

GamePub Story

Let’s pretend for a second that I am an EA executive. It’s my job to supervise an external development team that is currently working on a Mass Effect spin-off project scheduled to be released before Bioware finishes work on Mass Effect 3. Being a responsible executive, I’ve decided to do some research before my next formal meeting with the studio in charge of the spin-off. Of course, this research doesn’t involve me actually playing the game–hell no. By research I mean reading up on Metacritic scores, development schedules, sales numbers, and – you guessed it – Mass Effect 2 player data.

 

Some of the data really startles me. I focus my attention on two findings in particular:

  • 80% of players played as a male Shepard?! (I wonder how much time and money was spent developing the female Shepard’s character models, dialogue options, and voice-acting.)
  • 80% of players chose to play as a Soldier? More than every other class combined?! (Was our investment to develop the 5 other classes worth the other 20%? And of that 20%, how many belong to the 50% of hardcore fans who imported their saves from the first game?)

So I jot those stats down on my Blackberry and head out to meet the developers in person.

When I get there, the news is not good. The project manager tells me that the team is six months behind schedule and significantly over budget. “This is our first attempt to make a game within the Mass Effect universe, but we’re confident that our next ME spin-off will take less time to complete,” they explain. “All we need is an additional six to ten months to deliver a product worthy of the Mass Effect brand.”

But I don’t want to hear that. Having reviewed my company’s release schedule prior to the meeting, I know that Mass Effect 3 is scheduled to release in late 2012 or early 2013, and since the whole point of releasing this spin-off is to satiate the fans while they wait for the third installment of the main series, we simply can’t afford to push it back another six months. “Sorry,” I tell them, “the game simply must be released in 2011–we’re going to have to figure out a way to make this work. I could perhaps get you some more funding, but we also need to figure out how to cut costs on your end–otherwise, our request for more money won’t be too well-received at corporate headquarters.” This prompts the creatives sitting at the table to roll their eyes at me.

“So then… what do you propose we cut? Any ideas?” asks one of them with a hefty dose of sarcasm.

“As a matter of fact, I have some suggestions right here on my Blackberry. For instance, how much time and money would we save if we axed the option of playing a female character altogether?”

“Well that would certainly help us quite a bit, though I’m not sure it’s enough; besides, giving you a choice of gender is a Bioware tradition, and we want to do them justice with this game.”

“Of course we do,” I answer, “we want to be as faithful to the series as possible, but remember that this game will not form part of the main series; since it is only a side story, I think we can persuade the guys at Bioware to let us limit the game to a male protagonist just this once, especially if you give him a compelling back-story.” After showing them the Mass Effect 2 player statistics and some further discussion, I manage to convince them that this is a good idea, a task that was probably made easier by the fact that–surprise plot-twist ahead!–there were no women present at the meeting. But that is still not enough to bring development back on track, so I move on to my next idea, which proves to be far more controversial.

“What!?!?! You want us to axe every other class in the game!? You really want us to limit players to the role of soldier??? That is simply unacceptable,” says a visibly angry lead designer. “No. That’s just not happening, and I don’t care what statistic you show me…the class system is part of the game’s legacy–hardcore gamers expect this from us, there is just no way we can risk alienating them like this. Trust me, you’ll get a backlash from the dedicated fans, and that won’t be good for any of us.”

“Besides,” adds the project manager, “we’re already pretty far along in the design of the various classes, so that wouldn’t really cut costs as much as you’d think.”

“But would it cut development time?” I ask.

“Yes, maybe. But we’d still need additional funding, so it would be a waste of resources to simply abandon something that our team has been working on for months.” Faced with this impasse, I lean back on my chair and close my eyes for a second. The room stays silent until, suddenly, a little light bulb goes off in my head.

“What about this,” I tell them. “What if we save the other classes for DLC? That way, we can postpone developing them for the time being, get additional ‘DLC funding’ to finish the classes at a later date, and earn additional income from the hardcore players, who are the only ones interested in playing with them in the first place.” Once I show them that a whopping 80% of players chose to play as a soldier in ME2, and that our company has committed itself to prioritizing the hardcore-gamer-cash-cow that is DLC, the team grudgingly goes along with my brilliant idea. And so the meeting comes to a close; I express my gratitude towards the development team and assure them that “my bosses at EA will be very grateful for your understanding, and grateful to know that we already have promising DLC content in development!”

“Now back to work, guys…”

Visibility is a trap

Think of this as a kind of Nietzschean parable: the point is not so much to provide a faithful account of the future as it is to speculate and warn about what could happen if we continue down this path. As we have seen, publishers and developers have profoundly different ways of looking at the world, and this creates the possibility of conflict when it comes to interpreting player data. Developers may look at a statistic such as “80% of players chose the soldier” and see it as an eminently solvable problem of menu design and presentation, but publishers could just as easily seize on it as a justification to cut costs or–worse–to make additional money off of the dedicated fans who are willing to pay for DLC. The worst-case scenario is that developers will end up losing such arguments more often than not, and we the audience will end up settling for lesser games.

Trust me, dear friends and developers, I get why you would be excited by the prospect of using new technology to learn more about your audience. Why wouldn’t you be?  But please be careful how you collect such data; be careful who you share it with; and for goodness sake, be ready to defend your findings in front of the people who pay your bills, lest you end up in another meta-prison of your own making.

In short, beware of unintended consequences, and always remember Foucault’s prophetic warning: “visibility is a trap.”

Videogame vs. video game, cont.

Mark J.P. Wolf weighs in on the videogame/video game debate in the opening chapter of The Video Game Explosion: A History From Pong to PlayStation and Beyond:

What exactly constitutes a “video game”? Although the term seems simple enough, its usage has varied a great deal over the years and from place to place. We might start by noting the two criteria present in the name itself: its status as a “game” and its use of “video” technology. These two aspects of video games may be reason for why one finds both “video game” (two words) and “videogame” (one word) in use: considered as a game, “video game” is consistent with “board game” and “card game,” whereas if one considers it as another type of video technology, then “videogame” is consistent with terms like “videotape” and “videodisc.” Terms like “computer games” and “electronic games” are also sometimes used synonymously with “video games,” but distinctions between them can be made. “Electronic games” and “computer games” both do not require any visuals, while “video games” would not require a microprocessor (or whatever one wanted to define as being essential to being referred to as a “computer”). Thus, a board game like Stop Thief (1979), for example, which has a handheld computer that makes sounds that relate to game play on the board, could be considered a computer game, but not a video game. More of these kinds of games exist than games that involve video but not a computer, making “video games” the more exclusive term. The term “video games” is also more accurate in regard to what kinds of games are meant when the term is used in common parlance, and so it will be the term used here.

It’s clear why Wolf would choose to say “video game” instead of electronic or computer games, but it seems to me that he never really explains why this is preferable to writing “videogame” as one word. Moreover, he mischaracterizes the motivation behind those who consciously choose to treat “videogames” as one word. The reason I write videogame is not because I consider them to be a “different type of video technology.” That would imply that I am giving priority to the video element of videogames at the expense of their gaming roots. But that’s not the case at all. I don’t consider videogames to be a new type of video technology, I consider them to be a new type of technology, period. It is a technology and a medium composed of two preexisting mediums–i.e., games and video–but one which remains irreducible to either one; more precisely, I don’t think of the videogame as a new form of video or a new form of game, but rather as an entirely new form in and of itself, one with distinct characteristics and powers of expression.

Simply put, both games and video are essential precursors to the modern videogame, but if you believe that the medium is more than the sum of its parts, then it follows that we should give it a name all its own. The advantage of using videogame as one word, then, is that it acknowledges and transcends these precursors in one fell swoop. It is a new word, a made-up word, but one that is very clearly anchored in the medium’s roots.

This is a surprisingly divisive issue, but I find it endlessly fascinating. For more points of view, check out this impromptu debate that took place over at Gameology 2.0 in 2006. Perhaps my favorite comment in that thread is the one by videogame critic/scholar Ian Bogost, who wrote in support of the one word spelling. (I only discovered this recently through his twitter feed–wish I had read it prior to my last post on the subject!) Bogost:

I use the term “videogame” for rhetorical reasons. Separating the words, in my opinion, suggests that videogames are merely games with some video screen or computer attached. But, I believe that videogames are fundamentally a computational medium, not just the extension of a medium like board or role-playing games (although there is also a genealogy there). I think that closing the space, in part, helps consolidate this concept. Personally, I’m only interested in gaming as it relates to computation. That doesn’t mean I don’t think gambling or board games or whatnot are useful, it just means that they are not my primary focus.

As for the argument that “videogame” implies video display…I don’t really care. I’m more interested in common usage, and the fact is that people use “videogame” to refer to the kinds of artifacts I want to talk about. I think video qua television screen is a vestigial effect of the arcade era and nobody is really confused about it.

For the same reason I abhor terms like “interactive entertainment.” I think inventing terms like this is a bit like trying to rename film or photography. More precise terms are more dangerous because they will lead to fragmentation. Jane McGonigal and I have had inconclusive conversations about whether ARGs and other so-called “big games” are videogames. I contend that they are, if they make significant use of computation (so, Cruel 2 B Kind, the game she and I created, is a videogame for me!). “Videogame” is a fine equivalent for “film” if we’d just stop worrying about it so much. And forcing the term into broader usage will help expand the medium much more than making up new words for each sub-type.

(Image by Bill Mudron)

Videogame, no space

Why are they called video games? Okay, stupid question. We all know why they are called video games, but why do we still call them video games? The term is so literal. It is a game that you play on a video screen. Is that the best we can come up with? Why don’t we come up with something cool like… Visual Attack Challenge Activities? Or VACAs for short. I realize it’s kind of wordy but it was just off the top of my head. Give me a break. –Shelby Coulter

Recently, I’ve started calling them videogames–no space–in spite of constant objections from my spellchecker, for many of the same reasons alluded to in that piece. A videogame is not simply a game on video, it’s a distinct form. Accordingly, I write videogame as a single word in an attempt to acknowledge gaming’s roots without undermining its claim of being a unique expressive medium in its own right.

Chess, now on video! = video game

**********

**********

= videogame

Now, this is not to say that there is no such thing as a video(_space_)game. When you play Chess or Monopoly online, you’re playing a video game; when you play Solitaire on your computer, you’re also playing a video game. Those are games, plain and simple. They’ve just been transported onto a video monitor for your convenience.

So, if online chess is a “video_game,” then what do we call something like Noby Noby Boy? It obviously uses video and, clearly, it’s meant to be played with. But many gamers remain suspicious of it: “Sure,” they say, “Noby Noby Boy might have all those things, but is it really a game?”

Alas, that question, interesting as it is to debate, is (in this specific case) largely beside the point.  For Noby Noby Boy is neither a game nor a video game, and it  was never meant to be.

It is, however, a fantastic videogame, no space—and that’s something else entirely.

Pixel Nostalgia

To follow up on the last post, make sure to check out this very nice response on Stephen Northcott’s blog–I think it balances out the tone of my initial remarks quite nicely. Whereas my post perhaps dwelled a bit too long on the decline of pixel art in mainstream gaming, here Northcott gives us a slightly more upbeat outlook for the future, citing Silicon Studio’s 3D Dot Game Heroes as a recent example of “how big publishing houses are starting to take notice of this hitherto underground Indie scene.”

It is also not surprising how popular this medium is set to be when you consider the average age of serious video game players these days. There are a lot of 30 – 40 year olds out there with a nostalgic view of the handful of decades that video games have been around.

He has a point: nostalgia probably has a lot to do with recent high-profile examples of the pixel aesthetic (I wrote a comment on his post with some ideas about why this happens).

The problem with nostalgia, however,  is that its power inevitably (ironically?) weakens over time. Right now, it is working to our advantage by appealing to the older generation of gamers. But some day these older gamers will die, taking their nostalgia with them. Worse, our nostalgic longing will steadily lose much of its power during our own lifespan, since eventually we’ll have to yield control of the medium to a  new generation of developers and players who  are being taught to see 3D as standard and pixels as “retro.”

So how can we protect pixels from the certain death that awaits us? Obviously, the first thing we need is to continue developing awesome pixel-based games. But just as important, we need to secure a permanent place for pixels in videogame discourse. Future gamers might not be able to fully understand our sense of nostalgia regardless of what we do (and this is a good thing, given that it is our nostalgia, not theirs), but by making a forceful case on its behalf, we can at least ensure that it remains a dignified and relevant option for game developers far into the future. This is what happened to black and white in film, and it will happen to pixel art as well if we make a strong enough case for it (a few more YouTube documentaries like Pixel and we’d already be halfway there). Hopefully we’ll do better than film, so that pixel art won’t be as rare as black and white movies have become nowadays.

Speaking of pixels and nostalgia, check out this super sweet stop-motion tribute to classic NES games!

The pixel is alive indeed. For more info on the film, see this Kotaku post.

Pixel: A Documentary by Simon Cottee

This short pixel art documentary by Simon Cottee is required viewing for readers of this blog (thanks for reading, by the way). An unassuming film with an unassuming title, Pixel may be modest in scope, but it is also a deeply enjoyable and thoughtful account of the rise, fall, and triumphant return of pixel art in videogames and other media.

Perhaps the most memorable part of the film is an interview with Jason Rohrer in which he attempts to justify the use of pixels in his own work with an interpretation consisting of two distinct but complementary claims. His first point is that abstract, pixelated graphics make it easier for players to identify with the characters, a concept that should come as no surprise to people with an interest in games or animation in general. His second point is a bit more surprising, given that it’s about technology, and given Rohrer’s reputation for being one of the least nerdy, least techie, and most artsy game designers around. Even more surprising–he actually makes a lot of sense (okay, maybe that isn’t so surprising, but it definitely makes for a more powerful defense of his taste in graphics).

Essentially, Rohrer’s point is that pixel art offers the most natural and transparent approach to making videogame imagery, due mostly to the fact that pixels, like videogames, always take place inside a computer. It’s a powerful thought, but a relatively simple one when you think about it. More importantly, it shows that Rohrer recognizes the need to reconcile the expressive nature of videogames with the technology that makes them possible.

Reconciling these two ideas is especially important in our 3D-dominated world, where pixelated abstraction is often portrayed as a reactionary move deployed by those who remain suspicious of polygons. Consequently, it becomes extremely easy to buy into the notion that pixel revivalists are simply part of a “backlash” against advances in game technology. Rohrer’s response turns this idea on its head by depicting pixels as the authentic and “hardcore” style, while simultaneously implying that 3D is the real format of choice for “casual” gamers (that means you, Halo fans…you casual gamers you).  Simply put, pixels are not a backlash against technology, they are the quintessential videogame technology. Likewise, the use of pixels does not constitute a rejection of realism, but rather an affirmation of abstraction.

At one point during the documentary, Rohrer suggests that an appreciation of pixel art is inextricably tied to achieving videogame literacy. He’s right, which is why this documentary should be applauded for doing its part to remedy the situation. But of course, even those who speak videogame fluently should watch the film, as it is sure to enrich your gaming vocabulary in one way or another.

You can watch the entire video above or on Cottee’s YouTube page.

This short pixel art documentary has spread pretty quickly since its release on YouTube last Saturday–and it deserves to spread some more. The film is every bit as practical and level-headed as its title suggests. There are no self-indulgent nostalgia trips or geeky ‘retro’ comedy skits here (to be clear, I absolutely love the comedic stylings of Yahtzee and the Angry Video Game Nerd…I’m less fond of their imitators); what we have is a crash course on the glorious world of the pixel–the first and purest (and best) form of visual representation in videogames. The film begins with a brief overview of the rise of pixels during the 80s and early 90s–a time when pixel art was pretty much the only way of putting graphics onscreen–and its eventual decline during the 3D era, which began in earnest with the 32/64-bit generation of consoles and continues to this day.

Games as Experience Machines

Deep Horizon

I’ve mentioned before that L.B. Jeffries is, in my estimation, one of the most capable videogame critics in the blogosphere. That is why I was taken aback by an essay titled The Trouble Shooting Review, in which he proposes a more technical approach to game criticism not unlike the one you would use to review a car. I want to think that either Jeffries misstated what he wanted to say, or perhaps that I misread it. I’m hoping that’s the case. In any event, I’ll use it as an opportunity to explain why games aren’t cars (in case you were wondering), and why the kind of technical analysis he seems to endorse is problematic for an expressive medium like videogames.

I’ll begin with two of the most baffling passages in his essay, which seem to summarize his general view as well as any other.

It’s easy to dismiss technical critiques like bugs or load times as irrelevant to a game’s value, but the notion of bringing them up still has merit. What can be gained by approaching a game review from a more technical perspective than things like fun factor or story? Looking at a game from a technical perspective really just means treating games like experience generating machines instead of experiences themselves. [...]

A lot of what I’m describing is basically what you do when you’re reviewing something like a car (Peter Johnson, et al, “How to Write a Car Review”, Wikihow, 27 October 2009). You don’t just drive on paved roads, you take it down some dirt roads and maybe slam the brakes at high-speeds a few times. Maybe even take someone for a ride with you and see if the passenger side is fun. Applied to games, it makes the reviewer consider things like if it has co-op, then you should do your best to play it that way.

1.

Let’s assume Jeffries is right to describe games as “experience generating machines” (truthfully, they can be called many things besides, but for now let’s stick to his fine term and agree that they are machines–figuratively speaking, since a game is not the same as the machine that reads them). Let’s assume, furthermore, that the goal of an experience generating machine is–what else?–to generate experiences. If this is true, then surely we must also accept the notion that an experience producing machine is only as good (interesting/powerful/profound) as the experience it produces in the player. Accordingly, Jeffries’s desire to treat games as “experience generating machines instead of experiences themselves” seems dangerously close to undercutting the entire purpose of the game.

In other words, games invite us to treat them as “experiences [in] themselves” first, and it is entirely beside the point to downplay this part of their nature on the grounds that such experiences have been “generated.” After all, every experience is generated by something. There is no such thing as a stand-alone experience. “Experiences themselves” are always the end result of external factors that frequently exist beyond our control or understanding. For instance, I could describe my first summer job as a teenager as a “wonderful, life-changing experience.” But why was this such a wonderful experience? What made it so transformative? That experience is not the result of nothing–surely, something must have made it a pleasant and transformative one. Maybe I was lucky enough to have a wise and patient boss; maybe I had great colleagues; maybe I met the love of my life in that job. One could cite hundreds of different potential explanations to support my recollection of that experience.

And yet, these explanations are secondary to the experience itself. They are my way of trying to rationalize something I have already experienced, not the other way around. Simply put: experiences come first, then come the explanations and rationalizations.

This is also the case when we’re talking about videogames as experience producing machines. If the point of a game is to produce an experience, then how could we not give priority to the experience of playing it? Imagine if we applied that same reasoning to a painting. Say that the “point” of painting is to produce a visual. If that’s the case, then it would make no sense to approach a painting as anything other than a visual artifact. A painting’s worth is not determined by the materials that the artist used to construct that painting. One does not say that a painting is “great, unless we consider the materials involved in creating it.” That would make no sense. Of course, identifying the materials is an interesting and useful thing to know, because it might help to explain the causes behind our reaction to said painting. But they do not precede our value judgments on the painting–they merely help us to justify and rationalize those reactions.

2.

Aside from being misguided and unrealistic, Jeffries’s approach risks courting the resentment of some readers. For it implies that the reviewer has somehow transcended his own initial experiences with the game and is therefore uniquely positioned to impartially dissect and predict the experiences of others, when in fact the opposite is true: i.e., our initial experience with a work often plays an essential role in determining not only what we think of the work, but also what we think of the public’s reaction to that work.

(A recent example of this phenomenon: many film critics and analysts explained the poor box office of The Hurt Locker by suggesting that the movie is simply “too heavy” for a mainstream audience that already finds itself coping with the grim reality of two wars and an economic recession. But if this is true, then what do we make of Transformers 2, which is just as violent and was also released during the same tumultuous period, only to make hundreds of millions of dollars? The answer to this riddle is quite simple: film critics saw The Hurt Locker and determined it was very, very good. They also saw Transformers 2 and determined that it was very, very bad. Accordingly, they politely blamed the audience for Hurt Locker’s failure and Transformers 2’s success by describing the former as “too heavy” and the latter as “easy escapism.” In other words, they assumed that the success or failure of either movie had nothing to do with its actual value as a work. Transformers was a success not because it did things right, but because it did things wrong. Conversely, The Hurt Locker was a commercial failure because of everything it did right! At this point, it is worth mentioning that there is nothing wrong with this attitude. My only wish is that more videogame critics followed suit.)

3.

An ideal approach to reviewing an “experience generating machine” would begin with a subjective appraisal of ‘the affects’ or sensations that the game communicates to us as a player. Only after we’ve reflected and formed an opinion on these matters do we begin to search for clues that explain how the game was able to transmit those feelings in the first place. In other words, we do not determine that a game is “good” after evaluating how the experience machine works. To the contrary, we first determine that a game is “good” and then look to the machine’s structure in an attempt to understand why it “worked” for us. Looked at from this perspective, the study of “experience machines” is really a study of the self. It is a study of the conditions (both real and virtual) that must be met for certain feelings and sensations to be triggered within us. And if you accept the notion that emotional triggers are only as valuable as the emotions they trigger, then it becomes crucial to reflect on the value of the experience itself before getting down to a functional analysis of the game design.

This is also why the “car review” analogy just doesn’t fit. A car is not an experience or an experience generator. Yes, we may find that driving a particularly nice car results in a pleasurable morning commute. But in this case the pleasant experience is incidental to the act of driving the car. One does not buy a nice car because the experience of driving it is worthwhile in and of itself (unless you’re Jay Leno, but no one wants to be him nowadays, lol). No, we buy cars because we need them to go places. Once we’ve made the decision to buy one, we might say to ourselves: “Hey look, it seems like I have some extra cash burning a hole in my pockets! And seeing as I’m already buying a car, why not use this extra dough to buy a really nice one, so that I can get to my destination more comfortably.” Notice how the need to buy a car precedes our decision to go for the one with the fancy features. The fancy features are just our way of making an otherwise necessary purchase more pleasant than it otherwise would be.

Games as “experience machines,” however, are designed for the express purpose of generating worthwhile experiences. We technically don’t need to play them, but those of us who recognize the medium’s expressive power and limitless potential actually want  to, believing that our lives can be enriched in the process.

If not cars, then what else could these experience machines be compared to? Perhaps the well-known “game criticism as ‘travelogue’” approach gives us a better analogy. Ideally, we play games for the same (ideal) reason that we travel: to experience an ‘alternate reality,’ to disrupt the monotonous flow of daily life, to unsettle our notion of what is ‘normal’ or ‘necessary,’ to engage others on their own terms, to learn from and experiment with different modes of being in the world, etc. There are times when these cultural exchanges require us to push the limits of what’s possible in the way Jeffries suggests. But oftentimes it is best to temporarily surrender to the experience itself and abide by the rules in place in the country we’re visiting, so as to allow ourselves to learn something meaningful in the process.

4.

At the beginning of his essay, Jeffries suggests that reviewing games might be harder than any other kind of review, because our experiences with games tend to be more unique and varied than  they are with any other medium. I think he’s right.  Accordingly, I don’t want to suggest that this is somehow the definitive way to approach game criticism. Not even close. In fact, I’m not entirely sure that games are best understood as experience generating machines, though I certainly find that description persuasive and believe it applies to many types of games.

Rather than seeking to impose a single methodology on this endlessly complex medium, perhaps we should be embracing its many ambiguities and enigmas. We should be flexible, adapting our styles according to the games under review at any given time. Critical gaming “schools” will come in due time, but we shouldn’t be in any rush to get there. The fact that game criticism remains a largely uncharted field might make it more difficult to navigate, but it also allows us a degree of freedom that is no longer present in more established mediums.

So let’s enjoy this freedom while it lasts, for it might no longer be available by the time the next generation of game critics arrive. Once you adopt this attitude, you might even find that the medium’s many ambiguities are precisely what make it so interesting.

Pictured above: “Deep Horizon” by UBERMORGEN.COM

[Resonance Machine] Foucault on Heterotopias


Videogames communicate through the strategic use of space. These spaces are virtual, but no less “real” than actual spaces in so far as they depend on the very same kind of spatial relationships that define how we relate to an environment.  These virtual spaces allow us to briefly escape the moralities and identities that define (and limit) our actions in the “real world,” only to replace them with a new set of values and identities determined by the game designer. It is in this respect that a videogame can be said to resemble a heterotopia.

A heterotopia is a space that exists outside of the society that it forms a part of. Its relationship with the rest of society might seem paradoxical at first blush, but it is central to the crucial role that heterotopias occupy in most cultures: generally speaking, these are the designated spaces in which various kinds of rites of passage may take place, and/or places where one may take refuge from society without really leaving it. There are many types of heterotopias, but all of them share at least one thing in common: namely, that one cannot enter and/or exit them without first meeting a set of criteria. We don’t work or live in a heterotopia; instead, a heterotopia is that place which we only visit for specific reasons at specific times. We can return to the broader society once the task has been completed (i.e., once the heterotopia has served its purpose).

Boarding schools, “love hotels,” museums, the theater, and even cemeteries can all be understood as heterotopias, for the various reasons that Foucault explains below. Towards the very end of the essay, in a beautiful (and uncharacteristically earnest) passage, Foucault identifies ships as the ultimate type of heterotopia in Western civilization, one that sadly began to disappear with the advent of air transportation and has yet to be replaced.

He thinks this is bad news, but you don’t have to despair! For there is something new emerging on the horizon. You guessed it: videogames. Could it be that a new heterotopia arrived just in time to answer Foucault’s lament about the imminent disappearance of most Western heterotopias? Perhaps, but I readily concede that it is a very debatable proposition. Many basic questions remain. For one, while it is certainly true that videogames and heterotopias share a lot in common, is this enough to regard them as essentially serving the same functions? Is the gaming medium best understood as a heterotopia with expressive attributes, or as a mode of expression with heterotopian functions? Should we even think of these alternatives as being mutually exclusive in the first place? Can’t a game serve both functions without fear of contradiction?

I’m keeping my opinions to myself for the time being, since the aim of posing these questions was never to find “the right answer” to them (assuming one exists), but rather to frame the heterotopia-gaming relationship in a way that productively problematizes both. After all, no one cares about the actual connection between a heterotopia and a game. The connection itself is meaningless; what’s important is trying to connect the two concepts in ways that can enrich our understanding of both games and heterotopias alike. Who knows, maybe we’ll even expand our critical-gaming vocabulary in the process!

So, without further ado, here is an excerpt of Michel Foucault’s Of Other Spaces, Heterotopias. The full text is available at Foucault.info, a great online resource for many of Foucault’s shorter writings.

HETEROTOPIAS

First there are the utopias. Utopias are sites with no real place. They are sites that have a general relation of direct or inverted analogy with the real space of Society. They present society itself in a perfected form, or else society turned upside down, but in any case these utopias are fundamentally unreal spaces.

There are also, probably in every culture, in every civilization, real places – places that do exist and that are formed in the very founding of society – which are something like counter-sites, a kind of effectively enacted utopia in which the real sites, all the other real sites that can be found within the culture, are simultaneously represented, contested, and inverted. Places of this kind are outside of all places, even though it may be possible to indicate their location in reality. Because these places are absolutely different from all the sites that they reflect and speak about, I shall call them, by way of contrast to utopias, heterotopias. I believe that between utopias and these quite other sites, these heterotopias, there might be a sort of mixed, joint experience, which would be the mirror. The mirror is, after all, a utopia, since it is a placeless place. In the mirror, I see myself there where I am not, in an unreal, virtual space that opens up behind the surface; I am over there, there where I am not, a sort of shadow that gives my own visibility to myself, that enables me to see myself there where I am absent: such is the utopia of the mirror. But it is also a heterotopia in so far as the mirror does exist in reality, where it exerts a sort of counteraction on the position that I occupy. From the standpoint of the mirror I discover my absence from the place where I am since I see myself over there. Starting from this gaze that is, as it were, directed toward me, from the ground of this virtual space that is on the other side of the glass, I come back toward myself; I begin again to direct my eyes toward myself and to reconstitute myself there where I am. The mirror functions as a heterotopia in this respect: it makes this place that I occupy at the moment when I look at myself in the glass at once absolutely real, connected with all the space that surrounds it, and absolutely unreal, since in order to be perceived it has to pass through this virtual point which is over there.

As for the heterotopias as such, how can they be described? What meaning do they have? We might imagine a sort of systematic description – I do not say a science because the term is too galvanized now – that would, in a given society, take as its object the study, analysis, description, and ‘reading’ (as some like to say nowadays) of these different spaces, of these other places. As a sort of simultaneously mythic and real contestation of the space in which we live, this description could be called heterotopology.

Its first principle is that there is probably not a single culture in the world that fails to constitute heterotopias. That is a constant of every human group. But the heterotopias obviously take quite varied forms, and perhaps no one absolutely universal form of heterotopia would be found. We can however class them in two main categories.

In the so-called primitive societies, there is a certain form of heterotopia that I would call crisis heterotopias, i.e., there are privileged or sacred or forbidden places, reserved for individuals who are, in relation to society and to the human environment in which they live, in a state of crisis: adolescents, menstruating women, pregnant women, the elderly, etc. In our society, these crisis heterotopias are persistently disappearing, though a few remnants can still be found. For example, the boarding school, in its nineteenth-century form, or military service for young men, have certainly played such a role, as the first manifestations of sexual virility were in fact supposed to take place “elsewhere” than at home. For girls, there was, until the middle of the twentieth century, a tradition called the “honeymoon trip” which was an ancestral theme. The young woman’s deflowering could take place “nowhere” and, at the moment of its occurrence the train or honeymoon hotel was indeed the place of this nowhere, this heterotopia without geographical markers.

But these heterotopias of crisis are disappearing today and are being replaced, I believe, by what we might call heterotopias of deviation: those in which individuals whose behavior is deviant in relation to the required mean or norm are placed. Cases of this are rest homes and psychiatric hospitals, and of course prisons, and one should perhaps add retirement homes that are, as it were, on the borderline between the heterotopia of crisis and the heterotopia of deviation since, after all, old age is a crisis, but is also a deviation since in our society where leisure is the rule, idleness is a sort of deviation.

The second principle of this description of heterotopias is that a society, as its history unfolds, can make an existing heterotopia function in a very different fashion; for each heterotopia has a precise and determined function within a society and the same heterotopia can, according to the synchrony of the culture in which it occurs, have one function or another.

As an example I shall take the strange heterotopia of the cemetery. The cemetery is certainly a place unlike ordinary cultural spaces. It is a space that is however connected with all the sites of the city, state or society or village, etc., since each individual, each family has relatives in the cemetery. In western culture the cemetery has practically always existed. But it has undergone important changes. Until the end of the eighteenth century, the cemetery was placed at the heart of the city, next to the church. In it there was a hierarchy of possible tombs. There was the charnel house in which bodies lost the last traces of individuality, there were a few individual tombs and then there were the tombs inside the church. These latter tombs were themselves of two types, either simply tombstones with an inscription, or mausoleums with statues. This cemetery housed inside the sacred space of the church has taken on a quite different cast in modern civilizations, and curiously, it is in a time when civilization has become ‘atheistic,’ as one says very crudely, that western culture has established what is termed the cult of the dead.

Basically it was quite natural that, in a time of real belief in the resurrection of bodies and the immortality of the soul, overriding importance was not accorded to the body’s remains. On the contrary, from the moment when people are no longer sure that they have a soul or that the body will regain life, it is perhaps necessary to give much more attention to the dead body, which is ultimately the only trace of our existence in the world and in language. In any case, it is from the beginning of the nineteenth century that everyone has a right to her or his own little box for her or his own little personal decay, but on the other hand, it is only from the start of the nineteenth century that cemeteries began to be located at the outside border of cities. In correlation with the individualization of death and the bourgeois appropriation of the cemetery, there arises an obsession with death as an ‘illness.’ The dead, it is supposed, bring illnesses to the living, and it is the presence and proximity of the dead right beside the houses, next to the church, almost in the middle of the street, it is this proximity that propagates death itself. This major theme of illness spread by the contagion in the cemeteries persisted until the end of the eighteenth century, until, during the nineteenth century, the shift of cemeteries toward the suburbs was initiated. The cemeteries then came to constitute, no longer the sacred and immortal heart of the city, but the other city, where each family possesses its dark resting place.

Third principle. The heterotopia is capable of juxtaposing in a single real place several spaces, several sites that are in themselves incompatible. Thus it is that the theater brings onto the rectangle of the stage, one after the other, a whole series of places that are foreign to one another; thus it is that the cinema is a very odd rectangular room, at the end of which, on a two-dimensional screen, one sees the projection of a three-dimensional space, but perhaps the oldest example of these heterotopias that take the form of contradictory sites is the garden. We must not forget that in the Orient the garden, an astonishing creation that is now a thousand years old, had very deep and seemingly superimposed meanings. The traditional garden of the Persians was a sacred space that was supposed to bring together inside its rectangle four parts representing the four parts of the world, with a space still more sacred than the others that were like an umbilicus, the navel of the world at its center (the basin and water fountain were there); and all the vegetation of the garden was supposed to come together in this space, in this sort of microcosm. As for carpets, they were originally reproductions of gardens (the garden is a rug onto which the whole world comes to enact its symbolic perfection, and the rug is a sort of garden that can move across space). The garden is the smallest parcel of the world and then it is the totality of the world. The garden has been a sort of happy, universalizing heterotopia since the beginnings of antiquity (our modern zoological gardens spring from that source).

Fourth principle. Heterotopias are most often linked to slices in time – which is to say that they open onto what might be termed, for the sake of symmetry, heterochronies. The heterotopia begins to function at full capacity when men arrive at a sort of absolute break with their traditional time. This situation shows us that the cemetery is indeed a highly heterotopic place since, for the individual, the cemetery begins with this strange heterochrony, the loss of life, and with this quasi-eternity in which her permanent lot is dissolution and disappearance.

From a general standpoint, in a society like ours heterotopias and heterochronies are structured and distributed in a relatively complex fashion. First of all, there are heterotopias of indefinitely accumulating time, for example museums and libraries. Museums and libraries have become heterotopias in which time never stops building up and topping its own summit, whereas in the seventeenth century, even at the end of the century, museums and libraries were the expression of an individual choice. By contrast, the idea of accumulating everything, of establishing a sort of general archive, the will to enclose in one place all times, all epochs, all forms, all tastes, the idea of constituting a place of all times that is itself outside of time and inaccessible to its ravages, the project of organizing in this way a sort of perpetual and indefinite accumulation of time in an immobile place, this whole idea belongs to our modernity. The museum and the library are heterotopias that are proper to western culture of the nineteenth century.

Opposite these heterotopias that are linked to the accumulation of time, there are those linked, on the contrary, to time in its most flowing, transitory, precarious aspect, to time in the mode of the festival. These heterotopias are not oriented toward the eternal, they are rather absolutely temporal [chroniques]. Such, for example, are the fairgrounds, these marvelous empty sites on the outskirts of cities that teem once or twice a year with stands, displays, heteroclite objects, wrestlers, snakewomen, fortune-tellers, and so forth. Quite recently, a new kind of temporal heterotopia has been invented: vacation villages, such as those Polynesian villages that offer a compact three weeks of primitive and eternal nudity to the inhabitants of the cities. You see, moreover, that through the two forms of heterotopias that come together here, the heterotopia of the festival and that of the eternity of accumulating time, the huts of Djerba are in a sense relatives of libraries and museums, for the rediscovery of Polynesian life abolishes time; yet the experience is just as much the rediscovery of time, it is as if the entire history of humanity reaching back to its origin were accessible in a sort of immediate knowledge.

Fifth principle. Heterotopias always presuppose a system of opening and closing that both isolates them and makes them penetrable. In general, the heterotopic site is not freely accessible like a public place. Either the entry is compulsory, as in the case of entering a barracks or a prison, or else the individual has to submit to rites and purifications. To get in one must have a certain permission and make certain gestures. Moreover, there are even heterotopias that are entirely consecrated to these activities of purification – purification that is partly religious and partly hygienic, such as the hammam of the Moslems, or else purification that appears to be purely hygienic, as in Scandinavian saunas.

There are others, on the contrary, that seem to be pure and simple openings, but that generally hide curious exclusions. Everyone can enter into these heterotopic sites, but in fact that is only an illusion: we think we enter where we are, by the very fact that we enter, excluded. I am thinking, for example, of the famous bedrooms that existed on the great farms of Brazil and elsewhere in South America. The entry door did not lead into the central room where the family lived, and every individual or traveler who came by had the right to open this door, to enter into the bedroom and to sleep there for a night. Now these bedrooms were such that the individual who went into them never had access to the family’s quarters; the visitor was absolutely the guest in transit, was not really the invited guest. This type of heterotopia, which has practically disappeared from our civilizations, could perhaps be found in the famous American motel rooms where a man goes with his car and his mistress and where illicit sex is both absolutely sheltered and absolutely hidden, kept isolated without however being allowed out in the open.

Sixth principle. The last trait of heterotopias is that they have a function in relation to all the space that remains. This function unfolds between two extreme poles. Either their role is to create a space of illusion that exposes every real space, all the sites inside of which human life is partitioned, as still more illusory (perhaps that is the role that was played by those famous brothels of which we are now deprived). Or else, on the contrary, their role is to create a space that is other, another real space, as perfect, as meticulous, as well arranged as ours is messy, ill constructed, and jumbled. This latter type would be the heterotopia, not of illusion, but of compensation, and I wonder if certain colonies have not functioned somewhat in this manner. In certain cases, they have played, on the level of the general organization of terrestrial space, the role of heterotopias. I am thinking, for example, of the first wave of colonization in the seventeenth century, of the Puritan societies that the English had founded in America and that were absolutely perfect other places. I am also thinking of those extraordinary Jesuit colonies that were founded in South America: marvelous, absolutely regulated colonies in which human perfection was effectively achieved. The Jesuits of Paraguay established colonies in which existence was regulated at every turn. The village was laid out according to a rigorous plan around a rectangular place at the foot of which was the church; on one side, there was the school; on the other, the cemetery; and then, in front of the church, an avenue set out that another crossed at right angles; each family had its little cabin along these two axes and thus the sign of Christ was exactly reproduced. Christianity marked the space and geography of the American world with its fundamental sign.

The daily life of individuals was regulated, not by the whistle, but by the bell. Everyone was awakened at the same time, everyone began work at the same time; meals were at noon and five o’clock; then came bedtime, and at midnight came what was called the marital wake-up, that is, at the chime of the churchbell, each person carried out her/his duty.

Brothels and colonies are two extreme types of heterotopia, and if we think, after all, that the boat is a floating piece of space, a place without a place, that exists by itself, that is closed in on itself and at the same time is given over to the infinity of the sea and that, from port to port, from tack to tack, from brothel to brothel, it goes as far as the colonies in search of the most precious treasures they conceal in their gardens, you will understand why the boat has not only been for our civilization, from the sixteenth century until the present, the great instrument of economic development (I have not been speaking of that today), but has been simultaneously the greatest reserve of the imagination. The ship is the heterotopia par excellence. In civilizations without boats, dreams dry up, espionage takes the place of adventure, and the police take the place of pirates.

Read up on texts by Foucault at Foucault.info; if you’re new to his ideas, this site serves as a good introduction to his life and work.
