Imperial College – GaME 07 – Creating Games for the Next Generation

by danhon

I’m at the Games and Media 07 event at Imperial College today. Next up is:

Creating Games for the Next Generation

David Braben, founder of Frontier Developments.

Some history for those who haven’t heard of Elite. Chairman of Frontier: RollerCoaster Tycoon, Wallace & Gromit, Thrillville, and The Outsider.

The games industry is in the second tier of the entertainment industry. First tier is film, books, television, sport. Second tier is computer games, board games, cuddly toys. Sort of merchandising. Why? “Games sales numbers are not yet up there”. It’s not just because of that: there’s no doubt that, among the older generation, games are not respectable. There’s a Boris Johnson quote from just after Christmas: “Games rot the brain, they become like blinking lizards…”

That’s the image a lot of people have of games. It will change, but we have to overcome that image.

Films in the ’20s were similar – cheap entertainment for the masses. Lloyd, Keaton – beautifully executed vignettes joined by an irrelevant story. Cars hit by trains, etc. We need our Hitchcocks and Welleses to move us forwards. For film it was enabled by sound – but it was 5 years or more before sound was reliable and being used to tell a story. In the 5th generation of the games industry, it’s similar now.

It’s never not been next generation in the games industry.

1. 1982: 8-bit – Apple II, PET, BBC, NES, C64
2. 1988: 16-bit – Atari ST
3. 1994: 32-bit – PlayStation, Saturn, N64, PC
4. 2000: PS2, XBox, GameCube, PSP, PC
5. 2006: XBox 360, PS3, PC

We’re outperforming Moore’s law. RAM and storage are tailing off, but CPU has outperformed Moore’s law. In Elite, we were compressing data tables, etc. so we could make better use of RAM – if we were doing it then, we should absolutely be doing it now. Performance per byte has increased 1,200 times since then.

What’s a Fifth Gen game?
Hundreds of people on screen with facial expressions?
Speak to them and they remember you?
UGC?
Vast array of things you can do?

Gameplay has always lagged and comes 3-4 years after the generation happened. If you look at PS2 and XBox, they were comparable with the previous gen for the first year, but with spangly graphical effects. That’s the same for this generation.

But. The thing that’s rubbish about that is that all those things were done last year with Thrillville on PSP. We haven’t finished exploiting the PS2 and XBox! The rate of change and the opportunity to do something that stands out from the crowd is there. Lots of mediocre games, and a few stand way out from the crowd.

We have a fantastic opportunity
Empathy, emotion and emergence? We can involve the player in the story, the way the film industry involved the viewer. In movies, viewers can’t affect the story. This is something that happened with Elite – you were drawn into the story, you could affect things, you cared about your character. If you only have bottlenecks that trigger things, you can’t really affect the story.

Masahiro Mori – The Uncanny Valley
He was in robotics and coined “the uncanny valley”. It’s about wanting people to relate to a character in a human way. You dress a robot as a person and it becomes sinister – there’s a sinister element when it’s close to a person but not behaving like a person. This is an issue we’ve got – all game characters are on the other side, they’re in “if it moves, shoot it; if it doesn’t, shoot it anyway” territory – that’s easy, programmatically. It’s harder to make a character fall in love with you. That’s the big challenge. You start to get human empathy and emotion on a social level – look at Pixar’s films – people care about the characters.

I want to extend the uncanny valley metaphor – near the start you can jump across on stepping stones, but imagine further down you need waders, and further along it’s a canyon. The Pixar desk lamp was wonderful. You can do it with a desk lamp, you can do it with lines and dots for a face. It’s astonishing how much emotion you can get with that. That crosses the uncanny valley – you relate in a human way. (He’s talking about Luxo Jr.) Then, cartoon characters. Most cartoon characters don’t even try. Gromit – glancing at the camera, frowning when Wallace does something ridiculous. A human connection. You’re still crossing the valley – it’s just easier to cross. The sinister part is very narrow. There’s no harm crossing it early on. The one we’re trying to cross – the Grand Canyon – is trying to make it look real.

It’s not about photo-realism. We’re getting hung up on that, we need to draw emotion into the game. The term ‘game’ itself, is difficult as it brings baggage.

Putting that aside, and looking at the process of satisfying that – this is an appeal to you guys (I think he means the students). Banking’s motivated by money, and 10 years later people say “Oh god, what did I do?” You’ll get less money in games – eventually maybe more! – but what people don’t see is that you can point at something on a shelf: “I made that!” You’re dealing with such different things. You’re creating something, you’re designing things. That’s extremely rewarding. The problems may be extremely mathematical, but they’re very enjoyable. You get creativity plus technology. I love technology and telling stories.

We’re right at the beginning. We have the early start. Respectability will improve in the next few years.

Working on computing’s most advanced problems – we work with university research departments – we’re up there with the universities. What we’re doing is trying to pass the Turing test. If you enjoy research, then gaming’s a good platform for doing that.

Doing something without trying to solve a problem – the how you do that is open…

So, The Outsider. A guy who’s been accused of killing the President. It’s an open story. No cutscenes, the story unfolds in the game, you can go anywhere in DC. That’s an ambitious story and it needs tech to support it. To do a big city with interiors is quite hard. So here’s an example of the tech working for that.

Environments
Rather than putting buildings together in Max and Maya, we have our own toolchain, with higher quality, and much faster. 100x faster.

Ambient occlusion and radiosity. It’s so that dark bits – edges – come out, without having to darken them up. Radiosity is where things pick up colour. If you don’t have it, it looks wrong.
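
Roughly, ambient occlusion estimates how enclosed each point on a surface is by sampling the hemisphere above it – the more rays that hit nearby geometry, the darker the point, which is how the “dark bits – edges” come out for free. A minimal sketch of that idea (a generic offline estimator, not Frontier’s renderer; the scene-query callback is a placeholder):

```python
import math, random

def ambient_occlusion(point, normal, occluded, samples=64, max_dist=2.0):
    """Estimate how 'open' a surface point is by firing random rays over the
    hemisphere around its normal. `occluded(origin, direction, max_dist)` is a
    scene query supplied by the caller (a placeholder here)."""
    hits = 0
    for _ in range(samples):
        # Rejection-sample a random unit direction...
        while True:
            d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
            length = math.sqrt(sum(c * c for c in d))
            if 0 < length <= 1:
                d = tuple(c / length for c in d)
                break
        # ...and flip it into the hemisphere above the surface.
        if sum(dc * nc for dc, nc in zip(d, normal)) < 0:
            d = tuple(-c for c in d)
        if occluded(point, d, max_dist):
            hits += 1
    # 1.0 = fully open (bright), 0.0 = fully enclosed (a dark crease or edge).
    return 1.0 - hits / samples

# Toy scene with no occluders: a point facing straight up is fully open.
print(ambient_occlusion((0, 0, 0), (0, 1, 0), lambda o, d, m: False))  # -> 1.0
```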

Also, we render a whole city – a 49 square kilometer city data test, streaming throughput on XBox 360 and PS3.

The game understands buildings, not just a soup of polygons.

Characters
Games that aren’t in a desolate wasteland need thousands of people. In games that try, frequently the people all look the same, or are chosen from a dozen or so. If you have people who’re dramatically different, you can’t realistically make thousands of different people, because they can’t share animations – e.g. fat faces need different animations than thin faces. So we worked out a muscle structure that works across the range: very skinny – to… morphing between three different heads in a triangle. All the animations still work. You don’t go through the valley when you blend between them.
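
As a rough illustration of “morphing between three different heads in a triangle”: if the three base heads share the same vertex count and the same rig, any barycentric blend of them is still a valid head that the shared animations can drive. This is my sketch of the idea, not Frontier’s tooling; all names and numbers are invented:

```python
def morph_head(head_a, head_b, head_c, wa, wb, wc):
    """Blend three topologically identical head meshes with barycentric
    weights (wa + wb + wc == 1). Returns the blended vertex list."""
    assert abs(wa + wb + wc - 1.0) < 1e-6
    return [
        tuple(wa * a + wb * b + wc * c for a, b, c in zip(va, vb, vc))
        for va, vb, vc in zip(head_a, head_b, head_c)
    ]

# Tiny example: three two-vertex "heads" (skinny, heavy, average).
skinny  = [(0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
heavy   = [(0.4, 1.0, 0.0), (0.4, -1.0, 0.0)]
average = [(0.2, 1.1, 0.0), (0.2, -0.9, 0.0)]

# A character that is 60% skinny, 10% heavy, 30% average.
print(morph_head(skinny, heavy, average, 0.6, 0.1, 0.3))
```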

This produces striking variation. Demo of lots of different-looking posed characters. Huge richness to the characters. They convey character before they’ve opened their mouths. From a tech point of view, you problem-solve on the face and go straight to varied characters.

Character Animation
Use an AI approach to drive muscles. Demo of throwing objects at characters – ragdoll. Combine rag-doll physics and animation synthesis. Characters fall over in random ways, and then get up individually. They get up – they’re animated in a goal-driven way. They move their feet to regain balance. They protect their head before they land, etc. These things are hard from an animator’s point of view – you can’t hand-animate the video he’s showing.

There are other solutions like this, but the characters can look drunk. A physics-solved walk tends not to look right. We’re good at spotting robots – the valley again. In The Outsider, we’re still mo-capping. We got an SAS soldier to beat someone up in mocap – it’s really brutal. That’s one of our guys. The animation synthesis tries to match the mocap, so it’s trying to match real-world animation. We can get the best of both worlds. So, if a desk gets in the way, the guy slams his head against the desk – it works in the environment: the physics takes over from the mocap, and because it’s driven by physics, it looks natural. There’s a great deal of mathematical problem solving here.
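
The “animation synthesis tries to match the mocap” idea is often built on something like a spring-damper (PD) controller per joint: physics integrates the motion, but each joint is constantly pulled toward the recorded pose, so a collision (the desk) can push it off course and it still recovers naturally. A hedged one-joint sketch – gains, inertia and the track are all made up, and this is not Frontier’s solver:

```python
def pd_torque(current, target, velocity, kp=80.0, kd=10.0):
    """Spring toward the mocap angle, damped by the joint's angular velocity."""
    return kp * (target - current) - kd * velocity

def simulate(mocap_track, steps=100, dt=1.0 / 60.0, inertia=1.0):
    angle, velocity = 0.0, 0.0
    for i in range(steps):
        target = mocap_track[min(i, len(mocap_track) - 1)]
        torque = pd_torque(angle, target, velocity)
        velocity += (torque / inertia) * dt  # integrate the physics step
        angle += velocity * dt
    return angle

# The mocap says this joint should settle at 1.2 radians; the simulated joint
# converges there unless an external force or collision pushes it away.
print(round(simulate([1.2] * 100), 3))
```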

Conversation
This is a layer above those before. This is reliant on AI. We’ve been working on this for a very long time; now we’re 6-7 times round the loop. We want players to interact with characters, but as soon as they open their mouths, you realise how thick they are. So you have canned speech. So you don’t really have conversations, you have pseudo-story. Or you get a branch in the story – in Fahrenheit, where a kid falls into a lake – story mixed with conversation – if you rescue him, then you’re on the run, but they let you go – the branches rejoin. It’s an empty experience. If you do something with the sword example – where it’s pseudo – or if you take a branch, then you have to populate both those branches. Most storytelling up until now is cutscenes. Either film, or hand-animated. The problem, from a gameplay point of view, is that there are lots of different ways you can play a scene, but then they have to merge back together. That’s frustrating, you can’t play the game the way you want. I couldn’t play GTA the way I wanted. That’s uncomfortable.

So: demo video. Cop meets player, but they’re pinned down.

Conversation can make a huge difference to combat. It’s another weapon. The story for The Outsider is that you get a view of 3-way/4-way interaction in a scene. Here, you choose to work with the cop, which helps you a lot. What’s happening: we run a number of AIs, following a number of separate emotions in parallel, scored numerically. The guy started as a cop, doing his behaviour to arrest the player, so he holds up his gun, and he’s in a building. Very quickly, he goes out of his comfort zone, so he becomes scared, and then there’s an opportunity to persuade him to work together. It’s a way you start to see the guy as a character rather than a random game character to be exploited. So he’ll tell his mates, and his friends will be more ready to work with you, even though you’re accused of killing the President. It shows how the AI – which was hard to get working – feeds right into the gameplay. It’s a techy, dry problem, but it works right into the gameplay. Or the guy could go to the toilet in the middle of a battle – which means you can hide – but sometimes that didn’t work! Making sure the behaviours are correctly ordered is important.
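
A toy sketch of what “a number of AIs, following a number of separate emotions in parallel, scored numerically” might look like: events nudge a handful of scores, and the highest score picks the current behaviour. The emotions, events and weights here are entirely invented for illustration – this is not Frontier’s AI:

```python
class NPC:
    def __init__(self):
        # Parallel, numerically scored "emotions".
        self.scores = {"duty": 1.0, "fear": 0.0, "trust": 0.0}

    def observe(self, event):
        # Each event shifts several emotions at once.
        effects = {
            "gunfire_nearby":  {"fear": +0.5, "duty": -0.2},
            "player_helps":    {"trust": +0.5, "fear": -0.2},
            "player_aims_gun": {"fear": +0.3, "trust": -0.3},
        }
        for emotion, delta in effects.get(event, {}).items():
            self.scores[emotion] = max(0.0, self.scores[emotion] + delta)

    def behaviour(self):
        # Ordering matters: overriding states are checked before the default.
        if self.scores["fear"] > max(self.scores["duty"], self.scores["trust"]):
            return "take_cover"
        if self.scores["trust"] > self.scores["duty"]:
            return "cooperate_with_player"
        return "try_to_arrest_player"

cop = NPC()
print(cop.behaviour())             # try_to_arrest_player
cop.observe("gunfire_nearby")
cop.observe("gunfire_nearby")
print(cop.behaviour())             # take_cover (out of his comfort zone)
cop.observe("player_helps")
cop.observe("player_helps")
print(cop.behaviour())             # cooperate_with_player
```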

Q&A:

Q: So why does a game company do this, rather than a third-party middleware company?

A: That’s a good question. “Point me at the middleware” is the first answer. In the games industry, there’s an ethos that middleware provides functions for games that have been, rather than games that will be. E.g. Unreal Engine 3 – Gears of War showcases their latest technology, and others license it. If you do middleware without using it in anger, there’s an assumption that it won’t work. This might become middleware – it’s not the intention – but if this middleware were already around, it wouldn’t be unique, and you guys wouldn’t be seeing it.

Q: When generating the conversation, how is the player involved? Is that a menu dialogue, or do they type?

A: There’s a controller button – you can call up the menu, and the game provides options that make sense in the context. It depends on the way you’re looking, etc. The players get cues from the choices. But it brings so much more freedom.

Q: I worked on crowds for Troy and Kingdom of Heaven. All of that was mocapped, but then we generated characters based on cut/shut, and it was emergent AI that made the armies run together. Trebuchets, etc. But that was 1 frame every 6 hours, instead of 60 frames per second.

A: It’s different – that’s LOTR, Massive, but we just don’t have that. I understand that they’re similar, it’s a fair point, but you couldn’t re-run that scene without a lot of manpower to set it up. You’re right, there are other things, e.g. insects, particle effects. There is a bit more to it than that.

Q: When you’re creating large cities, it takes a lot to populate them. What do you use to drive them?

A: We have LoD, and we use flow systems for traffic in the extreme distance. We’re only talking 32 sq km, but you can model a lot in that space. It’s a big job, but it’s a doable job. When you design a building, it implicitly says who’ll work there.
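
My guess at what a “flow system” for distant traffic looks like: far road segments carry only an aggregate cars-per-minute figure, and individual cars are spawned from that flow only when a segment comes within range of the player. Everything here – the threshold, the 1-D positions, the classes – is invented for illustration:

```python
NEAR_RADIUS = 300.0  # metres; beyond this, roads are pure flow

class RoadSegment:
    def __init__(self, centre, cars_per_minute):
        self.centre = centre
        self.flow = cars_per_minute  # aggregate representation
        self.cars = []               # individual representation

    def update(self, player_pos, dt):
        dist = abs(self.centre - player_pos)  # 1-D city, for brevity
        if dist < NEAR_RADIUS:
            # Promote the flow to actual cars at the rate the flow implies.
            expected = self.flow * dt / 60.0
            while expected >= 1.0:
                self.cars.append({"pos": self.centre, "speed": 13.0})
                expected -= 1.0
        else:
            # Demote: far away, forget the individuals, keep the statistics.
            self.cars.clear()

segment = RoadSegment(centre=1000.0, cars_per_minute=30.0)
segment.update(player_pos=900.0, dt=4.0)   # player nearby: 2 cars spawned
print(len(segment.cars))
segment.update(player_pos=5000.0, dt=4.0)  # player far away: flow only again
print(len(segment.cars))
```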

Q: Are buildings hand-designed, or are they procedurally designed?

A: Hand-designed, but quickly. We’re moving toward procedural. Artists end up cutting and pasting a lot, and the game doesn’t know that – it’s just got a soup of polygons to carry around and stream from disk, which is a big load on the system. It’s still a DVD on XBox 360; we filled that on PS2 and compressed it. For buildings, the artists say: this bit is windows, this bit is a door. Then the game knows they’re windows and can portal them up – statistically, portalling is only around 95% right when an artist does it. Things aren’t properly welded, so you can do verification. Secondary things like linking floors and power – artists don’t think about that; an automated system deals with it. This tech is put together with future games in mind. You can get uniqueness, but the artists have the use of procedural technology – fill this room with desks, and jiggle them around a bit. A “tonnes of crap” generator for stuff on desks. That sort of thing – maximise the effort of the artist, rather than replace the artist. Otherwise you end up with sameness. That’s why we do it the way we’re doing it.
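
For “fill this room with desks, and jiggle them around a bit”, a minimal sketch: grid placement plus seeded random jitter, so the artist keeps control of the rules while no two rooms look stamped out, and the same seed streams the same room back in. Dimensions, spacing and jitter are made up:

```python
import random

def fill_room_with_desks(room_w, room_d, desk_w=1.6, desk_d=0.8,
                         spacing=1.0, jiggle=0.15, seed=None):
    """Lay desks on a regular grid, then perturb position and rotation."""
    rng = random.Random(seed)
    desks = []
    x = spacing
    while x + desk_w + spacing <= room_w:
        y = spacing
        while y + desk_d + spacing <= room_d:
            desks.append({
                "x": x + rng.uniform(-jiggle, jiggle),
                "y": y + rng.uniform(-jiggle, jiggle),
                "rotation_deg": rng.uniform(-5.0, 5.0),
            })
            y += desk_d + spacing
        x += desk_w + spacing
    return desks

# A 10m x 8m office, reproducible from its seed.
for desk in fill_room_with_desks(10.0, 8.0, seed=42)[:3]:
    print(desk)
```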

Q: You’ve given players massive freedom in what they can do, but not what they can say. Are you actively pursuing natural language?

A: We have some tech to do with speech generation which I haven’t shown today. In terms of where it’s going, there’s a wide range of things you can say. First time through, we had hundreds of options – it stops the gameplay dead. The clever thing is coming up with sensible choices so you don’t feel constrained. We score what you want to say – if you’re not looking at the guy, it changes the options.
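
A rough sketch of scoring what you might want to say: every candidate line carries weights against situation flags, and only the top few are offered, so the menu stays short and contextual rather than hundreds of options. The lines, flags and weights are invented, not from The Outsider:

```python
# (candidate line, weights against situation flags)
CANDIDATES = [
    ("Put the gun down, I can help you.", {"npc_visible": 2.0, "npc_scared": 1.5}),
    ("Cover me, I'm going in!",           {"in_combat": 2.0, "npc_trusts_player": 1.0}),
    ("Which way to the exit?",            {"npc_visible": 1.0}),
    ("Stay quiet.",                       {"in_combat": 1.0}),
]

def dialogue_options(situation, limit=3):
    scored = []
    for line, weights in CANDIDATES:
        score = sum(w for flag, w in weights.items() if situation.get(flag))
        if score > 0:
            scored.append((score, line))
    scored.sort(reverse=True)
    return [line for _, line in scored[:limit]]

# Looking at a scared cop during a firefight: face-to-face lines rank highest.
print(dialogue_options({"npc_visible": True, "npc_scared": True, "in_combat": True}))
# Looking away: those options drop out.
print(dialogue_options({"in_combat": True}))
```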

Q: With buildings, having seen it and the speed, it seems similar to Second Life

A: Did you look at Second Life? We haven’t looked at Second Life – we did buildings as well in RollerCoaster Tycoon. I haven’t looked at it, but I suspect it’s aiming in a different direction. [Dan: It doesn't look like Second Life at all, sorry.] With all of these things, the need is there, and I’ll be astonished if others don’t do things that are similar in some way.

Q: On characters interacting with a player – you choose a question from a generated bank – what about NPC interaction – will they not fall into the trap of Oblivion?

A: That’s what we’re working against, to make sure that doesn’t happen. With Oblivion, one issue is the amount of text and audio they can store on the disc.