In this week’s feature piece, Jeffrey Matulef asks why the industry has been so quick to demote pre-rendered cut scenes to the status of taboo.
I’ve been playing Demon’s Souls, From Software’s dungeon-crawling action-RPG that’s coming to American shores soon thanks (once again) to Atlus. During a recent playthrough it struck me that while the game’s dark fantasy aesthetic does look good, it doesn’t look as good as its cut scenes do. The cut scenes are stylistically similar to the rest of the game, but they feature more detailed texturing and smoother animation – in fact, they look better in every conceivable way. Demon’s Souls uses pre-rendered CGI footage for its cut scenes, placed sparingly between long chapters of dungeon crawling. It reminded me of the PlayStation games of the late 90s, when pre-rendered footage was the standard for video game cut scenes. A decade later, CGI cut scenes are almost completely out of fashion. What happened?
Games traditionally told their stories via text and in-game animations. That all changed with the advent of 3D gaming around the PlayStation era. The industry shifted towards pre-rendered cut scenes using computer-generated imagery to help push narrative forward, with Final Fantasy VII being the epitome of this shift. Prior to VII, the series had always been rather niche outside of Japan, but Square’s debut on PlayStation employed CGI cut scenes for the first time in the series. Sony’s merciless promotional campaign featuring these lavishly produced cut scenes – and none of the game engine – helped to ensure Final Fantasy VII’s immense popularity, and to make the brand a household name. Everyone wanted the game that looked like a movie, and dependence on pre-rendered cut scenes quickly became the dominant form of video game storytelling for several years.
Final Fantasy VII’s impressive cut scenes took it to new levels of popularity.
Yet by the time the PlayStation 2 had launched, the impetus within the industry was to use game engines to handle cut scenes. Games like Metal Gear Solid proved that developers didn’t need to rely on flashy CGI movies to tell a rich, complex story, while the PS2 allowed them to achieve fully realized 3D character models that weren’t overly pixelated. By the time this generation of consoles launched, pre-rendered footage had become all but taboo. But I think there was something to the old way of doing things that’s been lost in the industry’s dogged insistence on proving it can do everything within its game engines.
The flipside to this argument, one that no one seems to mention, is that pre-rendered CGI cut scenes simply look better than the rest of a game. Consistency is all well and good, but need we handicap our cut scenes just so they match inferior graphics? Even on the technological wunderkind that is the PlayStation 3, the graphics of a game like Demon’s Souls don’t come close to matching those of its cut scenes. The game doesn’t even render shadows on its models, for example, whereas its cut scenes look akin to something you’d pay to see in a movie theatre, putting the best real-time visuals from games like Uncharted and Metal Gear Solid 4 to shame. Basically, it comes down to an argument between fidelity and consistency.
What makes us attached to the characters and worlds of video games?
I believe that people grow attached to particular characters or worlds even if the interpretation changes. Take licensed games, for example. People are more willing to express goodwill towards an above-average game, even if it’s not great, because it’s part of a license they have some affection for. Would the Ghostbusters game have been nearly as successful had it been called "Spirit Hunters" and featured all-new characters? We grow attachments through watching as well as through playing, so when we do play, we feel that much more invested in the experience. So even now, as good as modern graphics are, we fill in the gaps ourselves, just as we always did with video games. The way I see it, watching Bill Murray on screen, then playing as the uncanny valley-esque 3D video game Bill Murray, is pretty similar to watching 3D Cloud in FFVII’s cut scenes, then playing as 2D sprite Cloud in-game. We’re engaged because of what they represent, not what they look like.
There is one difference, however: the movie version of Ghostbusters is a completely separate product from the game, one that can be enjoyed solely on its own merits. Swapping between live-action footage and in-game character models within the same game is more jarring than simply playing a game out of affection for its lore.
When games do use pre-rendered cut scenes, they tend to use them sparingly (most likely because they take up extra room on the game disc). I think this can be an effective tool for letting players peek through the looking glass into a particular game world, while leaving the rest to our imagination. Demon’s Souls certainly does this with quick, wondrous glimpses of its dark fantasy setting. Professor Layton and the Curious Village’s beautifully hand-drawn, Miyazaki-influenced animated cut scenes helped to flesh out the game’s turn-of-the-century European village. Infamous used 2D comic-book panels to move its plot along, achieving more with less, especially in its opening sequence, where it mentioned rape, something it would obviously be unable to depict in-game. My personal favorite is found in Metal Gear Solid 4, where Hideo Kojima’s throwback to live-action cut scenes took the form of surreal, darkly comic fake commercials for private military corporations, to great success.
One of the most charming features of Professor Layton was its animation.
Just because you can switch art styles, it doesn’t necessarily mean you should. For example, I can’t imagine how anything but in-game graphics could have benefited something like Ico, a game that achieves its power through minimalism. Adding another art style would’ve felt jarring, and would’ve taken players out of the experience. Mirror’s Edge tried to do things differently by employing 2D animations to tell its story, but this was met with almost unanimous critical disapproval. Maybe it was because they were poorly animated, or because they were uglier than the in-game graphics, or maybe because they were just plain ineffective on their own terms.
It’s always a delicate balancing act when a game chooses to break immersion by employing pre-rendered cut scenes rather than in-game graphics. If done well, these cut scenes can be entertaining in their own right, and can enhance the player’s connection to the game’s characters and world. If done poorly, they can be a jarring distraction that feels out of place. There isn’t one right way of doing things, but I think developers can get so swept up in the idea of staying in-game at all times that they forget there are other ways to tell a story. Pre-rendered cut scenes shouldn’t be thought of as cheating, but as something outside the box, like E.H. Shepard’s illustrations in The Wind in the Willows. Purists may argue that this detracts from what the medium is about, but video games are unique in that they can combine so many different mediums into one product. Developers can put just about anything they want onto these discs, so why stay strictly within the confines of what a game engine provides? Just because a technique is old doesn’t make it invalid.