I’ve been thinking about how much I’d like perspective to make it into the mix of features for the Opticks engine, and I’ve had a bit of experience with the Papervision3D engine, as seen on the lovely The Chills website. Creating basic geometry and loading in models from modeling packages was pretty straightforward, but I had always run up against a wall when loading and dealing with animations.
Another thing I noticed when pushing PV3D to its performance limit (and beyond, according to some people) is a complaint I’ve seen many others make: Papervision is very slow. That’s a fair accusation in comparison with, say, the Unreal engine (or even the Quake 1 engine at this point, though I’m convinced it’ll improve with time) – but the thing about real-time 3D applications, games included, is that they have always been about the cheapest hacks available.
With the Chills animation I did things the hard way – hard for me, and unfortunately, hard for Papervision. It looks pretty – there’s an aurora, reflections, changing colours for the clouds and the sky, dynamically generated igloos, and a shaded, textured mountainscape. And a motion blur. All good things, save for the fact that it runs at about 5 FPS without hardware acceleration. When I added more features in – a better motion blur, more detailed shading, finer mesh detail – it brought even the most powerful desktops to their knees. Screenshots looked fantastic, though.
That’s not a movie, that’s a slideshow
Let’s think about the possibilities for a second. Instead of being required to produce 25+ FPS, if you could render one frame every second or two and give the user a fancy zoom/fade/wipe/etc. transition between them, you’d get away with much more. It might even give them a more enjoyable experience than struggling to cram everything you can into the constraints of real time. If you wanted to go further, you could even pre-render an entire sequence to play back in real time later – provided you had something to show now, and you could properly bridge the two sequences.
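As a rough sketch of that slideshow idea (all names and structure here are my own invention, not from any real engine), the cheap per-tick work is just blending between the last two expensive renders while the next one cooks in the background:

```typescript
// A stand-in for a rendered bitmap (would be BitmapData in Flash).
interface Snapshot { id: number; }

// Keeps the last two slow renders and exposes how far along the
// cross-fade between them we are at any given display tick.
class SlideshowBlender {
  private prev: Snapshot | null = null;
  private curr: Snapshot | null = null;
  private lastRenderTime = 0;

  constructor(private renderIntervalMs: number) {}

  // Called whenever the (expensive) renderer finishes a frame.
  pushSnapshot(snap: Snapshot, nowMs: number): void {
    this.prev = this.curr;
    this.curr = snap;
    this.lastRenderTime = nowMs;
  }

  // Called every display tick (cheap): fade progress from 0 to 1.
  blendFactor(nowMs: number): number {
    const t = (nowMs - this.lastRenderTime) / this.renderIntervalMs;
    return Math.min(1, Math.max(0, t));
  }
}
```

The display loop would draw `prev` under `curr` with `curr`’s alpha set to the blend factor – a zoom or wipe would just be a different use of the same 0-to-1 value.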
The slideshow idea is still something I’m interested in, but we didn’t have time on that particular project. The second option – using Papervision as a pre-renderer – was impractical because of the first-person, full-scene animation of the thing. It was an idea I considered, discounted as inappropriate, and forgot about.
Until recently. Early last month, Tim posted about Playfish, who are doing some amazing things with Flash and social media. After seeing that mention, and having seen Pet Society spammed about for a while, I was curious about their offerings, so I signed up for a few of their games.
I wasn’t impressed with my first experience, but it turns out that Pet Society is probably the weakest title in their stable. Since then they have made a series of isometric titles with customizable characters. Since I’m always fascinated by new technical developments in Flash, and since these titles seem to run surprisingly well given the amount of stuff happening on-screen, I tried to dig up what the underlying technology is.
It’s Papervision, but not as we know it
They’ve done what I was pondering over earlier this year – according to some discussion on old Papervision3D mailing lists, they’re using PV3D to pre-render elements like characters to a series of bitmaps in a preload phase, then just playing them back as simple movies as required. This can be seen in titles like Minigolf Party, where the load time includes the pre-rendering. In more recent titles like Restaurant City, where there is a constant flow of characters in and out of the player’s view, I’m guessing they’re being generated just prior to display, and (hopefully) dumped afterward.
Aside from sounding like a somewhat perverse approach to art, this is genius. While the artists lose the fine-grained control of painting pixels or drawing out vector points, they can create much more content by rendering all necessary angles automatically, as well as change a hairstyle or a shirt without having to redraw the whole thing.
It cuts down on bandwidth, too – instead of sending out every frame of character animation a user might need to see, you send them the model and the animation data, and the client renders the frames itself.
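A sketch of how that pre-render pass might look. Everything here is hypothetical – the fake `renderAtAngle` stands in for pointing a PV3D camera at the model and drawing the scene into a bitmap once per angle:

```typescript
// Stand-in for BitmapData; here just a label so we can inspect results.
type Bitmap = string;

// Hypothetical expensive render: one character, one camera angle.
function renderAtAngle(character: string, angleDeg: number): Bitmap {
  return `${character}@${angleDeg}`;
}

// Load phase: render the character at N evenly spaced angles.
function preRender(character: string, steps: number): Bitmap[] {
  const frames: Bitmap[] = [];
  for (let i = 0; i < steps; i++) {
    frames.push(renderAtAngle(character, (360 / steps) * i));
  }
  return frames;
}

// Display time: snap an arbitrary angle to the nearest pre-rendered one.
function lookup(frames: Bitmap[], angleDeg: number): Bitmap {
  const steps = frames.length;
  const step = 360 / steps;
  const norm = ((angleDeg % 360) + 360) % 360;
  return frames[Math.round(norm / step) % steps];
}
```

Changing a hairstyle or a shirt then means swapping a texture and re-running `preRender`, rather than redrawing every angle by hand.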
Those advantages are shared between real-time and pre-rendered 3D. Where they differ, though, is performance. In most 3D engines, and in most Papervision-based systems, the whole scene is drawn every frame. Papervision is fast, but it’s still running inside Flash, which means it is (at least) thousands of times slower than a modern game engine. To get any performance at all, the detail has to be thousands of times simpler.
Unless you’re pre-rendering! For animations that can be repeated (walk cycles, attacks, etc.), you can take as long as you want to render in a ‘load’ phase, and then just play back the results as necessary. This only works when you can guarantee you’ll use the same animation at least twice, and what you gain in performance comes at the cost of more storage – the inevitable space/time tradeoff we always run up against in computing. On most platforms – consoles, mobile devices and such – storage (in RAM) and processing capability are more or less balanced. In Flash, though, the storage is vastly greater than the processing power, so it makes sense to cache heavily.
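The cache itself is plain memoization: pay the render cost once per cycle, replay the stored bitmaps ever after. This is my own invented sketch (the render counter is there just to make the space/time tradeoff visible):

```typescript
// Stand-in for a rendered animation frame.
type Frame = string;

class AnimationCache {
  private cache = new Map<string, Frame[]>();
  renders = 0; // how many expensive full renders actually happened

  // Hypothetical expensive path: render every frame of a cycle.
  private renderCycle(name: string, frameCount: number): Frame[] {
    this.renders++;
    const frames: Frame[] = [];
    for (let i = 0; i < frameCount; i++) {
      frames.push(`${name}#${i}`);
    }
    return frames;
  }

  // Cheap path after the first call: returns cached frames,
  // rendering only on a cache miss.
  getCycle(name: string, frameCount: number): Frame[] {
    let frames = this.cache.get(name);
    if (!frames) {
      frames = this.renderCycle(name, frameCount);
      this.cache.set(name, frames);
    }
    return frames;
  }
}
```

Every repeat of a walk cycle after the first costs only a map lookup and a bitmap blit – exactly the trade that makes sense when Flash has far more memory than CPU to spare.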
Since I plan on using this in my Opticks game, I thought it would be worth trying out how hard it is to make this stuff happen. Toward that end, I have been experimenting. Below is a Playfish-style representation of what I have been doing. To do the things it says to, click in the panel and press keys (press “J”! It’s amazing!).