Developer experiences from the trenches

Scripting Languages: They Aren’t ALL Bad

Mon 05 December 2011 by Michael Labbe
tags code 


Scripting languages have really fallen out of favor in modern game development. It’s easy to work a programmer into a lather over poor debuggers, a lack of threading, or the slow compile times of a scripting language they were saddled with on a given project.

That said, the pendulum has swung too far in the anti-scripting direction.

The beginning of a console cycle is epitomized by a need for compatibility: “how can we ship on multiple prototype hardware platforms?” and tool efficiency: “how can we reduce the spiralling cost of development inherent in expanding horizons?” These questions led developers to make scripting plays in their engines.

In contrast, we are near the traditional end of a console cycle. We understand how to get a lot out of the hardware, so the focus is free to move towards code execution efficiency in order to exceed the experiences players have had on the fixed hardware to date.

Looking past the consoles, memory latency is forcing us towards cache-friendly approaches to implementation that avoid the indirection inherent in runtime bytecode interpretation and vtable lookups. Even if we push CPU GHz up, we aren’t going back to peppering our runtimes with vtables anytime soon, thanks to memory latency overhead.
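
To make that concrete, here is a rough sketch with hypothetical entity types. Dispatching through a per-object function pointer (the C analogue of a vtable call) chases a pointer and makes an indirect call on every iteration; a flat loop over packed data lets the hardware prefetcher stay ahead:

    /* Hypothetical types, sketched rather than benchmarked. */

    typedef struct {
        void (*update)(void *self);   /* vtable-style indirect call */
        /* ... */
    } entity_poly_t;

    typedef struct { float x, y, vx, vy; } entity_flat_t;

    void update_polymorphic(entity_poly_t *ents, int n)
    {
        /* Each iteration makes an indirect call through a pointer:
           a potential cache miss and branch mispredict per entity. */
        for (int i = 0; i < n; i++)
            ents[i].update(&ents[i]);
    }

    void update_flat(entity_flat_t *ents, int n, float dt)
    {
        /* Straight-line walk over contiguous data; the prefetcher
           can stay ahead of the loop. */
        for (int i = 0; i < n; i++) {
            ents[i].x += ents[i].vx * dt;
            ents[i].y += ents[i].vy * dt;
        }
    }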

In this environment, it’s entirely understandable that we would choose to avoid developing in languages like UnrealScript, where only the rote engine-level loops escape bytecode interpretation. None of this means that scripting languages should be cut out of game development.

I see two places where scripting still beats out compiled code by providing new experiences for players:

First, scripting works for one-off level events. Scripts will always have a place as glue code. If you are nesting loops in a level script, you have probably gone too far. Because you are not attempting to implement large parts of the game world in script, and because level scripts do not run on tick, the performance impact is minimal.

A few lines of script are a natural alternative to abstract event firing systems. Visual scripting languages, ironically, are rarely informative at a glance. It’s much easier to see all of the code in one place than to click around a 3D blueprint looking for wires between entities and still be left guessing at execution order.

The second use for scripts is more interesting. Right now, a big trend in games is online play.

If you send state updates to your clients, the only way to add new behavior to your game is to patch your client. If you send script code for interpretation, what your players experience is limited only by the assets in the local cache and the hardcoded simulation.

The first version of Capture the Flag for Quake was written by Zoid before QuakeWorld. He took advantage of the fact that, back then, the Quake client was mostly a dumb terminal, with a subset of QuakeC sent over the pipe. Players could connect to his unique server without any mods and play CTF. This low barrier helped make Threewave Capture the Flag the first really popular Internet teamplay game.

If you are sending script over the pipe in 2011, please remember to fuzz your interpreter… and don’t drop in an off-the-shelf language that wasn’t designed with unsafe code in mind. Thanks.

Hardware is the basis of understanding rendering

Mon 09 August 2010 by Michael Labbe
tags code 

Hardware is the basis of understanding rendering. Not numerical problems, not geometric problems and certainly not memorizing OpenGL or Direct3D APIs.

One of the first questions that needs to be asked when deciding to implement a new graphical feature is what needs to run on the general purpose CPU and what can be computed on the GPU (or even a set of SPEs). This question is impossible to answer without a fundamental understanding of the hardware you are programming for. The abstraction of your favorite API does less to mask this as more programmatic options become available. Consider:

Once you can competently speak on these points, you can start to devise a hypothesis about how to best divide your hardware resources to render a typical scene for your game.

At this point, you have a shot at guessing what data needs to be where in your pipeline and when it needs to be there. This is the point when numerical and geometric issues move into focus.

Learning OpenGL to learn graphics now considered harmful

OpenGL (with the exception of OpenGL ES) is a pool of functions, many of which superficially achieve the same results but with different approaches to dealing with bus latency. As memory latency becomes an increasingly acute source of pipeline stalls, the most prominent OpenGL literature continues to promote immediate mode and display lists, in spite of the hardware’s strong preference for sizable data batches.
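
As a minimal sketch of the difference, assuming a valid GL context and buffer object support (core as of GL 1.5), compare a per-vertex trickle against a one-time upload and a single draw call:

    #include <GL/gl.h>

    static const GLfloat tri_verts[] = {
        0.0f, 0.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
    };

    /* Immediate mode: one driver call per vertex. The data trickles
       across the bus every single frame. */
    void draw_immediate(void)
    {
        glBegin(GL_TRIANGLES);
        glVertex3fv(&tri_verts[0]);
        glVertex3fv(&tri_verts[3]);
        glVertex3fv(&tri_verts[6]);
        glEnd();
    }

    /* Batched: upload the whole array once at load time... */
    GLuint create_batch(void)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(tri_verts), tri_verts,
                     GL_STATIC_DRAW);
        return vbo;
    }

    /* ...then drawing is a single call against resident data. */
    void draw_batched(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
    }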

I’m always going to be a fan of getting stuff up on the screen quickly, and programming some sample apps in OpenGL does give you something to mentally reference when learning theory. But you aren’t programming anything really interesting until you’ve understood the graphics pipeline, and at least the timeline of how hardware acceleration has crept backwards through that pipeline over the past ten years.

Recommended Reading

Jim Blinn’s Corner: A Trip Down the Graphics Pipeline - An old one that covers software rendering, but it gives you a basis for understanding the graphics pipeline.

Real-Time Rendering - Currently in its 3rd edition. The modern replacement for Foley and van Dam. The chapters on performance and hardware combine with a fundamental understanding of the graphics pipeline to help you really understand what’s going on.

Technical briefs for target hardware prepared by NVidia and ATI. This information is made available on their respective sites.

The Secret Sauce is Ketchup

Fri 20 February 2009 by Michael Labbe
tags code 

The secret sauce is ketchup. Really good ketchup.

What makes a game great? These days, it needs to do a lot of things very well, or the user will be annoyed. But we don’t choose our entertainment purely to avoid annoyance. Some of the greatest games of all time have had very annoying experiences associated with them. As core gamers, we overcome annoyances in order to access our preferred form of entertainment. (Ever tried to get a modem game of Doom working back in the 90s?)

The Quake games are great games. Id and Carmack have been widely praised for the renderers and tech that underpin these classics. As outdated as the assumptions underlying the Quake technology may be in 2009, the games feel much more responsive and enjoyable to play than, well, a handful of the top titles from this console generation.

The secret sauce to making a game great is making the game respond well to its users. It’s obvious and simple. And, like the ketchup on your dining room table, the ingredients are right out there for you to inspect. Where, you ask? Some of the greatest movement and control code in the history of games has been GPL’d and released by Id in their three Quake releases.

For all of the praise Id gets for their tech, their movement code is a scant few hundred lines, overlooked by most. In response to mouse and keyboard input, it glides the player through the level seamlessly, giving the expected return for every input. That’s the secret sauce.

Behind making a responsive game lies a handful of techniques and principles. There are too many for one blog entry, but I’ll address some of the technical considerations here, moving on to design in a future post.

Aim for 60 fps.

60 frames per second just makes your game feel more responsive. If you are making a current-gen console game, it is technically achievable, though it requires team-wide discipline. In return for that discipline, the user receives a viscerally responsive feedback loop.

I understand that there are many development scenarios where 60 is not plausible. Maybe your platform doesn’t refresh well (I had this experience while developing a Flash 9 arcade game). Maybe your publisher won’t allow you to commit to a game design and art direction that lets you target sixty.

If that’s the case, you need to vsync lock at 30 without wavering. If Gears of War can look that good and hit a steady 30, why can’t you?

And, if you’re making a PC game, just wait a year or two and your game will fly on a $500 machine. Just don’t do the dumb thing and lock your renderer at 30 so it can’t take advantage of the faster hardware, eh?

Commit to locomotion-based movement at your own peril.

Use velocity-based movement unless you have strong animation talent. If your character moves through the world by calculating the model-space displacement of its animations, your animation team is in charge of character movement and your programming team is not.

If you’re doing an involved interaction system with doors to open, cover to hide behind and detailed reload animations, you need to work iteratively with your animation team to ensure the animations are interruptible, blend very quickly and can be updated on a fast turnaround.

Alternatively, a velocity-based movement scheme allows you to assign acceleration impulses to an entity. On each tick, you advance the current position using the velocity, then execute collision detection and response. This is a procedural approach to character movement, and it can feel very organic. With this approach, your animations do not change the rate of your player’s movement.
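
A minimal sketch of such a tick, assuming a fixed timestep and hypothetical vec3 and collision helpers:

    typedef struct { float x, y, z; } vec3;

    typedef struct {
        vec3 position;
        vec3 velocity;
    } entity_t;

    /* Hypothetical: resolves the move against world geometry and
       returns the position actually reached. */
    extern vec3 collide_and_slide(vec3 from, vec3 to);

    void entity_tick(entity_t *e, vec3 accel_impulse, float dt)
    {
        /* Apply this tick's acceleration impulses to the velocity. */
        e->velocity.x += accel_impulse.x * dt;
        e->velocity.y += accel_impulse.y * dt;
        e->velocity.z += accel_impulse.z * dt;

        /* Advance the position along the velocity, then let collision
           response correct it. Animations never alter this rate. */
        vec3 desired = {
            e->position.x + e->velocity.x * dt,
            e->position.y + e->velocity.y * dt,
            e->position.z + e->velocity.z * dt,
        };
        e->position = collide_and_slide(e->position, desired);
    }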

I’m not going to say locomotion-based movement is bad. It can look better than velocity-based movement. You just need to seat your animator and your programmer next to each other. Remain analytical and vigilant and you will come out on top.

As a bottom line, you need to make sure the player gets the movement response he wants and expects from all input at any time.

Sub-sample your digital input to allow for light taps on buttons.

This is a technique that isn’t immediately obvious, but is difficult to argue against once you’ve considered the ramifications.

In a given tick, you can have up to two samples for a single button press, whether it’s a keyboard key, an Xbox gamepad button or a mouse button. Test for both button-down and button-up states; if you get both in the same tick, cut your output velocity in half.

There are two cases where this is useful:

First, consider that the user wants to move forward lightly. If the key is down and no up state arrived, you multiply the velocity by 1.0. However, if the key went down and back up within a single tick, you multiply the velocity by 0.5. You can use this to allow half-height jumps in response to a light tap, for example.

The second point is more important. When a user’s framerate dips to, say, 20 fps, he is likely to overshoot his goal by walking off a ledge or firing for too long. By cutting his velocity in half, you help rein in the negative effects of the framerate dip. Your user would thank you for doing this, but he’ll never know you did it. He’ll just inexplicably like your game better than the competition’s.
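
Here is a minimal sketch of the idea, with hypothetical input bookkeeping; the returned scale multiplies whatever velocity the button drives:

    #include <stdbool.h>

    typedef struct {
        bool went_down;   /* a down edge arrived this tick */
        bool went_up;     /* an up edge arrived this tick  */
        bool is_down;     /* state at the end of the tick  */
    } button_t;

    float button_velocity_scale(const button_t *b)
    {
        /* Held for the whole tick: full effect. */
        if (b->is_down && !b->went_up)
            return 1.0f;

        /* Tapped down and up within a single tick: halve the output.
           This is what reins in overshoot when the framerate dips. */
        if (b->went_down && b->went_up)
            return 0.5f;

        return 0.0f;
    }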

Interpolate analog input, but only if your framerate is good.

Different mice and gamepads send analog updates at different speeds. How do you make sure you have a stick or mouse movement update ready for processing when it’s time to queue up a new frame? Ya can’t.

Even if your first person shooter is vertically synced and running at 120 fps on a beautiful 24-inch widescreen CRT, you can pan a crappy RadioShack serial mouse around your scene and watch the camera chunk along at 20 updates per second. The rest of your in-game animations may seem smooth, but you won’t notice, because the panning rate is as crappy as your mouse.

A common approach to dealing with this problem is interpolating the mouse between the last known sample and the one before it. This means you are always a percentage of the way between two known mouse samples.
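
A minimal sketch of that filter, similar in spirit to the m_filter cvar the Quake engines expose; each frame uses the average of the two most recent deltas, so the view is always halfway between two known mouse samples:

    typedef struct { float dx, dy; } mouse_delta_t;

    static mouse_delta_t s_prev_delta;   /* delta from the previous frame */

    mouse_delta_t filter_mouse(mouse_delta_t current)
    {
        /* Blend the current delta with the previous one, then
           remember the raw sample for next frame. */
        mouse_delta_t filtered = {
            (current.dx + s_prev_delta.dx) * 0.5f,
            (current.dy + s_prev_delta.dy) * 0.5f,
        };
        s_prev_delta = current;
        return filtered;
    }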

If you are running at 60 fps, this is acceptable for 99.5% of the population. Even if you’re 20 years old and hyperperceptive, you’re not going to be able to determine that you are a quarter of a mouse delta away from the truth.

If you’re running at 25 fps, a mouse or gamepad starts to feel laggy. 25 fps is a new frame every 42 milliseconds and you are partially between the mouse delta received 42 milliseconds ago and the one received 84 milliseconds ago. That sucks, and perceptibly so.

Unfortunately, unless your framerate is a steady 60 fps, you do not want to filter your inputs. If you are making a PC game, make input filtering an advanced user choice. And, please, think it through: it does not make sense to automatically disable filtering past a framerate threshold, or the user will perceive a small mouse leap when the filtering toggles.

Oh, and if you’re a gamer, buy mice that refresh quickly. This is basically resolved by buying a quality, modern gamer mouse. In the old days, we had to seek out specific mice and run custom programs to bump up the refresh rate. Good riddance.

Avoid uneven framerates by avoiding an uneven distribution of work.

When developing a game, there are lots of temptations to halve the processor demand for a calculation by performing it every other frame.

“It’s taking our AI 5 milliseconds to assess all of the threats in the world. Let’s run that on even frames only!” This is a naive but common suggestion heard in game programming.

Unfortunately, if your AI runs its threat test on every other frame, your frame time fluctuates by 5 milliseconds. Do that enough and the game takes on a hitchy feel that is difficult to describe.

Some modern game performance tools and profilers work on a per-frame basis, and alternating work like this is an excellent way for your code to evade performance analysis.

Find another way.
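
One such way, sketched here with hypothetical names: walk a constant slice of the entity list every frame, so the cost is spread evenly rather than spiking on even frames.

    #define SLICES_PER_PASS 4   /* a full assessment completes every 4 frames */

    extern void assess_threat(int entity_index);   /* hypothetical */

    void threat_assessment_tick(int entity_count)
    {
        static int cursor;

        if (entity_count <= 0)
            return;
        cursor %= entity_count;   /* stay valid if the entity list shrank */

        /* Assess a fixed fraction of the entities each frame. */
        int per_frame = (entity_count + SLICES_PER_PASS - 1) / SLICES_PER_PASS;
        for (int i = 0; i < per_frame; i++) {
            assess_threat(cursor);
            cursor = (cursor + 1) % entity_count;
        }
    }

The tradeoff is that an individual entity’s threat data can be up to four frames stale, which is usually a far easier problem to hide than a hitchy frame time.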
