Archive for the ‘Games’ Category

How a Bug Made Me a Better Programmer

Sunday, March 1st, 2009

This is the tale of a bug. A challenging bug I encountered some time ago, and which changed my career for the better.

It was late 2004. I was lead multiplayer engineer on GoldenEye: Rogue Agent, a title which could charitably be described as mediocre, but I like to think the networked multiplayer game was a bit of fun – we had fun with it, anyway.

We’d been working 16-hour-ish days, 6.5 days a week for oh, about four months. These were the bad old days immortalized by ea_spouse, but that’s another story. Suffice to say that I was probably running up a sleep deficit, with only amusing online videos and Fire Emblem on the SP keeping me sane in between bug fixing.

We’d been having fun every evening playing 8-player network games for a while, and overall, the network game was pretty stable. The whole project was down to somewhere around a hundred outstanding bugs, which sounds like a lot, but the engineering team was over 70 engineers at its peak. There was one particular bug that was vexing me, however. The network game went in rounds, cycling maps. Every so often, a machine would fail to get into the next round after loading a map. It would just hang on the loading screen, having completed loading, and never having got the message to start playing.

I’m betting that most network game programmers (and indeed most network programmers) would immediately recognise the type of bug – a classic race condition. The QA dept reported this bug originally, and it happened about 1% of the time. If you were playing an 8-player game, you’d see it during a few hours of play. This was a real showstopper – no way out but to reboot the console, too frequent to ship, but far too rare to get much of a handle on.

So the first thing to do was to glean what I could. Since the game wasn’t actually crashed at the point of the hang (still executing the game loop, just doing nothing), I could attach a debugger to a QA machine when the bug occurred. This didn’t really help much per se – a crash with a call stack showing the exact location would have been preferable. Still, the debugger did allow me to slightly narrow down what was happening through judicious breakpointery.

Debugging network games is notoriously difficult, and with good reason. Although I could see the state of the problem machine, there was no context left to evaluate the problem. The rest of the machines had left it behind by this point, and I had to try to piece together the trail of events leading up to the bad state, after the bad state had already been triggered. So I fell back on the old school technique of adding copious amounts of logging, aka “printf debugging”.

So over the course of several days and evenings, my cubemates and I continued to play game after game, and each time we saw this problem, I’d stop and painstakingly go through the logs. I’d figure out a little bit more of what was happening, narrow down the error to a smaller section of code, add more logging, rebuild and redeploy my new version to the team. The turnaround time was probably a little under an hour. Of course, one of the first things I did was to make the game time out after a few seconds, so that we could set up an 8-player game and leave it to map-cycle without wasting problem-hunting time actually playing the game.

In between times, I read the code. And re-read the code. And re-read it, and stepped through it, trying to think about and follow each possible path of execution. By this time, experiment had narrowed the problem down to a few functions, and eventually, while reading the code, I realised what the problem was. The evidence from the logs and the days of trying to reproduce the issue confirmed the diagnosis. It was, of course, a race condition. And as is so often the case with programming, once characterised, the bug was fairly easy to fix. I put a fix in place, commented the change, tested it, and committed it to the codebase. We did not see any more occurrences of the bug.

That was Thursday. On Tuesday I got the bug returned by QA. At first I was incredulous – how could it be extant after 5 days of silence, seemingly fixed? But by this stage of my programming career, I had developed some humility about my own code, and respect for QA. I mean, the guys and girls who test stuff don’t actually lie or make stuff up. If the bug was still there, it was still there, so I went back to it, after checking the reported version.

The problem was, as I saw it, that there was no way for it to happen any more. By this point, this was not hubris or denial on my part either. I had been through the code literally hundreds of times. For a week, this code was the last thing I thought about before falling asleep at night, and the first thing I pondered when waking up. I’m sure I had dreamt it once or twice. I could close my eyes and see the code in my head as clearly as I could read it on the screen. There was just no way the bug could still be there.

And yet it was there.

So, I had to find a new approach. As odd as it may seem, I had only so far looked at the bug in C++. After all, it was not a particularly gnarly bug as far as these things normally went – no heap or stack corruption, no optimised build vs debug variance, no weird-PS2-DMA-ness, no occurrence-on-real-media-only type of thing. But now it had got significantly harder, so I decided to take a look at the assembly code generated by the compiler. This was actually easy to get started with: I didn’t need to reproduce the bug; all I had to do was set a breakpoint in the function, run to that, and look at the assembly in the debugger.

Now I wouldn’t call myself an assembly language expert. I’ve never written assembler in anger, but I can get by reading it, and I know the basics to get around – how the compiler sets up and returns from function calls, how to find this pointers and recognise virtual tables, etc. And all of this stuff is easier on a RISC CPU than on say, Intel. So I fired up the PS2 devkit.

What I found was very puzzling. I looked at the assembly language for the function in question. I looked at the C++ source code. And what I saw on one side was not what I saw on the other side. They matched up to a point. But my fix was nowhere to be found in the machine code. It should have been fairly obvious to spot – the code had a few handy signposts like arithmetic instructions and constants which were transparent in the assembler. But the fix was just not there.

I called over a couple of other senior engineers to get a second opinion. Was I seeing things? They agreed that I wasn’t. Very strange, but now we were quite close to finally, properly, solving this bug. All we had to do was figure out why the compiler was ignoring the C++ source line that constituted the fix.

So we looked at the source control logs. What we found was that the file in question had, some time ago, had its line endings changed. What should have been CRLF was merely CR, and the compiler didn’t recognise a bare CR as a line break. Ordinarily a line-ending change would be harmless, but when I’d made the fix, I’d commented it. The fix was one line. The comment was one line, above the fix. With no recognised line break between them, the compiler saw both lines as one. And the whole thing was therefore a comment, to be discarded.

Once found, of course, the fix was trivial. Fixing the file line endings allowed my original fix to work, and the bug was finally dead.
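
The mechanism is easy to demonstrate with a toy lexer. This is my reconstruction, not the actual compiler: a comment-stripper that, like most lexers, treats only '\n' as ending a line, so a comment followed by a damaged line ending swallows whatever comes next.

```cpp
#include <cstddef>
#include <string>

// Toy comment-stripper (illustrative, not the real compiler): consumes a
// //-comment up to the next '\n'; a lone carriage return does not end it.
std::string stripLineComments(const std::string& src) {
    std::string out;
    for (std::size_t i = 0; i < src.size(); ++i) {
        if (src[i] == '/' && i + 1 < src.size() && src[i + 1] == '/') {
            // skip to the next newline; a bare '\r' doesn't count
            while (i < src.size() && src[i] != '\n') ++i;
        }
        if (i < src.size()) out += src[i];
    }
    return out;
}
```

With a proper line ending, "// comment\r\nfix();" keeps the fix; with the line break damaged to a bare '\r', "// comment\rfix();" comes out empty – the whole thing is one comment, which is exactly the symptom we saw in the disassembly.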

Since then, I regularly drop to assembler in the debugger. With assembly language, you know what’s going on. It’s made me a better programmer, and a better bug fixer. All those gnarly bugs I mentioned earlier (e.g. ones that only occur in release builds) don’t seem so gnarly any more when you’re used to looking at assembler.

Blizzcon 08

Thursday, November 6th, 2008

I kept this on the down low for a while because of my job situation, but better late than never. I went to Blizzcon with my brother-in-law, and we got the VIP treatment: front-of-line skips to play the games, free swag (and how!) from the store without waiting in the literally 4-5 hour lines, chatting to the developers all day, and seeing the finale show from the VIP area above the crowd. So here’s a small Flickr set at the request of Skye.

Costume contest winner

That’s a pic of the costume contest winner. Nobody else really stood a chance against the turtle!

IFComp so far…

Monday, October 6th, 2008

SPOILERS AHEAD

(But for games that are pretty bad, so you might not be bothered…)

Somewhat perversely, I like to try to play the worst games first, and get them out of the way. I score not according to a strict system, but vaguely according to puzzle fun, plot, and implementation competence, with a good helping of bias for what I want to encourage and discourage in future competitions. So far:

  • The Absolute Worst Game in IF History – if there is some reason for this, or a hidden joke, I didn’t get it. I decided not to play it for any longer than the author spent writing it. 1 point.
  • Buried in Shoes – well at least there’s some kind of story here, even if it is beyond me. Implementation seems to be on the verge of competence. 4 points.
  • Freedom – straightforward linear puzzler. Spartan but completable. 4 points.
  • Lighthouse – uh oh, the blurb says “my very first game!”. Unfortunately, it lives up to this claim. And there’s just not enough here. 1 point.

Thus far, these have all been zcode games. But the real seam of submediocrity normally lies in the standalone games…

  • The Hall of the Fount of Artois – ah, the hackneyed “solve puzzles in a spooky old house” genre. It suffers from that basic failure of not handling any remotely interesting (or in many cases, mundane) things I wanted to do. Puzzles are get-X-use-X type, and room descriptions are almost irrelevant since there is no scenery and no searching, just items to be collected and used. 3 points.
  • Nerd Quest – oh dear: again, roll-your-own systems invariably suffer from a) paucity of handled actions, b) spartan descriptions and lack of scenery. I couldn’t even “examine” anything. 1 point.
  • Project Delta – the menu interface robbed the game of thought; then I briefly thought the inventory system was interesting; then the “game” ended. There is not enough here to enter into a competition. 1 point.

Here’s hoping for some better luck soon. Fairly bad this year so far, and that’s even without any efforts from Dunric as far as I can see.

Cancelled

Tuesday, September 30th, 2008

My project got cancelled today.

I don’t have too much to add to the official memo; I am one of the lucky ones who now has his pick of which project to go to next. Many good people I worked with have been let go; others have been shifted to other projects; I expect some will leave.

As much as the decision is painful, I think it is the right one for the studio. Mike Verdu is a man of great integrity and has been personally involved in the project, so this decision weighs as heavy on him as on anyone. Such is the nature of the industry.

I hope that those let go can find work at other studios. I am sure that recruiters all over the LA area are having a busy day.

(Part of) what I do all day

Friday, September 12th, 2008

Lately I’ve been asked what I do at work. What I really do. Well, if Great Biggary can frequently expound on his latest Maya scripts and Python excursions, here goes with my explanations…

Most of what I do all day is read code. Just like an author or poet will read many, many more books and poems than they write, so programmers read much more code than they write. At least, the ones who want to be any good do.

So I read and read and read, and hopefully understand, before I finally write a little bit which pokes the program in just that way and achieves the right result. Changing or writing as little code as possible means introducing as few bugs as possible.

I’m working with a few million lines of code, most of it C++. Today was a day when I didn’t quite get it to do what I wanted, but had to settle for second best. I’m writing some code to analyse how much data we load off disk every frame. (Note for my non-technical readers: a frame – in this case – is 1/30 of a second or 33ms. Quite a long time to an Xbox 360, but not long to you and me. It’s like… now to now. Nowtonow. N-now. You get the idea.)

Where was I? Oh yes. Well, while the game is running, we’re often loading things from the disk, which is why I’m writing this code. During development, we run off hard disk, which is really fast, but when we ship the final game, we need to run off DVD, which is really slow by comparison, which is why it’s important for us to know exactly how much data we’re asking it to read over time. Otherwise bad things happen like the player coming around a corner to discover that the ground ahead has temporarily disappeared. You might have seen something like this in video games from time to time, especially when discs get dirty.

There are basically a few categories of things we load (or “stream”) while the game is playing:

  • Sound effects and music
  • Textures (i.e. graphics, to explain further is outside the scope of this post, as they say)
  • Videos (when they are playing)
  • Extra parts of levels (“the world”, which are triggered to load at certain points as the player moves along)

So, imagine I’ve written a spiffy little display which shows how much data we loaded in each category over the last 2 seconds (or 60 frames, if you were paying attention earlier). All I need to do is actually feed the data into it, which means intercepting the part of the code where we actually read from the disc, and keeping a running total of the number of bytes we read, in each category.
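
As a sketch of what sits underneath such a display (all names here are illustrative, not from the actual codebase): a small ring buffer of per-frame byte counts, one slot per frame, summed over the 60-frame window.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the per-category streaming counters. One ring
// slot per frame; the disc-read path adds bytes into the current slot,
// and the per-frame tick advances the ring and clears the new slot.
enum class LoadCategory { Audio, Texture, Video, Level, Count };

class StreamingStats {
public:
    // called from the disc-read path, with the saved category context
    void addBytes(LoadCategory cat, uint32_t bytes) {
        m_frames[m_head][static_cast<std::size_t>(cat)] += bytes;
    }

    // called once per frame: advance the ring and clear the new slot
    void endFrame() {
        m_head = (m_head + 1) % kFrameWindow;
        m_frames[m_head].fill(0);
    }

    // total bytes read in this category over the last 60 frames
    uint64_t totalBytes(LoadCategory cat) const {
        uint64_t total = 0;
        for (const auto& frame : m_frames)
            total += frame[static_cast<std::size_t>(cat)];
        return total;
    }

private:
    static constexpr std::size_t kFrameWindow = 60;  // 2s at 30fps
    std::array<std::array<uint32_t, static_cast<std::size_t>(LoadCategory::Count)>,
               kFrameWindow> m_frames{};
    std::size_t m_head = 0;
};
```

The interesting part, as the rest of this post explains, is not the counting – it’s getting the right category label to the place where `addBytes` is called.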

This was quite easy to do for textures and levels, which go through one mechanism for loading. No problem there. But it was a little more tricky to do for the audio and video data. Especially the video. Here I need to get a bit technical to explain why.

To hook up the texture and level data to the right category in the display was fairly easy – at one point in time when a disc load is scheduled, I save a little piece of context along with the load request so that later, when the disc load happens, the saved context tells me what category to add to. The code that knows which category to add to is just one step away from the code that actually does the loading, so this is easy to keep track of.

But it’s a large codebase, and the code that reads data off the disc for the audio and video data is different from the code that reads data for the other categories. In fact, it turns out that the video code especially is much more complex. In this case, the code that actually does the loading is many steps away from the code that knows the context of the data. This is because the video and audio code is pretty aggressively multi-threaded.

Explanatory aside for the non-techies: a thread is a single set of instructions the computer is executing, one after another. In the real olden days, one thread was all you got. Then some bright spark (no, really) noticed that a lot of the time the processor was waiting for other stuff to happen (like reading from a disk, say) and the concept of multi-threading was born. If one lot of instructions (e.g. one program) was waiting for something to happen and unable to proceed, well, we could let the computer work at doing something else for a while until it could resume the first task. Heck, we could even interrupt the computer after it had been doing one thing for a while and let it do another thing for a while, and if we did this fast enough, it would look like the computer was doing two things at once!

This concept is now the basis of all modern operating systems, even those written by Microsoft. This is why you can have your email open and browse the web at the same time. And nowadays we don’t just have the illusion of computers (and consoles) doing multiple things at once; we are starting to get machines with more than one processor in them, so that they really can do multiple things in parallel.
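
For the technically inclined, the waiting-versus-working idea fits in a few lines of modern C++ (a toy illustration, not code from the game): one task sleeps, standing in for a slow disc read, while the calling thread keeps doing other work instead of blocking.

```cpp
#include <chrono>
#include <future>
#include <thread>
#include <utility>

// Returns the "loaded" value and how much other work got done while the
// fake disc read was in flight on another thread.
std::pair<int, long> loadWhileWorking() {
    auto pending = std::async(std::launch::async, [] {
        // stand-in for a disc read: the thread just waits
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
        return 42;  // the "loaded data"
    });
    long otherWork = 0;
    while (pending.wait_for(std::chrono::seconds(0)) !=
           std::future_status::ready) {
        ++otherWork;  // useful work done instead of blocking
    }
    return { pending.get(), otherWork };
}
```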

So back to the video problem. It’s this: the code that counts the bytes that are loaded is in one thread. The video player control is in another. The code handling the audio is another. The code that actually reads the data off disc is yet another. And (this is the real kicker) the code that decodes the video and actually triggers more data to be read off disc when it’s ready for more? Well that code is on several threads, and they are changed around while the program is running according to which processors have spare time.

So to get a 100% accurate attribution of bytes read from disc to audio and video categories would require passing that bit of context through many handoff points, writing a lot of code, and recompiling many libraries – and therefore also maintaining those changes to the libraries, which are normally handled by other people in other parts of the company. (A library – a chunk of code that is self-contained and that you can generally use without detailed knowledge of its innards – is primarily a mechanism for reducing complexity. Audio and video code is typical of the sort of thing that gets put into libraries. So while changing the insides of a library is technically as possible as changing any other code, it is preferably avoided.)

So my solution? I put in a piece of code to flag when a video was playing, and a piece of code to intercept the low-level routine that actually reads the data off disc on behalf of the audio and video systems. A piece of code at the very top layer and the very bottom layer of the cake, as it were. Videos play rarely, so when a video is not playing, I add up all the bytes and simply put them in the audio category. When a video is playing, I put them all in the video category (even though at that point it includes some audio data). The result is not perfect, since the video and audio data can’t be separated, but since videos play rarely, it works well enough for my purposes.
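
In code, the shape of that solution might look something like this (a hypothetical sketch – the names and the flag are mine, not the shipping code):

```cpp
#include <atomic>
#include <cstdint>

// Top layer: the video player control sets a flag.
// Bottom layer: the one read routine both audio and video funnel through
// checks it. Nothing in between needs to change.
std::atomic<bool> g_videoPlaying{false};

void onVideoStart() { g_videoPlaying = true; }
void onVideoStop()  { g_videoPlaying = false; }

uint64_t g_audioBytes = 0;
uint64_t g_videoBytes = 0;

void onAvBytesRead(uint32_t bytes) {
    // While a video plays, all A/V reads count as video (audio included);
    // videos are rare, so the error is acceptable.
    if (g_videoPlaying)
        g_videoBytes += bytes;
    else
        g_audioBytes += bytes;
}
```

The point of the design is that only the very top and very bottom of the stack are touched; all the multi-threaded machinery in between stays unmodified.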

Sometimes, even in programming, one needs to abandon the perfectly correct in favour of the good-enough, pragmatic solution.

I finally played Portal

Saturday, August 23rd, 2008

It was actually among my professional objectives this year to play Portal. So last night I sat down with it for a few hours. It was enjoyable, and short (which was good); and original, which was also good. Well, the portal mechanic is certainly original (counting Portal as the true implementation of Narbacular Drop); the story a bit less so, for anyone who’s played Half-Life, or indeed any one of countless text adventures which employ the newly-awoken, amnesiac player character scenario.

If Portal, like many adventure games, is “a narrative at war with a crossword puzzle” (to quote Graham Nelson), then I think the crossword won. The narrative was good as far as that sort of thing goes, but the puzzle elements of the game almost crossed the line into frustration at times. Such is often the way with first-person motion puzzles. It must have been a tough task for the level designers to walk the difficulty line.

Still, worth playing, both for the fun of it and for the shared experience of gaming history.

Super Luigi Galaxy

Saturday, February 2nd, 2008

So I got 120 stars (the last level to fall was the Toy Time Galaxy Purple Coins). Now I’m not sure if I can be bothered to get another 120.

“Smart and Gets Things Done”?

Tuesday, January 29th, 2008

I work for a big games company as a senior programmer, and I’m a regular on the interview loop. While I’d love to hire candidates who are smart and get things done, the reality of the situation is that more often than not I also have to hire candidates with experience. They have to be smart, they have to get things done, and they also have to be effective within a week of their start date when they get thrown into a big C++ game codebase. Perhaps one could argue that for sufficient values of “smart and gets things done” this is already covered. All I know is that I’d love to walk into an interview and meet a candidate who:

  • Can whizz through a simple coding question like “remove spaces from this string”.
  • Can equally whizz through a simple high school applied maths problem.
  • Can clearly and concisely explain what virtual functions are and how they work, the mechanics of constructors and destructors, or the compile and link process.
  • Knows linear algebra such that they can explain dot and cross products, how to construct a camera matrix, or how to compute a vector reflection off a plane.
  • Has a working knowledge of lists, trees and hash tables, and can explain why certain algorithms are O(1), O(log n), O(n), O(n log n) etc.
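
For the curious, here’s the sort of answer I’d hope to see for two of those warm-ups (one reasonable approach of several; the names are mine):

```cpp
#include <algorithm>
#include <string>

// "Remove spaces from this string", via the standard erase-remove idiom.
// A raw two-index loop would be an equally acceptable answer.
void removeSpaces(std::string& s) {
    s.erase(std::remove(s.begin(), s.end(), ' '), s.end());
}

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflect v off a plane with unit normal n: r = v - 2(v.n)n
Vec3 reflect(const Vec3& v, const Vec3& n) {
    const float d = 2.0f * dot(v, n);
    return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
}
```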

It’s amazing how many interviewees stumble over this stuff. It’s pretty fundamental to game programming. I would love to see a candidate who breezed through this so we could get to the interesting interview questions. All too often, just three or four of these simple questions take the whole hour.

Guitar Hero 3

Sunday, October 28th, 2007

Picked up Guitar Hero 3 (on PS2) today. I got it with the (wireless) guitar since I have one wireless guitar already, and it supports co-op play apparently.

It’s not bad. But discounting for a moment the song list, it’s not quite as good as the first two. Just in little ways, everything about it is slightly, if not worse, then different enough to jar the familiarity. For example, the boot time and the load and save times seem slightly longer. Legal screens and brand movies seem to have multiplied. In a move reminiscent of Microsoft, the in-game graphics have changed, not for the better, but seemingly just to be different. It now shows the number of notes in your current streak, which is nice, but it also changes the rock meter to discrete bulbs, so I’m not sure now how much effect my whammy bar has during held star notes. The beat and measure lines are less distinguishable, which makes it harder for me to read/intuit the timing.

The guitar itself is good, with just a small amount of wobble in the neck (this guitar is in two parts – the neck snaps into the body) which is unnoticeable during gameplay. The wail tilt switch seems more sensitive than my other one. And for some reason, the start and select buttons have been moved under the whammy bar, making them difficult to hit when a quick pause is needed.

All this is by the wayside of course: the main thing is the song list. And the song list is, once more, solid. Not stellar, but solid. It’s fun, even if Sunshine of Your Love is a cover, and someone thought Paint It Black was a better idea than Jumpin’ Jack Flash, and there’s still no AC/DC or Led Zeppelin.

The other thing that Neversoft added was the boss battles with Tom Morello and Slash. Frankly, they needn’t have bothered. This game isn’t about boss battles. It’s about the songs. Once completed, the boss battles are discarded to my memory card.

I’m currently about 15 songs through campaign mode on Medium.

Games Library

Tuesday, October 16th, 2007

My games library (excluding stuff in the basement) is now online, thanks to Tellico.