How a Bug Made Me a Better Programmer

This is the tale of a bug. A challenging bug I encountered some time ago, and which changed my career for the better.

It was late 2004. I was lead multiplayer engineer on Goldeneye: Rogue Agent, a title which could charitably be described as mediocre, but I like to think the networked multiplayer game was a bit of fun – we had fun with it, anyway.

We’d been working 16-hour-ish days, 6.5 days a week for, oh, about four months. These were the bad old days immortalized by ea_spouse, but that’s another story. Suffice to say that I was probably running up a sleep deficit, with only amusing online videos and Fire Emblem on the SP keeping me sane in between bug fixing.

We’d been having fun every evening playing 8-player network games for a while, and overall, the network game was pretty stable. The whole project was down to somewhere around a hundred outstanding bugs, which sounds like a lot, but the engineering team was over 70 engineers at its peak. There was one particular bug that was vexing me, however. The network game went in rounds, cycling maps. Every so often, a machine would fail to get into the next round after loading a map. It would just hang on the loading screen, having completed loading, and never having got the message to start playing.

I’m betting that most network game programmers (and indeed most network programmers) would immediately recognise the type of bug – a classic race condition. The QA dept reported this bug originally, and it happened about 1% of the time. If you were playing an 8-player game, you’d see it during a few hours of play. This was a real showstopper – no way out but to reboot the console, too frequent to ship, but far too rare to get much of a handle on.
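
To make the shape of the bug concrete, here is a minimal hypothetical sketch (invented names and structure, not the actual Rogue Agent code): a client that only starts listening for the round-start message once loading has finished will hang forever if that message arrives, and gets thrown away, while the map is still loading.

    // Minimal hypothetical sketch of this class of bug; names invented,
    // not the actual Rogue Agent code.
    enum class MsgType { kStartRound };

    struct Client {
        bool loading      = true;
        bool roundStarted = false;

        // Called whenever a network message arrives.
        void OnMessage(MsgType msg) {
            if (loading) {
                return;                     // still loading: message silently dropped
            }
            if (msg == MsgType::kStartRound) {
                roundStarted = true;
            }
        }

        // Called once the map has finished loading.
        void OnLoadComplete() {
            loading = false;                // from now on we'd accept kStartRound...
        }                                   // ...but it may already have been dropped

        // Game loop: if the start message raced the load, we sit here forever.
        void Update() {
            if (!loading && !roundStarted) {
                // stuck on the loading screen, still running the game loop
            }
        }
    };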

So the first thing to do was to glean what I could. Since the game hadn’t actually crashed at the point of the hang (it was still executing the game loop, just doing nothing), I could attach a debugger to a QA machine when the bug occurred. This didn’t really help much per se: a crash with a call stack showing the exact location would have been preferable. Still, the debugger did allow me to slightly narrow down what was happening through judicious breakpointery.

Debugging network games is notoriously difficult, and with good reason. Although I could see the state of the problem machine, there was no context left to evaluate the problem. The rest of the machines had left it behind by this point, and I had to try to piece together the trail of events leading up to the bad state, after the bad state had already been triggered. So I fell back on the old school technique of adding copious amounts of logging, aka “printf debugging”.
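
The kind of logging I mean is nothing fancy. A sketch along these lines (the timer and machine-name helpers here are stand-ins, not real engine calls): tag every line with a timestamp and the machine’s name, so the logs from all eight consoles can be interleaved and compared after a repro.

    #include <cstdio>
    #include <ctime>

    // Stand-ins so the sketch compiles; the real versions would read the
    // console's timer and the machine's netplay name.
    static unsigned long GetTimeMs()      { return static_cast<unsigned long>(std::clock()); }
    static const char*   GetMachineName() { return "CONSOLE_3"; }

    // Tag every line with time and machine so the consoles' logs can be
    // lined up against each other after the fact.
    #define NET_LOG(fmt, ...) \
        std::printf("[%8lu][%s] " fmt "\n", GetTimeMs(), GetMachineName(), ##__VA_ARGS__)

    int main() {
        NET_LOG("load complete, waiting for round start");
        NET_LOG("received message type %d from host", 7);
    }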

So over the course of several days and evenings, my cubemates and I continued to play game after game, and each time we saw this problem, I’d stop and painstakingly go through the logs. I’d figure out a little bit more of what was happening, narrow down the error to a smaller section of code, add more logging, rebuild and redeploy my new version to the team. The turnaround time was probably a little under an hour. Of course, one of the first things I’d done was to make the rounds time out after a few seconds, so that we could set up an 8-player game and leave it to map-cycle without wasting problem-hunting time actually playing the game.

In between times, I read the code. And re-read the code. And re-read it, and stepped through it, trying to think about and follow each possible path of execution. By this time, experiment had narrowed the problem down to a few functions, and eventually, while reading the code, I realised what the problem was. The evidence from the logs and the days of trying to reproduce the issue confirmed the diagnosis. It was, of course, a race condition. And as is so often the case with programming, once characterised, the bug was fairly easy to fix. I put a fix in place, commented the change, tested it, and committed it to the codebase. We did not see any more occurrences of the bug.

That was Thursday. On Tuesday I got the bug returned by QA. At first I was incredulous – how could it be extant after 5 days of silence, seemingly fixed? But by this stage of my programming career, I had developed some humility about my own code, and respect for QA. I mean, the guys and girls who test stuff don’t actually lie or make stuff up. If the bug was still there, it was still there, so I went back to it, after checking the reported version.

The problem was, as I saw it, that there was no way for it to happen any more. By this point, this was not hubris or denial on my part either. I had been through the code literally hundreds of times. For a week, this code was the last thing I thought about before falling asleep at night, and the first thing I pondered when waking up. I’m sure I had dreamt it once or twice. I could close my eyes and see the code in my head as clearly as I could read it on the screen. There was just no way the bug could still be there.

And yet it was there.

So, I had to find a new approach. As odd as it may seem, I had so far only looked at the bug in C++. After all, it was not a particularly gnarly bug as far as these things normally went – no heap or stack corruption, no optimised-build-vs-debug variance, no weird-PS2-DMA-ness, no occurrence-on-real-media-only type of thing. But now it had got significantly harder, so I decided to take a look at the assembly code generated by the compiler. This was actually easy to get started with: I didn’t need to reproduce the bug; all I had to do was set a breakpoint in the function, run to it, and look at the assembly in the debugger.

Now I wouldn’t call myself an assembly language expert. I’ve never written assembler in anger, but I can get by reading it, and I know the basics of getting around – how the compiler sets up and returns from function calls, how to find ‘this’ pointers and recognise virtual tables, and so on. And all of this is easier on a RISC CPU than on, say, Intel. So I fired up the PS2 devkit.

What I found was very puzzling. I looked at the assembly language for the function in question. I looked at the C++ source code. And what I saw on one side was not what I saw on the other side. They matched up to a point. But my fix was nowhere to be found in the machine code. It should have been fairly obvious to spot – the code had a few handy signposts like arithmetic instructions and constants which were transparent in the assembler. But the fix was just not there.
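
To give a flavour of what I mean by signposts, here is a made-up example (not code from the game) of the sort of line that is easy to track down in a disassembly, and roughly what the PS2’s MIPS core turns it into.

    // Illustrative only, not code from the game: a line with an obvious constant...
    int AddPenalty(int score)
    {
        return score + 250;     // the literal 250 is easy to spot in a disassembly
    }

    // ...comes out of the compiler as something recognisable even if you're no
    // assembly expert. On the PS2's MIPS core, roughly:
    //
    //     addiu  v0, a0, 250    ; result = first argument + 250
    //     jr     ra             ; return to caller
    //
    // If the addiu and its 250 are nowhere to be seen, your C++ line never
    // made it into the build.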

I called over a couple of other senior engineers to get a second opinion. Was I seeing things? They agreed that I wasn’t. Very strange, but now we were quite close to finally, properly, solving this bug. All we had to do was figure out why the compiler was ignoring the C++ source line that constituted the fix.

So we looked at the source control logs. What we found was that the file in question had, some time ago, had its line endings changed. What should have been CRLF was merely LF. Ordinarily this would be fine, but when I’d made the fix, I’d commented it. The fix was one line. The comment was one line, above the fix. Evidently the compiler had seen both lines as one. And the whole thing was therefore a comment, to be discarded.
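
To reconstruct the mechanics (the real fix isn’t shown here, so the names are invented): with correct line endings the compiler sees a comment line followed by a code line; with the broken ones it effectively sees a single, very long comment.

    // Hypothetical reconstruction; variable and function names invented.
    void ApplyFix(bool& readyToStart)
    {
        // Round-start fix: don't wait for a start message that already raced the load
        readyToStart = true;
    }

    // If the line break at the end of that comment isn't recognised as a line
    // ending, the compiler effectively sees:
    //
    //     // Round-start fix: ... raced the load    readyToStart = true;
    //
    // One long comment. The fix never reaches the code generator, which is
    // exactly why it was missing from the disassembly.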

Once found, of course, the fix was trivial. Fixing the file line endings allowed my original fix to work, and the bug was finally dead.

Since then, I regularly drop to assembler in the debugger. With assembly language, you know what’s going on. It’s made me a better programmer, and a better bug fixer. All those gnarly bugs I mentioned earlier (e.g. ones that only occur in release builds) don’t seem so gnarly any more when you’re used to looking at assembler.

