Thoughts on Modern C++ and Game Dev

TL;DR:

The C++ committee isn’t following some sort of agenda to ignore the needs of game programmers, and “modern” C++ isn’t going to become undebuggable.

Over the past week there has been an ongoing conversation on Twitter about how many people — especially those in the games industry — feel that the current direction of “modern C++” doesn’t align with their needs. One particular failing of C++ from a game programmer’s perspective is that it seems to be becoming a language where debug performance is ignored and optimization is increasingly expected and required.

Having worked in the games industry for 23 years prior to 2019, I have some observations and opinions on this topic as it applies to game development. Is debuggability important to game programmers? Why, and what are the related issues?

First, a bit of background.

Many C++ game developers are working with Microsoft Visual C++. Historically, Microsoft platforms have been a huge market for games and this is reflected in the typical game programmer’s experience. From the 90s through the 2000s, most games were written with this in mind. Even with the advent of non-Microsoft consoles and mobile games, the heritage of many AAA studios and many game programmers is with Microsoft tools.

Visual Studio probably has the best C++ debugger in the world. Debugging is where Visual Studio really stands out — more so than the compiler front-end, the back-end, the STL implementation, anything. Microsoft has been making huge strides in all things C++ over the last five years, but their debugger has always been pretty great. When you’re developing on a Windows desktop, you’re just used to having a world-class debugger at your fingertips.

That being said, let’s consider the process of producing code without bugs; the options from the non-game programmer’s point of view; and the constraints that game programmers face. If I may paraphrase the major “modern C++ direction” argument, it comes down to types, tools, and tests. By this thinking, the debugger should be the last line of defence. Before we reach that point, we have the following options.

Option 1: Types

We can use as much strong typing as we can to help eliminate classes of bugs at compile time. Strong typing is certainly a feature of recent C++ evolution; for example, since C++11, we’ve seen:

  • a huge expansion of type traits
  • things like nullptr and scoped enums to combat C’s legacy of weak typing
  • the GSL and tooling surrounding it
  • concepts in C++20

Some of us may not like template metaprogramming; some of us may not like almost-always-auto style. Regardless, we can still recognize that a well-founded motivation for these C++ styles is helping the compiler to help us, using what it knows best: the type system.

As far as game programming goes, strong typing is very much open for exploration, and is being actively embraced by the game programmers I know who are interested in improving their C++ usage. There are two main concerns here: impact on compile times and impact on code readability.

To put it bluntly, you can easily ignore compile times as a programmer at a very large non-games company with a mature internal infrastructure and functionally infinite computing power to compile whatever code you might write. Such very large companies are concerned with compilation cost — hence modules — but as a rule, individual programmers don’t feel any pain here. The same is not true of most game programmers. Indie game devs don’t have build farms; AAA game devs often have something like Incredibuild, but may also be working with 10+ year old codebases that still take 15-20 minutes to build.

We can argue about the relative cost of adding hardware vs. programmer time, and I agree that hardware is the cheaper option, but:

  • Hardware is a real upfront cost to this quarter’s budget weighed against an intangible cost in time/hiring/etc. spread over some later time period. Humans are bad at making this tradeoff, and companies are practically designed to optimize short-term gains.
  • Infrastructure needs maintenance, and almost nobody gets into the games industry to be a build engineer. The games industry doesn’t pay engineers particularly well compared to other C++ fields, and tends to pay non-game engineers even worse compared to what they could make elsewhere.

We can also argue about the fact that compile times should never have gotten to that state; again, I agree. The price of this is eternal vigilance — again, with somebody wearing a build engineer hat — and ideally, some kind of automated tooling to be able to track changes in build duration over time. Happily, this is actually becoming easier to achieve with the advent of turnkey CI systems.

Option 2: Tools

We should use as much tooling as we can — warnings, static analysis, sanitizers, dynamic analysis tools, profilers, etc.

In my experience, game devs use these where possible, but the games industry has a few problems here:

  • These tools tend to work best on non-Microsoft platforms — as mentioned, this is not a typical game dev scenario.
  • These tools are mostly geared towards working with standard C++. They have out-of-the-box support for std::vector, but not for my hypothetical engine’s CStaticVector class. Admittedly, the tools can’t really be faulted for this, but it’s still a barrier to entry.
  • Setting up and maintaining a CI pipeline that runs these tools requires build engineers, and as mentioned before, employing people in non-game engineering roles is a systemic problem of the games industry.

So if these tools work so well with standard C++, why don’t game devs use the STL?

Where to begin answering that question? Perhaps with a consideration of game programming history:

  • Before about the early 90s, we didn’t trust C compilers, so we wrote in assembly.
  • Sometime in the early-to-mid 90s, we started trusting C compilers, but we still didn’t trust C++. Our code was C with C++-style comments, and we didn’t have to typedef structs all the time.
  • Around 2000, we had a C++ revolution in game dev. This was the era of design patterns and large class hierarchies. At this time, STL support was poor on consoles, and consoles were king. We got by with GCC 2.95 forever on PS2.
  • By around 2010, two more revolutions were underway. The pains of large class hierarchies spurred the development of component-based code. This change is still evolving today in the popularity of Entity-Component-System architectures. Hand-in-hand with that was the second revolution — trying to take advantage of multiprocessor architectures.

Throughout these paradigm shifts, game dev platforms were themselves changing on a frequent cadence, and in major ways. Segmented memory gave way to a flat address space. Platforms became multiprocessor, symmetric or otherwise. Game devs used to Intel architectures had to get used to MIPS (PlayStation), and then custom hardware with heterogeneous CPUs (PS2), and then PowerPC (Xbox 360), and then more heterogeneity (PS3), etc. With each new platform came new performance characteristics for CPU, memory, and storage. If you wanted to be optimal, you had to rewrite things. A lot. And I’m not even mentioning the rise of the Internet and the impact that had on games, or the constraints of manufacturers’ walled gardens.

Historically, STL implementations on game dev platforms have been poor. And it’s no secret that the STL containers aren’t a great fit for games. If pushed, we’d probably admit that std::string is OK, and std::vector is a reasonable default. But all containers in the STL present the problem of controlling allocation and initialization. Many games are concerned with bounding memory for various tasks, and for things that must appear to be allocated dynamically during gameplay, slab or arena allocators are very common. Amortized constant time isn’t good enough; allocation is potentially one of the most expensive things that can happen, and I don’t want to drop a frame because it happened when I wasn’t expecting it. As a game dev, I need to manage my memory requirements up front.

The same story plays out for other dependencies in general. Game devs want to know where every CPU cycle is going, where and when to account for every byte of memory, and where and when to control every thread of execution. Until recently, Microsoft compilers changed ABI with every update, so if you had a lot of dependencies, it could be painful to rebuild them all. Game devs tend to prefer dependencies that are small to integrate, do one thing and do it well — preferably with a C-style API — and in many shops, are in the public domain or have a free license that does not even require attribution. SQLite and zlib are good examples of things game devs like.

Adding to this, the C++ games industry has a rich history of Not-Invented-Here syndrome. It’s to be expected from an industry that started with individuals making things on their own, on new hardware that didn’t have any other options. The games industry is also one of the only tech sectors where programmers of no particular distinction are listed in credits. Writing things is fun, and it helps your career! Much better to build rather than buy. And because we’re so concerned with performance, we can tailor our solution to be exactly what we need rather than being as generic — and therefore wasteful — as an off-the-shelf solution. Aversion to Boost is a prime example of how this plays out in games. I’ve worked on projects that went this way:

  • Starting out, we pull in a Boost library to solve a problem.
  • It works pretty well. There’s some pain when updating, but no more than usual for any other dependency.
  • Another game wants to use our code, but using Boost is a deal-breaker, despite our experience being apparently fine.
  • We take out the Boost code, but now we have a new problem: we need to solve the problem that the Boost library solved ourselves.
  • We basically copy the parts of the Boost code we need into our own namespaces.
  • Later on, we inevitably and repeatedly find that we need extra functionality that would just be there if we’d stuck with the original code. But now we own the code, so we need to continually maintain it.

We don’t like anything that is large, tries to do too much, or may impact compile time. This is reasonable. Where we humans fail again and again is in vehemently arguing against any perceived pain today while failing to account for the very real and greater pain of maintenance spread out in someone else’s budget over the next three years. Existence proofs of games successfully using STL and Boost piecemeal don’t make any headway against psychology.

For all these reasons, many game dev shops have built up their own libraries that cover what the STL does and more, with particular support for game dev-specific use cases. Some large game companies have even developed entire almost-API-compatible in-house STL replacements that come with a resultingly huge maintenance burden.

It’s quite a reasonable thing to look for a better alternative to std::map, or for a small-buffer-capable std::vector. It’s much less palatable to have to maintain your own implementations of, say, algorithms or type traits for practically no gain. I think it’s a shame that the STL is so strongly identified with its containers. They tend to be what gets taught first, so when we say “the STL” we often first think of std::vector when we should really be thinking of std::find_if.

Option 3: Tests

Extensive testing should be implemented, goes the argument. TDD and/or BDD should cover all the code that it can, and bugs should be tackled by writing new tests.

So let’s talk about that.

In my experience, there is almost no automated testing in the games industry. Why?

1. Because correctness doesn’t really matter, and there is no real spec.

As a young programmer in the games industry, I was quickly disabused of the notion that I should strive to model things realistically. Games are all about smoke and mirrors and shortcuts. Nobody cares if your simulation is realistic; they care if it’s fun. When you have no spec other than “it should feel right,” you really have nothing to test against. Gameplay discoveries can often result from bugs. Reasonably often, bugs ship, and even become loved for their effects (see Civilization’s Gandhi). Games aren’t like some other C++ domains; lack of correctness doesn’t put someone’s safety or savings on the line.

2. Because it’s hard.

Sure, you want to do automated tests where you can. This can be done for some subsystems which have well-defined outcomes. Unit testing in the games industry exists, but it tends to be confined to low-level code — STL-alikes, string conversion routines, physics routines, etc. Things that actually have predictable outcomes do tend to be unit-tested, although not usually TDD’ed, because game programmers are trying to make their own lives easier. But how do you test gameplay code (see point 1)? And once you get beyond unit testing, you get to another reason it’s so difficult.

3. Because it involves content.

Testing a non-trivial system is probably going to involve providing content to test it. Most engineers aren’t very good at cooking up content themselves, so it’s going to require involving someone with content-building skills to get a meaningful test. Then you have the problem of how to measure what’s supposed to happen when the output is not a number or a string, but the way something displays, or sounds, or evolves over time.

4. Because we’re not practised at it.

Unit testing a function where I know the inputs and outputs is very possible. But gameplay is about emergent behaviour, and I don’t know how to test that very well. What I can test, if I have the approval from my manager to devote the necessary time to it, are things like performance, or some higher level features like matchmaking that I can analyze. Such infrastructural work can be engaging to some game programmers, but perhaps not to most, and it needs buy-in from the people who control the purse strings. As a game programmer, I never get the chance to become more practised at writing higher-level tests.

5. Because [company] doesn’t see the need for automated testing.

We’re trying to ship a game here. We’re in a hit-driven industry, and that game is probably going to make almost all of its money in the first month of sales when the marketing spend lines up with the ship date. The console cycle has taught us that code really doesn’t live that long anyway. If we are working on an online game, we are likely to get some more time to do that testing on, say, matchmaking or load testing. Since performance is a requirement for ship, we need to do at least some performance testing, but we don’t need to automate it. To games industry management, automated testing is a time and money sink. It requires experienced engineers to do work that is mostly invisible. That time could be better spent on building features. It’s far cheaper in the short term to use human QA to test the game, which brings me to the next point.

6. Because testing in general is a second-class activity in games.

I love good QA people. They’re absolutely worth their weight in gold. They know how to make your game the best it can be by breaking it in ways you never thought possible. They are subject matter experts on your gameplay in ways that you just aren’t, and likely never will be. They’re better than a team of super-powered compilers helping you to make things right. I’m happy that I’ve had the privilege of working with some excellent QA folks in my time.

I have almost always had to fight for them just to stay on my team.

In larger AAA game shops, the QA organization is usually an entirely separate concern from any game team, with separate management and organizational structure. Ostensibly, this is so that they can provide a better service by bringing a cool objectivity to their testing. In practice, it’s a different story.

They tend to be treated like cogs in a machine, often switched between projects without warning and generally made to feel like anyone could do their job. When a date slips, engineering may feel a crunch, but it’s always QA that gets crunched the most, working shifts at nights and on weekends, and even getting blamed for being the bearers of bad news about the game’s quality.

They are severely underpaid. A very experienced QA person with years of domain knowledge is routinely paid less than half what a mid-level software engineer makes. I’ve worked with brilliant QA engineers who set up performance test pipelines with historical tracking and alerts, built frameworks for API testing and load testing, and did a bunch of really valuable technical tasks that were somehow not worth the time of “real game engineers.” I have no doubt that these excellent folks could have made much more money at any large tech company you care to name.

They are untrusted. It’s not uncommon for QA folks to be kept apart from the other devs and to have badges that only work for that floor of the building, or even to have to use a completely separate entryway.

They are socialized into subservience. QA people are sometimes taught not to “bother” engineers! When they report bugs directly, they are told to call engineers “Ms. X” or “Mr. Y.” Sometimes I have even received angry phone calls from the QA “chain of command” when I’ve reached out to individuals to pair up and investigate bugs they’re encountering.

This sounds like a bad story, and thankfully it’s not everyone’s experience, but unfortunately it is still fairly common; common enough that it can cause engineers — possibly stressed out themselves, though that’s still no excuse — to start thinking that it’s QA’s job to find their bugs, or even to blame QA for bugs!

The best teams I have worked on have been the ones where we lobbied for and got embedded QA folks who worked hand-in-glove with engineers. They didn’t lose their objectivity or their passion for making the game the best it could be. They loved getting engineering help in automating tests. There’s no doubt in my mind that the games industry can benefit from automating more.

On Debug Performance

If we take these points together — a culture built on debugging, an ecosystem of APIs and tools that is still maturing, and the difficulty of automated testing and the consequent lack of a testing culture — it becomes clear why game developers insist on being able to debug.

But there are still problems with debugging itself, and problems with how game developers cope with the direction of C++.

The principal problem with debugging is that it doesn’t scale. There will be game developers reading this post and feeling that my descriptions don’t jibe with their experiences. That’s probably because at some point they have come up against the debugging scalability problem first hand and have had to find ways around it.

To put it another way, we want debug performance because in order to catch bugs, we often need to be able to run with sufficiently large and representative data sets. When we’re at this point, the debugger is usually a crude tool to use, debug performance or not. Sure, setting data breakpoints can help with tracking down intermediate-size problems, but what do we actually do with the real bugs, the ones that are left when we’ve fixed everything else? The ones that only happen under network load, or memory pressure, or extreme concurrency, or to some small as-yet unidentifiable subset of our multiple millions of players, or on burned discs, in the German build, sometime after 3 hours of soak testing?

We sure as Hell don’t just rely on the debugger. We do what we’ve always done. We try to isolate the problem, to make it happen more frequently; we add logging and pore over it; we tweak timing and threading settings; we binary-search builds; we inspect core dumps and crash log data; we build cut-down content to try to reproduce the issue; we think and talk about what might be causing the issue.

Often we end up fixing multiple things along the way to finding the actual crash. In other words, we solve problems, and in the end, using the debugger is just a tiny part of that. So yes, debug performance is nice, but the lack of it isn’t going to prevent us from being engineers. We still need skills like being able to analyze core dumps and read optimized assembly.

When using “modern C++” I use the debugger in just the same way as always. Stepping through newly-written code; setting breakpoints on particular data I’m interested in; and using the debugger to explore unfamiliar code. This doesn’t change with “modern C++”, and yes, even though the STL uses _Ugly _Identifiers, it’s not magic. It can sometimes be useful to explore what the STL does, or I can just step over it, or these days, have the debugger hide it for me.

When I run into issues of debug performance, the problem is not that “modern C++” is slowing me down, it’s that I’m just doing too much anyway. Using a debugger doesn’t scale. Types and tools and tests do.

I’ve been concerned about this issue of C++ increasingly requiring optimization and I’ve asked compiler developers for their opinions on it. The fact is that it’s not a binary thing. We’re already on the continuum, and there is room to move further along without impacting debuggability. Right now, our compilers do copy elision for temporaries, even when we don’t ask for that optimization. It doesn’t make a difference in our ability to debug things. I doubt we would complain if debug builds started including NRVO, or half a dozen other optimizations that could be done without our noticing any change in debugging. That’s the likely direction of C++.

Epilogue: Modern C++ direction

If you’re a games industry programmer lamenting the direction C++ is taking, you essentially have two options:

1. Do nothing

Assuming you’re still going to use C++, you can keep using it just as you’ve always done. You don’t need to adopt any new features you don’t want to. Practically everything you do now will remain supported, and you’re still going to reap the benefits of improvements in compiler technology over the years to come.

This is a perfectly viable strategy if you’re working for yourself or with a handful of like-minded individuals. C++98, with selected features beyond that, is still a fine language to write games in.

But if you’re in a bigger company, at some point you’re going to have to deal with change, because you will have to hire more people. Increasingly, hiring C++ engineers means hiring “modern” C++ engineers. Generational change will happen, just as it happened with assembly, C, and C++98. You can deal with that by imposing rules on what is and isn’t allowed in your codebase, but that isn’t a long-term solution. So what is?

2. Get involved

Stop going to GDC as your one conference per year and start going to CppCon. It’s way better value for your company’s money, for a start. Participate in standards discussions; get on the groups or the mailing lists; read standards papers and provide feedback to the authors. If you can also attend meetings, that’s great, but if you can’t, you can still do a lot to advance your perspective.

C++ committee participation is open to everyone. All the information you need to get involved with SG14, or SG7, or SG15, or whatever your particular area of interest is can be found on isocpp.org. The committee doesn’t have some hidden agenda — do you really think 200+ programmers could be organized enough to have a coherent agenda? Even the “higher-ups” on the committee don’t get their way very often.

If you want a voice, you need to speak in the places you’ll be heard, rather than on Twitter or Reddit. Please do so — I look forward to the conversation.
