Archive for the ‘C++’ Category

Monads are part of C++, and are in your code

Monday, September 19th, 2022

For several reasons, I didn’t get a CppCon ticket this year, but since I live in the Denver metro area I did go on the Friday, which is free to attend. Herb’s closing keynote (Cpp2) was thought provoking. At one point though, he said,

Think about the words and the ideas that we have been using for the last hour and a half. I have not said the word ‘monad’ once!

Herb Sutter, Simplifying C++ #9 of N, CppCon 2022

The general audience reaction: laughter.
My reaction: inward sigh.

Herb was trying to make the point, throughout the talk, that he’s sticking with C++. That he’s for evolution, not revolution. That C++, if we can make it simpler, safer and more toolable, is right for the future. And that’s great. It was unfortunate that he picked on monads though, because it came off to my ears as a cheap shot, and because I think it doesn’t make his point, or at least is very likely to be misconstrued.

Because monads are part of C++.

They’re not some alien thing. They’re just as much a part of C++ as RAII; templates; any pattern you care to name; even functions and classes. When I write some code, often I don’t think about writing a particular pattern. But knowing about patterns helps me realise what I’ve done when I step back and take a look. It can be the same story with monads. They’re in the code, waiting to be recognised and to help us understand what we’ve done.
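For instance, here is a minimal sketch of code that many of us have written without ever using the word. The `parse_int` and `checked_half` steps are hypothetical examples of mine, and `and_then` is hand-rolled here (std::optional gained it natively in C++23), but this is the optional monad, hiding in plain sight:

```cpp
#include <cassert>
#include <optional>
#include <string>

// monadic bind for optional, hand-rolled (std::optional::and_then in C++23):
// run the next step only if the previous one produced a value
template <typename T, typename F>
auto and_then(std::optional<T> const& o, F f) -> decltype(f(*o)) {
  if (o) return f(*o);
  return std::nullopt;
}

// hypothetical example steps, each of which can fail
inline auto parse_int(std::string const& s) -> std::optional<int> {
  try { return std::stoi(s); } catch (...) { return std::nullopt; }
}
inline auto checked_half(int x) -> std::optional<int> {
  if (x % 2 != 0) return std::nullopt;  // refuse odd numbers
  return x / 2;
}

// failure propagates through the chain with no explicit null checks
inline auto half_of(std::string const& s) -> std::optional<int> {
  return and_then(parse_int(s), checked_half);
}
```

Nobody needs to say "monad" to write, read, or benefit from this code; recognising the pattern just tells us that such chains compose, and keep composing, no matter how many steps we add.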

Monads are part of C++, because monads are part of programming.

We have many tools in the C++ toolkit. C++ is famously large; it contains multitudes. By design. In fact, we can take a cue from Herb’s talk and ask, “What would Bjarne say?” Well, he already said it, several times, in The Design and Evolution of C++:

People don’t just write classes that fit a narrowly defined abstract data type or object-oriented style; they also — often for perfectly good reasons — write classes that take on aspects of both. They also write programs in which different parts use different styles to match needs and taste.

The language should support a range of reasonable design and programming styles rather than try to force people into adopting a single notion.

There is always a design choice but in most languages the language designer has made the choice for you. For C++ I did not: the choice is yours. This flexibility is naturally distasteful to people who believe that there is exactly one right way of doing things. It can also scare beginners and teachers who feel that a good language is one that you can completely understand in a week. C++ is not such a language. It was designed to provide a toolset for professionals, and complaining that there are too many features is like the “layman” looking into an upholsterer’s tool chest and exclaiming that there couldn’t possibly be a need for all those little hammers.

Bjarne Stroustrup, The Design and Evolution of C++

Bjarne was very deliberate in designing C++ for working programmers, and it’s clear that making C++ 10x more teachable and learnable — to paraphrase something Herb brought up — is not a goal for C++, at least not when it conflicts with features and choice. Many techniques and tools are available to the C++ programmer, and monads are certainly one of them.

Herb knows this, of course. His throwaway line was just that: a bit of clickbait. But there is a better message here about monads. Let’s not make them a strawman scapegoat for things that we think are weird and don’t belong in C++, because that’s not really a tenable position.

The more nuanced, more important message to convey about monads and other functional patterns is this: write clear APIs that fit the domain.

When we write classes, functions, variables, templates, APIs, we don’t tend to give them generic names, unless the point is to be very generic. We give them names that fit their domain. So-called functional patterns currently suffer from abstract names because they are new to us and arrived with those names attached, and because we programmers — and I include myself very much in this — are neophiles and pedants. There is hope that this is slowly changing, and that as we gain experience with these patterns they are becoming a more normal tool; not particularly more special than any other, but nonetheless very useful. The important thing is not the hammer, but what we build with it.

I’ve given talks where I described techniques that any functional programmer would immediately recognise as applications of monads or monoids. I was completely aware of this; nevertheless I tried to keep the talks focused on familiar domains so as to be accessible and grounded in examples. Understanding of abstraction comes after seeing concrete instances, not the other way around.

David Sankel has also given talks covering this theme: if you’re writing an API for some domain, use that domain’s language. Don’t use “bind” or “fmap” as function names if you have terms that fit your use better.

P2300 is a major proposal presenting a model for asynchronous execution in C++. The word monad does not appear in that paper. This is definitely not because the authors don’t know what monads are! I am quite sure that every author of that paper is well-practised with monads as a general tool and the continuation monad in particular, and as such the design they came up with was very deliberately monadically-informed. And this is incredibly useful, because it means we can have confidence about the power of the P2300 design. We can lean on what we know about the expressive and compositional capabilities of monads.

A proper question to ask in a critique of any programming interface, one that Herb was implicitly and explicitly asking throughout his talk, but one that the monad jab jarred with, is not “can we do X?” but “is doing X ergonomic, safe, explainable?” As programmers we get so caught up in asking binary questions that we frequently come away with the wrong idea that everything should be categorised that way. But that line of questioning just leads to uninteresting answers and a lack of communication. More often than “is it Turing complete?” perhaps we should ask, “is it Pac-man complete?”

Monads are part of C++. They’re also part of Cpp2, even if Herb didn’t say the word. And just like other patterns, if you develop a sense for them, they become part of your toolkit. And they are a very useful tool. But we don’t name the things we build after the tools we use. And choosing not to say “monad” in a talk is a pedagogical device and a recognition of familiar C++ vocabulary — not an implication that monads are weird and other. That is not helpful, and not the C++ way.

C++23’s new function syntax

Monday, September 5th, 2022

We’ve had a couple of ways to spell functions for a long time:

[[nodiscard]] auto say_a_to(
    std::string_view what, std::string_view name) -> std::string {
  return std::string{what} + ", " + std::string{name} + "!";
}

say_a_to("Hi", "Kate"); // -> "Hi, Kate!"

struct {
  [[nodiscard]] auto operator()(
      std::string_view what, std::string_view name) const -> std::string {
    return std::string{what} + ", " + std::string{name} + "!";
  }
} say_b_to;

say_b_to("Hello", "Tony"); // -> "Hello, Tony!"

And we’ve had a shortcut for that second one for a while, too:

auto say_c_to = [] [[nodiscard]] (
    std::string_view what, std::string_view name) -> std::string {
  return std::string{what} + ", " + std::string{name} + "!";
};

say_c_to("Bye", "Kate"); // -> "Bye, Kate!"

(Aside: notice where [[nodiscard]] is positioned, since C++23’s P2173, to apply to the lambda’s call operator.)

All these ways to spell functions look the same at the call site. But “regular functions” and function objects behave a bit differently, so as application and library writers we use them for different things according to our needs – how we need users to customize or overload them; whether we want ADL; etc.
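As a sketch of one such difference (the names here are made up for illustration): an overload set of regular functions cannot be passed around as a single argument, while a function object carries its whole overload set with it:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// two regular-function overloads: naming "to_str" as an argument is ambiguous
inline auto to_str(int x) -> std::string { return std::to_string(x); }
inline auto to_str(double x) -> std::string { return std::to_string(x); }

// a function object carries the same overload set as one passable entity
struct to_str_fn {
  auto operator()(int x) const -> std::string { return std::to_string(x); }
  auto operator()(double x) const -> std::string { return std::to_string(x); }
};
inline constexpr to_str_fn to_str_obj{};

inline auto stringify(std::vector<int> const& v) -> std::vector<std::string> {
  std::vector<std::string> out(v.size());
  // std::transform(v.begin(), v.end(), out.begin(), to_str);  // error: which to_str?
  std::transform(v.begin(), v.end(), out.begin(), to_str_obj);  // fine
  return out;
}
```

Function objects also sidestep ADL, which is exactly why the standard library has been moving toward "customization point objects"; which behaviour you want is a design decision, not a spelling accident.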

C++23 added a new way to spell functions:

struct {
  [[nodiscard]] auto operator[](
      std::string_view what, std::string_view name) const -> std::string {
    return std::string{what} + ", " + std::string{name} + "!";
  }
} say_d_to;

say_d_to["Goodbye", "Tony"]; // -> "Goodbye, Tony!"

It hasn’t been publicized as such, but that’s exactly what it is. C++20 removed the ability to use the comma operator in subscripts (P1161). As a follow-up to that, C++23 now allows multi-argument subscript operators (P2128). And that effectively gives us an alternative syntax for calling functions. The subscript operator now has the same mechanics as the function call operator. These operators are now the only two that can take arbitrary numbers of arguments of arbitrary types. And they also have the same precedence.

So we can do things like this too:

struct {
  template <std::integral ...Ts>
  [[nodiscard]] auto operator[](Ts... ts) const noexcept {
    return (0 + ... + ts);
  }
} sum;

And call it accordingly:

const auto s1 = sum[1, 2, 3]; // 6
const auto s2 = sum[];        // 0

(Yes, operator[] can also be written as a nullary function.)

This works, today. Probably because to the compiler this was already the bread-and-butter of how functions work anyway, so I’m guessing (although IANACW) it was pretty easy to implement these papers.

P2128’s envisaged use cases are all about numeric computing and multi-dimensional arrays with integral subscripting. But that’s not all that operator[] is now. Quite literally, it’s an alternative syntax for a function call, with everything that might imply.

What use might this be? Well a few things spring to mind. Using operator[] for function calls has all the same lookup and customization implications as using operator(), but adds the inability to call through type-erased function wrappers — at least at the moment. So that might be useful to someone.

A second convention that springs to mind is perhaps for pure functions. If a function is “pure” then it will always return the same output given the same input, which means mathematically it can be implemented with a lookup table. Using operator[] historically looks something like a map lookup, so perhaps it’s a natural fit for pure function syntax?
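As a sketch of that idea (using the single-argument form of operator[], which works in any standard): a pure function exposed through operator[] reads like a table lookup even though it computes on the fly. The `fib_table` name is mine, for illustration:

```cpp
#include <cassert>

// a pure function dressed as a lookup table: same input, same output,
// so the call site can plausibly read as an index into a table
struct fib_table {
  constexpr long operator[](int n) const {
    long a = 0, b = 1;
    for (int i = 0; i < n; ++i) { long t = a + b; a = b; b = t; }
    return a;
  }
};
inline constexpr fib_table fib{};

static_assert(fib[10] == 55);  // reads like a lookup, computes like a function
```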

It might also be useful to naturally express two different areas of functionality within a library, or operations with different evaluation semantics (compile-time? runtime? lazy?) as characterized by function calls with operator() and with operator[]. This would perhaps provide a nice call-site indication to make the code more readable.

There are sure to be other uses. Should you look for operator[] coming soon to a library near you? I don’t know. This might seem strange to some folks, but it’s not necessarily less readable; just less familiar. And if there’s one thing I know about C++, it’s that it’s a short hop from bizarre newly-discovered quirk to established technique. operator[] is now equivalent to operator(), and when someone finds a use for that, it will get used.

constexpr Function Parameters

Monday, August 29th, 2022

The Set-up

In C++, this doesn’t work:

consteval auto square(int x) -> int { return x * x; }

constexpr auto twice_square(int x) -> int { return square(x); }

The compiler complains, quite rightly:

error: call to consteval function 'square' is not a constant expression

note: function parameter 'x' with unknown value cannot be used in a constant expression

Despite the fact that twice_square is a constexpr function, its parameter x is not constexpr. We can’t use it in a static_assert, we can’t pass it to a template, we can’t call an immediate (consteval) function with it.

So far, so well known. This is the current situation in C++, although this area is a potential future expansion of constexpr (see P1045).

The Hook

And yet… we all know that C++ has some dark corners.

A disclaimer is in order. I don’t know why the technique I’m about to cover works, and I haven’t found the part of the standard that guarantees it (or not). So the possibilities lie on a spectrum:

  1. This is a valid technique, and it’s explicitly called out in the standard
  2. This is a valid technique that’s a consequence of a valid reading of perhaps several parts of the standard, but not explicitly called out.
  3. This is known to the standard and left up to implementations, who happen to mostly make the same choice.
  4. This is outside the standard, but a logical consequence of how C++ must be implemented, so implementations are bound to make the same choice.
  5. This is behaviour that is prohibited by the standard.
  6. This is behaviour for which the standard imposes no requirements.

Things in buckets 1, 2 and 3 are well known. Stateful template metaprogramming is an example of something in bucket 4 that the committee would like to move into bucket 5. And bucket 6 is of course undefined behaviour.

My gut feeling is that we are somewhere in 2-3 territory here. The technique I’m about to highlight is accepted by Clang, GCC and MSVC, and the basics work in every standard since C++11. So 5 seems unlikely. But I could well be wrong — caveat lector. And let’s get to it.

The Tale

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka” but “That’s funny…”

Isaac Asimov (1920–1992)

My suspicions were first aroused while reading a code review recently. The engineer who’d written the code at the time perhaps didn’t appreciate the implication of what they’d written.

template <auto F>
concept C = true;

auto do_something(auto fn) {
  static_assert(C<fn()>);
  // ...
}
The actual code is removed to highlight the structure. We’re passing a lambda expression into a function template. And then we’re verifying that the result of calling the lambda satisfies a concept. And by the way, if it’s new to you that concepts can work on NTTPs, that’s also a thing — although it’s not well supported by so-called “terse syntax”.

But hold on here, fn is a function parameter. And function parameters aren’t constexpr! So why is the static_assert well-formed?

Since C++17, the function call operators of lambda expressions are implicitly constexpr. But they aren’t static (yet — see P1169), so there is an implicit object parameter here that should not be a constant expression – except that the compiler thinks it is? Again, I’m not quite sure what is happening here.

A bit of experimentation shows that this works for lambdas that don’t capture. And for empty structs with function call operators. And for any kind of derived structure, as long as it is empty, including overload sets deriving from several lambda expressions in the familiar way.

template <typename... Ts>
struct overloaded : Ts... {
  using Ts::operator()...;
};

And on the MSVC compiler with support for P0847, it also works with explicit object parameters.

So it seems that “constexpr function parameters” can be achieved by wrapping values inside non-capturing lambda expressions, then calling to unwrap them in a constexpr context. And in fact, this is the technique used by Jason Turner in C++ Weekly Episode 313, “The constexpr problem that took me 5 years to fix!”
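A minimal sketch of that technique as I understand it (the function names are mine, for illustration): the caller wraps the value in a non-capturing lambda, and the callee unwraps it by calling, at which point it is usable wherever a constant is required, even as a template argument:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// wrap the value in a non-capturing lambda at the call site, and unwrap by
// calling it inside the function, where it becomes usable as a constant
template <typename Fn>
constexpr auto make_squares(Fn fn) {
  constexpr std::size_t n = fn();  // accepted by major compilers: Fn is empty
  std::array<int, n> out{};        // n is even usable as a template argument
  for (std::size_t i = 0; i < n; ++i) out[i] = static_cast<int>(i * i);
  return out;
}

// the "constexpr parameter" is spelled as a lambda at the call site
constexpr auto squares = make_squares([] { return std::size_t{4}; });
static_assert(squares.size() == 4);
static_assert(squares[3] == 9);
```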

The Wire

That on its own is strange. But then I had another idea.

#define constexpr_value(X) \
[] { \
  struct { \
    consteval operator decltype(X)() const noexcept { \
      return X; \
    } \
    using constexpr_value_t = void; \
  } val; \
  return val; \
}()

What if I wrap up a value inside an empty structure with a compile-time conversion operator? The immediately-invoked lambda expression on the outside here is just to turn the whole thing into an expression. And the alias declaration is doing basically the same job as is_transparent in the standard library: it allows us to detect compile-time values with a concept.

template <typename T, typename U>
concept compile_time = 
  requires { typename T::constexpr_value_t; } 
  and std::convertible_to<T, U>;

Now I can write the following:

consteval auto square(int x) -> int { return x * x; }

constexpr auto twice_square(compile_time<int> auto x) -> int {
  return square(x);
}

and call it:

twice_square(constexpr_value(4));
And everything is happy. In fact, twice_square doesn’t even have to be a constexpr function: the entire “constexpr-ness” is contained within constexpr_value. And the result of constexpr_value doesn’t even need to be assigned to a constexpr variable; this works just the same if we say:

auto x = constexpr_value(4);

Regular non-constexpr functions can take arguments like this and use them in arbitrary constexpr contexts as needed, e.g.

auto sqrt(compile_time<double> auto x) -> double {
  static_assert(x >= 0, "negative numbers not allowed");
  return std::sqrt(x);
}

When I call this with a cromulent constexpr_value, all the compile-time machinery melts away and it’s a regular call to std::sqrt. When I call it with a negative constexpr_value, the static_assert fires. Contrast this with a throw inside an if consteval block or something similar that we’d typically use in a constexpr function today to signal an error at compile time: I think this is clearer and gives a nicer error.

The Shut-out

This works on all 3 major compilers, and the fundamentals work all the way back to C++11. I have noticed that the compilers differ slightly when using a constexpr_value as an NTTP, seemingly dependent on where the conversion happens. For example, there is a difference between:

template <auto N>
constexpr int var = N;


template <int N>
constexpr auto var = N;

Other than this, the 3 compilers seem remarkably in agreement about how this works.

The Sting

So what does this mean? If we have a compile-time value, and we wrap it in constexpr_value, we have potentially the following situation:

The variable declaration is not marked constexpr. The function parameter isn’t constexpr. The function itself isn’t marked constexpr, and it’s not an immediate function. But arbitrarily we can use the value in a constexpr context. We can call into a constexpr or immediate function. We can use the value in a static_assert. We can use the value as an NTTP. And we don’t have to jump through hoops with less-friendly ways to signal errors in constexpr contexts.

I’m not sure of all the potential uses yet, but this is a curious thing.

Familiar Template Syntax IILEs

Wednesday, October 23rd, 2019

A lot has already been said in the blogosphere about the use of immediately-invoked lambda expressions (IILEs) for initialization, and they’re certainly very useful.
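As a quick refresher, the pattern looks like this (a made-up example): initializing a const variable whose value needs branching logic, without extracting a helper function:

```cpp
#include <cassert>
#include <string>

inline auto greeting_for(bool is_weekend) -> std::string {
  // IILE: the variable can be const even though computing it takes a branch
  std::string const greeting = [&] {
    if (is_weekend) return std::string{"Sleep in!"};
    return std::string{"Up and at 'em!"};
  }();
  return greeting;
}
```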

In C++20, P0428 gives us “familiar template syntax” for lambdas. Now, instead of writing a regular generic lambda:

auto add = [] (auto x, auto y) {
  return x + y;
};

we have the option to use “familiar template syntax” to name the template arguments:

auto add = [] <typename T> (T x, T y) {
  return x + y;
};

This has several uses — see the paper for the motivations listed there — but one I want to draw attention to in particular: when we use a familiar-template-syntax lambda (FTSL) as an IILE, it can simplify code.

Examples, you say? I have two for you.

First, consider the “possible implementation” of std::apply listed on cppreference:

namespace detail {
  template <class F, class Tuple, std::size_t... I>
  constexpr decltype(auto) apply_impl(F&& f, Tuple&& t,
                                      std::index_sequence<I...>) {
    return std::invoke(std::forward<F>(f),
                       std::get<I>(std::forward<Tuple>(t))...);
  }
} // namespace detail

template <class F, class Tuple>
constexpr decltype(auto) apply(F&& f, Tuple&& t) {
  return detail::apply_impl(
      std::forward<F>(f), std::forward<Tuple>(t),
      std::make_index_sequence<
          std::tuple_size_v<std::remove_reference_t<Tuple>>>{});
}

With an FTSIILE, that can become:

template <class F, class Tuple>
constexpr decltype(auto) apply(F&& f, Tuple&& t) {
  return [&] <auto... I> (std::index_sequence<I...>) {
    return std::invoke(std::forward<F>(f),
                       std::get<I>(std::forward<Tuple>(t))...);
  }(std::make_index_sequence<
        std::tuple_size_v<std::remove_reference_t<Tuple>>>{});
}

With an FTSIILE, we avoid having to forward multiple times: we can capture by reference into the lambda instead, and do the forwarding once, inside. The result for small functions like this is about half as much code, better encapsulated. No need to declare a helper any more just to destructure template arguments.

Here’s another quick example, from Jonathan Boccara’s recent blog post about STL Algorithms on Tuples. In that post, Jonathan presents for_each2, a function template that applies a binary function over two tuples:

template <class Tuple1, class Tuple2, class F, std::size_t... I>
F for_each2_impl(Tuple1&& t1, Tuple2&& t2, F&& f, std::index_sequence<I...>) {
  return (void)std::initializer_list<int>{
             (std::forward<F>(f)(std::get<I>(std::forward<Tuple1>(t1)),
                                 std::get<I>(std::forward<Tuple2>(t2))), 0)...}, f;
}

template <class Tuple1, class Tuple2, class F>
constexpr decltype(auto) for_each2(Tuple1&& t1, Tuple2&& t2, F&& f) {
  return for_each2_impl(std::forward<Tuple1>(t1), std::forward<Tuple2>(t2),
                        std::forward<F>(f),
                        std::make_index_sequence<
                            std::tuple_size_v<std::remove_reference_t<Tuple1>>>{});
}

There is a small problem in the original code here: it’s not right to call std::forward<F>(f) in for_each2_impl and potentially move multiple times from the same callable. But that’s not too important; the function rewritten in C++20 could look like this:

template <class Tuple1, class Tuple2, class F>
constexpr decltype(auto) for_each2(Tuple1&& t1, Tuple2&& t2, F&& f) {
  return [&] <std::size_t... I> (std::index_sequence<I...>) {
    return (std::invoke(f, std::get<I>(std::forward<Tuple1>(t1)),
                        std::get<I>(std::forward<Tuple2>(t2))), ...), f;
  }(std::make_index_sequence<
        std::tuple_size_v<std::remove_reference_t<Tuple1>>>{});
}

Again, less std::forward boilerplate, better encapsulation, less code overall, and the generated code is identical.

So there you have it. Plain IILEs can improve initialization in regular code with less fuss than it would take to extract a small function, and FTSIILEs can now improve template code in the same way, removing the need for separate functions that previously existed only to destructure template arguments and that necessitated more forwarding boilerplate.

Remember the Vasa! or, plus ça change, plus c’est la même chose

Monday, August 12th, 2019

I’ve been programming in C++ for almost a quarter of a century now. I grew up, professionally, with C++, and in many ways, it grew up along with me.

For someone who is used to C++, even used to recently-standardised C++, it’s hard not to feel apprehension when looking at C++20. Modules, coroutines, ranges, concepts — these are all massive features that I don’t have real experience with yet and that offer completely new ways of doing things. There are also several medium-sized features which could turn out to have large impacts; for example the spaceship operator, or the further expansion of constexpr capabilities.

It’s very easy to react to all this by thinking, “It’s too much; it’s not baked yet; C++ is collapsing under its own weight.” And it’s natural not to trust something new, especially when I already know how to solve my current problems using current technology.

What I try to remember is: this isn’t a new situation, and this isn’t a new feeling. It’s not new to me, and it’s not new to the world. Many people have felt this way about different technologies over the decades, and very seldom has actual catastrophe been the result.

We felt this way about C, when we programmed games mostly in assembler. We felt this way about object-oriented C++ once we started trusting C. We felt this way about generic C++ and the STL after we’d gotten used to object-oriented C++. Some of us currently feel this way about C++11 and beyond.

Often this feeling is based on seeing new tech not (currently) performing quite as well as old tech for certain use cases. This is also not a new concern. People even thought the same about assembly language, preferring actual machine code! In her HOPL keynote (1978), Grace Hopper said:

In the early years of programming languages, the most frequent phrase we heard was that the only way to program a computer was in octal. […] the entire establishment was firmly convinced that the only way to write an efficient program was in octal.

It’s not wrong to feel this way because feelings aren’t wrong. But it’s also worth remembering that we’re always on the edge of technology that is in the process of proving — and improving — itself. Some things don’t stick around, but many do, and they get better, usually surpassing older tech in all sorts of ways, performance included.

We’re always learning how to do more. Betting against progress is seldom a good bet.

Much has been made of Bjarne’s 2018 paper, “Remember the Vasa”, and it might seem to validate these feelings of distrust for the new. But even that paper provides some historical context, recalling:

During the early days of WG21 the story of the Vasa was popular as a warning against overelaboration (from 1992):

Please also understand that there are dozens of reasonable extensions and changes being proposed. If every extension that is reasonably well-defined, clean and general, and would make life easier for a couple of hundred or couple of thousand C++ programmers were accepted, the language would more than double in size. We do not think this would be an advantage to the C++ community.

In fact, that’s exactly what has happened, maybe several times over since 1992, and yet the C++ community is thriving more than ever with conferences, podcasts, a healthy social networking scene, growing standards participation, and more.

In The Design and Evolution of C++, section 6.4, Bjarne mentions that “Remember the Vasa” became a popular cautionary tale at the Lund meeting (of June 1991). But he also says, in a later section,

[C++] was designed to provide a toolset for professionals, and complaining that there are too many features is like the “layman” looking into an upholsterer’s tool chest and complaining that there couldn’t possibly be a need for all those little hammers.

C++ is growing. Change can be daunting, but I think we’re going to be fine. And when one day a specific little hammer is just the right tool for the task at hand, I’ll be thankful that someone added that hammer to my toolbox.

Thoughts on Modern C++ and Game Dev

Tuesday, January 1st, 2019


The C++ committee isn’t following some sort of agenda to ignore the needs of game programmers, and “modern” C++ isn’t going to become undebuggable.

Over the past week there has been an ongoing conversation on Twitter about how many people — especially those in the games industry — feel that the current direction of “modern C++” doesn’t align with their needs. One particular failing of C++ from a game programmer’s perspective is that it seems to be becoming a language where debug performance is ignored and optimization is increasingly expected and required.

Having worked in the games industry for 23 years prior to 2019, I have some observations and opinions on this topic as it applies to game development. Is debuggability important to game programmers? Why, and what are the related issues?

First, a bit of background.

Many C++ game developers are working with Microsoft Visual C++. Historically, Microsoft platforms have been a huge market for games and this is reflected in the typical game programmer’s experience. From the 90s through the 2000s, most games were written with this in mind. Even with the advent of non-Microsoft consoles and mobile games, the heritage of many AAA studios and many game programmers is with Microsoft tools.

Visual Studio is probably the best C++ debugger in the world. Debugging is where Visual Studio really stands out — more so than the compiler front-end, back-end, STL implementation, anything. Microsoft has been making huge strides in all things C++ over the last 5 years, but their debugger has always been pretty great. When you’re developing on a Windows desktop, you’re just used to having a world-class debugger at your fingertips.

That being said, let’s consider the process of producing code without bugs; the options from the non-game programmer’s point of view; and the constraints that game programmers face. If I may paraphrase the major “modern C++ direction” argument, it comes down to types, tools, and tests. By this thinking, the debugger should be the last line of defence. Before we reach that point, we have the following options.

Option 1: Types

We can use as much strong typing as we can to help eliminate classes of bugs at compile time. Strong typing is certainly a feature of recent C++ evolution; for example, since C++11, we’ve seen:

  • a huge expansion of type traits
  • things like nullptr and scoped enums to combat C’s legacy of weak typing
  • the GSL and tooling surrounding it
  • concepts in C++20

Some of us may not like template metaprogramming; some of us may not like almost-always-auto style. Regardless, we can still recognize that a well-founded motivation for these C++ styles is helping the compiler to help us, using what it knows best: the type system.
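A tiny sketch of the idea (the names are hypothetical): wrapping raw ints in distinct types costs nothing at runtime, and lets the compiler reject transposed arguments at the call site:

```cpp
#include <cassert>

// minimal strong typedefs: width and height are no longer interchangeable ints
struct width  { int value; };
struct height { int value; };

constexpr int area(width w, height h) { return w.value * h.value; }

static_assert(area(width{3}, height{4}) == 12);
// area(height{4}, width{3});  // error: no matching call -- bug caught at compile time
```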

As far as game programming goes, strong typing is very much open for exploration, and is being actively embraced by the game programmers I know who are interested in improving their C++ usage. There are two main concerns here: impact on compile times and impact on code readability.

To put it bluntly, you can easily ignore compile times as a programmer at a very large non-games company with a mature internal infrastructure and functionally infinite computing power to compile whatever code you might write. Such very large companies are concerned with compilation cost — hence modules — but as a rule, individual programmers don’t feel any pain here. The same is not true of most game programmers. Indie game devs don’t have build farms; AAA game devs often have something like Incredibuild, but may also be working with 10+ year old codebases that still take 15-20 minutes to build.

We can argue about the relative cost of adding hardware vs. programmer time, and I agree that hardware is the cheaper option, but:

  • Hardware is a real upfront cost to this quarter’s budget weighed against an intangible cost in time/hiring/etc. spread over some later time period. Humans are bad at making this tradeoff, and companies are practically designed to optimize short-term gains.
  • Infrastructure needs maintenance, and almost nobody gets into the games industry to be a build engineer. The games industry doesn’t pay engineers particularly well compared to other C++ fields, and tends to pay non-game engineers even worse compared to what they could make elsewhere.

We can also argue about the fact that compile times should never have gotten to that state; again, I agree. The price of this is eternal vigilance — again, with somebody wearing a build engineer hat — and ideally, some kind of automated tooling to be able to track changes in build duration over time. Happily, this is actually becoming easier to achieve with the advent of turnkey CI systems.

Option 2: Tools

We should use as much tooling as we can — warnings, static analysis, sanitizers, dynamic analysis tools, profilers, etc.

In my experience, game devs use these where possible, but the games industry has a few problems here:

  • These tools tend to work best on non-Microsoft platforms — as mentioned, this is not a typical game dev scenario.
  • These tools are mostly geared towards working with standard C++. They have out-of-the-box support for std::vector, but not for my hypothetical engine’s CStaticVector class. Admittedly, the tools can’t really be faulted for this, but it’s still a barrier to entry.
  • Setting up and maintaining a CI pipeline that runs these tools requires build engineers, and as mentioned before, employing people in non-game engineering roles is a systemic problem of the games industry.

So if these tools work so well with standard C++, why don’t game devs use the STL?

Where to begin answering that question? Perhaps with a consideration of game programming history:

  • Before about the early 90s, we didn’t trust C compilers, so we wrote in assembly.
  • Sometime in the early-to-mid 90s, we started trusting C compilers, but we still didn’t trust C++. Our code was C with C++-style comments, and we didn’t have to typedef structs all the time.
  • Around 2000, we had a C++ revolution in game dev. This was the era of design patterns and large class hierarchies. At this time, STL support was poor on consoles, and consoles were king. We got by with GCC 2.95 forever on PS2.
  • By around 2010, two more revolutions were underway. The pains of large class hierarchies spurred the development of component-based code. This change is still evolving today in the popularity of Entity-Component-System architectures. Hand-in-hand with that was the second revolution — trying to take advantage of multiprocessor architectures.

Throughout these paradigm shifts, game dev platforms were themselves changing on a frequent cadence, and in major ways. Segmented memory gave way to a flat address space. Platforms became multiprocessor, symmetric or otherwise. Game devs used to Intel architectures had to get used to MIPS (PlayStation), and then custom hardware with heterogeneous CPUs (PS2), and then PowerPC (Xbox 360), and then more heterogeneity (PS3), etc. With each new platform came new performance characteristics for CPU, memory, and storage. If you wanted to be optimal, you had to rewrite things. A lot. And I’m not even mentioning the rise of the Internet and the impact that had on games, or the constraints of manufacturers’ walled gardens.

Historically, STL implementations on game dev platforms have been poor. And it’s no secret that the STL containers aren’t a great fit for games. If pushed, we’d probably admit that std::string is OK, and std::vector is a reasonable default. But all containers in the STL present the problem of controlling allocation and initialization. Many games are concerned with bounding memory for various tasks, and for things that must appear to be allocated dynamically during gameplay, slab or arena allocators are very common. Amortized constant time isn’t good enough; allocation is potentially one of the most expensive things that can happen, and I don’t want to drop a frame because it happened when I wasn’t expecting it. As a game dev, I need to manage my memory requirements up front.
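The "bound memory up front" approach is usually some variant of a bump (arena) allocator: reserve a block once, hand out allocations by bumping a pointer during the frame, and free everything at once with a reset. A minimal sketch — the class and its names are hypothetical, not any particular engine's API:

```cpp
#include <cstddef>

// A minimal bump (arena) allocator: memory is reserved up front,
// allocation is a pointer bump, and everything is "freed" at once
// by resetting -- e.g. at the end of a frame. Alignment must be a
// power of two for the rounding below to work.
class FrameArena {
public:
  explicit FrameArena(std::size_t capacity)
      : buffer_(new std::byte[capacity]), capacity_(capacity), offset_(0) {}
  ~FrameArena() { delete[] buffer_; }

  void* allocate(std::size_t size,
                 std::size_t align = alignof(std::max_align_t)) {
    std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
    if (aligned + size > capacity_)
      return nullptr;  // budget exceeded: fail loudly, no surprise heap hit
    offset_ = aligned + size;
    return buffer_ + aligned;
  }

  void reset() { offset_ = 0; }  // release everything in O(1)

private:
  std::byte* buffer_;
  std::size_t capacity_;
  std::size_t offset_;
};
```

The key property is predictability: the worst-case cost of `allocate` is a couple of arithmetic operations, never a trip into the general-purpose heap mid-frame.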

The same story plays out for other dependencies in general. Game devs want to know where every CPU cycle is going, where and when to account for every byte of memory, and where and when to control every thread of execution. Until recently, Microsoft compilers changed ABI with every update, so if you had a lot of dependencies, it could be painful to rebuild them all. Game devs tend to prefer dependencies that are small to integrate, do one thing and do it well — preferably with a C-style API — and in many shops, are in the public domain or have a free license that does not even require attribution. SQLite and zlib are good examples of things game devs like.

Adding to this, the C++ games industry has a rich history of Not-Invented-Here syndrome. It’s to be expected from an industry that started with individuals making things on their own, on new hardware that didn’t have any other options. The games industry is also one of the only tech sectors where programmers of no particular distinction are listed in credits. Writing things is fun, and it helps your career! Much better to build rather than buy. And because we’re so concerned with performance, we can tailor our solution to be exactly what we need rather than being as generic — and therefore wasteful — as an off-the-shelf solution. Aversion to Boost is a prime example of how this plays out in games. I’ve worked on projects that went this way:

  • Starting out, we pull in a Boost library to solve a problem.
  • It works pretty well. There’s some pain when updating, but no more than usual for any other dependency.
  • Another game wants to use our code, but using Boost is a deal-breaker, despite our experience with it having been perfectly fine.
  • We take out the Boost code, but now we have a new problem: we need to solve the problem that the Boost library solved ourselves.
  • We basically copy the parts of the Boost code we need into our own namespaces.
  • Later on, we inevitably and repeatedly find that we need extra functionality that would just be there if we’d stuck with the original code. But now we own the code, so we need to continually maintain it.

We don’t like anything that is large, tries to do too much, or may impact compile time. This is reasonable. Where we humans fail again and again is in vehemently arguing against any perceived pain today while failing to account for the very real and greater pain of maintenance spread out in someone else’s budget over the next three years. Existence proofs of games successfully using STL and Boost piecemeal don’t make any headway against psychology.

For all these reasons, many game dev shops have built up their own libraries that cover what the STL does and more, with particular support for game dev-specific use cases. Some large game companies have even developed entire almost-API-compatible in-house STL replacements that come with a resultingly huge maintenance burden.

It’s quite a reasonable thing to look for a better alternative to std::map, or for a small-buffer-capable std::vector. It’s much less palatable to have to maintain your own implementations of, say, algorithms or type traits for practically no gain. I think it’s a shame that the STL is so strongly identified with its containers. They tend to be what gets taught first, so when we say “the STL” we often first think of std::vector when we should really be thinking of std::find_if.
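That distinction matters in practice: the algorithms half of the STL works over any container that exposes iterators, in-house or not. A small sketch, with an illustrative game-flavoured type:

```cpp
#include <algorithm>
#include <array>

// An illustrative element type.
struct Entity {
  int id;
  bool active;
};

// Find the first active entity in any iterator range. This works
// identically over a std::vector, a C array, a std::array, or a
// hypothetical engine container like CStaticVector -- the algorithm
// doesn't care where the elements live.
template <typename It>
It first_active(It first, It last) {
  return std::find_if(first, last,
                      [](const Entity& e) { return e.active; });
}
```

Replacing a container costs you a container; replacing the algorithms costs you a library.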

Option 3: Tests

Extensive testing should be implemented, goes the argument. TDD and/or BDD should cover all the code that it can, and bugs should be tackled by writing new tests.

So let’s talk about that.

In my experience, there is almost no automated testing in the games industry. Why?

1. Because correctness doesn’t really matter, and there is no real spec.

As a young programmer in the games industry, I was quickly disabused of the notion that I should strive to model things realistically. Games are all about smoke and mirrors and shortcuts. Nobody cares if your simulation is realistic; they care if it’s fun. When you have no spec other than “it should feel right,” you really have nothing to test against. Gameplay discoveries can often result from bugs. Reasonably often, bugs ship, and even become loved for their effects (see Civilization’s Gandhi). Games aren’t like some other C++ domains; lack of correctness doesn’t put someone’s safety or savings on the line.

2. Because it’s hard.

Sure, you want to do automated tests where you can. This can be done for some subsystems which have well-defined outcomes. Unit testing in the games industry exists, but it tends to be confined to low-level code — STL-alikes, string conversion routines, physics routines, etc. Things that actually have predictable outcomes do tend to be unit-tested, although not usually TDD’ed, because game programmers are trying to make their own lives easier. But how do you test gameplay code (see point 1)? And once you get beyond unit testing, you get to another reason it’s so difficult.

3. Because it involves content.

Testing a non-trivial system is probably going to involve providing content to test it. Most engineers aren’t very good at cooking up content themselves, so it’s going to require involving someone with content-building skills to get a meaningful test. Then you have the problem of how to measure what’s supposed to happen when the output is not a number or a string, but the way something displays, or sounds, or evolves over time.

4. Because we’re not practised at it.

Unit testing a function where I know the inputs and outputs is very possible. But gameplay is about emergent behaviour, and I don’t know how to test that very well. What I can test, if I have the approval from my manager to devote the necessary time to it, are things like performance, or some higher level features like matchmaking that I can analyze. Such infrastructural work can be engaging to some game programmers, but perhaps not to most, and it needs buy-in from the people who control the purse strings. As a game programmer, I never get the chance to become more practised at writing higher-level tests.

5. Because [company] doesn’t see the need for automated testing.

We’re trying to ship a game here. We’re in a hit-driven industry, and that game is probably going to make almost all of its money in the first month of sales when the marketing spend lines up with the ship date. The console cycle has taught us that code really doesn’t live that long anyway. If we are working on an online game, we are likely to get some more time to do that testing on, say, matchmaking or load testing. Since performance is a requirement for ship, we need to do at least some performance testing, but we don’t need to automate it. To games industry management, automated testing is a time and money sink. It requires experienced engineers to do work that is mostly invisible. That time could be better spent on building features. It’s far cheaper in the short term to use human QA to test the game, which brings me to the next point.

6. Because testing in general is a second-class activity in games.

I love good QA people. They’re absolutely worth their weight in gold. They know how to make your game the best it can be by breaking it in ways you never thought possible. They are subject matter experts on your gameplay in ways that you just aren’t, and likely never will be. They’re better than a team of super-powered compilers helping you to make things right. I’m happy that I’ve had the privilege of working with some excellent QA folks in my time.

I have almost always had to fight for them just to stay on my team.

In larger AAA game shops, the QA organization is usually an entirely separate concern from any game team, with separate management and organizational structure. Ostensibly, this is so that they can provide a better service by bringing a cool objectivity to their testing. In practice, it’s a different story.

They tend to be treated like cogs in a machine, often switched between projects without warning and generally made to feel like anyone could do their job. When a date slips, engineering may feel a crunch, but it’s always QA that gets crunched the most, working shifts at nights and on weekends, and even getting blamed for being the bearers of bad news about the game’s quality.

They are severely underpaid. A very experienced QA person with years of domain knowledge is routinely paid less than half what a mid-level software engineer makes. I’ve worked with brilliant QA engineers who set up performance test pipelines with historical tracking and alerts, built frameworks for API testing and load testing, and did a bunch of really valuable technical tasks that were somehow not worth the time of “real game engineers.” I have no doubt that these excellent folks could have made much more money at any large tech company you care to name.

They are untrusted. It’s not uncommon for QA folks to be kept apart from the other devs and to have badges that only work for that floor of the building, or even to have to use a completely separate entryway.

They are socialized into subservience. QA people are sometimes taught not to “bother” engineers! When they report bugs directly, they are told to call engineers “Ms. X” or “Mr. Y.” Sometimes I have even received angry phone calls from the QA “chain of command” when I’ve reached out to individuals to pair up and investigate bugs they’re encountering.

This sounds like a bad story, and thankfully it’s not everyone’s experience, but unfortunately it is still fairly common; common enough that it can cause engineers — possibly stressed out themselves, though that’s still no excuse — to start thinking that it’s QA’s job to find their bugs, or even to blame QA for bugs!

The best teams I have worked on have been the ones where we lobbied for and got embedded QA folks who worked hand-in-glove with engineers. They didn’t lose their objectivity or their passion for making the game the best it could be. They loved getting engineering help in automating tests. There’s no doubt in my mind that the games industry can benefit from automating more.

On Debug Performance

If we take these points together — being used to debugging, a platform for APIs and tools that is still maturing, and the difficulty with and consequent lack of culture around automated testing — it becomes clear why game developers insist on being able to debug.

But there are still problems with debugging itself, and problems with how game developers cope with the direction of C++.

The principal problem with debugging is that it doesn’t scale. There will be game developers reading this post and feeling that my descriptions don’t jibe with their experiences. That’s probably because at some point they have come up against the debugging scalability problem first hand and have had to find ways around it.

To put it another way, we want debug performance because in order to catch bugs, we often need to be able to run with sufficiently large and representative data sets. When we’re at this point, the debugger is usually a crude tool to use, debug performance or not. Sure, setting data breakpoints can help with tracking down intermediate-size problems, but what do we actually do with the real bugs, the ones that are left when we’ve fixed everything else? The ones that only happen under network load, or memory pressure, or extreme concurrency, or to some small as-yet unidentifiable subset of our multiple millions of players, or on burned discs, in the German build, sometime after 3 hours of soak testing?

We sure as Hell don’t just rely on the debugger. We do what we’ve always done. We try to isolate the problem, to make it happen more frequently; we add logging and pore over it; we tweak timing and threading settings; we binary-search builds; we inspect core dumps and crash log data; we build cut-down content to try to reproduce the issue; we think and talk about what might be causing the issue.

Often we end up fixing multiple things along the way to finding the actual crash. In other words, we solve problems, and in the end, using the debugger is just a tiny part of that. So yes, debug performance is nice, but the lack of it isn’t going to prevent us from being engineers. We still need skills like being able to analyze core dumps and read optimized assembly.

When using “modern C++” I use the debugger in just the same way as always. Stepping through newly-written code; setting breakpoints on particular data I’m interested in; and using the debugger to explore unfamiliar code. This doesn’t change with “modern C++”, and yes, even though the STL uses _Ugly _Identifiers, it’s not magic. It can sometimes be useful to explore what the STL does, or I can just step over it, or these days, have the debugger hide it for me.

When I run into issues of debug performance, the problem is not that “modern C++” is slowing me down, it’s that I’m just doing too much anyway. Using a debugger doesn’t scale. Types and tools and tests do.

I’ve been concerned about this issue of C++ increasingly requiring optimization and I’ve asked compiler developers for their opinions on it. The fact is that it’s not a binary thing. We’re already on the continuum, and there is room to move further along without impacting debuggability. Right now, our compilers do copy elision for temporaries, even when we don’t ask for that optimization. It doesn’t make a difference in our ability to debug things. I doubt we would complain if debug builds started including NRVO, or half a dozen other optimizations that could be done without our noticing any change in debugging. That’s the likely direction of C++.
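As a concrete instance: since C++17, copy elision for temporaries is mandatory, so the copy below never happens even in an unoptimized build — and nobody's debugging experience suffers for it. A minimal demonstration (the `Tracked` type is invented for the example):

```cpp
// Since C++17, returning a prvalue guarantees elision: no copy or move
// constructor runs below, even at -O0. NRVO (eliding a *named* return
// value) is still optional, but is the same kind of change -- invisible
// to the debugger's user.
struct Tracked {
  static inline int copies = 0;
  Tracked() = default;
  Tracked(const Tracked&) { ++copies; }
};

Tracked make() { return Tracked{}; }  // prvalue: guaranteed elision
```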

Epilogue: Modern C++ direction

If you’re a games industry programmer lamenting the direction C++ is taking, you essentially have two options:

1. Do nothing

Assuming you’re still going to use C++, you can keep using it just as you’ve always done. You don’t need to adopt any new features you don’t want to. Practically everything you do now will remain supported, and you’re still going to reap the benefits of improvements in compiler technology over the years to come.

This is a perfectly viable strategy if you’re working for yourself or with a handful of like-minded individuals. C++98, with selected features beyond that, is still a fine language to write games in.

But if you’re in a bigger company, at some point you’re going to have to deal with change, because you will have to hire more people. Increasingly, hiring C++ engineers means hiring “modern” C++ engineers. Generational change will happen, just as it happened with assembly, C, and C++98. You can deal with that by imposing rules on what is and isn’t allowed in your codebase, but that isn’t a long-term solution. So what is?

2. Get involved

Stop going to GDC as your one conference per year and start going to CppCon. It’s way better value for your company’s money, for a start. Participate in standards discussions; get on the groups or the mailing lists; read standards papers and provide feedback to the authors. If you can also attend meetings, that’s great, but if you can’t, you can still do a lot to advance your perspective.

C++ committee participation is open to everyone. All the information you need to get involved with SG14, or SG7, or SG15, or whatever your particular area of interest is can be found online. The committee doesn’t have some hidden agenda — do you really think 200+ programmers could be organized enough to have a coherent agenda? Even the “higher-ups” on the committee don’t get their way very often.

If you want a voice, you need to speak in the places you’ll be heard, rather than on Twitter or Reddit. Please do so — I look forward to the conversation.

Pointer-to-member-functions can be tricky

Friday, August 31st, 2018

Note: the following applies to Microsoft’s compiler only — not to GCC or Clang.

Pointers-to-member-functions (PMFs) are a bit off the beaten track in C++. They aren’t very syntactically pleasing, and they aren’t as easy to deal with as regular pointers-to-free-functions (PFFs). But they still see use, particularly in pre-C++11 codebases or where people choose to avoid the overhead of std::function or lambdas as template arguments.

Although idiosyncratic, the way Microsoft’s compiler implements PMFs is fairly well-known in its effects. Classes that use single inheritance can achieve PMFs the same size as PFFs, i.e. the size of a pointer. Classes that use multiple inheritance must use an implementation-defined structure, which is larger than a pointer.

This snippet of code illustrates the difference.
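A sketch along those lines (class names are illustrative): on MSVC the single-inheritance PMF is pointer-sized while the multiple-inheritance one is a larger implementation-defined structure; GCC and Clang instead use two pointer-sized words for both.

```cpp
// Single inheritance: under MSVC, a PMF for this class is the size
// of an ordinary function pointer.
struct Single { void f(); };

// Multiple inheritance: under MSVC, a PMF for this class must carry
// extra this-pointer adjustment data, so it is larger.
struct Base1 { void g(); };
struct Base2 { void h(); };
struct Multi : Base1, Base2 { void f(); };

using SinglePMF = void (Single::*)();
using MultiPMF  = void (Multi::*)();
```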

This has all been well-documented on Raymond Chen’s blog, The Old New Thing, for years now. But given the scarcity of PMF usage, subtle dangers still lurk.

One issue can arise when separating the definition of a PMF from its class. Let’s say we have a class defined in a header a.h:

class A {
  // probably a reasonable size class; we're going to use
  // PMFs for this class
  // multiple inheritance is rare, and this class doesn't
  // use it, so PMFs for this class will be just the size
  // of a pointer
};

Because A is a fairly large class, we might have a forward declaration for A, either in a conventionally-named header such as a_fwd.h, or just as a standalone forward declaration where pointers and references to A are used.

Then, in another file somewhere, we have a class that uses PMFs for A:

using A_PMF = auto (A::*)() -> void;
class B {
  A_PMF a_pmf;
  // other stuff...
};

A hard-to-diagnose bug lurks here. Do you see it?

We would assume that A_PMF is the size of a pointer. Generally, multiple inheritance is rarer than single inheritance, so this is nice, and what we normally expect from MSVC.

However, the size of A_PMF — and therefore the size of B — differs depending on whether A’s class definition is known, or A is simply forward-declared!

If A is known, the compiler knows A is using single inheritance, and therefore A_PMF can be the size of a pointer. This is our normal expectation.

If A is forward-declared, we can still declare A_PMF. But now the compiler doesn’t know for sure that A_PMF can be the size of a pointer. A could use multiple inheritance, so the compiler must err on the side of caution and use the implementation-defined structure for A_PMF.

This can lead to B being two different sizes in two different translation units! We could have one where a.h is included (and A is known) and one where only a_fwd.h is included. Both versions will compile without warnings, but of course, depending on what comes after a_pmf in B, the bug may not show up immediately.

When it does show up, it may be difficult to diagnose because the debugger probably has perfect knowledge of A_PMF and will claim that it’s the size of a pointer, even when the translation unit you’re looking at compiled it differently. You can run into situations where you print a value, but when you break at the print statement, what prints is not what the debugger shows. This can be rather confusing, to say the least!

Perhaps the moral is: if you’re going to use PMFs, declare them right next to the class they belong with, and beware forward declarations.

C++Now 2018 Trip Report

Wednesday, May 16th, 2018

Last week, a little fewer than 150 C++ programmers gathered in Aspen, CO for C++Now 2018. This year the conference was scheduled before Mother’s Day, so with it being quite a bit earlier than usual, I was half-expecting snow and travel delays. In fact last week turned out to be uniformly lovely, for the most part with clear skies and temperatures in the low 20s Celsius.

After a connection through Denver, I arrived on Sunday afternoon ready for a week of geeking out about C++. As always, there were many talk slots where I had to make a tough choice, but professional recording by Bash Films means the ones I missed should be on YouTube in just a few weeks. Here are a handful of the standout presentations.

Smart Output Iterators – Jonathan Boccara

I met Jonathan for the first time in person on Sunday night, although I’ve been reading his blog for a while and really enjoyed the recent algorithms talk he gave at ACCU. He has an infectious enthusiasm for C++ that comes across well, and this talk was very enjoyable. This sort of thing is right in my wheelhouse — I’m a sucker for algorithms and code structure — and exploring the space between the existing STL and the Ranges TS sparked in me a few ideas for exploration. I spent a while talking to Jonathan afterwards and I have some things to work up in the coming days and weeks.

Fancy Pointers for Fun and Profit – Bob Steagall

The problem with submitting multiple talks to a conference is that they might get accepted, which is how Bob ended up with 3 talks to give at C++Now! Allocators are a hot topic at the moment, with lots of changes for C++17 and more than a few conference talks about them over the past year. Bob has been doing a lot of work in this area and he presented a nice demo of relocatable heaps with an impressively small amount of code. I think this has a real potential for saving time spent in serialization; it’s basically a memcpy between machines!

My Little *this Deduction: Friendship is … Uniform? – Gašper Ažman

This was the kind of presentation that you only get at C++Now: a room full of C++ nerds arguing and trying to pin down the truth, just about kept on track by the presenter. It’s the sort of presentation that gets irked comments on YouTube, but for those of us in the room it’s brilliant fun. I think Gašper knew what he was getting into, and did a great job taking the audience through the ramifications of the proposal, even though at times they wanted to jump a dozen slides ahead! Anyway, this talk got the “most educational & inspiring” award, and well deserved. Congrats, Gašper! This is going to be an interesting one to caption…

Initializer Lists are Broken, Let’s Fix Them – Jason Turner

This was a great talk of the kind we’ve come to expect from Jason: polished material taking us through various implementation options to solve a problem, with a good amount of audience participation, and of course exploration with Compiler Explorer. Jason: “I put this in Godbolt earlier…” Matt: “Oh, I wondered what that was!” It turns out that std::initializer_list has a few gotchas and — surprise — isn’t the panacea it may have initially seemed to some. Oh well. Jason’s new presentation secret is the ability to click on any code sample and bring it up inline in Compiler Explorer. That’s nifty.

Other talks and the Keynotes

There were so many more great talks — I really enjoyed Matt Calabrese’s Argot: Simplifying Variants, Tuples and Futures; Zach Laine’s talks on Unicode support; Tony Van Eerd’s Words of Wisdom; David Sankel’s C++17’s std::pmr Comes With a Cost; and Alan Talbot’s Moving Faster: Everyday Efficiency in Modern C++ to name just a few.

On the surface, Lisa Lippincott’s opening keynote and John Regehr’s closing keynote (locknote?) were completely different — one talking about high level ideas for modelling program shape mathematically, the other concerned with compiler optimizations and undefined behaviour. I would say the similarity between them was the rigour each presenter brought to their subject. Both Lisa and John also have a talent for making the complex seem simple — I have a new framework to think about contracts now, and I have a deeper understanding of the reasoning and tradeoffs behind optimizations and undefined behaviour choices.

My Talk

This year, my talk Easy to Use, Hard to Misuse: Declarative Style in C++ seemed to me a bit of a departure from previous talks I’ve given. Generally, my talks do have some well-founded ideas behind them, but they tend to involve a large amount of code experimentation beforehand, and that is rather easy to talk about. This time, I had plenty of code examples, but they weren’t especially modern; this was much more a talk about the everyday basics of programming, and an attempt to think more deeply about good practices and consciously elucidate concrete guidelines. I’m glad that it was very well received, and that the time management worked out!

It’s sad to leave Aspen again, but I’m back to my regular schedule now with lots of new things to contemplate, disseminate among my colleagues, and investigate.

10 Non-C++ Book Recommendations for C++ Programmers

Friday, January 19th, 2018

So you’ve learned enough C++ to function. You’ve read a bunch of the usual recommendations: Meyers, Stroustrup, maybe even Alexandrescu and Stepanov. You know enough to recommend Lippman et al. to newbies rather than the other “C++ Primer.”

The internet has lots of C++-related book recommendations to make — for example, you should absolutely read all the authors listed above — at whatever C++ developmental stage is appropriate for you. But since you can already find so many C++-specific book lists out there, I’d like to recommend some books perhaps a bit further off the beaten track that should nevertheless be interesting and help you become a better programmer in C++ or any other language. C++ is a multiparadigm language, and there is a lot outside of the usual C++ ecosystem that is valuable to learn.

These recommendations fall broadly in two categories: books about specific areas of programming, mathematics, or computer science; and books about the history of the subject and industry. There is some overlap between these categories, and I believe that all of my recommendations will either give you extra tools to use or at the very least add context to your endeavours. I can earnestly say that I have read every single one of the books I’m recommending here, although some are now available in editions that have evolved since I last read them.

In no particular order:

Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age – Michael A. Hiltzik

I’m kicking off the list with a historical recommendation. As programmers, we tend to exalt the new and pay little attention to the old, but it pays to study the history of the field — nihil sub sole novum, after all. Dealers of Lightning follows the history of Xerox PARC, the place where luminaries like Alan Kay and Butler Lampson pioneered a few small things you’ve probably heard of, like laser printers, personal computing, ethernet, and GUIs. If your only experience with “object-oriented programming” is the way it’s done in C++, well… yeah. You should study history.

A Book of Abstract Algebra – Charles C. Pinter

Perhaps, like me, you have a mathematical background that ends somewhere around high school or your first year of university. Maybe you’ve heard of things like monoids, rings, or groups being used in the context of programming and wondered why smart people like Alexander Stepanov seem to talk about them so much. If you want to fill in the gaps in your knowledge, this is the book for you. One slight word of warning: it is dense, and it is mathematical. The good news is that it is also very accessible. Most of us without university-level mathematics under our belts haven’t studied abstract algebra at all, but it doesn’t really require anything beyond a comprehension of junior high mathematics to get started. When I was a teenager, calculus was hard. As an adult, I’ve been able to expand and enrich my experiences in the physical world, allowing me to make more of the mental connections needed for calculus to be accessible for me. So it is with abstract algebra — it is to programming what calculus is to modelling the real world.

Digital Typography – Donald E. Knuth

Ah, Knuth! Arguably the world’s most famous living computer scientist, Knuth is a programming household name thanks to The Art of Computer Programming, the complete volumes of which sit unread on many bookshelves. Digital Typography is an alternative offering composed of a series of essays relating various aspects of the development of TeX and Metafont. What comes across the most strongly for me while reading this is the astonishing attention to detail that Knuth brings to his works. Once you’re finished reading it, odds are you’ll have read one more Knuth book than most programmers!

The Garbage Collection Handbook: The Art of Automatic Memory Management – Richard Jones, Antony Hosking, Eliot Moss

This is the more modern descendant of Garbage Collection by Richard Jones and Rafael Lins. C++ has an aversion to garbage collection at the language level, but you might well find yourself implementing garbage collection or using a framework that has GC at some point in your career. After all, smart pointers are a form of garbage collection… one that doesn’t handle reference cycles and incurs a potentially unbounded cost on a free. At any rate, this book is the bible of GC algorithms. It’s really interesting and useful for anyone concerned with handling memory — i.e., everyone who writes C++.

Purely Functional Data Structures – Chris Okasaki

Functional programming seems to be all the rage these days. Pick a random video from a random C++ conference and there’s an even chance it mentions some FP influence. This seminal book is almost 20 years old and still relevant as we C++-programmers tentatively explore the world of persistent data structures and immutability. It is the book form of Okasaki’s PhD thesis, so it’s quite rigorous, particularly with respect to cost and complexity analysis. It’s also a great read if you want to understand data structures in functional languages and come up with some ideas for your own implementations. The original PhD thesis is also available for free.

Calendrical Calculations – Edward M. Reingold & Nachum Dershowitz

This is a niche topic — and therefore an expensive outlay if you can’t apply it — but for those who are interested in date calculations, it’s both comprehensive and fascinating. The code given in the book is in Lisp and is not licensed for commercial use, but the authors also point out that their aim is simply to communicate the ideas and give you a starting point for your own implementation in your language of choice. If you’ve done any kind of CS or programming course, chances are you’ve written a function to determine leap years, so unlike Microsoft Excel, you know that 1900 was not a leap year. This book takes things so much further; when I say that it is comprehensive, I mean astonishingly so. Gregorian and Julian calendars are just the tip of the iceberg here, and the book covers almost everything else you could think of. Jewish and Chinese calendars? Of course. How about Mayan, or French Revolutionary, or a dozen other rules-based or astronomical calendars? If this whets your appetite, you may want to wait for the Ultimate Edition, currently due to be released at the end of March 2018. I wonder if Howard Hinnant is working his way through all these calendars while building his date library?
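For reference, that function is the classic Gregorian rule — the one Excel famously gets wrong for 1900:

```cpp
// Gregorian leap year rule: divisible by 4, except centuries,
// except centuries divisible by 400. So 2000 is a leap year;
// 1900 is not (despite what Excel thinks).
constexpr bool is_leap_year(int year) {
  return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
}
```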

About Face: The Essentials of Interaction Design – Alan Cooper et al.

This is now up to its 4th edition; I read the 2nd. If you work on an application that has a user interface — and they all do — you should read this. If you work with designers, it will help you understand their processes and language. The book strikes the right balance between breadth and depth and covers a lot of ground. I found the chapter "Rethinking Files and Save" particularly thought-provoking.

Hacker’s Delight – Henry S. Warren

After a fairly high-level recommendation, this one gets straight to the really low-level fun stuff. This is the bit-twiddler’s almanac, a spiritual descendant of HAKMEM and a trove of two’s complement recreations. If you’ve ever been asked during interview how to count the number of set bits in a word, this is the book you want to reference. It has everything from popcount and power-of-2 boundaries to space-filling curves, error correction, and prime number generation. You should probably also look up HAKMEM (AI Memo 239), the original collection of bit manipulation hacks.

Pearls of Functional Algorithm Design – Richard Bird

You’ve probably seen the famous “C++ Seasoning” talk by Sean Parent — if you haven’t, please go watch it now. Perhaps you were motivated to read Stepanov or study algorithms as a result. The Stepanovian C++-centric view of algorithms is very much concerned with counting operations, performing manual strength reductions, and identifying intermediate calculations. This book offers an alternative way to design beautiful algorithms in a functional style using a comparatively top-down approach to express algorithmic ideas while making them fast through clean decomposition and subsequent fusion. I think it’s important to study and apply both camps of algorithm design. It is difficult to write beautiful code if you just “count the swaps” without an awareness of the mathematics behind it, but it is equally difficult to make code fast if you simply “express the mathematics” without any awareness of the operations.

Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle – Jason Rosenhouse & Laura Taalman

Like the first recommendation, this isn’t a book that will be immediately applicable to your day job, unless perhaps you’re writing a sudoku mobile app — in which case, buy and read this immediately. But I’m recommending it anyway because I love recreational mathematics, and this book is fun and accessible without dumbing down the material. If you’ve played sudoku and ever thought about solving — or better, generating — the puzzle programmatically, odds are you’ll enjoy this book. Group theory, graph theory, and sudoku variations of every kind abound. As a bonus, it’s dedicated to Martin Gardner.

Any recommendation list is going to be wildly incomplete and subjective, and, of course, there are a lot of books I had to leave out from this one, but perhaps a Part Two will follow at some point in the future. Feel free to chime in if you agree, disagree, or have your own favourites.

CppCon 2017 Trip Report

Saturday, September 30th, 2017

Last week in Bellevue, WA, around 1100 C++ programmers got together for CppCon. I love this conference – it’s a chance to meet up with my existing C++ community friends and make new ones, to learn new techniques and explore parts of C++, and to get excited about where C++ is headed in the next 5 years. Just about everything in C++ is represented, from low-level optimization techniques to functional template metaprogramming.

This year the program was strong as always, and many slots featured 2 or 3 talks that I wanted to see; of course I could only be in one place at a time. Of those that I attended, here are a few highlights.

Matt Godbolt’s keynote, “Unbolting the Compiler’s Lid”, was excellent. I think Matt was really feeling the love this conference, as well he should; his Compiler Explorer is an excellent tool that has changed the way we communicate about C++. His talk highlighted the accessibility of using Compiler Explorer, and I hope that as a result, more C++ programmers get the curiosity to look under the hood and understand what their code is really doing. It’s great also that the whole project is available on GitHub so that anyone can grab it and use it locally, perhaps on a code base internal to their company.

John Regehr is a well-known blogger and authority on undefined behaviour, and his two-part talk was really good. He showed where and why undefined behaviour exists and offered ideas for a path forwards for dealing with UB. He is also a very good speaker, as I find many university professors tend to be. Scott Schurr’s talk “Type Punning in C++17: Avoiding Pun-defined Behavior” was also a good talk that showed a lot of code and covered practical techniques for avoiding UB. Scott ran out of time and had to omit a section of the slides, but what he did show was very good advice.

I like talks that are both entertaining and rigorous, and Arthur O’Dwyer delivers both in spades. He had two talks at CppCon this year. “dynamic_cast From Scratch” was an excellent exploration of C++ inheritance as implemented against the Itanium ABI, and “A Soupçon of SFINAE” offered very useful techniques for leveraging home-made type traits. There is a lot in both talks that can be applied to the codebases I work on.

This year’s CppCon featured many talks about allocators, and I found two very useful. Bob Steagall presented “How to Write a Custom Allocator”, which particularly opened my eyes to fancy pointers and their use in relocatable heaps. This is a technique I’ll definitely be looking into; I think it has great applicability to my projects. Pablo Halpern presented “Modern Allocators: The Good Parts” on Friday morning, a comprehensive overview of how allocators have changed for C++17 and a really good explanation of how exactly the new polymorphic memory resource approach works. Again, this is something that is very exciting for me personally in terms of how much easier it is to use. In the past I’ve seen people reimplement standard containers just because the allocation was so hard to control. My hope now is that with the C++17 changes, we can use an allocator to get the control we want and not take on the maintenance burden of rewriting so much.

The intense schedule and in-depth topic coverage of CppCon can leave one’s brain tired, so it’s nice now and then to have talks that are fun. Matt Godbolt’s early-morning open content session “Emulating a BBC Micro in JavaScript” took me right back to my childhood. Sadly, open content sessions aren’t recorded, but it was great to remember the 6502-based BBC Micro, not to mention fascinating to hear the intricacies of the 8-bit hardware timing and copy protection schemes. Hearing that BBC disk drive and seeing Matt play Elite was a great way to start a Wednesday morning.

Another fun talk to finish up with on Friday was Juan Arrieta’s “Travelling the Solar System with C++: Programming Rocket Science”. This talk was light on C++ content compared to other talks, but really engrossing nonetheless. Juan is a good presenter and hearing all about his work at JPL kept me spellbound for the hour.

All these talks, and more, were great, and when the videos start appearing on YouTube, I’ll be watching the rest – and probably re-watching the ones I liked. Bash Films did a fantastic job last year getting the videos sorted out in short order, so I expect there won’t be long to wait. But for me CppCon isn’t just about the talks; it’s about the connections and the opportunity to discuss things, between sessions or over food and/or drink.

At CppCon, I can ask Marshall Clow about specifics of how the standard library is implemented (why is a moved-from string cleared when it’s within the small buffer, and does it matter?), or I can chat with David Sankel about whether or not the currently-proposed coroutines are sufficiently powerful to represent certain mathematical constructs. I can ask Chandler Carruth about whether such-and-such inhibits RVO (he was noncommittal, but empirical evidence shows that RVO still works fine in that case, yay!). I can hash out new ideas with Louis Dionne and Gašper Ažman for removing current limitations I run into with lambdas. Or I can meet new friends like Stephanie Hurlburt, who’s doing exciting things with texture compression that could really make a difference to the games industry. The list of inspiring people and interactions is almost endless.

I’ve come away from CppCon energised, with tons of new things to try and to apply. Thanks to Jon, Bryce, and everyone else who made CppCon run smoothly, and see you next year!