Thoughts on Default Construction

August 16th, 2017

What does default construction mean? Why do we write default constructors? When and why should we require them? I’ve been pondering these questions lately.

One of the great things that C++ gets right is that it grants programmers the ability to create types that behave like built-in types. Many languages don’t offer this feature, treating the built-in types as special in some way, e.g. limiting us to defining types with reference semantics, or perhaps preventing operators from working with user-defined types. But C++ allows us to define types with value semantics that work, syntactically and semantically, in an identical way to machine types like float and int.

Regular types: “When in doubt, do as the ints do.”

The concept of a regular type is probably familiar to anyone who has watched a C++ conference video or read a popular C++ blog within the past few years. It is an attempt to formalize the semantics of user-defined types in order to match built-in types. Alexander Stepanov and James C. Dehnert wrote in their paper Fundamentals of Generic Programming:

“Since we wish to extend semantics as well as syntax from built-in types to user types, we introduce the idea of a regular type, which matches the built-in type semantics, thereby making our user-defined types behave like built-in types as well.”

And they go on to define the fundamental operations that can be applied to a regular type:

  • default construction
  • copy construction
  • destruction
  • assignment
  • equality & inequality
  • ordering

The reason for choosing these operations is to provide a computational basis that supports interoperation of types with data structures (such as STL containers) and algorithms. The first four of these operations have default definitions in C++ for any type, user-defined or otherwise.

As a computational basis for data structures and algorithms, it seems that all of these operations serve a purpose in code — except default construction. Default construction can be used in specifying semantics, but it is not needed by data structures and algorithms. In From Mathematics to Generic Programming, chapter 10.3, Alexander Stepanov and Daniel Rose define regular types without the operation of default construction, then go on to say:

“Having a copy constructor implies having a default constructor, since T a(b); should be equivalent to T a; a = b;.”

This is a fine and necessary axiom for the semantics of regular types, but we never actually need to write that in C++. We would never write “T a; a = b;“. Instead, we would write it as “T a(b);“, invoking either the copy or the move constructor.

Default construction: inherited from C?

C++ has its roots in C, particularly when considering built-in types. In the Unix tradition, C is famously terse, parsimonious, and unforgiving of mistakes. C++ comes from the same stock with our maxim, “don’t pay for what you don’t use.”

We all recognize the following as undefined behaviour (applicable to C and C++ alike):

int main()
{
  int x;
  return x;
}

When we wrote “int x;” the compiler did nothing to initialize x, and quite rightly so. This is, in fact, precisely the meaning of default construction according to the axioms of regular types — to do as the ints do. Alexander Stepanov and Paul McJones use the phrase “partially formed” to convey this in Elements of Programming:

“An object is in a partially formed state if it can be assigned to or destroyed. For an object that is partially formed but not well-formed, the effect of any procedure other than assignment (only on the left side) and destruction is not defined.”

“A default constructor takes no arguments and leaves the object in a partially formed state.”

On first encountering this definition some years ago, I experienced some discomfort; this was not my mental model of default construction for most of my previous programming life. I thought of default construction as a way to completely form an object in some kind of default state. But the more I thought about it, the more I appreciated this new point of view.

As a junior programmer, my notion of a default state was not at all well defined. Many of the types I’ve written over the years used default constructors as a crutch. Some used two-phase construction, ostensibly in the name of performance, but more likely because it was easier to write quickly. Commonly, a default constructor would set sentinel “invalid” values that polluted use of the type, requiring checks in other methods or at call sites. If I was lucky, “default construction” would establish the type’s invariants.

I didn’t have a rigorous idea of what it meant to make a type, nor was I able to formulate a solid argument or semantics for my mental model of default construction — because I wasn’t writing default constructors. I was writing nullary (zero-argument) constructors, and they just don’t make sense for all, or even many, types.

Aside: partially formed == moved-from?

This lack of clarity seems to echo the current situation with moved-from objects. Setting aside any arguments about destructive move, the current standard says that the state of a moved-from object is “valid but unspecified.” It does not mention partially formed objects.

But in my view, the right way to think about moved-from objects is to consider them as having this partially formed state. Moved-from objects may only be assigned to or destroyed, and nothing else (is guaranteed). Where it gets a bit murky is the guaranteed part, because there are some types for which the ability to destroy them necessarily entails the ability to call other methods. Containers spring to mind; for a vector to be properly destructible, it must — albeit coincidentally — also support size() and capacity().

This, in turn, is similar to the situation with certain types of undefined behaviour. These days, signed integer overflow is undefined behaviour but not necessarily malum in se. The overwhelming majority of us are programming on two’s-complement machines where we know exactly what behaviour to expect when wrapping the bit pattern. But the standard tells us that signed integer overflow is undefined behaviour and thus malum prohibitum, and optimizers exploit this. If the standard were to similarly define use-after-move, other than assignment or destruction, as undefined behaviour, I can imagine compilers taking advantage.

Nullary constructors vs default constructors

Back to default construction. How do we make a member of a type? My mental model of construction in general is as follows: a constructor takes an arbitrary piece of memory — a space apportioned from the heap or the stack — and makes that memory into a member of a given type.

I expect this jibes with what most C++ programmers think. The only problem is that this isn’t what default construction does. A partially formed object is not yet an object. There is no semantic link between the bit pattern it contains and the type it will eventually inhabit; that semantic link is made by assignment.

Consider the following examples of “default construction”:

int x;
// 1. Is x an int here? No!
 
enum struct E : int { A, B, C };
E e;
// 2. Is e an E here? No!
 
struct Coord { int x; int y; };
Coord c;
// 3. Is c a Coord here? No!
 
std::pair<int, int> p;
// 4. Is p a pair here? Yes.
 
std::vector<int> v;
// 5. Is v a vector here? Yes.
 
std::unique_ptr<int> u;
// 6. Is u a unique_ptr here? Yes?

We would probably call all of these declarations “default construction” when, in fact, they are all slightly different.

In the first example, some people would claim that x is an int after declaration. After all, in the memory where x lives, there is no possible bit pattern that is not a valid int. Representationally, x is perfectly fine as an int — it’s just undefined. In some sense it’s just a matter of theory that we choose to view x as not yet an int.

The “theory” argument is a little more persuasive in the second example. There are many possible bit patterns in the memory where e lives that don’t contain well-formed values of E. Since we are using C++, there are still arguments that can be made for seeing e as an E, but I won’t go down that rabbit hole, as it isn’t vital to the particular argument I’m pursuing.

Compare examples 3 and 4, the Coord and the pair, respectively. This is an interesting case where c is clearly default constructed and so, by our rules, is not yet a Coord. The pair p looks identical, but the standard says that pair‘s zero-argument constructor value-initializes the elements, meaning ints are zero-initialized. This means that p isn’t default constructed according to regular type axioms; rather, it has been nullary constructed.

Example 5, the vector, is something new entirely. In the case of vector, a partially formed object that must support destruction is coincidentally a well-formed object. The only operations that are not valid on v are the ones, such as front(), that are specifically prohibited because of preconditions.

The last example is interesting because it is an example of a sentinel value within a type that is baked right into the language. A default constructed unique_ptr contains a value-initialized raw pointer. Again, partially formed coincides with well-formed here. But there’s more; the language allows the destructor to call delete on that null pointer. This is a sentinel value, like so many I’ve set inside “default constructors” over the years, but one that is so ubiquitous that we don’t even think of it as unusual. I’ll hazard a guess and say that there are many systems in the world where zero is a fine address to dereference, probably far more than there are one’s-complement systems. It is perhaps only due to history and convenience that we outlaw signed overflow while codifying null pointer sentinels within the language itself.

Considering the differences between these examples, I think it makes sense to mentally differentiate default construction from nullary construction and further, to carefully consider where nullary construction is warranted and where it doesn’t actually make sense.

Sentinels can be harmful

Probably the biggest problem with nullary construction is that it tends to introduce magic sentinel values into the type itself where they should not exist. The most famous magic sentinel is the null pointer, Tony Hoare’s “billion-dollar mistake”, but we run this risk with any number of types that we use with value semantics.

Empty strings are used much more frequently as a sentinel than is sensible. Numeric types are often given default values of zero that can lead to bugs. As any game programmer can tell you, “disappeared” objects can frequently be found at the origin.

Sometimes we take value types that have no natural defaults, give them nullary constructors because we think we have to, and choose sentinel values that deliberately stand out. Colours spring to mind here; there is no real “default value” for a colour. Still, we often write a nullary constructor anyway and use something like bright magenta, indicating that a default colour would be a bug that we want to spot. Why provide a nullary constructor in the first place?

Most types that model real-world quantities, like colour, have no good defaults. There’s no default country. There’s no default gender. There’s no default time zone. Trying to provide defaults for these can lead to bugs, bad user experiences, or both.
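To make this concrete, here is a minimal sketch (the Colour type is hypothetical, not taken from any real codebase) of simply declining to provide a nullary constructor for a type with no natural default:

struct Colour
{
  Colour(float r_, float g_, float b_) : r(r_), g(g_), b(b_) {}
  float r, g, b;
};

Colour background{0.2f, 0.2f, 0.2f}; // fine: every component is supplied explicitly
// Colour c;                         // does not compile: no nullary constructor,
                                     // so no accidental "default colour"

Callers are forced to say which colour they mean; the bug of an unintended magenta (or black) simply cannot be written.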

It isn’t my intent to build too tall of a straw man here. This sort of thing does happen, but nullary construction of this kind is also something we try to avoid in C++, especially as we get more and more features in the language to mitigate it. Unlike C, we’ve always had the ability to delay declaration until the point of initialization, thus avoiding the need for nullary construction. This is both safer and more efficient. As previously mentioned, we never default construct and then assign; we just copy construct or value-initialize. RAII is considered good, so we use it to avoid two-phase initialization. Regardless of language, using sentinels to signal invalid state can be considered a code smell and a failure to use the full power of the type system.

The important point here is that we shouldn’t be making nullary constructors where they don’t really make sense just because we think they’re a requirement. They’re much less of a requirement than we have led ourselves to believe.

Magic values from nowhere

Nullary constructors hinder our ability to reason about code, especially generic code. If we know that a given function output cannot be conjured out of the ether, but can only be constructed from the arguments passed in, then we can infer things about the function. We can discount edge cases in our reasoning if we know from the function signature that the inputs must be used a certain way in order to provide specific output.

A lack of nullary constructors means that total functions are favoured, because it’s not possible to create magic values. To the extent that we can achieve this, it’s desirable, particularly in a value-oriented style of programming. It’s possible to envisage a complete lack of nullary constructors with everything built up from value initialization, n-ary constructors, conversions, et cetera. If even a subsection of the code can be partitioned in this way, it allows us to be more certain of its function.

The odd requirement for nullary construction

One sticking point remains — namely, that there are places in the STL that require nullary construction. Two in particular are commonly cited, one more problematic than the other.

vector::resize

The requirement that vector::resize has on nullary construction is fairly easy to work around. We simply don’t have to use resize, and if we never use it, it’s never instantiated.

Resizing a vector to a larger size is seldom useful; reserve and push_back, emplace_back, or insert handle those use cases. There may once have been an efficiency argument for resize-to-larger, but given move semantics and the fact that any contemporary STL implementation will use memcpy for trivially copyable types, I struggle to come up with any argument for ever calling resize-to-larger these days. Of course, if a situation where it is the best option should ever arise, it can still be used — just not with types that don’t provide nullary constructors.
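As a minimal sketch of the reserve-and-append alternative (grow_to and make_next_value are hypothetical names, standing in for however new elements are produced):

#include <vector>

template <typename T, typename F>
void grow_to(std::vector<T>& v,
             typename std::vector<T>::size_type new_size, F make_next_value)
{
  v.reserve(new_size);               // one allocation up front
  while (v.size() < new_size)
    v.push_back(make_next_value());  // no nullary construction of T required
}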

Resizing to a smaller size can be achieved with erase at negligible extra cost. Of course, nullary construction is not strictly required here, so in the event that vector were revised, I would advocate removing resize from the interface and perhaps instead providing a truncate method to achieve resize-to-smaller.
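Such a truncate is a one-liner in terms of erase; a sketch, assuming new_size is no larger than the current size:

#include <vector>

template <typename T>
void truncate(std::vector<T>& v, typename std::vector<T>::size_type new_size)
{
  // resize-to-smaller without requiring nullary construction of T
  v.erase(v.begin() + new_size, v.end());
}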

map::operator[]

The index operator on map has a famously poor signature. It’s frustrating that you can’t use it on a const map even when you know that the value is contained. It is the sole member of map that requires the mapped_type to be nullary constructible.

Happily, C++17 has expanded the interface on map. We now have insert_or_assign to cover the mutable use case of the index operator, and it does not require nullary constructibility. In the case of const maps, C++11 offers us map::at, which behaves analogously to vector::at. Although C++17 has optional, it is not yet integrated with container types, so there is currently no lookup function on a const map that returns an optional value.
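Here is a small sketch of those alternatives in use (Widget is a hypothetical mapped type with no nullary constructor):

#include <map>

struct Widget
{
  explicit Widget(int id_) : id(id_) {}  // no nullary constructor
  int id;
};

void example(std::map<int, Widget>& m, const std::map<int, Widget>& cm)
{
  // m[1] = Widget{7};              // won't compile: operator[] needs a
                                    //   nullary-constructible mapped_type
  m.insert_or_assign(1, Widget{7}); // C++17: mutable use case, no such requirement
  const Widget& w = cm.at(1);       // C++11: const lookup; throws if the key is absent
  (void)w;
}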

Nullary constructor leakage

I believe that there are some types in the STL that have nullary constructors for no good reason other than satisfying existing constraints. If we look through the lens of nullary construction being unnecessary, it seems that some things in the STL have nullary constructors only because of container requirements. For example, to my mind, weak_ptr doesn’t need to have a nullary constructor. The nullary constructor of variant also seems out of place, since the entire point of variant is the choice of type contained within. It seems perverse to create a variant without knowing what to put inside of it.

Conclusions

Default construction semantics are not as straightforward as they may seem.

The ability to write default constructors is undeniably valuable if we want to give our types the same semantics as built-in types, but given C++’s heritage and quirks, it’s not always possible to achieve exact parity.

The most useful, rigorous, and consistent model is the one advanced by the works of Alex Stepanov et al. on regular type semantics: default construction produces a partially formed object. A partially formed object has the same semantics as a moved-from object. A partially formed object is not yet a member of its type.

It is instructive to mentally separate the ideas of default construction — giving a partially formed object — and nullary construction — giving a well-formed object which is a member of the type with its invariants established. Some objects are well-formed coincidentally as a result of being partially formed.

We should not write nullary constructors without consideration, particularly for value types. Sensible defaults don’t always exist and sentinels should be avoided. Reasoning about functions and data flow becomes easier if types lack nullary constructors.

The requirement that regular types be default constructible is not an operational requirement; it is merely an axiomatic requirement. The vast majority of methods on most STL containers do not require default construction, and we can work around the few specific methods which do require it. Some types in the STL seem to have nullary constructors simply to fulfil a requirement which is questionable in the first place. Future revisions of the STL should scrutinize the operational requirements for default construction and remove them where possible.

Development of an Algorithm

June 21st, 2017

Here’s an exercise: given a nice piece of code sitting in a file, how do you take that code and make it generic, in the style of an STL algorithm?

For our example, let’s consider an algorithm that isn’t (yet) in the STL. First, the problem it solves. Imagine that you have a set (in the mathematical sense, not the container sense) of numbers; discontiguous, unordered, and notionally taken from the non-negative integers, e.g.:

\{5,2,7,9,4,0,1\}

What we want to find is the smallest number that isn’t included in the set. For the example given above, the answer is 3. If the set is sorted, the solution is apparent; adjacent_find will find the first gap for us. So one solution would be:

// assuming our set is in an array or vector
std::sort(a.begin(), a.end());
auto p = std::adjacent_find(
    a.begin(), a.end(), 
    [] (int x, int y) { return y-x > 1; });
int unused = a.back() + 1;
if (p != a.end())
  unused = *p + 1;

The asymptotic complexity of this solution is of course O(n log n), since adjacent_find is linear and the complexity of sort dominates. Is there a better solution? It’s not obvious at first glance, but there is a linear time solution to this problem. The key is a divide and conquer strategy using that workhorse of algorithms, partition.

The smallest unused number

Let’s start by assuming that the minimum unused value is zero. We’ll choose a pivot value, m, which is the assumed midpoint of the sequence – it doesn’t actually have to be present in the sequence, but we’re going to simply assume that it’s the middle value. Partitioning the sequence about the pivot, we’ll get a situation like this:
[Figure: the sequence partitioned about the pivot m]
where the first value equal to or greater than m is at position p (which is returned by the call to partition). If the lower half of the sequence has no gaps, m will be equal to p, and we can recurse on the top half of the sequence, setting the new minimum unused value to m.

We know that m cannot be less than p since there cannot be any repetitions in the set. Therefore, if m is not equal to p, it must be greater – meaning that there is at least one gap below p, and that we can recurse on the bottom half of the sequence (keeping the minimum unused value as-is).

The base case of the algorithm is when the sequence we need to partition is empty. At that point we will have found the minimum unused value. Here’s the algorithm in code, with a bit of setup:

void min_unused()
{
  // initialize RNG
  std::array<int, std::mt19937::state_size> seed_data;
  std::random_device r;
  std::generate_n(seed_data.data(), seed_data.size(), std::ref(r));
  std::seed_seq seq(std::begin(seed_data), std::end(seed_data));
  std::mt19937 gen(seq);
 
  // fill an array with ints, shuffle, discard some
  int a[10];
  std::iota(&a[0], &a[10], 0);
  std::shuffle(&a[0], &a[10], gen);
  int first_idx = 0;
  int last_idx = 7; // arbitrary truncation
 
  for (int i = first_idx; i < last_idx; ++i)
    std::cout << a[i] << '\n';
 
  // the algorithm
  int unused = 0;
  while (first_idx != last_idx) {
    int m = unused + (last_idx - first_idx + 1)/2;
    auto p = std::partition(&a[first_idx], &a[last_idx],
                            [&] (int i) { return i < m; });
    if (p - &a[first_idx] == m - unused) {
      unused = m;
      first_idx = p - &a[0];
    } else {
      last_idx = p - &a[0];
    }
  }
 
  std::cout << "Min unused: " << unused << '\n';
}

You can also see the algorithm on wandbox. I leave it to you to convince yourself that the algorithm works. The asymptotic complexity of the algorithm is linear; at each stage we are running partition, which is O(n), and then recursing, but only on one half of the sequence. Thus, the complexity is:

O(n + \frac{n}{2} + \frac{n}{4} + \frac{n}{8} + ...) = O(\sum_{i=0}^\infty \frac{n}{2^i}) = O(2n) = O(n)

From concrete to generic: first steps

So, now we have an algorithm that works… for C-style arrays, and in one place only. Naturally we want to make it as generic as we can; what, then, are the steps to transform it into a decent algorithm that can be used just as easily as those in the STL? Perhaps you’re already thinking that this is overly concrete – but hey, this is a pedagogical device. Bear with me.

First, we can pull the algorithm out into a function. You’ve already seen the setup code, so you should be able to imagine the calling code from here on. While we’re at it, let’s change the index-based arithmetic to pointers. We can pass the sequence in to the function as an argument; for now, we may as well say that the sequence is a vector, because vector is always the right default choice! Our updated function:

// version 1: a function that takes a vector
int min_unused(std::vector<int>& v)
{
  int* first = &v[0];
  int* last = first + v.size();
  int unused = 0;
  while (first != last) {
    int m = unused + (last - first + 1)/2;
    auto p = std::partition(first, last,
                            [&] (int i) { return i < m; });
    if (p - first == m - unused) {
      unused = m;
      first = p;
    } else {
      last = p;
    }
  }
  return unused;
}

This is already looking a little clearer. Using pointers removes some of the syntactic noise and lets us see more clearly what’s going on. But, as you may already have thought, once we have vectors, the next logical step is to consider using iterators. Let’s make that change now so that we can run the algorithm on a subrange within any vector.

// version 2: use iterators, like a real algorithm
template <typename It>
inline int min_unused(It first, It last)
{
  int unused = 0;
  while (first != last) {
    int m = unused + (last - first + 1)/2;
    auto p = std::partition(first, last,
                            [&] (int i) { return i < m; });
    if (p - first == m - unused) {
      unused = m;
      first = p;
    } else {
      last = p;
    }
  }
  return unused;
}

Instead of using the vector directly, we can now pass in begin() and end(), or any other valid range-delimiting pair of iterators. Note also that because min_unused is now a function template, we marked it inline, meaning that we can put it in a header without causing link errors when it’s instantiated the same way in multiple translation units.

This is a good start, but we can make it more generic yet! At the moment it’s still only working on sequences of ints, so let’s fix that.

// version 3: infer the value_type
template <typename It>
inline typename std::iterator_traits<It>::value_type min_unused(It first, It last)
{
  using T = typename std::iterator_traits<It>::value_type;
  T unused{};
  while (first != last) {
    T m = unused + (last - first + 1)/2;
    auto p = std::partition(first, last,
                            [&] (const T& i) { return i < m; });
    if (p - first == m - unused) {
      unused = m;
      first = p;
    } else {
      last = p;
    }
  }
  return unused;
}

Here we’re using iterator_traits to discover the value_type of the iterators passed in. This works with anything that satisfies the Iterator concept. Consequently we’ve also now removed the ints that were scattered through the code and instead are using Ts. Note that unused is now value-initialized (T unused{}), which will properly zero-initialize integral types.

Since we aren’t actually calculating the initial value of unused, we need not assume it to be zero; we could let the caller pass it in, and default the argument for convenience. In order to do that, we’ll pull out T into a defaulted template argument.

// version 4: let the user supply the initial minimum
template <typename It,
          typename T = typename std::iterator_traits<It>::value_type>
inline T min_unused(It first, It last, T init = T{})
{
  while (first != last) {
    T m = init + (last - first + 1)/2;
    auto p = std::partition(first, last,
                            [&] (const T& i) { return i < m; });
    if (p - first == m - init) {
      init = m;
      first = p;
    } else {
      last = p;
    }
  }
  return init;
}

A caller-provided minimum might be quite handy – it’s common for zero to be a reserved value, or to have a non-zero starting point for values.
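For example (a hypothetical use, assuming the min_unused defined above): if identifiers are handed out starting at 100, we can pass that as the starting point.

std::vector<int> ids{103, 100, 101, 105, 102};
int next_id = min_unused(ids.begin(), ids.end(), 100); // yields 104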

Iterators again

At this point, min_unused is starting to look more like a real STL algorithm, at least superficially, but it’s not quite there yet. The computation of m and the comparison against the partition point both do plain arithmetic on the iterators, which assumes that the iterators passed in support it. A fundamental concept in the STL is the idea of iterator categories: concepts which define operations available on different kinds of iterators. Not all iterators support random access, and any time we are writing a generic algorithm, we had better use the appropriate functions to manipulate iterators rather than assuming valid operations:

// version 5: handle iterator arithmetic generically
template <typename It,
          typename T = typename std::iterator_traits<It>::value_type>
inline T min_unused(It first, It last, T init = T{})
{
  while (first != last) {
    T m = init + (std::distance(first, last)+1)/2;
    auto p = std::partition(first, last,
                            [&] (const T& i) { return i < m; });
    if (std::distance(first, p) == m - init) {
      init = m;
      first = p;
    } else {
      last = p;
    }
  }
  return init;
}

In the code above, the raw iterator arithmetic has been replaced with calls to distance. This is a key change! Now we are no longer limited to random access iterators like those of vector or just plain pointers. Our algorithm now works on all kinds of iterators, but which ones exactly? It behooves us to define exactly what we require of the iterators being used as our arguments. Until C++ has Concepts, we can’t enforce this, but we can document it, typically by calling the template argument something more descriptive than ‘It‘. Looking at the references for distance and partition, we find that a forward iterator is required.

For many algorithms in the STL, there are more efficient versions available for stronger iterator categories. A good example is partition itself: with just a forward iterator, it may do N swaps, but with a bidirectional iterator, it need only perform, at most, N/2 swaps. Wherever we want to use different algorithms for different iterator categories, overloading with tag dispatch is a common technique; the STL provides iterator tag types for use as arguments to these overloads.
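As a reminder of what that technique looks like, here is a minimal sketch of tag dispatch (step_n is a hypothetical helper, essentially what std::advance already does for us):

#include <iterator>

template <typename It>
void step_n_impl(It& it, typename std::iterator_traits<It>::difference_type n,
                 std::forward_iterator_tag)
{
  while (n--) ++it;   // linear for forward (and bidirectional) iterators
}

template <typename It>
void step_n_impl(It& it, typename std::iterator_traits<It>::difference_type n,
                 std::random_access_iterator_tag)
{
  it += n;            // constant time for random access iterators
}

template <typename It>
void step_n(It& it, typename std::iterator_traits<It>::difference_type n)
{
  // the tag object selects the best overload at compile time
  step_n_impl(it, n, typename std::iterator_traits<It>::iterator_category{});
}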

But with min_unused, we don’t have any differences in the top level algorithm that would give us greater efficiency based on iterator category. Both distance and partition do that work themselves.

// version 6: document the relaxed iterator category
template <typename ForwardIt,
          typename T = typename std::iterator_traits<ForwardIt>::value_type>
inline T min_unused(ForwardIt first, ForwardIt last, T init = T{})
{
  while (first != last) {
    T m = init + (std::distance(first, last)+1)/2;
    auto p = std::partition(first, last,
                            [&] (const T& i) { return i < m; });
    if (std::distance(first, p) == m - init) {
      init = m;
      first = p;
    } else {
      last = p;
    }
  }
  return init;
}

Strength reduction

Now is also a good time to perform some manual strength reduction on our code, making sure that each operation used is the weakest option to do the job. This doesn’t really apply to any operation instances in min_unused, but in practical terms it typically means avoiding postfix increment and decrement. If the algorithm implements more than trivial mathematics it may also require quite a bit of decomposition to discover all potential efficiencies and opportunities to eliminate operations. For more examples, I recommend watching any and all of Alex Stepanov’s excellent series of video lectures.

We do have one operation in this algorithm that stands out: the computation of m divides by 2. Twenty years ago, I might have converted this to a shift, but in my opinion the divide is clearer to read, and these days there is no competitive compiler in the world that won’t do this trivial strength reduction. According to my experiments with the Compiler Explorer, MSVC and GCC do it even without optimization.

STL-ready?

So there we have it. Our algorithm is looking pretty good now by my reckoning – appropriately generic and, overall, a good addition to my toolkit.

Remaining are just a couple of final considerations. The first is ranges: the Ranges TS will change the STL massively, providing for new algorithms and changes to existing ones. A specific consequence of ranges is that begin and end iterators need not be the same type any more; how that may affect this algorithm I leave as an exercise to the reader.

Even more generic?

The other consideration takes us further still down the path of making our algorithm as generic as possible. A useful technique for discovering the constraints on types is to print out the code in question and use markers to highlight variables that interact with each other in the same colour. This can reveal which variables interact as well as which ones are entirely disjoint in usage – a big help in determining how they should be made into template parameters. In general, examining how variables interact can be a fruitful exercise.

Thinking about how the variables are used in our case, we see that we need the ability to add the result of distance to a T (when computing m), and that the difference between Ts must be comparable to the result of distance (when testing the partition point). We are used to thinking about addition and subtraction on integers, where they are closed operations (i.e. adding two integers or subtracting two integers gives you another integer), but there is another possibility. And in fact, there is already a good example of it in the STL.

Arithmetic the chrono way

It is possible for the type produced by subtraction on Ts to be something other than T. This is what happens with chrono, with time_point and duration. Subtracting two time_points yields a duration. It is meaningless – and, therefore, illegal – to add two time_points, but durations may be added both to themselves and to time_points.

In mathematical language, chrono models a one-dimensional affine space. (Thanks to Gašper Ažman for that particular insight!)
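A couple of lines of chrono illustrate the affine structure (a snippet, assuming the usual chrono headers are included):

using namespace std::chrono_literals;
auto t1 = std::chrono::system_clock::now();
auto t2 = t1 + 5s;     // time_point + duration -> time_point
auto d  = t2 - t1;     // time_point - time_point -> duration
// auto bad = t1 + t2; // won't compile: adding two time_points is meaningless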

This is also the case with min_unused: T is our time_point equivalent, and the result of distance is our duration equivalent, suggesting that we can pull out the difference type as a template parameter:

template <typename ForwardIt,
          typename T = typename std::iterator_traits<ForwardIt>::value_type,
          typename DiffT = typename std::iterator_traits<ForwardIt>::difference_type>
inline T min_unused(ForwardIt first, ForwardIt last, T init = T{})
{
  while (first != last) {
    T m = init + DiffT{(std::distance(first, last)+1)/2};
    auto p = std::partition(first, last,
                            [&] (const T& i) { return i < m; });
    if (DiffT{std::distance(first, p)} == m - init) {
      init = m;
      first = p;
    } else {
      last = p;
    }
  }
  return init;
}

The advantage here is that we can now use min_unused to deal with any code that has different value and difference types, like chrono:

// a vector of time_points, 1s apart
constexpr auto VECTOR_SIZE = 10;
std::vector<std::chrono::system_clock::time_point> v;
auto start_time = std::chrono::system_clock::time_point{};
std::generate_n(std::back_inserter(v), VECTOR_SIZE,
                [&] () {
                  auto t = start_time;
                  start_time += 1s;
                  return t;
                });
std::shuffle(v.begin(), v.end(), gen);
v.resize(3*VECTOR_SIZE/4);
 
for (auto i : v)
  std::cout
    << std::chrono::duration_cast<std::chrono::seconds>(i.time_since_epoch()).count()
    << '\n';
 
// we can now find the minimum time_point not present
auto u = min_unused<
  decltype(v.begin()), decltype(start_time), std::chrono::seconds>(
      v.begin(), v.end());
 
std::cout << "Min unused: "
     << std::chrono::duration_cast<std::chrono::seconds>(u.time_since_epoch()).count()
     << '\n';

See this code on wandbox.

This concludes our exercise in developing a generic algorithm – at least, for now. Good luck with your own experiments!

C++17 Class Templates: Deduced or Not?

June 15th, 2017

C++17 introduces class template deduction: a way for the compiler to deduce the arguments to construct a class template without our having to write a make_* function. But it’s not quite as straightforward as it seems.

Imagine we have a simple type that will tell us when it’s copied or moved, just for testing.

struct S
{
  S() = default;
  S(const S&) { std::cout << "copy\n"; }
  S(S&&) { std::cout << "move\n"; }
};

And likewise a very simple template like so:

template <typename T>
struct Foo
{
  Foo(const T& t_) : t(t_) {}
  Foo(T&& t_) : t(std::move(t_)) {}
 
private:
  T t;
};

Note that we provide two forms of constructor to deal with both rvalues and lvalues being passed in. With C++14, prior to class template deduction, we would write a make_foo function to instantiate this template something like this:

template <typename T>
auto make_foo(T&& t)
{
  return Foo<std::decay_t<T>>(std::forward<T>(t));
}

And call it like so:

int main()
{
  // rvalue: move
  auto f1 = make_foo(S());
 
  // lvalue: copy
  S s;
  auto f2 = make_foo(s);
}

The important thing here is that the template argument to make_foo is deduced, and the template argument to Foo is not (cannot be, in C++14). Furthermore, because the template argument to make_foo is deduced, make_foo‘s argument is a forwarding reference rather than an rvalue reference, and hence we use perfect forwarding to pass it along to the appropriate constructor for Foo.

So far so good. Now, with C++17, class template deduction is available to us, so we can get rid of make_foo. Now our main() function looks like this:

int main()
{
  // rvalue: move?
  Foo f3{S()};
 
  // lvalue: copy?
  S s;
  Foo f4{s};
}

Here’s the unintuitive part: the template arguments are being deduced now. But in that case, doesn’t that mean that Foo‘s constructor argument is being deduced? Which means that what looks like Foo‘s rvalue constructor is actually now a constructor with a forwarding reference that will outcompete the other constructor? If so, that would be bad – we could end up moving from an lvalue!

I woke up the other morning with this worrying thought and had to investigate. The good news is that even though the code looks like it would break, it doesn’t.

So in fact, yes, although Foo‘s template argument is being deduced, I think the crucial thing is that Foo‘s constructor still takes an rvalue reference – not a forwarding reference – at the point of declaration. And according to 16.3.1.8 [over.match.class.deduct] the compiler is forming an overload set of function templates that match the signatures of the constructors available, and it’s using the correct types. In other words, I think it’s doing something that we could not do: it’s forming a function template, for the purposes of deduction, whose argument is an rvalue reference rather than a forwarding reference.

As is often the case in C++, one needs to be careful to properly distinguish things. It is very easy to get confused over rvalue references and forwarding references, because they look the same. The difference is that forwarding references must be deduced… and in this case, even though it looks like it’s deduced, it isn’t.

Edit: Reddit user mps1729 points out that indeed, neither of the implicitly generated functions is using a forwarding reference, as clarified in 17.8.2.1/3 [temp.deduct.call]. Thanks for the clarification!

C++Now 2017 – Report

May 20th, 2017

C++Now 2017 just wrapped up in Aspen, CO. A great week of presentations and discussions once more. C++Now is a very different conference from the normal mainstream. For a start, it’s small – only 150 people. And the sort of content you find at C++Now is more niche material: challenging the status quo and trying to move the medium forward and push the boundaries of what’s possible in C++.

The keynotes

This year’s keynotes focused on what C++ can learn from other languages – Rust, Haskell, and D.

Rust: Hack Without Fear! – Niko Matsakis

The Rust keynote was really good, showcasing Rust’s ownership model that allows it to achieve memory safety and freedom from data races. Pretty cool stuff, and something that C++ is just beginning to take baby steps towards.

Competitive Advantage with D – Ali Çehreli

The D keynote was very underwhelming – I didn’t come away with a good sense of where D is better than C++ and why its features are desirable.

Haskell Taketh Away: Limiting Side Effects for Parallel Programming – Ryan Newton

The Haskell keynote was probably my favourite of the trio, and the presenter did a really good job of not going down the Haskell rabbit holes so common in such talks. He didn’t try to explain monads or anything like that. He didn’t explain how return is different. He did explain real usage of Haskell and the memory model and choices for achieving side effects, and he was obviously an experienced speaker. I know some Haskell, and I was surprised to see that he put up some code on slides that was pretty advanced for a C++ crowd of mostly Haskell novices. But I think he did a great job of keeping the talk concrete, and some of the compiler technology and directions he talked about toward the end of the talk were pretty amazing.

Selected talks

C++11’s Quiet Little Gem: <system_error> – Charley Bay

This was the first talk of the week after the opening keynote. Charley is a high-energy and entertaining speaker and this talk really opened my eyes to the goodness that is <system_error> and how to use it. Around 350 fast-moving slides! I’ll definitely be using and recommending <system_error> in future.

The Mathematical Underpinnings of Promises in C++ – David Sankel

This was the kind of talk you only find at C++Now and it was great. David engages the audience really well on esoteric topics and showed how a rigorous mathematical treatment of a subject can help ensure that an API is appropriately powerful, but also that a good API shouldn’t just be a simple reflection of the mathematical operations.

He followed up this talk with its complement, a practical talk about using the API (Promises in C++: The Universal Glue for Asynchronous Programs). I’m sold: std::future is crippled, and a library like his promise library is the right way to deal with asynchronous control flow.

The ‘Detection Idiom’: A Better Way to SFINAE – Marshall Clow

This was a repeat of Marshall’s recent ACCU talk, with the opportunity for the C++Now audience to ask plenty of questions. The detection idiom (developed by Walter Brown in various talks and papers over the last few years) is a pretty simple way to do one common kind of metaprogramming operation: detecting whether a given type supports something. I feel like this is exactly the kind of thing which is going to help mainstream C++ take advantage of metaprogramming recipes: fairly uncomplicated techniques that just make code better.

The Holy Grail: A Hash Array Mapped Trie For C++ – Phil Nash

Phil presented a followup to his Meeting C++ and ACCU talk “Functional C++ for Fun and Profit“. He showed a very cool persistent HAMT implementation. There was plenty of hard data and code to digest in this talk and like all the best talks, I walked away wanting to play with things I’d just seen. He also noted that “trie” is properly pronounced “tree” (although many people say “try” which is good for disambiguation) and that he’d be sticking to “tree” not only for that reason, but because he didn’t want to be known for both “try” and Catch!

Other thoughts

The annual week in Aspen for C++Now is the highlight of my professional year. The program this year was very strong and I’m sad that I had to miss talks and make hard decisions about what to see in any given slot. But the talks are only half of this conference: at least as important is the socialising and discussion that goes on. C++Now is deliberately scheduled with long sessions and long breaks between sessions so that attendees can chat, argue, and collaborate. At other conferences I learn a handful of new things over the course of a week. At C++Now I learn dozens of new techniques, tricks and titbits at the rate of several per hour. Just a few examples of things that happen at C++Now:

  • Library in a Week: Jeff Garland corrals whoever wants to get involved, and over the week people implement really cool things. It’s like a hackathon within a conference.
  • Having a problem with your benchmarking? Take your laptop to the hotel bar where Chandler Carruth will take a look at your code and tell you exactly how the optimizer sees it, and what you can do to make it twice as fast.
  • The student volunteer program is simply amazing and inspiring. Just look at the current crop and alumni and you see young men and women who are incredibly smart and doing amazing things with C++. Figuring out new tricks and ways to stretch the language that will become codified idiom, making the library you use in a couple of years faster, cleaner, and nicer to use.

Finally, my own talk (with Jason Turner) – entitled “constexpr ALL the things!” – was very well received. I’m heading home tomorrow, but the experiences from Aspen will go back with me, leading to improvements in my own code and that of my work colleagues, making lots of things better. And the friendships I’ve made and refreshed during this week will continue that flow of information and improvement through the year.

CHRONO + RANDOM = ?

October 24th, 2016

Being a quick sketch combining <chrono> and <random> functionality, with cryptarithmetic interludes…

At CppCon this year there were several good talks about randomness and time calculations in C++. On randomness: Walter Brown’s What C++ Programmers Need to Know About Header <random> and Cheinan Marks’ I Just Wanted a Random Integer! were both excellent talks. And Howard Hinnant gave several great talks: A <chrono> Tutorial, and Welcome to the Time Zone, a followup to his talk from last year, A C++ Approach to Dates and Times.

CHRONO + RANDOM = HORRID ?

That’s perhaps a little unfair, but recently I ran into the need to compute a random period of time. I think this is a common use case for things like backoff schemes for network retransmission. And it seemed to me that the interaction of <chrono> and <random> was not quite as good as it could be:

system_clock::duration minTime = 0s;
system_clock::duration maxTime = 5s;
uniform_int_distribution<> d(minTime.count(), maxTime.count());
// 'gen' here is a Mersenne twister engine
auto nextTransmissionWindow = system_clock::duration(d(gen));

This code gets more complex when you start computing an exponential backoff. Relatively straightforward, but clumsy, especially if you want a floating-point base for your exponent calculation: system_clock::duration has an integral representation, so in all likelihood you end up having to cast multiple times, using either static_cast or duration_cast. That’s a bit messy.
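For illustration, here is a sketch of that kind of backoff calculation (attempt, base and factor are hypothetical names; gen is the engine from above):

using namespace std::chrono;
int attempt = 3;                        // hypothetical retransmission attempt number
auto base = milliseconds{100};          // hypothetical initial backoff window
double factor = std::pow(1.5, attempt); // floating-point exponent base

// the double forces a round trip through a floating-point duration and back
auto maxTime = duration_cast<system_clock::duration>(
    duration<double, std::milli>(base.count() * factor));

uniform_int_distribution<system_clock::rep> d(0, maxTime.count());
auto nextTransmissionWindow = system_clock::duration(d(gen));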

I remembered some code from another talk: Andy Bond’s AAAARGH!? Adopting Almost Always Auto Reinforces Good Habits!? in which he presented a function to make a uniform distribution by inferring its argument type, useful in generic code. Something like the following:

template <typename A, typename B = A,
          typename C = std::common_type_t<A, B>,
          typename D = std::uniform_int_distribution<C>>
inline auto make_uniform_distribution(const A& a,
                                      const B& b = std::numeric_limits<B>::max())
  -> std::enable_if_t<std::is_integral<C>::value, D>
{
  return D(a, b);
}

Of course, the standard also provides uniform_real_distribution, so we can provide another template and overload the function for real numbers:

template <typename A, typename B = A,
          typename C = std::common_type_t<A, B>,
          typename D = std::uniform_real_distribution<C>>
inline auto make_uniform_distribution(const A& a,
                                      const B& b = B{1})
  -> std::enable_if_t<std::is_floating_point<C>::value, D>
{
  return D(a, b);
}

And with these two in hand, it’s easy to write a uniform_duration_distribution that uses the correct distribution for its underlying representation (using a home-made type trait to constrain it to duration types).

template <typename T>
struct is_duration : std::false_type {};
template <typename Rep, typename Period>
struct is_duration<std::chrono::duration<Rep, Period>> : std::true_type {};
 
template <typename Duration = std::chrono::system_clock::duration,
          typename = std::enable_if_t<is_duration<Duration>::value>>
class uniform_duration_distribution
{
public:
  using result_type = Duration;
 
  explicit uniform_duration_distribution(
      const Duration& a = Duration::zero(),
      const Duration& b = Duration::max())
    : m_a(a), m_b(b)
  {}
 
  void reset() {}
 
  template <typename Generator>
  result_type operator()(Generator& g)
  {
    auto d = make_uniform_distribution(m_a.count(), m_b.count());
    return result_type(d(g));
  }
 
  result_type a() const { return m_a; }
  result_type b() const { return m_b; }
  result_type min() const { return m_a; }
  result_type max() const { return m_b; }
 
private:
  result_type m_a;
  result_type m_b;
};

Having written this, we can once again overload make_uniform_distribution to provide for duration types:

template <typename A, typename B = A,
          typename C = std::common_type_t<A, B>,
          typename D = uniform_duration_distribution<C>>
inline auto make_uniform_distribution(const A& a,
                                      const B& b = B::max()) -> D
{
  return D(a, b);
}

And now we can compute a random duration more expressively and tersely, and, I think, in the spirit of the functionality that <chrono> already provides for manipulating durations.

auto d = make_uniform_distribution(0s, 5000ms);
auto nextTransmissionWindow = d(gen);

CHRONO + RANDOM = DREAMY

I leave it as an exercise for the reader to solve these cryptarithmetic puzzles. As for the casting problems, for now, I’m living with them.

An algorithmic sketch: inplace_merge

March 13th, 2016

One of the things I like to do in my spare time is study the STL algorithms. It is easy to take them for granted and easy, perhaps, to imagine that they are mostly trivial. And some are: I would think that any decent interview candidate ought to be able to write find correctly. (Although even such a trivial-looking algorithm is based on a thorough understanding of iterator categories.)

Uncovering the overlooked

But there are some algorithms that are non-trivial, and some are important building blocks. Take inplace_merge. For brevity, let’s consider the version that just uses operator< rather than being parametrized on the comparison. The one easily generalizes to the other in a way that is not important to the algorithm itself.

template <typename BidirectionalIterator>
void inplace_merge(BidirectionalIterator first,
                   BidirectionalIterator middle,
                   BidirectionalIterator last);

It merges two consecutive sorted ranges into one sorted range. That is, if we have an input like this:

x_0 \: x_1 \: x_2 \: ... \: x_n \: y_0 \: y_1 \: y_2 \: ... \: y_m

Where \forall i < n, x_i \leq x_{i+1} and \forall j < m, y_j \leq y_{j+1}

We get this result occupying the same space:

r_0 \: r_1 \: r_2 \: ... \: r_{n+m}

Where \forall i < n+m, r_i \leq r_{i+1} and the new range is a permutation of the original ranges. The standard also states a few additional constraints:

  • inplace_merge is stable - that is, the relative order of equivalent elements is preserved
  • it uses a BidirectionalIterator which shall be ValueSwappable and whose dereferent (is that a word?) shall be MoveConstructible and MoveAssignable
  • complexity: when enough additional memory is available, exactly (last - first) - 1 comparisons; otherwise an algorithm with O(N log N) complexity (where N = last - first) may be used

Avenues of enquiry

Leaving aside the possible surprise of discovering that an STL algorithm may allocate memory, some thoughts spring to mind immediately:

  • Why does inplace_merge need a BidirectionalIterator?
  • How much memory is required to achieve O(n) performance? Is a constant amount enough?

And to a lesser extent perhaps:

  • Why are merge and inplace_merge not named the same way as other algorithms, where the more normal nomenclature might be merge_copy and merge?
  • What is it with the algorists' weasel-word "in-place"?

First thoughts about the algorithm

It seems that an O(n log n) algorithm should be possible on average, because in the general case, simply sorting the entire range produces the desired output. Although the sort has to be stable, which means something like merge sort, which leads us down a recursive rabbit hole. Hm.

At any rate, it's easy to see how to achieve inplace_merge with no extra memory needed by walking iterators through the ranges:

template <typename ForwardIt>
void naive_inplace_merge(
    ForwardIt first, ForwardIt middle, ForwardIt last)
{
  while (first != middle && middle != last) {
    if (*middle < *first) {
      std::iter_swap(middle, first);
      auto i = middle;
      std::rotate(++first, i, ++middle);
    } else {
      ++first;
    }
  }
}

After swapping (say) x_0 and y_0, the ranges look like this:

y_0 \: x_1 \: x_2 \: ... \: x_n \: x_0 \: y_1 \: y_2 \: ... \: y_m

And the call to rotate fixes up x_1 \: ... \: x_n \: x_0 to be ordered again. From there we proceed as before on the ranges x_0 \: ... \: x_n and y_1 \: ... \: y_m.

This algorithm actually conforms to the standard! It has O(n) comparisons, uses no extra memory, and has the advantage that it works on ForwardIterator! But unfortunately, it's O(n²) overall in operations, because of course, rotate is O(n). So how can we do better?

Using a temporary buffer

If we have a temporary buffer available that is equal in size to the smaller of the two ranges, we can move the smaller range to it, move the other range up if necessary, and perform a "normal" merge of the two ranges into the original space:

template <typename BidirIt>
void naive_inplace_merge2(
    BidirIt first, BidirIt middle, BidirIt last)
{
  using T = typename std::iterator_traits<BidirIt>::value_type;
 
  auto d1 = std::distance(first, middle);
  auto d2 = std::distance(middle, last);
 
  auto n = std::min(d1, d2);
  auto tmp = std::make_unique<char[]>(n * sizeof(T));
  T* begint = reinterpret_cast<T*>(tmp.get());
  T* endt = begint + n;
 
  if (d1 <= d2)
  {
    std::move(first, middle, begint);
    std::merge(begint, endt, middle, last, first);
  }
  else
  {
    std::move(middle, last, begint);
    auto i = std::move_backward(first, middle, last);
    std::merge(i, last, begint, endt, first);
  }
}

This is essentially the algorithm used by STL implementations if buffer space is available. And this is the reason why inplace_merge requires BidirectionalIterator: because move_backward does.

(This isn't quite optimal: the std::move_backward can be mitigated with reverse iterators and predicate negation, but the BidirectionalIterator requirement remains. Also, strictly speaking, std::merge is undefined behaviour here because one of the input ranges overlaps the output range, but we know the equivalent loop is algorithmically safe.)

Provisioning of the temporary buffer is also a little involved because we don't know that elements in the range are default constructible (and perhaps we wouldn't want to default-construct our temporaries anyway). So to deal correctly with non-trivial types here, std::move should actually be a loop move-constructing values. And when std::inplace_merge is used as a building block for e.g. std::stable_sort, it would also be nice to minimize buffer allocation rather than having an allocation per call. Go look at your favourite STL implementation for more details.
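As a sketch of what that more careful handling might look like, using C++17’s uninitialized-memory algorithms (real implementations also deal with exception safety, over-aligned types, and buffer reuse):

#include <algorithm>
#include <iterator>
#include <memory>

template <typename BidirIt>
void naive_inplace_merge3(BidirIt first, BidirIt middle, BidirIt last)
{
  using T = typename std::iterator_traits<BidirIt>::value_type;

  auto d1 = std::distance(first, middle);
  auto d2 = std::distance(middle, last);
  auto n = std::min(d1, d2);

  auto tmp = std::make_unique<char[]>(n * sizeof(T));
  T* begint = reinterpret_cast<T*>(tmp.get());

  if (d1 <= d2) {
    // move-construct the temporaries rather than assigning into raw memory
    T* endt = std::uninitialized_move(first, middle, begint);
    std::merge(begint, endt, middle, last, first);
    std::destroy(begint, endt);  // end the temporaries' lifetimes
  } else {
    T* endt = std::uninitialized_move(middle, last, begint);
    auto i = std::move_backward(first, middle, last);
    std::merge(i, last, begint, endt, first);
    std::destroy(begint, endt);
  }
}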

Thinking further

The literature contains more advanced algorithms for merging if a suitably-sized buffer is not available: the basis for the STL's choice is covered in Elements of Programming chapter 11, and in general the work of Dudzinski & Dydek and of Kim & Kutzner seems to be cited a lot.

But I knew nothing of this research before tackling the problem and attempting to solve it using just ForwardIterator.

I spent a couple of evenings playing with how to do inplace_merge. I covered a dozen or more A4 sheets of squared paper with diagrams of algorithms evolving. I highly recommend this approach! After a few hours of drawing and hacking I had a really good idea of the shape of things. Property-based testing came in very handy for breaking my attempts, and eventually led me to believe that a general solution on the lines I was pursuing would either involve keeping track of extra iterators or equivalently require extra space. Keeping track of iterators seemed a messy approach, so an extra space approach is warranted.

How much extra space? Consider the "worst case":

x_0 \: x_1 \: x_2 \: ... \: x_n \: y_0 \: y_1 \: y_2 \: ... \: y_m

Assume for the moment that m \leq n. When y_m < x_0, we need extra space to hold all of x_0 \: ... \: x_m. If n \leq m then we will need extra space for x_0 \: ... \: x_n to likewise move them out of the way. Either way, the number of units of extra space we need is min(n, m).

As we move elements of x into temporary storage, we can see that in general at each stage of the algorithm we will have a situation something like this (using Z to mean a moved-from value):

... \: x_i \: ... \: x_n \: Z_0 \: ... \: Z_{j-1} \: y_j \: ... \: y_m

With some values of x moved into temporary storage:

x_{i-t} \: ... \: x_{i-1}

The temporary storage here is a queue: we always push on to the end and pop from the beginning, since the elements in it start, and remain, ordered. Since we know an upper bound on the number of things in the queue at any one time, it can be a ring buffer (recently proposed) over a regular array of values.
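A minimal sketch of such a queue (a hypothetical helper, sidestepping the raw-storage concerns discussed above by default-constructing the buffer):

#include <cstddef>
#include <utility>
#include <vector>

// a fixed-capacity FIFO: push at the back, pop from the front
template <typename T>
class ring_queue
{
public:
  explicit ring_queue(std::size_t capacity) : buf(capacity) {}

  void push(T t)
  {
    buf[(head + count) % buf.size()] = std::move(t);
    ++count;
  }

  T pop()
  {
    T t = std::move(buf[head]);
    head = (head + 1) % buf.size();
    --count;
    return t;
  }

  const T& front() const { return buf[head]; }
  bool empty() const { return count == 0; }

private:
  std::vector<T> buf;      // capacity is known up front: min(n, m)
  std::size_t head = 0;
  std::size_t count = 0;
};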

Sketching the start

From this, we can start sketching out an algorithm:

  1. Allocate a buffer of size min(m, n) - call it tmp
  2. We'll walk the iterators along the x (first) and y (middle) ranges
  3. The output iterator o will start at first
  4. The next x to consider will either be in-place in the x range, or (if tmp is not empty) in tmp - call it xlow
  5. If *y < *xlow move *x to tmp, move *y to o, inc y
  6. Otherwise, if *xlow is in tmp, move *x to tmp and *xlow from tmp to o
  7. inc o, inc x
  8. if y != last and o != middle, goto 4
  9. deal with the end(s) of the ranges
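Here is a rough rendering of that main loop. It's a sketch, not the implementation linked at the end of this post; it assumes a queue with empty/front/push_back/pop_front (std::deque works for illustration, as does the ring-buffer sketch above). Step 9, dealing with the ends, is sketched in the next section.

#include <utility>

// Sketch of steps 2 to 8. Returns where the x and y iterators stopped so the
// end-of-range fix-ups can take over. Note that o == x throughout: each step
// vacates the slot at x (into tmp, or by leaving its value alone) and fills it.
template <typename ForwardIt, typename Compare, typename Queue>
std::pair<ForwardIt, ForwardIt> merge_main_loop(
    ForwardIt first, ForwardIt middle, ForwardIt last, Compare cmp, Queue& tmp)
{
  ForwardIt x = first;   // next in-place element of the first range
  ForwardIt y = middle;  // next element of the second range
  while (y != last && x != middle) {
    if (!tmp.empty()) {
      if (cmp(*y, tmp.front())) {   // *y < xlow, which lives in tmp
        tmp.push_back(std::move(*x));
        *x = std::move(*y);
        ++y;
      } else {                      // xlow (in tmp) wins, or ties
        auto displaced = std::move(*x);
        *x = std::move(tmp.front());
        tmp.pop_front();
        tmp.push_back(std::move(displaced));
      }
    } else if (cmp(*y, *x)) {       // *y < xlow == *x
      tmp.push_back(std::move(*x));
      *x = std::move(*y);
      ++y;
    }
    // else: xlow == *x is already the smallest; leave it in place
    ++x;
  }
  return {x, y};
}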

Dealing with the end

This gets us to the point where one of the two ranges is exhausted: after this, we will be in one of two situations.

Situation 1. If we have exhausted the y range, things look like this:

... \: x_i \: ... \: x_n \: Z_0 \: ... \: Z_m

With values of x in temporary storage:

x_{i-t} \: ... \: x_{i-1}

To fix up this situation, we can repeatedly swap the tmp range with the equivalent x range until we reach middle (i.e. Z_0), and then simply move the remaining tmp values into place.

I originally wrote a loop repeatedly swapping the values in tmp right up to the end; but I realised that would involve swapping a moved-from object, which would be wrong (it might work… until it doesn’t). Moved-from objects should either be destroyed or made whole (assigned to); nothing else.
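Continuing the hypothetical sketch from above, the Situation 1 fix-up might look like this: for each slot up to middle, put the queue's front (the smallest remaining value) into the slot and queue the displaced in-place value, then drain the queue into the moved-from slots. Assignment is the only thing done to the moved-from objects.

#include <utility>

// Sketch of the Situation 1 fix-up: y is exhausted, [x, middle) holds
// in-place x values, [middle, last) holds moved-from slots, and tmp holds
// the smallest remaining x values in order.
template <typename ForwardIt, typename Queue>
void fixup_y_exhausted(ForwardIt x, ForwardIt middle, Queue& tmp)
{
  // Element-wise equivalent of block-swapping tmp with the x range: each
  // displaced in-place value is larger than anything queued, so pushing it
  // to the back keeps the queue ordered.
  while (x != middle && !tmp.empty()) {
    auto displaced = std::move(*x);
    *x = std::move(tmp.front());
    tmp.pop_front();
    tmp.push_back(std::move(displaced));
    ++x;
  }
  // From middle on, the slots are moved-from; assigning to them is fine,
  // so just move the remaining queued values into place.
  while (!tmp.empty()) {
    *x = std::move(tmp.front());
    tmp.pop_front();
    ++x;
  }
}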

Situation 2. The other possibility is that we have exhausted the x range, in which case things look like this:

... \: Z_0 \: ... \: Z_{n-i} \: y_j \: ... \: y_m

With values of x in temporary storage:

x_i \: ... \: x_n

To fix up this situation, we can just do a regular merge on the remaining y range and tmp, with the output starting at middle (i.e. Z_0). (With the same proviso as before about undefined behaviour with overlapping ranges.) We know that it will be safe to do a loop equivalent to merge, because we have exactly the required amount of space before y_j to fit x_i \: ... \: x_n. This is the same as the STL’s normal buffered merge strategy.
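In the same sketchy vein, the Situation 2 fix-up is just that merge loop written out by hand, so the overlapping-ranges caveat about calling std::merge doesn't arise:

#include <utility>

// Sketch of the Situation 2 fix-up: x is exhausted, tmp holds the remaining
// x values, [y, last) holds the remaining y values, and the moved-from slots
// begin at middle. Merge tmp and [y, last) into the slots starting at middle.
template <typename ForwardIt, typename Compare, typename Queue>
void fixup_x_exhausted(ForwardIt middle, ForwardIt y, ForwardIt last,
                       Compare cmp, Queue& tmp)
{
  ForwardIt o = middle;
  while (!tmp.empty() && y != last) {
    if (cmp(*y, tmp.front())) {
      *o = std::move(*y);           // the slot y vacates becomes moved-from
      ++y;
    } else {
      *o = std::move(tmp.front());  // take from tmp on ties, for stability
      tmp.pop_front();
    }
    ++o;
  }
  while (!tmp.empty()) {            // drain whatever is left in the queue
    *o = std::move(tmp.front());
    tmp.pop_front();
    ++o;
  }
  // any remaining y values are already in their final positions
}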

Final thoughts

I tackled this exercise from scratch, knowing nothing about actual implementations of inplace_merge. This algorithm does some extra housekeeping under the hood, but:

  • it does the minimum number of comparisons
  • each element is moved at most twice: into tmp and out again
  • it needs only ForwardIterator

Optimization and benchmarking under differing conditions of range size, comparison and move cost is left as an exercise to the reader…

I cannot recommend Elements of Programming enough. I am partway through reading it; after this exercise I skipped to chapter 11 to see what it said. Every time I dive into the STL algorithms, I am re-impressed by the genius of Alex Stepanov: Paul McJones’ recent talk The Concept of Concept explains this well, in particular the key role of concepts in the STL in service of algorithmic purity. Alex knew about concepts from the start: it’s taken C++ over 2 decades to catch up.

After doing this work, I discovered a recent proposal that calls for weakening the iterator categories of inplace_merge and related algorithms.

An implementation of this algorithm is on github. It’s just a sketch, written for clarity rather than optimality. This has been a fun exercise.

ELI5: monoids

February 29th, 2016

(Resulting from my claim that “a child of 8 can understand monoids…”)

Wikipedia says: “In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element.”

Wolfram says: “A monoid is a set that is closed under an associative binary operation and has an identity element I ∈ S such that for all a ∈ S, Ia = aI = a.”

Mathematics has to be precise, which is why it uses jargon. But what do these concise definitions mean in everyday language? Consider adding up numbers.

  • The set is the whole numbers (and we need zero). 0, 1, 2, 3 etc.
  • The associative binary operation is addition.
    • “binary” just means it’s a thing you do to two numbers.
    • “associative” means it doesn’t matter what order you group things in. 1 + 2 + 3 gives the same answer whether you add 1 and 2 first and then add 3, or add 2 and 3 first and then add the answer to 1.
  • The set being “closed” under addition means that when you add two numbers you get another number – you don’t get some other kind of thing. (You might think this is obvious, but in maths it has to be stated.)
  • The identity element is 0 – the thing that doesn’t make any difference when you add it. Anything plus zero is itself.

So adding whole numbers is a monoid. A mathematician would say that the non-negative integers form a monoid under addition. The important thing is that the numbers aren’t a monoid on their own; it’s the combination of the set (0, 1, 2, 3…) and the operation (+) that makes the monoid. If we chose another operation, we could get another monoid. Think about multiplication, for instance.

It turns out that lots of things behave the same way as addition on numbers, which is why the notion of a monoid is very useful to mathematicians and computer scientists.
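For the computer scientists in the audience: a monoid is exactly what a fold needs, a value to start from (the identity) and an associative way of combining (the operation). A quick sketch in C++, using std::accumulate as the fold:

#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
  std::vector<int> xs{1, 2, 3, 4};

  // the whole numbers under +, with identity 0
  int sum = std::accumulate(xs.begin(), xs.end(), 0, std::plus<int>{});

  // the same set under *, where the identity is 1: a different monoid
  int product = std::accumulate(xs.begin(), xs.end(), 1, std::multiplies<int>{});

  std::cout << sum << ' ' << product << '\n';  // prints "10 24"
}

Associativity is what lets you regroup the combining however you like and still get the same answer, which is a big part of why this structure keeps showing up.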

Lameness Explained

December 9th, 2015

OK, more than one person wanted explanations of The C++ <random> Lame List, so here are some of my thoughts, if only to save people searching elsewhere.

  1. Calling rand() is lame because it’s an LCG with horrible randomness properties, and we can do better. And if you’re not calling rand(), there’s no reason to call srand().
  2. Using time(NULL) to seed your RNG is lame because it doesn’t have enough entropy. It’s only at a second resolution, so in particular, starting multiple processes (e.g. bringing up a bunch of servers) at the same time is likely to seed them all the same.
  3. No, rand() isn’t good enough even for simple uses, and it’s easy to do the right thing these days. The lower order bits of rand()‘s output are particularly non-random, and odds are that if you’re using rand() you’re also using % to get a number in the right range. See item 6.
  4. In C++14 random_shuffle() is deprecated, and it’s removed in C++17, which ought to be reason enough. If you need more reason, one version of it is inconvenient to use properly (it uses a Fisher-Yates/Knuth shuffle so takes an RNG that has to return random results in a shifting range) and the other version of it can use rand() under the hood. See item 1.
  5. default_random_engine is implementation-defined, but in practice it’s going to be one of the standard generators, so why not just be explicit and cross-platform-safe (hint: item 10)? Microsoft’s default is good, but libc++ and libstdc++ both use LCGs as their default at the moment. So not much better than rand().
  6. It is overwhelmingly likely that whatever RNG you use, it will output something in a power-of-two range. Using % to get this into the right range probably introduces bias. Re item 3, consider a canonical simple use: rolling a d6. No power of two is divisible by 6, so inevitably, % will bias the result. Use a distribution instead. STL (and others) have poured a lot of time into making sure they aren’t biased.
  7. random_device is standard, easy to use, and should provide high-quality randomness. It may not perform very well, which is why you probably want to use it for seeding only. But you do want to use it (mod item 8).
  8. Just know your platform. It might be fine in desktop-land, but random_device isn’t always great. It’s supposed to be nondeterministic and hardware based if that’s available… trust but verify, as they say.
  9. Not handling exceptions is lame. And will bite you. I know this from experience with random_device specifically.
  10. The Mersenne twisters are simply the best randomness currently available in the standard.
  11. Putting mt19937 on the stack: a) it’s large (~2.5k) and b) you’re going to be initializing it each time through. So probably not the best. See item 17 for an alternative.
  12. You’re just throwing away entropy if you don’t seed the generator’s entire state. (This is very common, though.)
  13. Simply, uniform_int_distribution works on a closed interval (as it must – otherwise it couldn’t produce the maximum representable value for the given integral type). If you forget this, it’s a bug in your code – and maybe one that takes a while to get noticed. Not good.
  14. Forgetting ref() around your generator means you’re copying the state, which means you’re probably not advancing the generator like you thought you were.
  15. seed_seq is designed to seed RNGs, it’s that simple. It tries to protect against poor-quality data from random_device or whatever variable-quality source of entropy you have.
  16. Not considering thread safety is always lame. Threads have been a thing for quite a while now.
  17. thread_local is an easy way to get “free” thread safety for your generators (see the sketch after this list).
  18. You should be using a Mersenne twister (item 10) so just use the right thing for max(). Job done.
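To tie several of these together, here is one non-lame shape for the common case. It's a sketch, not the only way to do it: a thread_local mt19937 seeded across its whole state from random_device via seed_seq, and a distribution instead of %.

#include <algorithm>
#include <array>
#include <cstdint>
#include <functional>
#include <random>

// One generator per thread, fully seeded once (items 2, 5, 7, 10, 11, 12,
// 15, 17). In real code, handle the exception random_device may throw (item 9).
std::mt19937& thread_rng()
{
  thread_local std::mt19937 gen = [] {
    std::random_device rd;
    std::array<std::uint32_t, std::mt19937::state_size> seed_data;
    std::generate(seed_data.begin(), seed_data.end(), std::ref(rd));
    std::seed_seq seq(seed_data.begin(), seed_data.end());
    return std::mt19937(seq);
  }();
  return gen;
}

// A d6 without % bias (items 6, 13): uniform_int_distribution works on the
// closed interval [1, 6].
int roll_d6()
{
  std::uniform_int_distribution<int> dist(1, 6);
  return dist(thread_rng());
}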

If you want more, see rand() Considered Harmful (a talk by Stephan T Lavavej), or The bell has tolled for rand() (from the Explicit C++ blog), or see Melissa O’Neill’s Reddit thread, her talk on PCG, and the associated website.

And of course, cppreference.com.

The C++ <random> Lame List

December 7th, 2015

Network programmers of a certain age may remember the Windows Sockets Lame List.

I previously wrote a short “don’t-do-that-do-this” guide for modern C++ randomness, and I was recently reading another Reddit exchange featuring STL, author of many parts of Microsoft’s STL implementation, when it struck me that use of C++ <random> needs its own lame list to discourage using the old and busted C parts and encourage using the new C++ hotness. So here, in no particular order, and with apologies to Keith Moore (wherever he may be), is an incomplete lame list for use of <random>.

  1. Calling rand() or srand(). Lame.
  2. Using time(NULL) to seed an RNG. Inexcusably lame.
  3. Claiming, “But rand() is good enough for simple uses!” Dog lame.
  4. Using random_shuffle() to permute a container. Mired in a sweaty mass of lameness.
  5. Using default_random_engine. Nauseatingly lame.
  6. Using % to get a random value in a range. Lame. Lame. Lame. Lame. Lame.
  7. Not using random_device to seed an RNG. Violently lame.
  8. Assuming that random_device is going to do the right thing on your platform. Uncontrollably lame.
  9. Failing to handle a possible exception from the construction or use of random_device. Totally lame.
  10. Using anything in the standard but mt19937 or mt19937_64 as a generator. Intensely lame.
  11. Putting mt19937 on the stack. In all my years of observing lameness, I have seldom seen something this lame.
  12. Seeding mt19937 with only one 32-bit word rather than its full state_size. Pushing the lameness envelope.
  13. Forgetting that uniform_int_distribution works on a closed interval. Thrashing in a sea of lameness.
  14. Passing random_device or a generator to generate_n() by value, forgetting to wrap it with ref(). Glaringly lame.
  15. Failing to use seed_seq to initialize a generator’s state properly. Indescribably lame.
  16. Not considering thread safety when using a generator. Floundering in an endless desert of lameness.
  17. Using a global generator without making it thread_local. Suffocating in self lameness.
  18. Using RAND_MAX instead of mt19937::max(). Perilously teetering on the edge of a vast chasm of lameness.

This list will undoubtedly grow as I continue to write lame code…

Amaze your friends, and confound your enemies!

November 4th, 2015

When I was young, I read lots of books with titles (or at least subheadings) along the lines of, “Amaze Your Friends and Confound Your Enemies” – a lot of them were filled with tricks and oddities like the Birthday Paradox, or the old saw about the elephant from Denmark.

It turns out that if you read and continue to read enough of this kind of thing, you can continue to amaze your friends well into adulthood! A lot of times this means appearing to be good at mental arithmetic. And the trick to being good at mental arithmetic is not to be especially fast at rote calculation. It lies in a web of knowledge about numbers.

An aside about maths teaching

I often think that number theory is poorly served by the mathematical curriculum almost everywhere. Kids learn times tables and are introduced to prime numbers, and then I think the number theory track more or less stops, and a student doesn’t meet it again until university-level maths. Which is a shame, because there are many interesting and fun problems in number theory that can be easily stated and understood by a ten-year-old but which still remain unsolved. Also, a more continuous (ha!) grounding in number theory would give us a better understanding of some very important features of the modern world – an obvious example being cryptography.

I was lucky enough, as a 12-13 year old, to have a recreational maths class at school for a year where we tackled problems together. It was distinctly constructivist in nature – whether by design or not – and it was a really fun class because we were typically all trying to solve a hard problem over the course of a few weeks. “Copying” other people’s work and building on it towards a solution was par for the course – as in real life! We took inspiration from the writings of people like Martin Gardner, Sam Loyd, Raymond Smullyan, Henry Dudeney and Eric Emmet. The teacher would ramp up the difficulty of problems as we went – and in mathematics there is almost always a way, having solved one problem, to remove a constraint or make it more general in some way to provide a step up to the next level.

A typical class exchange:

Teacher: Who can tell me how many squares there are on a chessboard?
Student A: 64!
Teacher: Ah yes. Correct. But I see you’re only counting the squares that are one square in size, as it were. I think there might be more squares there…
Student A: …
Student B: Wait a minute…
Class: *realization* *time passes, working-out*
Class: 204!
Teacher: Correct. For your chessboard there. But my chessboard has n squares on a side, not 8.
Class: *argh* *more time passes, probably a week*
Class: Um… (a few tries, and then)… n(n+1)(2n+1)/6 ?
Teacher: Right! But… hm… my chessboard got broken, now it’s not square any more, it’s n by m. Oh, and I want to know how many rectangles are on it.
Class: *mind blown*

Associations – it’s what brains do

Anyway, the human brain is fantastically good at constructing associations. And being “good at maths” is about having lots of those associations when it comes to numbers. Mathematicians often have this. Computer people have this facility when it comes to powers of 2, and it looks astounding to muggles when it comes out in another context (e.g. Biology class):

Teacher (thinking he is asking a hard question): This germ divides in two every hour. If we start with just one germ here, how many will we have after a day?
Nerdette in the back row (instantly): 16,777,216
Rest of the class: How the hell…?

When you have enough associations, they start to overlap and provide multiple ways to an answer. I was recently out with a group of friends and at the end of the night we came to pay the bill. There were seven of us, and the bill was $195. I immediately knew it was about $28 each, and I didn’t have to calculate, because of some associations:

  • 196 is 14 squared, which is 7 * 28. Immediate answer. (Square numbers up to 20 or so – very useful to know.)
  • In the UK weight is often measured in stones. 14lbs = 1 stone, so I know that 98lbs = 7 stones, i.e. 7*14 = 98. And 98*2 = 196 (because (100-2)*2 = 200-4 = 196), so 7*28 = 196 and each share is 14*2 = 28. Corroboration by overlapping association.

“Wizardry” with 1/7

Another second’s thought provides the exact amount per person, because 1/7 is a useful and interesting fraction to know. It has a recurring 6-digit pattern, and it cycles. Once you know the six digits of 1/7, it’s trivial to figure out/remember the other fractions:

  • 1/7 = 0.142857142857142857…
  • 2/7 = 0.285714…
  • 3/7 = 0.428571…
  • 4/7 = 0.571428…
  • 5/7 = 0.714285…
  • 6/7 = 0.857142…

This is a fun trick for kids: compute 142857 * 2, 142857 * 3, etc and see how the digits cycle. Then try 142857 * 7… and you get 999999. Neat.

Anyway, 28*7=196 but the bill is $195, which means actually everyone pays $28, less 1/7 of a dollar. So the exact figure is $27 and 85.714285… cents.

Fun for kids

When numbers are your friends, it’s easy to look like you’re a wizard. And it’s really just about forming those associations. When I see 41, I think of Euler’s famous expression x² + x + 41, which is prime for every x from 0 to 39. When I see 153, I think, “Hello, 1³ + 5³ + 3³!” And similarly for many other numbers and mathematical techniques, thanks to all that reading and playing with numbers as a child. These days I entertain my kids by having them do fun math tricks like:

Enter any three-digit number into your calculator (say 456)
Multiply by 7
Now multiply again by 11
Now multiply again by 13
The result is the original number “doubled up” (say 456456) – because 7 * 11 * 13 = 1001

I’m teaching them how to amaze their friends and confound their enemies!