(Start at the beginning of the series if you want more context.)

**Multiplying power series**

Now we come to something that took considerable thought: how to multiply ranges. Doug McIlroy’s Haskell version of range multiplication is succinct, lazily evaluated and recursive.

```haskell
(f:ft) * gs@(g:gt) = f*g : ft*gs + series(f)*gt
```

Which is to say, mathematically, that if we have:

F = f_0 + f_1 x + f_2 x² + …

G = g_0 + g_1 x + g_2 x² + …

then the product is given by:

FG = f_0 g_0 + (f_0 g_1 + f_1 g_0) x + (f_0 g_2 + f_1 g_1 + f_2 g_0) x² + …

In C++, this function can be naively written:

```cpp
template <typename R1, typename R2>
auto mult(R1&& r1, R2&& r2)
{
  auto h1 = *ranges::begin(std::forward<R1>(r1));
  auto h2 = *ranges::begin(std::forward<R2>(r2));
  return view::concat(
      view::single(h1 * h2),
      add(mult(view::tail(r1), r2),
          view::transform(view::tail(r2),
                          [h1] (auto x) { return x * h1; })));
}
```

If this could be compiled, it might even work. I think the lazy evaluation of the resulting range, together with the `monoidal_zip` behaviour of `add`, would terminate what looks like infinite recursion. But this doesn’t compile, because we don’t just have to deal with recursion at runtime: the recursion never terminates at compile time either, because each recursive call adds another layer to the return type the compiler is trying to infer. Clang initially complained about exceeding the max template depth (default 256), and when I upped this to 1024, clang laboured over it for an impressive 50 seconds before crashing.

**Back to pencil and paper**

So I needed to think harder. It seemed that I would need to write a view representing multiplication of two series, and I started playing with polynomials – for the moment, assuming they were of the same degree. After a while I noticed something useful:

(f_0 + f_1 x + f_2 x²)(g_0 + g_1 x + g_2 x²) = f_0 g_0 + (f_0 g_1 + f_1 g_0) x + (f_0 g_2 + f_1 g_1 + f_2 g_0) x² + (f_1 g_2 + f_2 g_1) x³ + f_2 g_2 x⁴

Each successive term is the sum of the products which contribute the appropriate power of x. And the computation of any given term is:

c_n = f_0 g_n + f_1 g_{n-1} + … + f_n g_0

This is `inner_product` of a range with a reversed range! And we know we can reverse the range as long as it has a bidirectional iterator, because it’s just what we’ve already counted up to from the beginning – we already have the end iterator in hand. Next I had to think about how to deal with the cardinality of the product of two polynomials: it’s not enough to iterate to the end of both, even if they’re of the same degree, because there are higher powers of x that get generated in the “tail” of the product. The cardinality is actually the sum of the input cardinalities, minus one.

```cpp
template <typename C1, typename C2>
using series_mult_cardinality = std::integral_constant<
    cardinality,
    C1::value == infinite || C2::value == infinite ? infinite :
    C1::value == unknown || C2::value == unknown ? unknown :
    C1::value >= 0 && C2::value >= 0
        ? static_cast<ranges::cardinality>(C1::value + C2::value - 1)
        : finite>;
```

**A trick of the tail**

And once we reach the end of both range iterators we are multiplying, we need to go an extra distance equal to the length we’ve just covered, to obtain the higher power coefficients. I do this with a `length_` variable that tracks the distance advanced, and a `tail_` variable that increments after the ranges are exhausted until it reaches `length_`, at which point we are at the end. The end sentinel comparison looks like this:

```cpp
bool equal(cursor<IsConst> const &pos) const
{
  return detail::equal_to(pos.it1_, end1_) &&
         detail::equal_to(pos.it2_, end2_) &&
         pos.tail_ == pos.length_;
}
```

The only remaining thing is to deal with different degree polynomials, for which I use the same approach as `monoidal_zip`, keeping a `diff_` variable and doing some bookkeeping. Computing the current value for the iterator becomes the inner product of one range leading up to the current position with the reverse of its counterpart in the other range (with appropriate adjustments to deal with the tail and the different degrees of the ranges):

```cpp
auto compute_current() const
{
  auto r1 = make_range(
      begin(rng_->r1_) + tail_ + (diff_ > 0 ? diff_ : 0), it1_);
  auto r2 = view::reverse(make_range(
      begin(rng_->r2_) + tail_ + (diff_ < 0 ? -diff_ : 0), it2_));
  return inner_product(r1, r2, 0);
}
```

So `view::series_mult` works. It isn’t perfect, because it imposes a bidirectional constraint on the arguments. It is O(n²), which can’t be avoided (we must ultimately multiply every coefficient by every other coefficient). To avoid the bidirectional constraint and just traverse the sequences once, I would need to go back to the recursive definition and reformulate it, somehow avoiding the recursive type, and I can think of a few possible ways to do that:

- Explicitly write a recursive type function, which includes a base case.
- Type-erase the multiplied range somehow (this would involve throwing away cardinality and maybe some other things, so it might not lead to a good outcome).
- Break the type recursion with a technique akin to `recursive_variant`.
- Somehow collapse the type of the range, though I’m not sure how; the only real way I can see to do this is by copying to a container and making a fresh range from it.
- Write my own system for lazy evaluation… which obviates ranges, really.

None of these are easy (and some have undesirable consequences). I continue to think on the matter, because Doug McIlroy’s Haskell formulations are recursive and beautiful, and multiplication is the easiest of the recursive definitions. Things get even more tricky with division and composition…

**Tackling exponentiation**

As a footnote to multiplication, I tried to define exponentiation. (I recently read the excellent From Mathematics to Generic Programming, and Chapter 7 is very instructive in the process of generalizing exponentiation from the basis of multiplication.) But I ran into basically the same issue: writing recursive functions to produce ranges is tricky because of the recursive type expansion. Maybe in this case, the problem is defined enough that I can write a recursive type function counterpart to avoid the compiler’s non-terminating type inference.

For the next part of this series, I’ll look at an easier topic: differentiation and integration of power series.

I’m trying to follow this part but am not seeing it – could you give an example?

F = f_0 : f_1 : f_2 : …

G = g_0 : g_1 : g_2 : …

FG = f_0g_0 : f_0g_1 + f_1g_0 : f_0g_2 + f_1g_1 + f_2g_0 : …

Consider (3 + 2x + x²)².

F = 3 : 2 : 1

G = 3 : 2 : 1

FG = 3*3 : 3*2 + 2*3 : 3*1 + 2*2 + 1*3 : 2*1 + 1*2 : 1*1

= 9 : 12 : 10 : 4 : 1

http://www.wolframalpha.com/input/?i=+%283+%2B+2x+%2B+x%C2%B2%29%C2%B2