Opinion

Anatomy of a Forecast Error

Ten years ago, I confidently warned of an emerging inflation problem. I was wrong, but the experience taught me about some of the most important mistakes we make when trying to forecast economic matters.

Dylan Grice


Ten years ago in a previous existence, I was a sell-side strategist at Société Générale working with my great friend Albert Edwards. It was the aftermath of the 2008 crash and most central banks had already started Quantitative Easing. One of the views I had at the time, which received quite a bit of attention, was that investors should brace themselves for an eventual inflation problem.

Dylan Grice is co-founder of Calderwood Capital Research, an investment company specialising in portfolio construction and alternative investments. Prior to founding Calderwood, Dylan was the Head of Liquid Investments at Calibrium, a prominent private investment office based in Zurich. There, he was responsible for the management of the firm’s liquid portfolio, the research underpinning that portfolio, and the teams involved in the effort. Prior to joining Calibrium in 2014, Dylan was a part of the consistently top-ranked Global Strategy team at Société Générale, and was individually ranked first in the Extel Survey of institutional investor opinion in 2011 and 2012. Dylan started his career at Dresdner Kleinwort Wasserstein as an international economist and later prop trader. He is a graduate of Strathclyde University and the London School of Economics.

«Within ten years», I confidently proclaimed at client meetings, in conferences and in my written research, «we’ll see the first signs of a nascent CPI problem» (which I defined as annual inflation in the core CPI of greater than 4%).

At the time, 30-year US Treasury breakevens stood at a scarcely believable 2.65%. Today they are 100 basis points lower, at around 1.65%. It wasn’t one of my best calls.

Ten years on, I continue to find myself worrying about inflation: how it might return, what it might do to my portfolio, and what protective steps (if any) I should take.

The problem is that when you’ve been so utterly wrong about something you felt so sure about, you can feel a bit apprehensive about your ability to get it right in the future. So I’ve been trying to retrace my steps, to see if I can understand what went wrong, and doing some soul searching along the way. Having done so, I must admit that it’s been a fruitful, even liberating, exercise. I’ve learned some interesting and I hope useful things along the way.

I know I’m not the only one in this industry who’s made a dud forecast either! So after mulling it over for a while, I thought I’d bare all and share with you, in all its gory detail, the folly of my past over-confident self.

What makes a good forecast?

It’s not as obvious a question as it sounds. For a start, in a probabilistic world one forecast error doesn’t say much about the thinking that went into the forecast, even though it’s the quality of the thinking that counts over time.

An even bigger problem is small sample size. How do we get enough data to know if our thought process is any good? Unlike Flash Boys who make hundreds of thousands of trades per hour and so can gather plenty of evidence to test their hypotheses, my ‘long-term’ forecast took ten years to garner just one data point.
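To make that concrete, here is a minimal sketch (my own illustration, with made-up hit rates, not anything from the original research) of how little a single outcome tells you about the process behind it. Suppose a ‘good’ forecasting process is right 60% of the time, a ‘bad’ one only 40%, and you start out agnostic between the two:

```python
# A minimal sketch, assuming hypothetical hit rates of 60% (good
# process) and 40% (bad process), and a 50/50 prior over which one
# I actually have. How much does a single miss change the picture?

p_good = 0.5              # prior: equal odds my process is good or bad
p_miss_if_good = 0.4      # even a good process misses 40% of the time
p_miss_if_bad = 0.6       # a bad process misses 60% of the time

# Bayes' rule after observing exactly one miss
posterior_good = (p_miss_if_good * p_good) / (
    p_miss_if_good * p_good + p_miss_if_bad * (1 - p_good)
)

print(f"P(good process | one miss) = {posterior_good:.2f}")  # 0.40
```

One ten-year data point moves the odds from 50% to 40%: evidence, but nowhere near a verdict on the quality of the thinking.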

Fortunately, data isn’t everything. Long before the mathematics of complexity and modern computer simulation had established such language, Hayek reached his pioneering understanding that the economic system was what we would today call a «complex adaptive system»: one which processes information and displays a spontaneous, undirected, emergent order. And he did so armed only with a priori reasoning. Einstein’s theory of relativity came about not through empirical testing but through thought experiments.

I’m not setting the bar quite that high but it does illustrate an important point, which is that correct thinking can get you quite far. Just how far depends on the domain, of course, and how smart you are in the first place. And since our domain is investing, and our smartness is average, I think the insight in Charles Ellis’ classic Winning the Loser’s Game might be quite pertinent.

For those of you who aren’t familiar with Ellis, he famously explained that the game of tennis was composed of two games: a ‘winner’s’ game, played by elite athletes, and a ‘loser’s’ game, played by everyone else. Elite athletes win by routinely completing difficult shots, and sometimes completing almost impossible ones. Amateur athletes lose by making more mistakes. They hit the ball into the net, miss the line and repeatedly double fault, as though their opponent were the game of tennis itself rather than the guy across the court.

Elite athletes play the ‘winner’s’ game; everyone else plays the ‘loser’s’ game. Ellis argued that nearly all players could improve not by practicing the harder and more spectacular cross-court winners, but by focusing on simply returning the ball safely back over the net. When playing the loser’s game, success goes to those who eliminate systematic mistakes.

When it comes to thinking, Hayek and especially Einstein were obviously players of the intellectual ‘winner’s’ game, playing winning shots with ease.

I, however, feel no shame in admitting that I am playing a loser’s game. There is a great deal of value to be had in not repeating stupid mistakes.

How not to make a prediction

Before I get down to the self-flagellation I should, in my defence, point out that I did do something right ten years ago: I at least made a forecast which was precise enough to be falsifiable.

Consider some of the vague predictions one might typically read daily in the news: «Trump risks losing American influence with his trade policy», or «Consumers risk talking themselves into a recession», or «Investors ignore the threat posed by Hong Kong’s insurrection at their peril.»

The problem with these, and views you often see which are like them, is that they are framed so vaguely that they are difficult to evaluate. For example, how exactly do we measure American «influence»? If there is a recession, how will we know it was caused by consumers «talking themselves into it»? And isn’t it a statement of the obvious that you ultimately ignore any risk at your peril, be it in Hong Kong or elsewhere?

Compare that to my own prediction: I had a well-defined variable of interest (core CPI); a predicted value (>4%); and a forecast time horizon (10 years). So that was something…
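That precision is what makes the forecast mechanically checkable. As a hypothetical sketch (the threshold, horizon and series come from the prediction above; the helper function itself is my own invention), the whole bet reduces to a simple test against the data, with no room for after-the-fact spin:

```python
from datetime import date

def forecast_came_true(core_cpi_yoy: dict[date, float],
                       threshold: float = 4.0,
                       start: date = date(2010, 1, 1),
                       horizon_years: int = 10) -> bool:
    """True if annual core CPI inflation exceeded the threshold at any
    point within the forecast horizon -- a purely mechanical check."""
    deadline = date(start.year + horizon_years, start.month, start.day)
    return any(value > threshold
               for when, value in core_cpi_yoy.items()
               if start <= when <= deadline)

# Illustrative, made-up readings: nothing above 4%, so the forecast failed.
readings = {date(2015, 6, 1): 1.8, date(2019, 12, 1): 2.3}
print(forecast_came_true(readings))  # False
```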

The thesis: «Government insolvency will drive inflation»

My prediction was based on the following thesis:

  1. Governments were bust. As Jagadeesh Gokhale showed in a series of papers (which later formed the basis of a book called The Government Debt Iceberg), when the unfunded costs of public sector welfare promises were added to sovereign balance sheets, as any company is required to do by IFRS accounting standards, the true debt of the major economies was several multiples of that which was «on-balance-sheet».
  2. Bankrupt governments had historically resorted to debt monetisation. Here I leant on the work of Reinhart and Rogoff who showed that over the past century, the incidence of sovereign debt crises correlated closely with that of inflation crises. Rather than cut back on welfare promises, governments typically printed the money to pay for them.
  3. In Quantitative Easing (QE) this process was already underway. Initially the money created would be contained in financial markets, but since QE would prove an impossible habit for central banks to kick, it would eventually spill out into the CPI.

It was a cogent enough narrative, and nothing was wrong with it in and of itself. The problem was more to do with how I calibrated that hypothesis (or didn’t, as it turned out), and how I defended it. I made four mistakes (actually more, but these will do for now).

Mistake #1: Ad hominem fallacy

An Ad Hominem attack is when you attack the person making the argument rather than the argument itself. It’s quite an embarrassing one, because it’s both basic and, frankly, quite unattractive. When I’ve seen others make it in the past, it’s been a red flag, indicating a bad-faith player who isn’t actually interested in a rational discussion aimed at getting to the truth. So it was quite an odd feeling to realise that I’d been guilty of making it. But I did.

Obvious examples of the Ad Hominem fallacy would be something like «Of course X would say that! You know his parents are rich?» Or, «… well what does Y know about that? I hear she’s a single parent.» 

Interestingly, the Ad Hominem fallacy can bias thinking towards an incorrect argument too. For example, «Z makes a great point here, he cares so passionately about social justice.»

In what way was I falling into this trap? Well, here’s a quote from a piece I wrote for Société Générale in January 2010:

«James Montier said that Bernanke was the worst economist of all time. Now, I’m not sure I agree with James on this one because I can’t make up my mind, sometimes I think it’s the Bernanke, other times I think it’s Krugman. But usually I think nearly all economists to be the joint worst economists of all time. So I have a lot of sympathy with the idea that if the consensus macroeconomic opinion is worried about something, it probably isn't worth worrying about. In fact, if they worry about deflation, I'm going to worry about inflation.»

Those of you who know me know I’m not so keen on economists. The problem is that I have quite a passionate belief that economics is a science, and that the right techniques and tools can and will uncover the underlying mathematical structure of how an economic system works. I feel that macroeconomists are bad scientists, and it frustrates me.

But allowing this kind of bias in my thinking wasn’t very smart. Even if I do believe that guys like Paul Krugman and Ben Bernanke are intellectual phoneys whose opinions contain no information, that’s not the same as their opinions having negative information. Taking one side of an argument just because some economists were on the other side was pretty dumb.

Mistake #2: Denying the premise

This can be a tricky one too, because on the surface it seems so obvious. Suppose you have a premise A which leads to a conclusion B («if A then B»). The mistake is to fall into the trap of denying the conclusion because the premise isn’t true (not_A therefore not_B).

Consider the statement: «Peter is a man; men like cars» and then let’s evaluate the hypothesis «Anna likes cars». Since Anna is not a man, we know the premise to be untrue. Yet it would be absurd to then conclude that since Anna is not a man, Anna doesn’t like cars.
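If you want to see the invalidity mechanically, you can brute-force it: enumerate every truth assignment and look for one where «if A then B» and «not A» both hold and yet B is still true. A small sketch of that check (my own illustration):

```python
from itertools import product

# Denying the premise: from "if A then B" and "not A", conclude "not B".
# The inference is valid only if no assignment makes both premises true
# while the conclusion fails. Search for a counterexample:
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if ((not a) or b)   # premise 1: "if A then B" (material implication)
    and (not a)         # premise 2: "not A"
    and b               # conclusion "not B" fails, because B is true
]

print(counterexamples)  # [(False, True)]: Anna is not a man, yet likes cars
```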

Obviously, this is a simplistic example, and one we’d think would be easy to avoid, but it often crops up in quite subtle ways in the real world, even during what looks and feels like a very rational discussion.

For example, a counter-argument I used to hear very often was, «the problem today is that central banks can’t create inflation, even though they want to. Therefore inflation is unlikely to materialise.»

Now, this argument can be seen to be flawed by considering what would happen if the Fed were to open a bank account for every US citizen and deposit one trillion Dollars into it. What do you think would happen to CPI inflation then? Answer: it would explode. Therefore, central banks can always create inflation if they really want to.
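The intuition behind that thought experiment can be written down using the textbook quantity-of-money identity (my gloss, not something the original argument spelled out):

```latex
% Quantity theory of money:
%   M = money supply, V = velocity of money,
%   P = price level,  Y = real output
MV = PY \quad\Longrightarrow\quad P = \frac{MV}{Y}
```

Multiply M by orders of magnitude overnight while real output Y and velocity V cannot adjust by anything like as much, and the price level P has nowhere to go but up.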

That bit was fine: I think it’s clear the premise is wrong. Where I went wrong was in concluding that since the premise («central banks can’t create inflation») was untrue, the conclusion («inflation is unlikely to materialise») had to be wrong too.

I was falling into the «not_A therefore not_B» trap. I made that mistake quite a lot.

Mistake #3: The monocausal fallacy

This fallacy says that «A causes B, therefore only A causes B». By focussing so completely on money printing and government solvency, I ended up ignoring the many other potential forces which might bear on future inflation. Einstein was right when he said imagination was more important than knowledge.

In extremis, as my trillion Dollar thought experiment showed, money printing will be the driver of inflation. But most situations aren’t extreme, and in those non-extreme situations (which is where most of us live), other forces come into play. Developments like the post-WW2 wave of globalisation, the relentless penetration of e-commerce and the beginnings of demographic decline were all credible drivers of past disinflation. Where was my analysis of them?

Mistake #4: The toothbrush problem

This might have been the biggest mistake of all. People have a preference for their own ideas, a kind of «endowment effect» for hypotheses. Academics have a name for it too: the «toothbrush problem», because theories are like toothbrushes – everyone prefers their own.

The trouble with the behavioural psychology stuff is that being aware of your biases doesn’t seem to be of much help in escaping them. Similarly, I was aware of all of these logical errors at the time, yet still managed to make them. Why?

Lessons learned

When I reflect on it, the overriding problem was that I fell in love with my narrative. That was the mistake which opened the door to all the others. I enjoyed presenting it, arguing it and finding ever more ways to confirm that this alone was the single most important thing everyone had to understand.

I drifted into a comfort zone, where the familiar warmth of my own little narrative was easier than the awkward vulnerability you have to embrace when truly wrestling with reality. All these mistakes – the ad hominem fallacy, the denied premise, the simplistic mono-cause – became my comfort blankets.

I see people do it now. I see people «defend» a thesis because they’ve allowed their personality to be somehow wrapped up in it: gold bugs saying the same stuff gold bugs said in the early 1980s without stopping to ask themselves what event would cause them to change their opinion; «value investors» trotting out the same mantra about the current irrationality of markets, without even questioning how they missed out on the enormous value creation in areas of the market they had sneered at as being for «growth idiots»; macro managers complaining that the Fed had compressed volatility so much they couldn’t make any money; and so on.

We have all done it, and we all do it. But the remedy should be obvious: slow down; evaluate as many arguments as you can, as objectively as you can; depersonalise them; and be wary of what you do with a denied premise!

It all sounds so simple, because it is. It’s just not easy. It recalls the depth of Richard Feynman’s insight: «The first principle is that you must not fool yourself – and you are the easiest person to fool.»