Memos from Howard Marks

The Illusion of Knowledge



I’ve been expressing my disregard for forecasts for almost as long as I’ve been writing my memos, starting with The Value of Predictions, or Where’d All This Rain Come From in February 1993.  Over the years since then, I’ve explained at length why I’m not interested in forecasts – a few of my favorite quotes echoing my disdain head the sections below – but I’ve never devoted a memo to explaining why making helpful macro forecasts is so difficult.  So here it is. 



Food for Thought


There are two kinds of forecasters: those who don’t know, and those who don’t know they don’t know.


– John Kenneth Galbraith


Shortly after putting the finishing touches on I Beg to Differ in July, I attended a lunch with a number of experienced investors, plus a few people from outside the investment industry.  It wasn’t organized as a social occasion but rather as an opportunity for those present to exchange views regarding the investment environment.


At one point, the host posed a series of questions: What’s your expectation regarding inflation?  Will there be a recession, and if so, how bad?  How will the war in Ukraine end?  What do you think is going to happen in Taiwan?  What’s likely to be the impact of the 2022 and ’24 U.S. elections?  I listened as a variety of opinions were expressed. 


Regular readers of my memos can imagine what went through my mind: “Not one person in this room is an expert on foreign affairs or politics.  No one present has particular knowledge of these topics, and certainly not more than the average intelligent person who read this morning’s news.”  None of the thoughts expressed, even on economic matters, seemed much more persuasive than the others, and I was absolutely convinced that none were capable of improving investment results.  And that’s the point.


It was that lunch that started me thinking about writing yet another memo on the futility of macro forecasting.  Soon thereafter a few additional inputs arrived – a book, a piece in Bloomberg Opinion, and a newspaper article – all of which supported my thesis (or perhaps played to my “confirmation bias” – i.e., the tendency to embrace and interpret new information in a manner that confirms one’s preexisting views).  Together, the lunch and these items inspired this memo’s theme: the reasons why forecasts are rarely helpful.


In order to produce something useful – be it in manufacturing, academia, or even the arts – you must have a reliable process capable of converting the required inputs into the desired output.  The problem, in short, is that I don’t think there can be a process capable of consistently turning the large number of variables associated with economies and financial markets (the inputs) into a useful macro forecast (the output).   


The Machine


The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge. 


– Daniel J. Boorstin


In my first decade or so working at First National City Bank, a word was in vogue that I haven’t heard in a long time: econometrics.  This is the practice of looking for relationships within economic data that can lead to valid forecasts.  Or, to simplify, I’d say econometrics is concerned with building a mathematical model of an economy.  Econometricians were heard from a great deal in the 1970s, but I don’t believe they are any longer.  I take that to mean their models didn’t work.


Forecasters have no choice but to base their judgments on models, be they complex or informal, mathematical or intuitive.  Models, by definition, consist of assumptions: “If A happens, then B will happen.”  In other words, relationships and responses.  But for us to willingly employ a model’s output, we have to believe the model is reliable.  When I think about modeling an economy, my first reaction is to think about how incredibly complicated it is.


The U.S., for example, has a population of around 330 million.  All but the very youngest and perhaps the very oldest are participants in the economy.  Thus, there are hundreds of millions of consumers, plus millions of workers, producers, and intermediaries (many people fall into more than one category).  To predict the path of the economy, you have to forecast the behavior of these people – if not for every participant, then at least for group aggregates. 


A real simulation of the U.S. economy would have to deal with billions of interactions or nodes, including interactions with suppliers, customers, and other market participants around the globe.  Is it possible to do this?  Is it possible, for example, to predict how consumers will behave (a) if they receive an additional dollar of income (what will be the “marginal propensity to consume”?); (b) if energy prices rise, squeezing other household budget categories; (c) if the price for one good rises relative to others (will there be a “substitution effect”?); or (d) if the geopolitical arena is roiled by events continents away? 


Clearly, this level of complexity necessitates the frequent use of simplifying assumptions.  For example, it would make modeling easier to be able to assume that consumers won’t buy B in place of A if B isn’t either better or cheaper (or both).  And it would help to assume that producers won’t price X below Y if it doesn’t cost less to produce X than Y.  But what if consumers are attracted to the prestige of B despite (or even because of) its higher price?  And what if X has been developed by an entrepreneur who’s willing to lose money for a few years to gain market share?  Is it possible for a model to anticipate the consumer’s decision to pay up and the entrepreneur’s decision to make less (or even lose) money?


Further, a model will have to predict how each group of participants in the economy will behave in a variety of environments.  But the vagaries are manifold.  For example, consumers may behave one way at one moment and a different way at another similar moment.  Given the large number of variables involved, it seems impossible that two “similar” moments will play out exactly the same way, and thus that we’ll witness the same behavior on the part of participants in the economy.  Among other things, participants’ behavior will be influenced by their psychology (or should I say their emotions?), and their psychology can be affected by qualitative, non-economic developments.  How can those be modeled?


How can a model of an economy be comprehensive enough to deal with things that haven’t been seen before, or haven’t been seen in modern times (meaning under comparable circumstances)?  This is yet another example of why a model simply can’t replicate something as complex as an economy.


Of course, a prime example of this is the Covid-19 pandemic.  It caused much of the world’s economy to be shut down, turned consumer behavior on its head, and inspired massive government largesse.  What aspect of a pre-existing model would have enabled it to anticipate the pandemic’s impact?  Yes, we had a pandemic in 1918, but the circumstances were so different (no iPhones, Zoom calls, etc. ad infinitum) as to render economic events during that time of little or no relevance to 2020.


In addition to the matter of complexity and the difficulty of capturing psychological fluctuations and dynamic processes, think about the limitations that bear on an attempt to predict something that can’t be expected to remain unchanged.  Shortly after starting on this memo, I received my regular weekly edition of Morgan Housel’s always-brilliant newsletter.  One of the articles described a number of observations from other arenas that have relevance to our world of economics and investing.  Here are two, borrowed from the field of statistics, that I think are pertinent to the discussion of economic models and forecasts (“Little Ways the World Works,” Morgan Housel, Collaborative Fund, July 20, 2022):


Stationarity: An assumption that the past is a statistical guide to the future, based on the idea that the big forces that impact a system don’t change over time.  If you want to know how tall to build a levee, look at the last 100 years of flood data and assume the next 100 years will be the same.  Stationarity is a wonderful, science-based concept that works right up until the moment it doesn’t.  It’s a major driver of what matters in economics and politics.  [But in our world,] “Things that have never happened before happen all the time,” says Stanford professor Scott Sagan.


Cromwell’s rule: Never say something cannot occur . . . .  If something has a one-in-a-billion chance of being true, and you interact with billions of things during your lifetime, you are nearly assured to experience some astounding surprises, and should always leave open the possibility of the unthinkable coming true. 


Stationarity might be fairly assumed in the realm of the physical sciences.  For example, thanks to the law of universal gravitation, under given atmospheric conditions, a falling object can always be counted on to accelerate at the same rate.  It always has, and it always will.  But few processes can be counted on to be stationary in our world, especially given the role played by psychology, emotion, and human behavior, and their propensity to vary over time.


Take, for example, the relationship between unemployment and inflation.  For most of the last 60 years, economists relied on the Phillips curve, which holds that wage inflation will rise as the unemployment rate declines, because when there are fewer idle workers on the sidelines, employees gain bargaining power and can successfully negotiate for higher wages.  It was also believed for decades that an unemployment rate around 5.5% indicated “full employment.”  But unemployment fell below 5.5% in March 2015 (and reached a 50-year low of 3.5% in September 2019), yet there was no significant increase in inflation (in wages or otherwise) until 2021.  So the Phillips curve described an important relationship that was built into economic models for decades but, seemingly, didn’t apply over much of the last decade.
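For readers who want to see the relationship written out, here’s a minimal sketch of the textbook (expectations-augmented) form of the curve.  The functional form and every parameter below are illustrative assumptions chosen only to show the shape of the relationship, not estimates of anything:

```python
# Illustrative expectations-augmented Phillips curve (textbook form):
#
#     wage_inflation = expected_inflation - beta * (unemployment - natural_rate)
#
# With beta > 0, lower unemployment should mean higher wage inflation.

def phillips_wage_inflation(unemployment, expected_inflation=0.02,
                            natural_rate=0.055, beta=0.5):
    """Predicted wage inflation under assumed (illustrative) parameters."""
    return expected_inflation - beta * (unemployment - natural_rate)

# The curve "predicts" rising wage inflation as unemployment falls below 5.5% . . .
for u in (0.055, 0.045, 0.035):
    print(f"unemployment {u:.1%} -> predicted wage inflation {phillips_wage_inflation(u):.1%}")

# . . . yet, as noted above, sub-5.5% unemployment after March 2015 produced no such
# surge until 2021 -- an example of a modeled relationship that simply stopped holding.
```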


Cromwell’s rule is also relevant.  Unlike in the physical sciences, in markets and economies there’s very little that absolutely has to happen or definitely can’t happen.  Thus, in my book Mastering the Market Cycle, I listed seven terms that investors should purge from their vocabularies: “never,” “always,” “forever,” “can’t,” “won’t,” “will,” and “has to.”  But if it’s true that those words have to be discarded, then so too must the idea that one can build a model that can dependably predict the macro future.  In other words, very little is immutable in our world.
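Cromwell’s arithmetic is easy to underestimate, so here’s a minimal back-of-the-envelope sketch.  The per-trial probability and the trial counts are purely illustrative assumptions:

```python
# Cromwell's rule in numbers: even a one-in-a-billion event becomes likely once
# you live through billions of independent "trials."  Counts below are assumed.

p = 1e-9                                   # chance of the "unthinkable" per trial
for n in (1_000_000_000, 2_000_000_000, 3_000_000_000):
    p_at_least_once = 1 - (1 - p) ** n
    print(f"{n:>14,} trials -> P(at least one surprise) = {p_at_least_once:.1%}")

# Roughly 63%, 86%, and 95% -- always leave open the possibility of the unthinkable.
```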


The unpredictability of behavior is a favorite topic of mine.  Noted physicist Richard Feynman once said, “Imagine how much harder physics would be if electrons had feelings.”  The rules of physics are reliable precisely because electrons always do what they’re supposed to do.  They never forget to perform.  They never rebel.  They never go on strike.  They never innovate.  They never behave in a contrary manner.  But none of these things is true of the participants in an economy, and for that reason their behavior is unpredictable.  And if the participants’ behavior is unpredictable, how can the workings of an economy be modeled?


What we’re talking about here is the future, and there’s simply no way to deal with the future that doesn’t require the making of assumptions.  Small errors in assumptions regarding the economic environment and small changes in participants’ behavior can make differences that are highly problematic.  As mathematician and meteorologist Edward Lorenz famously suggested, “The flapping of a butterfly’s wings in Brazil could set off a tornado in Texas.” (Historian Niall Ferguson references this remark in the article I discuss below.)


Thinking about all the above, can we ever consider a model of an economy to be reliable?  Can a model replicate reality?  Can it describe the millions of participants and their interactions?  Are the processes it attempts to model dependable?  Can the processes be reduced to mathematics?  Can mathematics capture the qualitative nuances of people and their behavior?  Can a model anticipate changes in consumer preferences, changes in the behavior of businesses, and participants’ reactions to innovation?  In other words, can we trust its output? 


Clearly, economic relationships aren’t hard-wired, and economies aren’t governed by schematic diagrams (which models try to simulate).  Thus, for me, the bottom line is that the output from a model may point in the right direction much of the time, when the assumptions aren’t violated.  But it can’t always be accurate, especially at critical moments such as inflection points . . . and that’s when accurate predictions would be most valuable. 



The Inputs


No amount of sophistication is going to allay the fact that all of your knowledge is about the past and all your decisions are about the future.


– Ian H. Wilson (former GE executive)


Having considered the incredible complexity of an economy and the need to make simplifying assumptions that decrease any economic model’s accuracy, let’s now think about the inputs a model requires – the raw materials from which forecasts are manufactured.  Will the estimated inputs prove valid?  Can we know enough about them for the resulting forecast to be meaningful?  Or will we simply be reminded of the ultimate truth about models: “garbage in, garbage out”?  Clearly, no forecast can be better than the inputs on which it’s based. 


Here’s what Niall Ferguson wrote in Bloomberg Opinion on July 17:


Consider for a moment what we are implicitly asking when we pose the question: Has inflation peaked? We are not only asking about the supply of and demand for 94,000 different commodities, manufactures and services. We are also asking about the future path of interest rates set by the Fed, which – despite the much-vaunted policy of “forward guidance” – is far from certain. We are asking about how long the strength of the dollar will be sustained, as it is currently holding down the price of U.S. imports.


But there’s more. We are at the same time implicitly asking how long the war in Ukraine will last, as the disruption caused since February by the Russian invasion has significantly exacerbated energy and food price inflation. We are asking whether oil-producing countries such as Saudi Arabia will respond to pleas from Western governments to pump more crude. . . .


We should probably also ask ourselves what the impact on Western labor markets will be of the latest Covid omicron sub-variant, BA.5.  UK data indicate that BA.5 is 35% more transmissible than its predecessor BA.2, which in turn was over 20% more transmissible than the original omicron.


Good luck adding all those variables to your model.  It is in fact just as impossible to be sure about the future path of inflation as it is to be sure about the future path of the war in Ukraine and the future path of the Covid pandemic.


I found Ferguson’s article so relevant to the subject of this memo that I’m including a link to it here.  It makes a lot of important points, although I beg to differ in one regard.  Ferguson says above, “It is in fact just as impossible to be sure about the future path of inflation as it is to be sure about the future path of the war in Ukraine and the future path of the Covid pandemic.”  I think accurately predicting inflation is “more impossible” (if there is such a thing) than predicting the outcomes of the other two, since doing so requires being right about both of those outcomes and a thousand other things.  How can anyone possibly get all these things right?


Here’s my rough description of the forecasting process from The Value of Predictions:


I imagine that for most money managers, the process goes like this: “I predict the economy will do A.  If A happens, interest rates should do B.  With interest rates of B, the stock market should do C.  Under that environment, the best performing sector should be D, and stock E should rise the most.”  The portfolio expected to do best under that scenario is then assembled.


But how likely is E anyway?  Remember that E is conditioned on A, B, C and D.  Being right two-thirds of the time would be a great accomplishment in the world of forecasting.  But if each of the five predictions has a 67% chance of being right, then there is a 13% probability that all five will be correct and that the stock will perform as expected. 


Predicting event E on the basis of assumptions concerning A, B, C and D is what I call single-scenario forecasting.  In other words, if what was assumed regarding A, B, C or D turns out to have been erroneous, the forecasted outcome for E is unlikely to materialize.  All of the underlying forecasts have to be right in order for E to turn out as predicted, and that’s highly improbable.  No one can invest intelligently without considering (a) the other possible outcomes for each element, (b) the likelihood of these alternative scenarios, (c) what would have to happen for one of them to be the actual outcome, and (d) what the impact on E would be.
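The 13% figure is just compounded probability.  Here’s a minimal sketch of the arithmetic, using the illustrative 67% accuracy rate from the excerpt above:

```python
# Five chained predictions -- economy (A), rates (B), market (C), sector (D),
# stock (E) -- each assumed to be right 67% of the time, as in the excerpt above.

p_each = 0.67
n_links = 5

p_all_right = p_each ** n_links
print(f"P(all {n_links} predictions correct) = {p_all_right:.1%}")    # ~13.5%

# Even a forecaster who's right 80% of the time on each link gets the whole
# chain right only about a third of the time:
print(f"P(all correct at 80% per link)  = {0.80 ** n_links:.1%}")     # ~32.8%
```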


Ferguson’s article raises an interesting question about economic modeling: What’s to be assumed regarding the general macro environment under which economic participants will operate?  Doesn’t this question indicate an insoluble feedback loop: To predict the overall performance of the economy, we need to make assumptions about, for example, consumer behavior.  But to predict consumer behavior, don’t we need to make assumptions regarding the overall economic environment?


In Nobody Knows II (March 2020), my first memo of the pandemic, I mentioned that in a discussion of the coronavirus, Harvard epidemiologist Marc Lipsitch had said there are (a) facts, (b) informed extrapolations from analogies to other viruses, and (c) opinion or speculation.  This is standard fare when we deal with uncertain events.  In the case of economic or market forecasts, we have a vast trove of history and lots of analogous past events from which to extrapolate (neither of which was the case with Covid-19).  But even when these things are used as inputs for a well-constructed forecasting machine, they’re still highly unlikely to be predictive of the future.  They may be useful fodder, or they may be garbage. 


To illustrate, people often ask me which of the past cycles I’ve experienced was most like this one.  My answer is that current developments bear a passing resemblance to some past cycles, but there is no absolute parallel.  The differences are profound in every case and outweigh the similarities.  And even if we could find an identical prior period, how much reliance should we put on a sample size of one?  I’d say not much.  Investors rely on historical references (and the forecasts they foster) because they fear that without them they’d be flying blind.  But that doesn’t make them reliable.



Unpredictable Influences


Forecasts create the mirage that the future is knowable. 


– Peter Bernstein


We can’t consider the reasonableness of forecasting without first deciding whether we think our world is one of order or of randomness.  Put simply, is it entirely predictable, entirely unpredictable, or something in between?  The bottom line for me is that it’s in between, but unpredictable enough that most forecasts are unhelpful.  And since our world is predictable at some times and unpredictable at others, what good are forecasts if we can’t tell which is which?


I learned a new word from reading Ferguson’s article: “deterministic.”  It’s defined by Oxford Languages as “causally determined by preceding events or natural laws.”  The world is much simpler when we deal with things that function according to rules . . . like Feynman’s electrons.  But, clearly, economies and markets aren’t governed by natural laws – thanks to the involvement of people – and preceding events may “set the stage” or “tend to repeat,” but events rarely unfold in the same way twice.  Thus, I believe the processes that constitute the operation of economies and markets aren’t deterministic, meaning they aren’t predictable.


Further, the inputs clearly are undependable.  Many are subject to randomness, such as weather, earthquakes, accidents, and deaths.  Others involve political and geopolitical issues – ones we’re aware of and ones that haven’t yet surfaced. 


In his Bloomberg Opinion article, Ferguson mentioned the English writer G. K. Chesterton.  That reminded me to include a Chesterton quote that I used in Risk Revisited Again (June 2015):


The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one.  The commonest kind of trouble is that it is nearly reasonable, but not quite.  Life is not an illogicality; yet it is a trap for logicians.  It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait.  (Emphasis added)


Going back to the lunch described on page one, the host opened the proceedings roughly as follows: “In recent years, we’ve experienced the Covid-19 pandemic, the surprising success of the Fed’s rescue actions, and the invasion of Ukraine.  This has been a very challenging environment, since all of these developments arrived out of the blue.”  I imagine the implication for him was that the attendees should let themselves off the hook for the inaccuracy of their 2020-22 forecasts and go back to work predicting future events and betting on their judgments.  But my reaction was quite different: “The list of events that shaped the current environment is quite extensive.   Doesn’t the fact that no one was able to predict any of them convince those present that they should give up on forecasting?”


For another example, let’s think back to the fall of 2016.  There were two things that almost everyone was sure of: (a) Hillary Clinton would be elected president and (b) if for some reason Donald Trump were elected instead, the markets would tank.  Nonetheless, Trump won, and the markets soared.  The impact on the economy and markets over the last six years was profound, and I’m confident no forecast that took a conventional view of the coming 2016 election got the period since then correct.  Again, shouldn’t that be enough to convince people that (a) we don’t know what’s going to happen and (b) we don’t know how the markets will react to what happens?



Do Forecasts Add Value?


It ain’t what you don’t know that gets you into trouble.  It’s what you know for sure that just ain’t so.


– Mark Twain


As I mentioned in my recent memo Thinking About Macro, in the 1970s we used to describe an economist as “a portfolio manager who never marks to market.”  In other words, economists make forecasts; events prove them either wrong or right; they go on to make new forecasts; but they don’t keep track of how often they get it right (or they don’t publish the stats).


Can you imagine hiring a money manager (or being hired, if you are a money manager) without reference to a track record?  And yet, economists and strategists stay in business, presumably because there are customers for their forecasts, despite there being no published records.


Are you a consumer of forecasts?  Are there forecasters and economists on staff where you work?  Or do you subscribe to their publications and invite them in for briefings, as was the case with my previous employers?  If so, do you know how often each has been right?  Have you found a way to rigorously determine which ones to rely on and which to ignore?  Is there a way to quantify their contributions to your investment returns?  I ask because I’ve never seen or heard of any research along these lines.  The world seems incredibly short on information regarding the value added by macro forecasts, especially given the large number of people involved in this pursuit.


Despite the lack of evidence regarding its value, macro forecasting goes on.  Many of the forecasters are part of teams managing equity funds, or they provide advice and forecasts to those teams.  What we know for sure is that actively managed equity funds have been losing market share to index funds and other passive vehicles for decades due to the poor performance of active management, and as a result, actively managed funds now account for less than half of the capital in U.S. equity mutual funds.  Could the unhelpful nature of macro forecasts be part of the reason?



The only place I know to look for quantification regarding this issue is the performance of so-called macro hedge funds.  Hedge Fund Research (HFR) publishes broad hedge fund performance indices as well as a number of sub-indices.  Below is the long-term performance of a broad hedge fund index, a macro fund sub-index, and the Standard & Poor’s 500 Index. 



                                HFRI Hedge Fund    HFRI Macro        S&P 500
                                Index*             (Total) Index     Index

 5-year annualized return*      5.2%               5.0%              12.8%
 10-year annualized return*     5.1%               2.8%              13.8%

* Performance through July 31, 2022.  The broad hedge fund index shown is the Fund Weighted Composite Index.



What the table above shows is that, according to HFR, the average hedge fund woefully underperformed the S&P 500 in the period under study, and the average macro fund did considerably worse (especially in the period from 2012 to 2017).  Given that investors continue to entrust roughly $4.5 trillion of capital to hedge funds, the funds must deliver some benefit other than returns, but it’s not obvious what that could be.  This seems especially true of the macro funds.
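To put those annualized figures in perspective, here’s a minimal compounding sketch based on the 10-year returns in the table above.  It’s arithmetic only; it ignores fees, risk, and the timing of cash flows:

```python
# Growth of $1 over ten years at the annualized returns shown in the table above.

ten_year_annualized = {
    "HFRI Fund Weighted Composite Index": 0.051,
    "HFRI Macro (Total) Index":           0.028,
    "S&P 500 Index":                      0.138,
}

for name, r in ten_year_annualized.items():
    print(f"{name:38s} $1 grows to ${(1 + r) ** 10:.2f}")

# Roughly $1.64, $1.32, and $3.64 -- the index investor ends up with more than
# twice the ending wealth of the average hedge fund investor.
```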


To support my opinion regarding forecasts, I’ll cite a rare example of self-assessment: a seven-page feature that appeared in the Sunday Opinion section of The New York Times on July 24 titled “I Was Wrong.”  In it, eight Times opinion writers opened up about incorrect predictions they made and flawed advice they had given.  The most relevant here is Paul Krugman, who wrote a confession titled “I Was Wrong About Inflation.”  I’ll string together some excerpts:


In early 2021, there was an intense debate among economists about the likely consequences of the American Rescue Plan . . . .  I was on [the side that was less concerned about the impact on inflation].  As it turned out, of course, that was a very bad call. . . . 


. . . history wouldn’t have led us to expect this much inflation from overheating.  So something was wrong with my model . . . .  One possibility is that history was misleading . . . .  Also, disruptions associated with adjusting to the pandemic and its aftermath may still be playing a large role.  And of course both Russia’s invasion of Ukraine and China’s lockdown of major cities have added a whole new level of disruption. . . .


In any case, the whole experience has been a lesson in humility.  Nobody will believe this, but in the aftermath of the 2008 crisis, standard economic models performed pretty well, and I felt comfortable applying these models in 2021.  But in retrospect I should have realized that in the face of the new world created by Covid-19, that kind of extrapolation wasn’t a safe bet.  (Emphasis added)


I salute Krugman for this incredible bout of candor (although I have to say I don’t remember a lot of 2009-10 market forecasts that were optimistic enough to capture the reality of the subsequent decade).  Krugman’s explanation for his error is fine as far as it goes, but I don’t see any mention of abstaining from modeling, extrapolating, or forecasting in the future.


Humility may even be seeping into one of the world’s biggest producers of economic forecasts, the U.S. Federal Reserve, home of more than 400 Ph.D. economists.  Here’s what economist Gary Shilling wrote in Bloomberg Opinion on August 22:


The Federal Reserve’s forward guidance program has been a disaster, so much so that it has strained the central bank’s credibility.  Chair Jerome Powell seems to agree that providing estimates of where the Fed sees interest rates, economic growth and inflation at different points in the future should be junked. . . .


The basic problem with forward guidance is that it depends on data that the Fed had a miserable record of forecasting.  It was consistently too optimistic about an economic recovery after the 2007-2009 Great Recession.  In September 2014, policy makers forecast real gross domestic product growth in 2015 of 3.40% but were forced to constantly crank their expectations down to 2.10% by September 2015.


The federal funds rate is not a market-determined interest rate but is set and controlled by the Fed, and nobody challenges the central bank.  Yet the FOMC members were infamously terrible at forecasting what they themselves would do . . .  In 2015, their average projection of the 2016 federal funds rate was 0.90% and 3.30% in 2019.  The actual numbers were 0.38% and 2.38%. . . .


To be sure, many current events today have caused uncertainty in markets, but the Fed has been in there hot and heavy with its forward guidance.  Recall that early this year the central bank believed that inflation caused by frictions in reopening the economy after the pandemic and supply-chain disruptions was temporary.  Only belatedly did it reverse gears, raise rates and signal that further substantial hikes are coming.  Faulty Fed forecasts resulted in faulty forward guidance and increased financial market volatility.   (Emphasis added)


Lastly on this subject, where are the people who’ve gotten famous (and rich) by profiting from macro views?  I certainly don’t know everyone in the investment world, but among the people I do know or am aware of, there are only a few highly successful “macro investors.”  When the number of instances of something is tiny, it’s an indication, as my mother used to say, that they’re “the exceptions that prove the rule.”  The rule in this case is that macro forecasts rarely lead to exceptional performance.  For me, the exceptionalness of the success stories proves the general truth of that assertion.



Practitioners’ Need to Predict


Forecasts usually tell us more of the forecaster than of the future.


– Warren Buffett


How many people are capable of making macro forecasts that are valuable most of the time?  Not many, I think.  And how many investment managers, economists, and forecasters try?  Thousands, at a minimum.  That raises an interesting question: why?  If macro forecasts don’t add to investment success over time, why do so many members of the investment management industry espouse belief in forecasts and pursue them?  I think the reasons probably center on these:

  • It’s part of the job.

  • Investors have always done it.

  • Everyone I know does it, especially my competitors.

  • I’ve always done it – I can’t quit now.

  • If I don’t do it, I won’t be able to attract clients.

  • Since investing consists of positioning capital to benefit from future events, how can anyone expect to do a good job without a view regarding what those events will be?  We need forecasts, even if they’re imperfect.


This summer, at the suggestion of my son Andrew, I read an extremely interesting book: Mistakes Were Made (but Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts, written by psychologists Carol Tavris and Elliot Aronson.  Its topic is self-justification.  The authors explain that “cognitive dissonance” arises when people are confronted with new evidence that calls into question their pre-existing positions and that when it does, unconscious mechanisms enable them to justify and uphold those positions.  Here are some selected quotes:


If you hold a set of beliefs that guide your practice and you learn that some of them are incorrect, you must either admit you were wrong and change your approach or reject the new evidence.


Most people, when directly confronted by evidence that they are wrong, do not change their point of view or plan of action but justify it even more tenaciously. 


Once we are invested in a belief and have justified its wisdom, changing our minds is literally hard work.  It’s much easier to slot that new evidence into an existing framework and do the mental justification to keep it there than it is to change the framework.


The mechanisms that people generally employ when responding to evidence that throws their beliefs into doubt include these (paraphrasing the authors’ words):

  • an unwillingness to heed dissonant information;

  • selectively remembering parts of their lives, focusing on those parts that support their own points of view; and

  • operating under cognitive biases that ensure people see what they want to see and seek confirmation of what they already believe.


I have little doubt that these are among the factors that cause and enable people to continue making and consuming forecasts.  What specific form might they take in this case?

  • thinking of macro forecasts as an indispensable part of investing;

  • pleasantly recalling correct forecasts, especially any that were bold and non-consensus;

  • overestimating how often forecasts were right;

  • forgetting or minimizing the ones that were wrong;

  • not keeping records regarding forecasts’ accuracy or failing to calculate a batting average;

  • focusing on the “pot of gold” that will reward correct forecasts in the future;

  • saying “everyone does it”; and

  • perhaps most importantly, blaming unsuccessful forecasts on having been blindsided by random occurrences or exogenous events.  (But, as I said earlier, that’s the point: Why make forecasts if they’re so easily rendered inaccurate?)


Most people – even honest people with good intentions – take positions or actions that are in their own interests, sometimes at the expense of others or of objective truth.  They don’t know they’re doing it; they think it’s the right thing; and they have tons of justification.  As Charlie Munger often says, quoting Demosthenes, “Nothing is easier than self-deceit.  For what every man wishes, that he also believes to be true.” 


I don’t think of forecasters as crooks or charlatans.  Most are bright, educated people who think they’re doing something useful.  But self-interest causes them to act in a certain way, and self-justification enables them to stick with it in the face of evidence to the contrary.  As Morgan Housel put it in a recent newsletter:


The inability to forecast the past has no impact on our desire to forecast the future. Certainty is so valuable that we’ll never give up the quest for it, and most people couldn’t get out of bed in the morning if they were honest about how uncertain the future is.  (“Big Beliefs,” Collaborative Fund, August 24, 2022)


For my birthday several years ago, my Oaktree co-founder Richard Masson gave me one of his typical quirky gifts.  In this case, it consisted of some bound copies of The New York Times.  I’ve been waiting for an opportunity to write about my favorite sub-headline from the issue dated October 30, 1929, which followed two days on which the Dow Jones Industrial Average declined by a total of 23%.  It read, “Bankers Optimistic.”  (Less than three years later, the Dow was roughly 85% lower.)   Most bankers – and most money managers – seem to be congenitally optimistic about the future.  Among other things, it’s in their best interests, as it helps them do more business.  But their optimism certainly shapes their forecasts and their resulting behavior.



Can They or Can’t They?


I never think about the future – it comes soon enough.


– Albert Einstein


Consider the following aspects of macro forecasting:

  • the number of assumptions/inputs that are required,

  • the number of processes/relationships that have to be incorporated,

  • the inherent undependability and instability of those processes, and

  • the role of randomness and the likelihood of surprises.


The bottom line for me is that forecasts can’t be right often enough to be worthwhile.  I’ve described it many times, but just for the sake of completeness, I’m going to restate my view of the utility (or rather, futility) of macro forecasts:

  • Most forecasts consist of extrapolation of past performance.

  • Because macro developments usually don’t diverge from prior trends, extrapolation is usually successful.

  • Thus, most forecasts are correct.  But since the extrapolated outcome is usually already reflected in security prices, those who base their expectations on extrapolation don’t enjoy unusual profits when the past trend holds.

  • Once in a while, the behavior of the economy does deviate materially from past patterns.  Since this deviation comes as a surprise to most investors, its occurrence moves markets, meaning an accurate prediction of the deviation would be highly profitable.

  • However, since the economy doesn’t diverge from past performance very often, correct forecasts of deviation are rarely made and most forecasts of deviation turn out to be incorrect.

  • Thus, we have (a) extrapolation forecasts, most of which are correct but unprofitable, and (b) potentially profitable forecasts of deviation, which are rarely correct and thus are generally unprofitable.

  • Q.E.D.: Most forecasts don’t add to returns.
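The same logic can be expressed as a crude expected-value sketch.  Every number below is an illustrative assumption chosen only to show the shape of the argument, not an estimate of anything:

```python
# Two stylized forecasting strategies, with purely illustrative numbers.

# (a) Extrapolation: usually right, but the consensus is already in prices, so a
# correct call earns no excess return -- and a wrong one costs little relative
# to the herd, since everyone was positioned the same way.
ev_extrapolation = 0.0

# (b) Forecasting a deviation: assume the bold call proves right 1 time in 10,
# earning a 20% excess return, and costs 2% each time it's wrong.
p_right, payoff, penalty = 0.10, 0.20, -0.02
ev_deviation = p_right * payoff + (1 - p_right) * penalty

print(f"Expected excess return, extrapolation forecasts: {ev_extrapolation:+.1%}")
print(f"Expected excess return, deviation forecasts:     {ev_deviation:+.1%}")
# Both come out at roughly zero under these assumptions -- hence the Q.E.D. above.
```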


At the lunch described at the beginning of this memo, people were asked what they expected in terms of, for example, Fed policy, and how that influenced their investment stance.  One person replied with something like, “I think the Fed will remain very worried about inflation and thus will raise rates significantly, bringing on a recession.  So I’m in risk-off mode.”  Another said, “I foresee inflation moderating in the fourth quarter, allowing the Fed to turn dovish in January.  That will allow them to bring interest rates back down and stimulate the economy.  I’m very bullish on 2023.” 


We hear statements like these all the time.  But it must be recognized that these people are applying one-factor models: The speaker is basing his or her forecast on a single variable.  Talk about simplifying assumptions: These forecasters are implicitly holding everything constant other than Fed policy.  They’re playing checkers when they need to be playing 3-D chess.  Leaving aside the impossibility of predicting Fed behavior, the reaction of inflation to that behavior, and the reaction of markets to inflation, what about all the other things that matter?  If a thousand things play a part in determining the future direction of the economy and markets, what about the other 999?  What about the impact of wage negotiations, the mid-term elections, the war in Ukraine, and the price of oil? 


The truth is that humans can hold only a few things in their minds at any given time.  It’s hard to factor in a large number of considerations and especially to understand how a large number of things will interact (correlation is always the real stumper).  


Even if you somehow manage to get an economic forecast correct, that’s only half the battle.  You still need to anticipate how that economic activity will translate into a market outcome.  This requires an entirely different forecast, also involving innumerable variables, many of which pertain to psychology and thus are practically unknowable.  According to his student Warren Buffett, Ben Graham said, “In the short run, the market is a voting machine, but in the long run, it is a weighing machine.”  How can investors’ short-run choices be predicted?  Some economic forecasters correctly concluded that the actions of the Fed and Treasury announced in March 2020 would rescue the U.S. economy and trigger an economic recovery.  But I’m not aware of anyone who predicted the torrid bull market that lifted off well before the recovery got underway.


As I’ve described before, in 2016 Buffett shared with me his view of macro forecasts.  “For a piece of information to be desirable, it has to satisfy two criteria: It has to be important, and it has to be knowable.” 

  • Of course, the macro outlook is important.  These days it seems as if investors hang on every forecaster’s word, macro event, and twitch on the part of the Fed.  Unlike my early days in this business, it seems like macro is everything and corporate developments count for relatively little.

  • But I agree strongly with Buffett that the macro future isn’t knowable, or at least almost no one can consistently know more about it than the mass of investors, which is what matters in trying to gain a knowledge advantage and make superior investment decisions.

Clearly, Buffett’s name goes at the top of the list of investors who’ve succeeded by shunning macro forecasts and instead focusing on learning more than others about “the micro”: companies, industries and securities.



*            *            *



In a 2001 memo called What’s It All About, Alpha?, I introduced the concept of the “I know” school and the “I don’t know” school, and in 2004, I elaborated on this at length in Us and Them.  To close the current memo, I’m going to insert some of what I wrote in the latter about the two schools:


Most of the investors I’ve met over the years have belonged to the “I know” school.  This was particularly true in 1968-78, when I analyzed equities, and even in 1978-95, when I had switched to non-mainstream investments but still worked at equity-centric money management firms.


It’s easy to identify members of the “I know” school:

  • They think knowledge of the future direction of economies, interest rates, markets and widely followed mainstream stocks is essential for investment success.

  • They’re confident it can be achieved.

  • They know they can do it.

  • They’re aware that lots of other people are trying to do it too, but they figure either (a) everyone can be successful at the same time, or (b) only a few can be, but they’re among them.

  • They’re comfortable investing based on their opinions regarding the future.

  • They’re also glad to share their views with others, even though correct forecasts should be of such great value that no one would give them away gratis.

  • They rarely look back to rigorously assess their record as forecasters.


“Confident” is the key word for describing members of this school.  For the “I don’t know” school, on the other hand, the word – especially when dealing with the macro-future – is “guarded.”  Its adherents generally believe you can’t know the future; you don’t have to know the future; and the proper goal is to do the best possible job of investing in the absence of that knowledge.


As a member of the “I know” school, you get to opine on the future (and maybe have people take notes).  You may be sought out for your opinions and considered a desirable dinner guest . . . especially when the stock market’s going up. 


Join the “I don’t know” school and the results are more mixed.  You’ll soon tire of saying “I don’t know” to friends and strangers alike.  After a while, even relatives will stop asking where you think the market’s going.  You’ll never get to enjoy that 1-in-1,000 moment when your forecast comes true and The Wall Street Journal runs your picture.  On the other hand, you’ll be spared all those times when forecasts miss the mark, as well as the losses that can result from investing based on over-rated knowledge of the future.  But how do you think it feels to have prospective clients ask about your investment outlook and have to say, “I have no idea”? 


For me, the bottom line on which school is best comes from the late Stanford behaviorist, Amos Tversky: “It’s frightening to think that you might not know something, but more frightening to think that, by and large, the world is run by people who have faith that they know exactly what’s going on.” 


It’s certainly standard practice in the investment management business to come up with macro forecasts, share them on request, and bet clients’ money on them.  It also seems conventional for money managers to trust in forecasts, especially their own.  Not doing so would introduce enormous dissonance, as described above.  But is their belief justified by the facts?  I’m eager to hear what you think.



*            *            *



A few years ago, a highly respected sell-side economist with whom I became friendly during my early Citibank days called me with an important message: “You’ve changed my life,” he said.  “I’ve stopped making forecasts.  Instead, I just tell people what’s going on today and what I see as the possible implications for the future.  Life is so much better.”  Can I help you reach the same state of bliss?



September 8, 2022





Legal Information and Disclosures


This memorandum expresses the views of the author as of the date indicated and such views are subject to change without notice.  Oaktree has no duty or obligation to update the information contained herein.  Further, Oaktree makes no representation, and it should not be assumed, that past investment performance is an indication of future results.  Moreover, wherever there is the potential for profit there is also the possibility of loss.


This memorandum is being made available for educational purposes only and should not be used for any other purpose.  The information contained herein does not constitute and should not be construed as an offering of advisory services or an offer to sell or solicitation to buy any securities or related financial instruments in any jurisdiction.  Certain information contained herein concerning economic trends and performance is based on or derived from information provided by independent third-party sources.  Oaktree Capital Management, L.P. (“Oaktree”) believes that the sources from which such information has been obtained are reliable; however, it cannot guarantee the accuracy of such information and has not independently verified the accuracy or completeness of such information or the assumptions on which such information is based. 


This memorandum, including the information contained herein, may not be copied, reproduced, republished, or posted in whole or in part, in any form without the prior written consent of Oaktree.



© 2022 Oaktree Capital Management, L.P.