Discounted Cash Flows

RaplhCramden writes: Even in the case of investments, DCF is the value, right? But at what discount rate? And whose prediction of future cash flows are we using? And how do we deal with the FACT that the future is always uncertain (at least until it becomes the past)? Buffett uses relatively low discount rates but demands a margin of safety, which means that, compared to people who use higher discount rates, Buffett is likely to be a buyer of things with greater future payouts and lesser near-term payouts. This is just math: lower discount rates value future earnings much more highly than higher discount rates do.

This is why I have never prepared a discounted cash flow to value a company and neither should you.

Mark me as an apostate.

And, probably, neither does Buffett.

I no longer have the exact quote of a discussion in an annual meeting that occurred several decades ago so I will have to paraphrase, but this gets the gist of it. Buffett was talking about discounted cash flows when Charlie remarked:

Charlie: Warren talks about these discounted cash flows but I have never seen him doing one.

Warren: Well, Charlie, there are some things that are better done in private. Here, have another piece of candy.

For all the reasons mentioned in the first paragraph above, discounted cash flow computations for operating companies are not very actionable. So, I don’t recommend them for valuing a company.
Instead, do what Munger recommends. Create mental models in your head and use them to estimate the value of an asset. For instance, a price-to-earnings ratio is just a short-hand method of computing a discounted cash flow. Run your assumptions of what you think you know, the probability that what you think you know is valid and what you know you don’t know through your models and produce a value range.
Warren and Charlie don’t simply apply a higher margin of safety when they find themselves capitalizing too much uncertainty in this procedure; they throw the idea in the too-hard pile and go on to the next alternative. Repetition and experience will make you good at this.
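As a toy illustration of that “P/E is shorthand for a DCF” point, here is a quick Python sketch under the Gordon growth perpetuity, assuming (purely for illustration) that all earnings are paid out and grow forever at a constant rate; the 10% and 4% inputs are invented, not anyone’s actual assumptions:

```python
# A P/E ratio as a compressed DCF: valuing a perpetuity of earnings
# growing at g, discounted at r, gives P/E = 1 / (r - g).
# The 10% / 4% inputs below are illustrative assumptions only.

def justified_pe(discount_rate, growth_rate):
    assert discount_rate > growth_rate, "perpetuity formula needs r > g"
    return 1.0 / (discount_rate - growth_rate)

print(justified_pe(0.10, 0.04))   # ~16.7: paying 16.7x earnings is itself a DCF
```

So a multiple you’d pay for a stock already encodes a discount rate and a growth assumption, whether or not you ever write them down.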

From my unread treatise on investing, I discuss capitalizing uncertainty as follows: Using information known or surmised, the investor capitalizes cash flow numbers. For him, valuing an asset as the net present value of all its future free cash flows seems to be doing something real and determined. While it may appear that he is capitalizing results and numbers, what is really being capitalized are degrees of certainty, probabilities and perceptions. It makes perfect sense to increase our confidence in an asset’s value as its future becomes more certain and predictable, and to decrease our confidence in an asset’s value as its future becomes less certain and predictable. So, it also makes as much sense to view the value or price of an asset as capitalized uncertainty as it does to view it as capitalized certainty. It is the same thing, only in reverse.

However, I do recommend creating a model and fiddling with the assumptions. Create your assumptions: normalized earnings, rate of growth of earnings and discount rate. Then make small changes in these assumptions and watch how much the net present value changes. Small changes can make huge differences when compounding over 50 years. So, when I see the results of computer models projecting, say, the effect of global warming over the next 50 years, I repeat the comments of a young boy in a cartoon. He was at the dinner table with his father and mother and looking at his plate. He remarks: I say it is spinach and I say the hell with it.
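The fiddling exercise above is easy to sketch in code. A minimal Python version, with all inputs invented for illustration, showing how a single percentage point in the growth assumption moves a 50-year net present value:

```python
# Sketch: sensitivity of a 50-year DCF to a one-point change in the
# growth assumption. All inputs are invented, illustrative numbers.

def npv(cash_flow, growth, discount, years=50):
    """Present value of a stream starting at `cash_flow` next year,
    growing at `growth`, discounted at `discount`."""
    return sum(
        cash_flow * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

base = npv(100, growth=0.05, discount=0.10)
bumped = npv(100, growth=0.06, discount=0.10)   # growth up one point
print(base, bumped, bumped / base)              # value jumps roughly 18%
```

One point of growth, compounded for 50 years, moves the answer by nearly a fifth; nudge the discount rate as well and the swings get larger still.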

24 Likes

However, I do recommend creating a model and fiddling with the assumptions. Create your assumptions: normalized earnings, rate of growth of earnings and discount rate. Then make small changes in these assumptions and watch how much the net present value changes. Small changes can make huge differences when compounding over 50 years.

I recently read of a clever way to demonstrate this concept. Use your microwave. (Damn if I can remember where I read it. I’ll bet someone will.)

If you take a measuring cup with a handle, put a cup of water in it, and run it for 3 minutes, the water will boil and the handle will end up almost where it started. Great for making morning tea.

But if you run it for 6-8 minutes, you’ll find that the handle isn’t there anymore. Small errors in the positioning of the cup and in the exact rotation rate compound, and the handle keeps moving away from its original spot.
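The turntable arithmetic behind this is easy to sketch; the 6.0 vs. 6.05 rpm figures below are invented for illustration:

```python
# Sketch of the turntable effect: a small error in rotation rate produces
# a handle-position error that grows with run time. The 6.0 vs 6.05 rpm
# figures are invented for illustration.

def handle_offset_deg(rate_rpm, minutes):
    """Handle's angular position (degrees from start) after running."""
    return (rate_rpm * minutes * 360.0) % 360.0

for minutes in (3, 8):
    gap = abs(handle_offset_deg(6.05, minutes) - handle_offset_deg(6.0, minutes))
    gap = min(gap, 360.0 - gap)   # shortest angular distance
    print(minutes, "min:", round(gap), "degrees off")
```

A rate error under one percent leaves the handle 54° away after 3 minutes and 144° away after 8: the same small error, just given more time to accumulate.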

This is the common problem with models: small errors in the initial assumptions compound. It happens with hurricane forecasts - watch the path cone expand as the days increase. It happens with modeling O&G reservoirs. The fewer errors you can introduce - e.g. by using smaller but more numerous cells to approximate the reservoir - the better the simulations. But the required computing power increases, as do the cost and time to run them. Ditto for weather, including hurricane, forecasts. The American models use fewer cells per unit of area in the interest of lower costs and faster results. The European model uses smaller, more numerous cells per unit of area, so its initial state contains fewer errors and the compounding starts from a more accurate point. Its errors compound less, so its forecasts are usually more accurate. But the model costs more to run and takes longer to deliver results. That’s the tradeoff. And they’re dealing with days - not years or decades.

In my, now very dated, experience with DCF models, their chief benefit was to demonstrate the sensitivity to a particular assumption. The greater the sensitivity, the more time I would spend trying to improve the “reality” of that assumption.

And you could always swing the answer to what you wanted by how you handled the terminal values. A little change there made a big difference.
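That terminal-value lever is easy to demonstrate. A minimal Python sketch with a ten-year explicit forecast plus a Gordon-growth terminal value; every number is an invented, illustrative assumption:

```python
# Sketch: how much the terminal-value assumption alone can swing a DCF.
# Ten years of explicit cash flows, then a Gordon-growth terminal value.
# Every number here is an invented, illustrative assumption.

def dcf_with_terminal(cf, growth, terminal_growth, discount, years=10):
    explicit = sum(cf * (1 + growth) ** t / (1 + discount) ** t
                   for t in range(1, years + 1))
    final_cf = cf * (1 + growth) ** years
    terminal = (final_cf * (1 + terminal_growth)
                / (discount - terminal_growth)      # Gordon growth perpetuity
                / (1 + discount) ** years)          # discounted back to today
    return explicit + terminal

low = dcf_with_terminal(100, 0.08, terminal_growth=0.02, discount=0.10)
high = dcf_with_terminal(100, 0.08, terminal_growth=0.04, discount=0.10)
print(low, high, high / low)
```

In this toy case the terminal value is already more than half the total, so moving terminal growth from 2% to 4% swings the whole valuation by roughly a fifth - exactly the “little change there” at work.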

You had to have them because people would ask, “What does the model say?” So they were a fact of life in industry. (Buffett doesn’t face that issue.)

I always valued the mature judgments of experienced people more than I did DCF models.

Long way of saying I agree with EliasFardo

PS: A somewhat inside story. Lee Raymond, the past CEO of Exxon who is widely vilified by the environmentalists, was a PhD chemical engineer and very familiar with reservoir modeling. He well understood the difficulties of such models, and how the forecasts expanded based on differences in input assumptions. His problem was that long range climate forecasting is far more complex than reservoir simulation. Many more variables, physics & interactions less defined, much longer times, etc.

So he was well aware of the wide range of outputs that might result from such inputs. His caution was to be careful about near-term actions impacting the economies of the world that were based on the long-range climate change forecasts of the time. Balance was needed. After his time, XOM actually showed the results of circa 25 such published forecasts in an Investor Day presentation, and they varied widely in later years, as would be expected. But that approach didn’t market well, so they’ve changed their approach to what can actually be done now that will help - and don’t argue about the future.

I did not have this discussion with Lee Raymond. But a good friend of mine did and we discussed it at length more than a couple of decades ago.

This is a major factor in my view that we must prepare for a wide range of outcomes on actual climate change. And that more emphasis is needed on what happens if we don’t/can’t control it. How do we defend?

6 Likes

So, when I see the results of computer models projecting, say, the effect of global warming over the next 50 years, I repeat the comments of a young boy in a cartoon. He was at the dinner table with his father and mother and looking at his plate. He remarks: I say it is spinach and I say the hell with it.
And you were doing so well until then. Likening a DCF to the modeling around climate change ignores a number of important distinctions between different model types. Namely:

  1. A lot of useful computer models are stable & bounded - exponential effects like you observe with a DCF will not occur. The model (and the system it represents) will settle into one or a number of stable equilibria despite errors in inputs. This often occurs in physical systems where (for example) the system will try to exist at a minimum energy level; it occurs frequently in biology, in engineering, in many different fields. Furthermore, as these are models of physical systems, the output can be reasonably bounded by physical constraints. Over time the error in the system/model output reduces rather than compounds. Think Newton’s method - you get a reducing error with each iteration or cycle, and while for a given input there may be a greater error along the way, the end result is stable.

  2. Other models are chaotic - small deviations in inputs / conditions result in vastly different outputs. In that sense - depending upon how chaotic a system is, and there are degrees of chaos - a discussion or calculation of error amounts may not even be useful. Chaotic systems frequently preclude modeling of exact states; however, en masse there are many important characteristics that can be determined. For example, Quantum Mechanics precludes understanding the exact physical state of a particle interaction, but measurement of particle weight is possible.

  3. Other useful models fall between those two states: I once did some work on particle flows & those systems have stable modes & others that are less stable, and at an extreme level a particular model may not even apply. The Bernoulli equation works for some well-understood fluid-flow situations but is a poor explainer of turbulent flow - for that, alternative models are required that are far closer to the Navier-Stokes equations (which have their own well-understood limitations).
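A toy demonstration of regimes 1 and 2 above, sketched in Python (the logistic map stands in for a chaotic system; none of this is from the original post):

```python
# Toy contrast of regimes 1 and 2. Newton's method: errors shrink every
# iteration, so slightly different starts land on the same answer.
# The logistic map at r = 4 (a standard chaotic system): a 1e-7
# difference in the start grows until the two trajectories are
# unrelated, though both stay bounded in [0, 1].

def newton_sqrt2(x0, steps=6):
    x = x0
    for _ in range(steps):
        x = 0.5 * (x + 2.0 / x)   # Newton step for f(x) = x^2 - 2
    return x

def logistic(x0, steps=40, r=4.0):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(newton_sqrt2(1.5), newton_sqrt2(1.6))   # both ~1.4142135623730951
print(logistic(0.2), logistic(0.2000001))     # typically very different
```

Same starting error in both cases; one system eats the error, the other amplifies it until only the bounds remain meaningful.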

All the above is a vast simplification, and yet we can observe that DCFs are fairly chaotic - small input errors result in increasingly larger output errors, due to compounding - and yet can be somewhat bounded. It’s not possible for an entity to absorb the complete economic output of the Earth, but there is a lot of uncertainty below that (can Walmart feasibly get to 25% of total US retail?).

Climate modeling on the other hand is modeling a physical system - and a lot of the modeling involved is stable, fairly tightly bounded (e.g. temperature, energy being radiated by the sun), and important interactions are well understood. Slight variations in inputs don’t result in vastly different outcomes, and the estimated errors reflect that. Simpler models deliver surprisingly similar outputs to later, more sophisticated models that include many more, better-modeled subsystems. There has been plenty of writing describing how the relatively less sophisticated modeling of the 70’s & 80’s has well characterized the subsequent physical warming, and contemporary modeling explains those interactions well (as opposed to the economic implications, which are not well modeled and are potentially chaotic). This is a well understood modeling exercise and is not really in doubt (scientifically) except by those that don’t want to understand it.

Weather on the other hand is somewhat chaotic: the initial inputs aren’t finely measured, and small differences can result in measurably different outcomes. Whether it rains here (and how much) on Tuesday or instead rains 100 km away is relevant & has meaning to us (storm tracks are another example). And yet weather forecasting is surprisingly good out to 3-5 days, as we endeavor to measure the inputs increasingly better & model the effects at lower levels. Weather <> Climate! Climate is at a much higher, more averaged level than weather - the world economy vs. Walmart or DDOG - and so the two cannot really be compared from a modeling perspective.

And so once again this thread demonstrates the power of ‘Circles of Competence’. Mastery of one type of model doesn’t equal mastery of many types. Financial models often have little to do with physical ones. Expertise in one field doesn’t much qualify a person in another.

platykurtic (Engineering Science graduate - Applied Mathematician trained in a wide variety of mathematical modeling including chaotic & stable systems, systems that transition between stable & chaotic states - fluid & thermodynamic flows - etc. I don’t often pull out the expertise card, but given the subject matter it seems appropriate.)

36 Likes

Climate modeling on the other hand is modeling a physical system - and a lot of the modeling involved is stable, fairly tightly bounded (e.g. temperature, energy being radiated by the sun) and important interactions are well understood.

I beg to differ. Climate is a physical system with plenty of poorly understood and perhaps chaotic inputs. Ask climate modelers their weakest, most poorly understood variable, and the answer will often be clouds.

I have worked with models throughout earth science, and have yet to see one which wouldn’t spit out the answer you wanted, just by being slightly biased in a series of very reasonable and defensible inputs.

Too bad we can’t post images here. I have a nice spaghetti graph of 90 different published climate models compared to recent actual measurements. Over 90% of the models predicted more warming than observed. A fair amount of recent research has been published on the failure of the models. I won’t post more on an OT and controversial subject other than to suggest you start with McKitrick and Christy (2020), “Pervasive warming bias in CMIP6 tropospheric layers,” Earth and Space Science, and Mitchell et al. (2020), “The vertical profile of recent tropical temperature trends: Persistent model biases in the context of internal variability,” Environmental Research Letters. If they are wrong, please publish something to straighten them out.

17 Likes

Climate modeling on the other hand is modeling a physical system - and a lot of the modeling involved is stable, fairly tightly bounded (e.g. temperature, energy being radiated by the sun), and important interactions are well understood. Slight variations in inputs don’t result in vastly different outcomes, and the estimated errors reflect that. Simpler models deliver surprisingly similar outputs to later, more sophisticated models that include many more, better-modeled subsystems. There has been plenty of writing describing how the relatively less sophisticated modeling of the 70’s & 80’s has well characterized the subsequent physical warming, and contemporary modeling explains those interactions well (as opposed to the economic implications, which are not well modeled and are potentially chaotic). This is a well understood modeling exercise and is not really in doubt (scientifically) except by those that don’t want to understand it.

Great post. Specifically, Exxon did a bunch of climate research in the 1970s and 1980s, which is summarized in this 1982 document:

https://insideclimatenews.org/wp-content/uploads/2015/09/Exx…

Exxon’s predictions for climate in 2022 were bang-on. As they should, they presented a range of possibilities. Predicted temperature increase was right smack in the middle of the range, predicted CO2 was right in the middle, the date when the signal would be identified was almost perfectly predicted, oil consumption in 2020 was well predicted, the areas that would be most affected (polar regions) were identified, etc. They straight up nailed it. Obviously not perfectly. But absolutely enough for planning purposes.

19 Likes

< This often occurs in physical systems where (for example) the system will try to exist at a minimum energy level…>

Interesting. I find that many investors on this board are engineers or scientists; they are rational by training.

3 Likes

Climate is a physical system with plenty of poorly understood and perhaps chaotic inputs.

Yes, and the biggest chaotic input is people’s behavior. Same with DCF. But even with the unknowns, a good analysis can inform better decisions. There are bounds on a climate system and on a stock price.

3 Likes

I beg to differ. Climate is a physical system with plenty of poorly understood and perhaps chaotic inputs. Ask climate modelers their weakest, most poorly understood variable, and the answer will often be clouds.

I have worked with models throughout earth science, and have yet to see one which wouldn’t spit out the answer you wanted, just by being slightly biased in a series of very reasonable and defensible inputs.

Too bad we can’t post images here. I have a nice spaghetti graph of 90 different published climate models compared to recent actual measurements. Over 90% of the models predicted more warming than observed. A fair amount of recent research has been published on the failure of the models. I won’t post more on an OT and controversial subject other than to suggest you start with McKitrick and Christy (2020), “Pervasive warming bias in CMIP6 tropospheric layers,” Earth and Space Science, and Mitchell et al. (2020), “The vertical profile of recent tropical temperature trends: Persistent model biases in the context of internal variability,” Environmental Research Letters. If they are wrong, please publish something to straighten them out.
This might end up a bit of a ‘wall of text’, but everything you post about is a sign of competent professionals at work improving the modeling in a pretty well understood system.

I’ll start though by taking a step back. Firstly those differences - at a physical system level - are minuscule. That’s because the models are modeling temperature - it’s not +3°C or +1.5°C for a 100% error, rather it’s something like 296°K compared to 274.5°K a well understood & bounded system. Secondly any model improvement will need to be validated and so papers like those you present are a sign of competent and honest professionals looking to validate the improved models to ensure that it’s presenting as good a representation as possible. And that’s happening at the same time that the measurement is improving and so the target may be shifting!

Furthermore those professionals will be looking to validate, characterize and understand what the model is telling them. They’ll conduct some model runs with all the potential drivers maximized and others with them minimized, and attempt to assign probabilities to inputs and so to outcomes. If Europe, the Chinese & the US decide to halt climate mitigation activities and the developed world grows its economic activity as fast as possible but with existing energy solutions (rather than those we’ll need from now on), then what would that look like? Very bad. Is the probability high? No, as the idea that Europe (for example) would take steps backward in regulations, embrace even more Russian energy, etc. is broadly unthinkable. Similarly for China - will China unwind its investment in electric vehicles, batteries, wind power & its world leadership in solar (in production terms)? Absolutely not.

And so competent, honest professionals will generate ‘spaghetti graphs’. And critique them, in great detail, down to the outputs of small elements of such important models. And past models may be pessimistic because - amongst other things - they are being acted upon. Europe has moderately significant countries (to give an example with which I’m familiar) with 50% renewable power, and others where electric vehicles are a majority of sales. China is the largest electric vehicle producer (and accelerating) - good things happen! Does that mean they’re incorrect to the degree a DCF can be incorrect (by factors of 3x - 30x or more)? Absolutely not!

Lastly - in no way is climate change across the time range in discussion ‘chaotic’. If I vary the surface temperature of the sea in one cell by 0.1°K it will not result in a 600°K difference in another far removed cell in 25 years! Which in modelling terms is what ‘chaotic’ means.
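As a toy illustration of that definition, here is a Python sketch (invented numbers, nothing from the original post) of a purely diffusive, non-chaotic system, where a 0.1°K nudge to one cell is smeared out rather than amplified:

```python
# Toy diffusive (non-chaotic) system: each step, every cell relaxes toward
# the average of itself and its neighbors (periodic ring). A 0.1 K nudge
# to one cell spreads out and shrinks instead of blowing up.

def diffuse(cells, steps):
    n = len(cells)
    for _ in range(steps):
        cells = [(cells[i - 1] + cells[i] + cells[(i + 1) % n]) / 3.0
                 for i in range(n)]
    return cells

base = [280.0] * 50          # uniform 280 K temperature field
nudged = base[:]
nudged[10] += 0.1            # perturb a single cell by 0.1 K

diff = [abs(a - b) for a, b in zip(diffuse(base, 200), diffuse(nudged, 200))]
print(max(diff))             # a few thousandths of a degree, not hundreds
```

The 0.1 K disturbance ends up shared across all 50 cells at a few thousandths of a degree each - the opposite of chaotic amplification.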

Once again - the chief lesson is in the power of ‘circle of competence’. Unless you’re a professional climate modeler the best we ‘interested parties’ can achieve is to follow the expert summary. We’re not really in a position to do anything else - no matter our popularity on social media etc.

28 Likes

Once again - the chief lesson is in the power of ‘circle of competence’. Unless you’re a professional climate modeler the best we ‘interested parties’ can achieve is to follow the expert summary. We’re not really in a position to do anything else - no matter our popularity on social media etc.
Just to be clear, this comment isn’t directed at anyone here - either in-thread or on this board. It’s more a general comment that commentary you read from people popular on things like social media about complex expert topics like ‘climate change modeling’ is ‘almost exclusively’ far more likely than not incorrect. The exception being an actual expert in that field being commented on, and even then it needs to be read in the context of other experts’ commentary. Even experts in ‘near’ fields can get stuff incorrect unless speaking in very general terms.

This is hard! Which explains why completely non-expert commentators like (for example) politicians (who these days are mostly lawyers in training) often get things pretty wrong. Would you like to go to space in a rocket designed by lawyers?

6 Likes

<rather it’s something like 296°K compared to 274.5°K a well understood & bounded system
Ugh - I meant to write ‘rather it’s something like 296°K compared to 294.5°K a well understood & bounded system’. Apologies!

3 Likes

Once again - the chief lesson is in the power of ‘circle of competence’.

Excellent point, though it needs to be recognized that it is a door which swings both ways. Many people who are highly skilled with numbers approach the natural world as if it were a physics lab, with limited appreciation of the complexity of the atmosphere and oceans. The end product may be mathematically very nicely executed but have limited relevance to what is actually going on. This is why one prominent researcher, with over 100 publications in the field, recently concluded a presentation with “95% of the models agree…the data must be wrong.”

1 Like

The end product may be mathematically very nicely executed but have limited relevance to what is actually going on.

Plato’s cave allegory and its modern variations (“The Matrix”, the “are we in a simulation?” theory, etc.) come to mind.

We probably won’t ever know what is reality and what is just a model that neatly fits the data while actually completely deceiving us.

1 Like