1-Year Berkshire Price Prediction - Inflation +17% From 10/22

I obtained a whole lotta value from Mungofitch’s posts predicting Berkshire Hathaway stock price; specifically, when to back up the truck. Throughout the years he’s described portions of the model he uses to predict stock price based on Berkshire book value, smoothing, and frequency of certain P/BV intervals. I have spent some effort recreating his model and, like Mungofitch, wanted to share.

However, I did not fully recreate his model. Clearly, I am not Jim. If his model is a decked-out Porsche then my model is an aggressively used yet still-not-dead Yugo. Both have value but one is far more valuable. The model is a work in progress and far more granular than Jim’s, but it’s informative enough to be useful.

The chart below shows the granular picture of one-year stock price returns from various book values going back to 2006. Where we are now, a P/BV of ~1.24, there is quite the spread in returns (about -15% to +30%), with a decentish likelihood of inflation +17%.
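For anyone who wants to play with the idea, here’s a minimal sketch of the binning behind a chart like this: group each month’s one-year forward return by the P/BV interval the month fell into, then summarize the spread per bin. This is not the author’s Excel workbook, and every number below is made up purely for illustration.

```python
# Minimal sketch of P/BV binning: group 1-yr forward returns by which
# P/BV interval the observation fell into, then summarize each bin.
# All numbers are hypothetical, not the author's data.
import statistics

def bin_forward_returns(pbv, fwd_ret, width_cents=5):
    """Group 1-yr forward returns into P/BV bins of width_cents/100."""
    bins = {}
    for p, r in zip(pbv, fwd_ret):
        # left edge of the bin, computed in integer cents to dodge float edges
        lo = (int(p * 100) // width_cents) * width_cents / 100
        bins.setdefault(lo, []).append(r)
    return {k: {"n": len(v),
                "median": statistics.median(v),
                "min": min(v),
                "max": max(v)}
            for k, v in sorted(bins.items())}

# Hypothetical (P/BV, 1-yr forward return) pairs, purely for illustration
pbv     = [1.08, 1.09, 1.24, 1.26, 1.27, 1.42]
fwd_ret = [0.45, 0.35, -0.15, 0.17, 0.30, -0.05]
summary = bin_forward_returns(pbv, fwd_ret)
```

A wide min-to-max range inside a bin, like the -15% to +30% spread mentioned above, would show up directly in each bin’s summary.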

I’m happy I was able to coattail Jim as long as I did; it sure was easy to simply read his posts and enjoy all of his hard work. However, I’m very glad the opportunity to create my own model arose, because it helped me develop an understanding of why inflation adjustment is helpful and why P/BV binning is informative, along with some other things.

The predicted return of inflation +17% is pretty high; will Berkshire beat the SPY in the next year? Not a clue. I happen to think @dividends20 is on to something about the SPY being evergreen which makes it a wonderful candidate to beat Berkshire over the long term.


Thank you for sharing. I see the dashed vertical line indicating the current price to peak book value. Do the scatter plot and forward return prediction look appreciably different if you plot the forward returns against price to peak book value to date?

I know Jim has stated in recent years that he favors looking at price to peak book value - I believe under the assumption that blips downward in book value during recessions have historically been recovered during economic rebounds, putting book value growth back on trend.


Nice work! And it’s always nice to see something rational in market behavior. Jim’s original demonstration was fascinating, and your more granular demonstration is much appreciated.
Could you explain what binning you did?


I’m guessing it’d make some periods look like really great investment opportunities. I can certainly understand why mungofitch started looking at price to peak book value after putzing around with the data.

But back to the Yugo v. Porsche - I gotta get this jalopy in shape before I can start to do the fancy stuff. Right now the data are extremely coarse; I only have precision to the month. I’d like to make that a lot more precise. But even with month-level intervals the model is still useful.


Very coarse binning for the calculations. I just took average monthly stock prices, yearly CPI values, and quarterly book values to calculate inflation-adjusted P/BV numbers. It’s clear the model is wrong, but it seems useful enough to provide some insight.

When I have more time I want to get to a point where this jalopy uses daily stock prices with monthly CPI and quarterly book. From there, calculating P/BV(peak), instantaneous P/BV, and then binning the frequency of P/BV intervals like mungofitch did would be pretty easy. Getting the (accurate) data is the slog.
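For what it’s worth, the P/BV(peak) piece is mechanically simple once the data are lined up: divide price by the running maximum of book value per share seen so far, rather than the latest figure. A sketch with hypothetical numbers (not the author’s workbook):

```python
# Sketch of 'price to peak book': price over the highest book value per
# share reported to date. Numbers below are hypothetical.
def price_to_peak_book(prices, bvps):
    """P/BV(peak): each price divided by the running max of BV/share."""
    out, peak = [], float("-inf")
    for p, b in zip(prices, bvps):
        peak = max(peak, b)       # peak book never declines
        out.append(p / peak)
    return out

bvps   = [200, 210, 195, 205]     # book dips in the third period
prices = [260, 280, 240, 270]
ratios = price_to_peak_book(prices, bvps)
# the dip period divides by the earlier peak of 210, not the lower 195
```

The design point is the one discussed later in the thread: a recession-driven dip in reported book doesn’t inflate the ratio, because the divisor holds at the prior peak.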

I think it’s better to deliver pieces in fits and starts than to deliver a full-featured tool years from now.



What’s that called in software development today??? Sprints?

Sprint away! Thanks for sharing!


I’m stuck on the following point, no implied criticism, just want to understand your interesting work:

With four book values per year and roughly 16 years, there are roughly 64 book values available.
Pairing the forward annual return on the date of each book value with the corresponding book value gives roughly 64 pairs.
But there are more than 64 dots in the plot.

There are various ways to address the issue of a limited number of book values compared to the number of annual forward return values, some of which can result in more than 64 points.
Could you give a bit more detail on yours?
What binning or method was used to get more than 64 points?
Also, you mentioned ‘average month stock prices’: averaged over what?


What does the same idea suggest for S&P returns? Maybe value as an index/GDP ratio or something.


You have opened the busted-up hood of the Yugo to find more Yugo.

The data are binned in months, and quarterly book values are matched to their respective months, giving ~200 data points (16×12). I plan on doing similarly with book and CPI as daily price data are added.

The monthly stock price data came from Yahoo Finance; I took the ‘high’ values because it was somewhere to start. This certainly influences the Yugo model but will matter a whole lot less once I get around to adding daily price data.
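The month-matching step described above amounts to a forward-fill: each month reuses the most recently reported quarterly book value until a fresh report lands. A minimal sketch of that idea (month numbers and values are hypothetical):

```python
# Sketch of matching quarterly book values to months: carry the most
# recent report forward so every month has a (possibly stale) book value.
# Report months and values below are hypothetical.
def forward_fill_quarterly(report_months, values, n_months=12):
    """Return one book value per month, repeating the last report."""
    lookup = dict(zip(report_months, values))
    filled, last = [], None
    for m in range(1, n_months + 1):
        last = lookup.get(m, last)   # fresh value if reported, else stale
        filled.append(last)
    return filled

# Suppose reports land in Jan, Apr, Jul, Oct (month numbers 1, 4, 7, 10)
monthly_bv = forward_fill_quarterly([1, 4, 7, 10], [200, 205, 210, 215])
# Feb and Mar reuse January's 200, May and Jun reuse April's 205, etc.
```

This is exactly where the ‘fresh price paired with stale book’ question raised later in the thread comes from: eight of every twelve months carry a repeated value.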

I’m not a huge fan of smoothing data or making price adjustments, simply because those are influences I end up guesstimating and they are more a reflection of my beliefs - and my beliefs are terrible. So taking the lumpy reported book, daily prices, and monthly CPI numbers and running the same analysis to generate a lumpy output would be phenomenally better than what I have now.

All models are wrong, some are useful. Even though this model is a Yugo, I still find it useful until I can get its successor up and running.


Thanks for this. I had started doing something similar for a few months before Jim’s departure. I’m still cleaning up data sets and models (or I’d share mine as well), but it has given me a better appreciation of BRK and its long-term reliability as a consistent money maker.

One nice point at current valuation, whether you use 2.5 column, p/book, or p/peak book, is that the potential negative returns really start to shave off if you start looking at 2 year returns, or 2.25 year returns. A helpful point if you’re into buying long dated calls.


Allow me to express my gratitude for sharing this work. Your sentiment about finding value in Jim’s forecasts matches mine.

Can I ask what your source for book value is? Is it from the quarterly reports, or your calculation, or something else? I imagine having that calculated in a consistent way is important to the model.

Just out of curiosity, what software are you using? I know Python best, R would be my second choice.


This is very useful and extremely interesting, thanks for posting!

If we get four ‘fresh’ book values per year on Jan, Apr, Jul, and Oct as follows
Jan Feb Mar Apr May Jun Jul Aug Sept Oct Nov Dec
then one could think of book values for Feb, Mar, May, Jun, Aug, Sept, Nov, Dec as ‘stale’.
For example, March’s book value is the ‘stale’ book value from January.
But it’s getting paired with a ‘fresh’ return value from March.
Prices and returns tend to squiggle a lot, so perhaps part of the scatter in the plot is due to this pairing of fresh price with stale book.
Again, that’s not meant in any way to be a criticism, I just want to be sure I understand precisely what’s plotted.

If so, then the question naturally arises of whether there’s less scatter if one plots just fresh book values with fresh return values (roughly 64 points). This plot would use less data, so there’d be a tendency for the plot to have less scatter solely due to that effect. But there are ways to account for that ‘data size’ effect w/o any fancy statistics and try to conclude whether or not there’s less scatter if one doesn’t pair stale book values with fresh return values. I’m not trying to propose any new work, just mentioning some thoughts that are prompted by your very interesting analysis.
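One way to sketch that ‘data size’ control without fancy statistics, assuming nothing about the actual dataset: compare the fresh-only spread not against the full ~200-point spread, but against the typical spread of a random 64-point subset of the full data. If the fresh-only spread lands well below that baseline, the stale-book pairing is plausibly adding scatter. The data here are synthetic stand-ins.

```python
# Sketch of a sample-size control: how much spread would a random
# 64-point subset of the full monthly data typically show?
# The 'full' series below is synthetic, purely for illustration.
import random
import statistics

def subset_spread_baseline(all_returns, k, trials=1000, seed=0):
    """Median spread (pstdev) over random k-point subsets of the data."""
    rng = random.Random(seed)
    draws = [statistics.pstdev(rng.sample(all_returns, k))
             for _ in range(trials)]
    return statistics.median(draws)

# Synthetic stand-in for ~200 monthly forward returns
full = [(i % 10) / 10 for i in range(200)]
baseline = subset_spread_baseline(full, k=64)
# In practice one would compare statistics.pstdev(fresh_only) against
# this baseline; fresh_only (the ~64 report-date points) isn't built here.
```

The usage note in the comments is the whole point: the comparison is fresh-only spread versus the subset baseline, not versus the full-sample spread.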


As you show, it depends on the starting P/B, and also on the ending P/B. If BV/share grows at a similar rate over the next 12 months as it has over the period of your study, about 8.1%/yr, then the P/B needs to increase to 1.34 in order to get a real 17% price increase. This is not unrealistic, but unfortunately neither is an ending P/B of 1.14. My crude guess, FWIW, is that the operating businesses, which make up about half of BV, can grow 7%-8%, real, if the recession is not too deep, but that the equity portfolio will not grow that much. The equity portfolio, held down by the broad market, is likely to grow more like 3%, real. If we forecast overall BV/share growth to be 6%, real, then the P/B will have to increase to 1.37 in order to get a real price increase of 17%. I’ve got my fingers crossed. An ending P/B of 1.37 would still be about 8% below IV. As you point out, though, the uncertainty in any one-year forecast is large.
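Sanity-checking that arithmetic (using the growth assumptions stated above, which are of course guesses): a real price return r from a starting P/B requires an ending P/B of start × (1 + r) / (1 + g), where g is real BV/share growth.

```python
# Required ending P/B for a target real price return, given real BV growth:
#   ending P/B = start_pb * (1 + target_return) / (1 + bv_growth)
def required_ending_pb(start_pb, target_return, bv_growth):
    return start_pb * (1 + target_return) / (1 + bv_growth)

pb_at_8 = required_ending_pb(1.24, 0.17, 0.081)  # historical ~8.1% real growth
pb_at_6 = required_ending_pb(1.24, 0.17, 0.06)   # more conservative 6% real
# pb_at_8 is about 1.34 and pb_at_6 about 1.37, matching the figures above
```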


IMO, this isn’t going to be much of an issue, since BRK’s stock price is generally so much more volatile than its book value. And price to peak book has supposedly been a better predictor of future returns than price to current or latest book, for reasons that make some sense when thinking of BRK’s investments and business model, so that further lessens worry about using ‘stale’ book values.


BenSolar, thanks.
BRK’s stock price is generally so much more volatile than its book value.
That was my concern, i.e. the pairing of ‘fresh’ with ‘stale’ might introduce scatter in the plot because of price volatility.
But I haven’t followed the ‘peak book’ discussion, which you suggest further lessens worry about using ‘stale’ book values. Is there a link or precis so that I can get up to speed?


I think the point is that, in a typical year these days, value probably increases 8% to 10% plus inflation on average, so a 3-month-stale BV might be understated by 3% to 4%. This “error” is relatively small compared to the swings in valuation from, say, 1.55x book to 1.2x book (a drop of over 20%). The model will never give a perfect price prediction, so I don’t think this reduces its usefulness all that much.


From the quarterly reports - no smoothing, no adjustments. Except the Q3 2018 report - the share count seems to be extremely wrong in that report and I have no idea why. The difference between Q2 and Q4 2018 isn’t enough for me to figure out what is going on with Q3.

Straight up, uncut, unadulterated, pure, Microsoft Excel.

This is an eminently reasonable perspective. I hear you and I understand. I think you’re probably right too.

The ‘bad corporate data sciencer’ in me wants to model “what do I know right now” as closely as possible and make sanity adjustments to the output, since that’s an extremely close approximation to real life. In my personal life, I don’t need to make pretty models that generate theoretical outputs within a tight range on a beautiful glide path (while quietly creating a separate document explaining all the reasons the theoretical model didn’t match reality) just to maintain my career, so making a lumpy, bumpy, noisy model is my preference.

Here’s an example: picking 64 values on book report dates would have missed the P/BV(current) < 1.1 opportunity during parts of April and May 2020. Those opportunities are rare: the 1YR median (+30%), 1YR average (+30%), 1YR min (+7%), and 1YR max (+65%) returns when P/BV is around 1.05-1.1 are all stellar. That kind of valuation is rare enough to be missed by the model entirely, depending on how the data are constructed.

When I look around the room I can find the most recently reported quarterly book value, monthly CPI value, and today’s stock price. That’s it. I, as an investor, will have to deal with stale data when faced with decisions and so will this model. I could certainly drop data or do some smoothing and short term forecasting of the inputs to generate prettier results but then I have an approximation of the historical reality driving the creation of a future prediction.

That’s all mungofitch - I’m just coattailing his work. His approach and methodologies for valuation are probably among the most interesting things I’ve read in my life, and I’ve developed a much greater appreciation of his work since I set off to re-create my understanding of his intent.


I was fascinated by it when mungofitch posted, and it’s great to see your follow-up and generous sharing of results.

Technical discussion aside for a moment, a general point that occurs to me is that the result seems to tie into ideas of ‘mechanical investing’, which has been struggling of late. The result demonstrates an empirical relation between being ‘cheap’ and future returns, at least for one company, i.e. BRK. Many mechanical value-investing approaches look to buy ‘good companies that are cheap’. Here, the ‘good’ part is pre-selected - it’s Berkshire, after all (admittedly, what ‘good’ precisely means is not clear, but clearly Buffett is good). What’s been shown for this pre-selected ‘good’ company is that buying (very) cheaply tends to have nice returns, while buying (very) expensively tends to have poor returns.

Anyway, one might conjecture, given the result, that the hard part of value investing, i.e. buying ‘good’ companies on the ‘cheap’, isn’t so much defining ‘cheap’ but defining ‘good’.


100% agree. That’s why I tend to stick to the simple companies doing simple / understandable things for customers with an extremely heavy slant toward companies making or selling physical products a person could touch and feel.

I’ll forever miss out on the sky-high returns of asset-light companies like Google and Saul stocks, but I’m a simple person needing simple investments.


I don’t have one offhand, but mungofitch mentioned it repeatedly on the old board. He apparently did regression of returns vs current book value and compared to regression of returns vs peak book value and found that peak book value was the better predictor.

As I remember, he explained this result as likely coming from the fact that current book deviates significantly from peak book primarily during heavy bear markets, when the stock portfolio takes a severe beating. But those are the same times that Buffett has been able to capitalize on good deals available in times of distress - e.g. the preferred stock deals, with oodles of warrants attached, in Goldman Sachs during the Great Recession and later in Occidental.

Also it presumes that the vast majority of Berkshire’s stocks and businesses are built for long term performance and won’t be significantly impaired long term by even a big economic dislocation.

So, when the stormy weather clears, Berkshire ends up in a better position than when the bear struck, and book value recovers nicely to its long-term trend. If you were relying on current book value instead of peak book value, you might not have recognized a great buying opportunity.