- After a steep rally at the beginning of the year, followed by weeks of market turmoil, some investors are increasingly concerned that we may have witnessed the formation of a US equity bubble, perhaps on the verge of bursting.
- But these fears are likely exaggerated. While valuations are indeed at extreme levels by historical standards, the macro fundamentals have shifted. And a new regime of lofty price multiples is consistent with compressed discount rates.
- We model an S&P 500 fair value range over time, and attribute price changes across four fundamental factors. We find that the nine-year bull run can be largely explained (up to 70%) by rising dividends and falling real yields. And, without a significant repricing higher of long-end real yields, elevated valuations could be here to stay.
With thanks to Kiran Kotecha for his research assistance and contribution.
The relentless bull run
Global equities have been in a tireless bull run since the depths of the GFC (Global Financial Crisis), with US stocks leading the charge. From the February 2009 trough, the S&P 500 index has more than quadrupled in value (Figure 1), marking its second-largest and second-longest bull market since records began. On a total return basis, an investment made at the trough would have yielded a staggering 380% in just over nine years, or an annualised return of 18.8%!
So how have such stellar returns persisted for so long? Is this the sign of a flourishing economy and corporate sector, or yet another equity bubble? Or, alternatively, are there other macro drivers at play?
Figure 1: Since the February 2009 trough, the S&P 500 has more than quadrupled in value.
Sources: Bloomberg, Record. Monthly data to 30/04/18.
As the bull run has extended further and further, concerns of a potential bubble have intensified, backed by a host of valuation metrics at extreme levels. The pessimists fear that the next wave of turmoil will bring stocks crashing down to more historically consistent valuation levels. But the optimists are not convinced that current valuations are unwarranted, pointing to a strong US and global recovery, or to ever-lower interest rates as a shift in fundamentals. Indeed, the US expansion has now endured for more than eight years, at an annual pace of 1-4% throughout. And, as the discount rate (US yields) is now lower, higher valuation levels could be intrinsically justified, rendering historical comparisons inappropriate.
In this blog, we attempt to dissect the nine-year equity rally, to better understand today’s elevated valuation levels and the extent to which they might be explained by long-term shifts in macro fundamentals.
First, what do the various equity valuation models tell us about current stock prices, and how do they measure up against historical levels?
Robert Shiller’s Cyclically-Adjusted Price-to-Earnings ratio (CAPE, a.k.a. Shiller P/E) is one of the most commonly used measures. It shows the price of an equity index, divided by its average inflation-adjusted earnings over the previous ten years – essentially a simple price-earnings ratio, but smoothing out the effect of the business cycle and better reflecting sustainable earnings. Based on CAPE, US equities currently trade at their most elevated level since 2001: 33 times cyclically-adjusted earnings (Figure 2). While still short of the 45x reached at the height of the dot-com bubble, this is still one of the three most overvalued episodes since data began in 1883, matching the 33x seen at the onset of the Great Depression in 1929 (Figure 3). Outside of the US, meanwhile, other G4 markets (Japan, UK and Europe) are valued at much more historically consistent levels.
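The CAPE calculation itself is simple; the sketch below is a minimal illustration (hypothetical monthly inputs, not the exact Barclays/Shiller methodology):

```python
import numpy as np

def cape(prices, earnings, cpi, window=120):
    """Cyclically-Adjusted P/E: latest price divided by the average of
    inflation-adjusted earnings over the trailing `window` months."""
    prices, earnings, cpi = map(np.asarray, (prices, earnings, cpi))
    # Restate each month's earnings in the latest month's price level
    real_earnings = earnings * cpi[-1] / cpi
    return prices[-1] / real_earnings[-window:].mean()
```

With constant real earnings of 10 and a latest index level of 330, the function returns the 33x multiple cited above.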
Figure 2: US CAPE most elevated since 2001… Figure 3: …the third-highest level since data began
Sources: Barclays, Shiller. Monthly data to 30/04/18.
As with any valuation measure, though, CAPE is not perfect. For one, it is a poor measure for valuing early-stage growth businesses that have not yet reached full earnings potential. It does not effectively account for cash generation, investment returns or capital structure variations, either. With no perfect model available, investment analysts monitor a host of other valuation measures too.
But in fact, the vast majority of popular valuation measures also point to an overvalued US market. EV-to-EBITDA (Enterprise Value to Earnings Before Interest, Tax, Depreciation and Amortisation), which better accounts for varying capital structures and cash generation levels, shows valuations approaching all-time record levels (EV/EBITDA, Figure 4). Price-to-Sales (P/S, Figure 5), which doesn’t rely on more cyclical earnings levels, also points to near-record valuations. Market-Cap-to-GDP (MC/GDP, Figure 6), likened to a P/S ratio for the entire country, tells a similar story.
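For reference, the three multiples are straightforward to compute; a minimal sketch with hypothetical aggregate inputs:

```python
def ev_to_ebitda(market_cap, net_debt, ebitda):
    """Enterprise value over EBITDA: adding net debt to equity value
    makes firms with different capital structures comparable."""
    return (market_cap + net_debt) / ebitda

def price_to_sales(market_cap, revenues):
    """Price over sales: sidesteps the cyclicality of reported earnings."""
    return market_cap / revenues

def market_cap_to_gdp(total_market_cap, nominal_gdp):
    """Aggregate market cap over GDP: a P/S ratio for the whole economy."""
    return total_market_cap / nominal_gdp
```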
Figure 4: EV/EBITDA approaching record valuation. Figure 5: P/S also near record levels.
Sources: Bloomberg, Record. Monthly data to 30/04/18.
Figure 6: MC/GDP suggests a similar story.
Sources: Macrobond, Federal Reserve, Record. Data to 31/12/2017, calculated using market capitalisations of the Tokyo stock exchange, FTSE All-share and Russell 3000 (prior to May 2000, Federal Reserve value of US nonfinancial corporate equities).
Other methodologies rely more directly on the replacement value of assets. Price-to-Book (P/B, Figure 7) and Tobin’s Q* (Figure 8) both suggest US valuations are at their highest since 2001, if somewhat short of dot-com-bubble levels.
Figure 7: US P/B at highest since 2001… Figure 8: …a similar story to Tobin’s Q* multiples.
Sources: Bloomberg, Federal Reserve, Record. Data to 30/04/18. *Tobin’s Q is the total market value of nonfinancial corporate equities divided by the total net worth of nonfinancial corporates.
Finally, there are at least a couple of well-known models suggesting that valuations are somewhat more reasonable. Price-to-Dividends (P/D) implies more historically consistent valuations across all G4 regions (Figure 9), and lower than most of the 1998-2007 period. Meanwhile, the Fed model actually suggests the S&P 500 may be considerably undervalued versus bonds (Figure 10).
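In its common formulation, the Fed model simply compares the index’s forward earnings yield with the 10-year nominal Treasury yield; a positive gap reads as equities being cheap relative to bonds. A minimal sketch with made-up inputs:

```python
def fed_model_gap(forward_eps, index_level, treasury_10y):
    """Fed model: forward earnings yield minus the 10y Treasury yield.
    Positive -> equities look undervalued versus bonds."""
    return forward_eps / index_level - treasury_10y

# Hypothetical inputs: $150 forward EPS, index at 2700, 10y yield at 3%
gap = fed_model_gap(150.0, 2700.0, 0.03)
```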
Figure 9: P/D historically consistent across G4. Figure 10: Fed Model suggests undervaluation.
Source: Bloomberg, Record. Monthly data to 30/04/18.
While the US is the only G4 market that appears overvalued across a broad and relatively consistent set of measures, there are still inherent dangers in relying on these measures alone. Regional stock markets carry their own nuances and idiosyncrasies (e.g. investor behaviour, sector weights, data and accounting inconsistencies), so direct comparisons are not always appropriate. For example, the US market’s higher concentration of technology and early-stage growth businesses, which have likely not yet reached full earnings potential, may be causing an unrepresentative skew towards higher US valuations. There are also conflicting valuation signals within the US market itself, which could be influenced by structural changes to the corporate landscape. P/D ratios, for instance, which appear to have diverged from other more lofty valuation multiples, could be suppressed by elevated payout ratios in recent years, due to a shortage of corporate investment opportunities.
An additional pitfall in the interpretation of valuations is the assumption that they should necessarily revert to a long-term historical average, ignoring any regime changes along the way (such as changes to the risk-free rate, and long-term expected growth rates). In much of the above data we observe lower average valuation levels prior to the early 90s, and considerably higher levels since. This might be entirely appropriate given the significant global changes witnessed in demographics, technology and macroeconomics (e.g. the internet, smart phones, big data, ageing populations, slowing productivity growth, low investment rates, savings gluts). It stands to reason that price multiples based on turnover and profitability should be highly dependent on the prevailing risk-free rate (the discount rate), expected earnings/dividend growth and the fair compensation for risk at any given time (ERP), all of which could be considerably affected by the above factors.
Ultimately no measure is faultless, and a clear fair value for any equity index is very difficult to infer, even with a combination of models. Moreover, the above metrics do not allow us to effectively attribute the equity bull run to the various possible macro drivers; lower risk-free yields (i.e. discount rates), inflation, growth prospects, or a repricing of stronger corporate earnings and dividends.
Dissecting the rally
To distinguish between the various forces at play we go back to the fundamentals. We simulate a fair value level for the S&P 500 index using the Gordon Growth Model (GGM), which rests on the basic supposition that the fair value of a stock must equal the NPV (Net Present Value) of its future expected dividends (Figure 11). With a constant discount rate r and a perpetual dividend growth rate g, this collapses to P = D1 / (r - g); if the supposition holds, the formula follows as a matter of arithmetic.
Too good to be true? Well, not quite. In order to be of any practical use, one must make a number of assumptions:
- First, the model assumes that market expectations of r and g are constant into perpetuity. While this is unlikely in practice, a close prediction of average expectations into perpetuity should remain informative. But P is extremely sensitive to r and g, so the accuracy of these estimates is critical.
- Second, an estimate of D1 (next year’s dividend) is necessary. This is less problematic, as total S&P 500 dividends rarely deviate significantly from one year to the next, so longer-term valuation trends are unlikely to be materially affected. To get around this entirely, we also convert our GGM output into a P/D fair value level (Figure 12), which is then directly comparable to the observed P/D ratio.
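Concretely, the GGM fair price is P = D1 / (r - g), and dividing through by D1 gives the fair P/D multiple of 1 / (r - g); a minimal sketch:

```python
def ggm_fair_price(d1, r, g):
    """Gordon Growth Model: NPV of a dividend stream starting at d1,
    growing at g forever, discounted at r (only defined for r > g)."""
    if r <= g:
        raise ValueError("GGM requires r > g")
    return d1 / (r - g)

def ggm_fair_pd(r, g):
    """Fair price-to-dividend multiple implied by the model."""
    return 1.0 / (r - g)
```

The sensitivity flagged above is easy to see: with D1 = 50, moving r - g from 4% to 3% lifts the fair price from 1,250 to about 1,667.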
Based on the below input variables, we calculate fair values going back to 1984. And then, due to the sensitivity to r and g, we infer a fair-value band, allowing for a 50bp margin of error for our r-g input.
- r = real risk-free rate (rf) + Equity Risk Premium (ERP)
- rf = 30y inflation-linked Treasury yield (TIPS, 6-month rolling average). Prior to the existence of the TIPS market, we proxy for real yields by applying HP filters to 30y nominal yields and y/y CPI inflation, and subtracting one from the other (1984-1997).
- ERP = An average ex-post ERP is taken from 1972-2018 data, and held constant for our simulation. 1972-2018 ERP calculated as rolling 10-year realised total equity returns (S&P 500) minus the prevailing 10-year Treasury yield at the beginning of the 10-year rolling period.
- g = Real potential output y/y growth (Congressional Budget Office), smoothed with HP filter.
- D1 = Latest S&P 500 annual dividend (Shiller), inflated by 1+g to proxy for T+1 forecast.
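Putting these inputs together, the fair-value band follows from widening the r - g spread by ±50bp; a sketch of the mechanics under the assumptions above (the constant ERP and the HP-filtered series are taken as given inputs):

```python
def fair_value_band(d1, rf, erp, g, margin=0.005):
    """GGM fair-value range: discount rate r = rf + ERP, with the
    r - g spread perturbed by +/- margin (50bp) for estimation error."""
    spread = (rf + erp) - g
    low = d1 / (spread + margin)   # wider spread -> lower fair price
    high = d1 / (spread - margin)  # narrower spread -> higher fair price
    return low, high
```

For example, with D1 = 50, rf = 2%, ERP = 4.5% and g = 2%, the band runs from 1,000 (spread of 5%) to 1,250 (spread of 4%).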
Based on macro fundamentals, US stocks are not overvalued
So what does our model tell us? At first glance, it appears that the S&P 500 has traded neatly within our simulated fair value range for most of the time since 2012 (Figures 13 and 14).
Before 2012 however, the index appears to have traded consistently above our implied fair value range, and there are a number of plausible explanations for this. Most obviously, the extreme overvaluation seen during the dot-com bubble period is not surprising. Our simulated model output relies on the assumption of a fixed Equity Risk Premium throughout, but it is broadly accepted that risk premia were severely compressed during this time (1995-2000). Aside from this period, overvaluations were less exaggerated, peaking in 1987 (just before Black Monday), the summer of 2007 (the height of the pre-GFC real estate bubble), and 2010-11 (when stocks had rebounded from GFC lows, but the risk-free rate had not yet fully adjusted to its new low regime).
Figure 13: S&P 500 within our fair value range… Figure 14: …both in terms of price and P/D multiple.
Sources: Shiller, Bloomberg, Macrobond, Congressional Budget Office, Record. Data to 25/05/18.
But there are other factors that could explain a potential underestimate of fair value in the first 20 years of the sample. The model relies heavily on its estimated inputs, so a small misestimate can significantly move our fair value range. In Figure 14 we strip out the effects of D1, so that any deviations from fair value must be driven exclusively by rf, ERP or g, which leaves three plausible explanations. One is that the ERP was significantly compressed for the best part of two decades. The second is that, for practical purposes, the risk-free opportunity cost expected by investors during this period was actually materially lower than the risk-free rate in our model (30-year real yields) suggests; but with such a liquid Treasury market in the US, this is difficult to believe. The third is that the market’s expectation of long-run dividend growth was considerably higher than our model estimate. This seems particularly plausible given that the relationship between dividend growth and GDP growth in the US has evolved over time, so one may not always be an adequate proxy for the other. During the dot-com bubble in particular, dividend growth expectations would have been biased considerably higher by the low starting point of tech-stock payout ratios, and by the expectation of a steep rise in the future.
Ultimately, as with any other measure, we cannot fully rely on our model for a perfect fair value estimate at any given point in time, partly due to input estimation challenges. But it does provide a useful alternative indicator of fair value changes based on prevailing macro regimes.
Real yields and dividends could explain most of the nine-year rally
While our estimated fair value range can provide a useful additional macro anchor to valuations, the model is most effective in helping attribute medium-term changes in valuations to shifts in individual macro inputs. Ultimately, any stock market rally should be driven by one of rf, ERP, g or D1. By varying rf, g and D1 one at a time, while holding the others constant, we can estimate the magnitude of their corresponding expected effects on P for any given period. While we cannot run the same analysis with ERP due to the inherent difficulties in observing it¹, we can attribute any residual changes in price to some form of market risk premia.
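The one-at-a-time attribution can be sketched as follows (illustrative GGM mechanics with made-up inputs, not the article’s actual data): each input is moved to its end-of-period value while the others stay at start-of-period values, and whatever the three effects fail to explain is booked as a risk-premia residual:

```python
def attribute_move(d_start, r_start, g_start, d_end, r_end, g_end):
    """Decompose a price change into D1, discount-rate and growth
    effects (varied one at a time); the residual -> risk premia."""
    price = lambda d, r, g: d / (r - g)
    p0 = price(d_start, r_start, g_start)
    total = price(d_end, r_end, g_end) - p0
    effects = {
        "dividends": price(d_end, r_start, g_start) - p0,
        "real_yield": price(d_start, r_end, g_start) - p0,
        "growth": price(d_start, r_start, g_end) - p0,
    }
    effects["risk_premia_residual"] = total - sum(effects.values())
    return effects
```

Note the effects are not strictly additive (the GGM is non-linear in its inputs), which is another reason to treat the residual as only an approximate measure of risk premia.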
So what can we learn? Figure 15 shows our attribution of some of the key medium-term moves over the past 30 years. First of all, residual changes in price unexplained by our model (i.e. risk premia) play a significant part in all four of the periods we measured. In 1984-2000, an acute compression of risk premia appears to have been the dominant driver of a steep increase in stock prices. During 2000-07, a rebound in risk premia was enough to offset the effects of higher dividends and lower yields, preventing pre-GFC equity valuations from materially overshooting dot-com levels. And during 2007-09, risk premia once again re-rated higher, encapsulating a period of risk-off sentiment, to account for the equity sell-off.
But the most recent rally, from the depths of the GFC, appears to be far more complex, with a concoction of factors combining to continually drive stock prices higher. Risk premia have once again reversed from 2009 risk-aversion but, in addition, D1 and rf have also had outsized effects. Dividends have roughly doubled since the trough and real 30-year yields have declined by some 1.5%. Combined, these two factors can explain up to 70% of the S&P 500’s nine-year rally.
Figure 15: Lower real yields and higher dividends can explain up to 70% of the nine-year rally.
Sources: Shiller, Bloomberg, Macrobond, Record. Data to 25/05/18.
The long and short of it
So the stellar equity returns of the past nine years are not as surprising as they may first seem. They have likely been driven by a perfect combination of lower real yields, stronger corporate earnings and dividends, and improved risk appetite. This fundamental underpinning behind the bull market implies that stock prices may not be ‘expensive’ at all, given the contemporary macro environment. And there is certainly not enough evidence to fully substantiate claims of an asset ‘bubble’. US stocks might look historically expensive, but history is not necessarily indicative of future results.
¹ ERP is kept constant due to the inherent difficulties in estimating it. There is no single objective best-practice method of measuring ERP, making it the hardest input variable to rely on. Moreover, any ERP estimate also relies on some of the other input variables in our model, introducing an element of unwanted circularity.
By keeping ERP constant, our model provides a long-term fair value range estimate, driven only by long-term macro drivers (rf and g). Equity markets will naturally oscillate around long-term fair value due to short-to-medium-run cyclical movements in the ERP, but our model offers a longer-run estimate to which equities should theoretically revert.