May 11, 2012

JP Morgan Loss Bomb Confirms That It’s Time to Kill VaR

One of the amusing bits of the hastily arranged JP Morgan conference call on its $2 billion and growing “hedge” losses and related first quarter earnings release was the way the heretofore loud and proud bank was revealed to have feet of clay on the risk management front. Jamie Dimon said that the bank had determined that its value at risk model was “inadequate” and it would be using an older model. And no wonder. The Financial Times report contained this bombshell:

JPMorgan also restated its “value at risk”, a measure of maximum possible daily losses, of the CIO [the unit that executed the trading strategy that blew up] in the first quarter from $67m to $129m

“Restating” greatly underplays the significance of what happened. VaR is a prospective risk metric. From ECONNED:

…the objective was to come up with a single figure that captured all the risks in a simple statistical fashion: what was the risk that the bank would lose a certain amount of money, specified to a threshold level of probability, in, say, the next 24 hours? The model output would say something like: “We have 95% odds of losing no more than $300 million dollars in the next 24 hours.”

It took seven years of refinements to reach that goal, which should have been seen as a warning that it might not be such a good idea.

While firms look at VaR over a range of time frames, daily VaR (what is the most I can expect to lose in the next 24 hours) to a 99% threshold is widely used.
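For readers who have never watched one of these numbers get produced, the sketch below shows a bare-bones one-day historical-simulation VaR calculation. Everything in it is illustrative: the P&L series is invented, and real desks layer far more machinery (and far more heroic assumptions) on top of this.

```python
import numpy as np

def historical_var(daily_pnl, confidence=0.99):
    """One-day historical-simulation VaR: the loss level that daily P&L
    is expected to breach only (1 - confidence) of the time."""
    losses = -np.asarray(daily_pnl)                # flip P&L into losses
    return np.percentile(losses, confidence * 100)

# Illustrative only: 500 days of hypothetical desk P&L in $ millions,
# drawn from a normal distribution (the very assumption criticized below).
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=40.0, size=500)

var_99 = historical_var(pnl)
print(f"1-day 99% VaR: ${var_99:.0f}m")
# Read as: "We have 99% odds of losing no more than this in the next 24 hours."
```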

So get this: VaR’s real use is prospective. The VaR for a big risk-taking unit was found to have been nearly double the level reported two weeks ago (hat tip Joe Costello). Remember, this was the risk incurred in the first quarter; this change has nothing to do with the losses incurred in the last six weeks. It means the risk originally reported by the folks in risk management (in real time, for use in management decisions) was grossly off.

The fact that VaR is a lousy metric should not come as a surprise. Anyone who has paid much attention to financial firm risk management should know that it is not what it is cracked up to be. There is a tremendous bias towards scientism, towards undue faith in quantification and statistics (see a longer-form discussion in “Management’s Great Addiction”), which leads to overconfidence. And when people are paid bonuses annually, with no clawbacks for losses, and banks show profits a fair bit of the time, who is going to question bad metrics when the insiders come out big winners regardless?

But VaR is a particularly troubling example, because it is so dangerously simple-minded that regulators and managers a step or two removed from markets have become overly attached to its deceptive simplicity.

For newbies to this site, JP Morgan created the widely used risk management tool Value at Risk (note to Felix Salmon: JP Morgan did NOT invent risk management; investment banks were doin’ it in the stone ages of the 1970s and 1980s. And the pioneer among banks wasn’t JP Morgan, but Bankers Trust, with its RAROC, or Risk-Adjusted Return on Capital, model). VaR set out to create a single risk measure across an entire firm. As we wrote in ECONNED:

….the objective was to come up with a single figure that captured all the risks in a simple statistical fashion: what was the risk that the bank would lose a certain amount of money, specified to a threshold level of probability, in, say, the next 24 hours? The model output would say something like: “We have 95% odds of losing no more than $300 million dollars in the next 24 hours.”

It took seven years of refinements to reach that goal, which should have been seen as a warning that it might not be such a good idea….

Using a single metric to sum up the behavior of complex phenomena is a dangerously misleading proposition…

The output formulation was designed around statistical convention, that of probability distributions. But the part of the distribution that the analysis cut off is the very part that will kill a leveraged firm. It was almost as if the team that produced VaR had drawn a map that simply marked the edge of the world with the legend “Beyond here lie dragons,” when the treasure seekers will inevitably venture into those uncharted waters.

That discussion actually understates how misleading VaR is. As mathematician Benoit Mandelbrot discovered in the 1960s, and Nassim Nicholas Taleb popularized in his book The Black Swan, risks in financial markets do not have normal (Gaussian) distributions. Taleb, in his article The Fourth Quadrant, pointed out that there are many situations where statistics are at best questionable and at worst unreliable: where you have non-Gaussian risk distributions (as you have in financial markets) and complex payoffs. Even if you have comparatively simple businesses, aggregating risk across businesses creates complex payoffs. And the risks in these businesses aren’t simple. Taleb’s indicative list of “very complex payoffs” includes:

Calibration of nonlinear models

Leveraged portfolios (around the loss point)

Derivative payoffs

Dynamically hedged portfolios

Kurtosis-based positioning (“volatility trading”)

JP Morgan and every big dealer bank are stuffed to the gills with risks like that.
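To make the non-Gaussian point concrete, here is a hedged sketch with made-up parameters: it compares the loss threshold a Gaussian model reports against what a fat-tailed Student-t distribution with the same volatility implies. The gap sits exactly in the part of the distribution VaR cuts off.

```python
import numpy as np
from scipy import stats

# Illustrative parameters only: daily P&L with mean zero and $40m volatility.
sigma = 40.0
df = 3                                     # a fat-tailed Student-t
t_scale = sigma / np.sqrt(df / (df - 2))   # rescale so the t variance also equals sigma**2

for confidence in (0.99, 0.999):
    q = 1 - confidence
    gaussian_var = -stats.norm.ppf(q, loc=0.0, scale=sigma)
    fat_tail_var = -stats.t.ppf(q, df, loc=0.0, scale=t_scale)
    print(f"{confidence:.1%} VaR   Gaussian: ${gaussian_var:.0f}m   fat-tailed: ${fat_tail_var:.0f}m")
# At 99% the two models roughly agree; at 99.9% (the region that kills a
# leveraged firm) the fat-tailed loss threshold is nearly twice the Gaussian one.
```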

Now VaR isn’t the only risk model JP Morgan uses, but it has served to let the inmates run the asylum. Dimon’s dwelling on VaR was likely not just an exercise in assigning blame; VaR is guaranteed to be a major tool in communicating risk to senior management and the board.

The good news is that the regulators seem to be a step ahead of Dimon in turning their backs on VaR. FT Alphaville last week reported on the latest missive from the Basel Committee on Banking Supervision on capital requirements for bank trading operations. The Committee said it doesn’t like VaR and wants to move to other metrics:

….the Committee has considered alternative risk metrics, in particular expected shortfall (ES). ES measures the riskiness of a position by considering both the size and the likelihood of losses above a certain confidence level. In other words, it is the expected value of those losses beyond a given confidence level. The Committee recognises that moving to ES could entail certain operational challenges; nonetheless it believes that these are outweighed by the benefits of replacing VaR with a measure that better captures tail risk.
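In plain code, the distinction the Committee is drawing looks roughly like the sketch below, run on a hypothetical fat-tailed P&L series; nothing here reflects the Basel Committee’s actual calculation rules.

```python
import numpy as np

def var_and_es(daily_pnl, confidence=0.99):
    """VaR: the loss level breached only (1 - confidence) of the time.
    Expected shortfall: the average loss on the days that level is breached."""
    losses = np.sort(-np.asarray(daily_pnl))       # losses, smallest to largest
    cutoff = int(np.ceil(confidence * len(losses)))
    var = losses[cutoff - 1]                       # the loss at the confidence quantile
    es = losses[cutoff:].mean()                    # average of the losses beyond it
    return var, es

# Illustrative only: 2,000 days of hypothetical fat-tailed P&L in $ millions.
rng = np.random.default_rng(0)
pnl = rng.standard_t(df=3, size=2000) * 25.0

var_99, es_99 = var_and_es(pnl)
print(f"99% VaR: ${var_99:.0f}m   99% ES: ${es_99:.0f}m")
# ES is always at least as large as VaR: it asks "how bad is it when the bad
# day arrives?" rather than "how often does a bad day arrive?"
```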

Note that this change will not win Taleb’s approval. He has also written about the difficulty of measuring tail risk, showing in many markets how tail risk estimates are often based (statistically) mainly on one or two data points, and how fraught that is. His main point still holds: the type of risks embodied in trading books aren’t suited to statistical measurement. The best approach is likely to be to use a variety of measures and models and (gasp) apply judgment. But the authorities, and Dimon along with them, have not given up their hunt for a philosopher’s stone to turn lead into gold.
