Since the Lehman credit default swaps settled without the sky falling, there has been a small wavelet of support for the once-obscure financial instruments that are widely blamed for amplifying the effects of the financial crisis, including a Forbes.com op-ed entitled “Credit Default Swaps Are Good for You.” I happen to agree that CDS can play a useful role in enabling bond investors to hedge against the risk of default, and thereby make it easier for some institutions to get credit. But it’s a bit premature to proclaim that all is well and good in swapland.
Most obviously, there is the troubling matter of AIG, which has recently received additional scrutiny from the likes of the New York Times and the Wall Street Journal (subscription required). AIG has already burned through most of its initial $85 billion loan from the government, has drawn down half of a separate $38 billion loan for its securities lending business, and recently got permission to sell up to $20 billion of commercial paper to the Fed. (And remember, when negotiations over the AIG bailout began around September 12, the company was saying it only needed $20 billion.)
Most of the cash has gone to post collateral for CDS deals in which AIG was guaranteeing various bonds against default. As the risk of default goes up, counterparties demand collateral (cash, or cash-like securities); the amount of collateral they want goes up with the likelihood that AIG would have to pay out on a default. (The WSJ article sheds some light on the negotiations that other banks had over collateral; Goldman Sachs, when it couldn’t get as much collateral as it wanted, hedged itself by buying insurance on AIG’s debt, which is a clever move I wouldn’t have thought of.) If AIG hadn’t been bailed out, its counterparties would be looking at tens of billions of dollars in losses in the form of write-downs on their CDS portfolios, because a bankrupt AIG could not be counted on to pay off on those contracts. Not knowing who was bearing those losses would have increased the fear that for several weeks was paralyzing the credit markets. So arguably, the potential damage of CDS was only contained precisely because the government elected to bail out AIG.
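The collateral mechanism can be sketched in a few lines. This is a toy schedule, not AIG’s actual margin terms (which were negotiated contract by contract); the 40% recovery rate and every dollar figure here are invented for illustration only.

```python
# Toy collateral schedule: the counterparty asks for cash roughly
# proportional to the expected payout on the protection it bought.
# All numbers are invented; real margin terms are contractual.

def collateral_demand(notional: float, default_prob: float,
                      recovery: float = 0.4) -> float:
    """Cash a counterparty might demand as perceived default risk rises.

    Expected payout on the protection = notional * loss-given-default
    * probability of default, so collateral calls track that probability.
    """
    return notional * (1 - recovery) * default_prob

# Guaranteeing $10bn of bonds: if the perceived annual default
# probability moves from 2% to 20%, the collateral call grows tenfold.
calm = collateral_demand(10e9, 0.02)    # about $120 million
stress = collateral_demand(10e9, 0.20)  # about $1.2 billion
```

The point of the sketch is only the shape of the function: collateral demanded is monotone in perceived default risk, so a jump in that risk turns into an immediate cash drain on the protection seller.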
Now, how did the brilliant minds at AIG Financial Products – and they are, or were, brilliant – get into this situation? Like every other financial institution in these markets, they were using models – models, in this case, that estimated the probability of default on the various bonds AIG was insuring by “selling” credit default swaps. The WSJ article says that AIG was (a) using default-prediction models to determine the likelihood that it would ever have to pay out on credit default swaps, but did not have models (until it was too late) for two other risks: (b) the risk that increasing probability of default (as reflected in CDS spreads) would trigger collateral calls by counterparties, and (c) the risk that increasing probability of default would show up as write-downs on AIG’s balance sheet.
I don’t buy this distinction. Risks (b) and (c) occur precisely because the underlying bonds are becoming more likely to default. In order to distinguish risk (a) from risks (b) and (c), you have to have a theory that (1) the probability of default of the underlying bonds is separate from (2) changes in prices of the credit default swaps on those bonds – but (2) is nothing more than the market’s assessment of (1). This amounts to saying that your default-prediction model is right and the market is wrong, even when the market is composed of other banks with similar models; that’s not an argument you’re likely to win.
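The claim that (2) is just the market’s assessment of (1) can be made concrete with the standard back-of-the-envelope rule relating a CDS spread to an implied default probability. This is a rough approximation, not how dealers actually price (real pricing uses full hazard-rate curves), and the 40% recovery rate is an assumed convention:

```python
# Rough rule of thumb: spread ≈ (1 - recovery) * annual default
# probability, so the implied probability is just the spread rescaled.
# A crude approximation for illustration, not a pricing model.

def implied_default_prob(spread_bps: float, recovery: float = 0.4) -> float:
    """Annual default probability implied by a CDS spread, crudely."""
    return (spread_bps / 10_000) / (1 - recovery)

# A spread widening from 100bp to 500bp means the market's implied
# default probability has quintupled -- which is why rising spreads,
# collateral calls, and write-downs are all the same underlying risk.
p_calm = implied_default_prob(100)    # roughly 1.7% per year
p_stress = implied_default_prob(500)  # roughly 8.3% per year
```

Under this reading, saying “our default model is fine, it’s only the spreads that moved” amounts to saying the left-hand side of this equation is right while the right-hand side is wrong.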
More fundamentally, there is a question about how valid even the best of these models are. In the last two decades, a new discipline of risk management has been developed in the financial sector. The basic approach is to estimate the variance of the values of the different assets that make up a portfolio, and the variance of the events that can affect the values of those assets, taking into account the correlations between all of these values and events (that is, the chances of GM defaulting and Ford defaulting are not independent events). Once you’ve done that, you can estimate the likelihood of your portfolio losing X% of its value; if you don’t like the answer you get, you can use hedging strategies to reduce that likelihood. (This movement toward risk management modeling was so successful that the 2004 Basel II Accord recommended that banks be allowed to use their internal models in determining their own capital requirements.)
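The approach described above is, at its simplest, a variance-covariance value-at-risk calculation. Here is a minimal sketch under an assumption of jointly normal returns; the positions, volatilities, and correlation are invented, and real bank models cover thousands of positions and risk factors:

```python
# Minimal variance-covariance VaR: portfolio variance from positions,
# volatilities, and a correlation matrix, then a normal quantile.
# Illustrative only; assumes jointly normal returns.
import math
from statistics import NormalDist

def portfolio_var(values, vols, corr, confidence=0.99):
    """One-period value-at-risk of a portfolio under joint normality.

    values: dollar position in each asset
    vols:   per-period return volatility of each asset
    corr:   corr[i][j] = correlation between assets i and j
    """
    n = len(values)
    variance = sum(
        values[i] * vols[i] * corr[i][j] * vols[j] * values[j]
        for i in range(n) for j in range(n)
    )
    z = NormalDist().inv_cdf(confidence)  # e.g. ~2.33 at 99%
    return z * math.sqrt(variance)

# Two $100 bond positions, 2% volatility each. If GM and Ford defaults
# were independent (correlation 0), risk diversifies away; at
# correlation 0.8, it mostly doesn't.
var_indep = portfolio_var([100, 100], [0.02, 0.02], [[1, 0.0], [0.0, 1]])
var_corr = portfolio_var([100, 100], [0.02, 0.02], [[1, 0.8], [0.8, 1]])
```

Everything in the output hinges on the volatility and correlation inputs, which is exactly where the next paragraph’s problem bites.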
The problem is that, in general (most of these models are proprietary secrets, so I can’t speak with complete confidence), these models are fed by historical data – because, by definition, that’s the only data you have. So estimates of price volatility or of other events are based on past experience – experience that may only cover a very short period of time, especially where new and complex financial instruments are concerned. More importantly, even a long period of time is not relevant if there is a fundamental difference between the period your data is from and the current moment. To sum this up: Let’s say housing prices have never declined by 30%. You can’t assume they won’t fall by 30% in the future, for two reasons. First, it could be that they only fall by 30% every 100 years, and you only have 50 years’ worth of data. Second, it could be that in the past housing prices couldn’t fall by 30%, but the world has changed in a significant way, and now housing prices can fall by 30%.
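The first reason above is just arithmetic, and it is worth doing: even granting (my assumption, for illustration) a 1% annual chance of a 30% housing decline, a 50-year sample will most likely contain no such event at all, so a model fit to that sample will call the crash impossible.

```python
# Chance that a once-a-century event never shows up in 50 years of
# data, assuming (illustratively) independent years and a 1% annual
# probability of the event.

p_annual = 0.01      # assumed annual chance of a 30% housing decline
years_of_data = 50   # length of the historical sample

p_never_observed = (1 - p_annual) ** years_of_data
# (1 - 0.01) ** 50 is about 0.605: a better-than-even chance that the
# sample contains zero crashes, and therefore that the fitted model
# assigns the crash a probability of zero.
```

And this is the benign case; the second reason (the world has changed, so the old data no longer applies) can’t be fixed by any amount of history.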
As a result, early in the crisis (back in 2007), you would hear people saying that they were seeing “six-standard-deviation” events, or events that should only happen every hundred thousand years. This is just a silly thing to say. As a statistical matter, if your model says that some event was virtually impossible, it is generally more likely that you made a mistake than that an extremely unlikely event occurred.
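To see why it’s silly, put a number on it. Assuming normally distributed daily moves and 252 trading days a year (both assumptions mine, for the sake of the calculation), a six-standard-deviation day should essentially never be observed in a human lifetime:

```python
# If daily moves really were normal, how often would a move worse than
# six standard deviations occur? Assumes 252 trading days per year.
from statistics import NormalDist

p = NormalDist().cdf(-6)    # one-sided tail probability of a 6-sigma day
years_between = 1 / (p * 252)  # expected years between such days
# p is on the order of 1e-9, so years_between is in the millions --
# seeing several "6-sigma" days in one year is evidence the normality
# assumption is wrong, not evidence of cosmic bad luck.
```

In Bayesian terms: when the data and the model disagree this badly, update the model, not your estimate of your luck.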
In any case, the scale of the losses that have occurred in the last year and a half, and the pronounced failure of every financial institution to anticipate them – see the successive earnings calls of every large US bank in 2007 – are as good proof as we will ever find that their risk management models simply didn’t work. If something called a “risk management” model doesn’t work under the most extreme conditions, what’s the point of having it?