Risk Metrics for Decision Makers – Part 2
Unfortunately, all too often conversations about risk management start with a discussion of off-the-shelf risk metrics, the definition of value at risk (VaR), or some other arcane aspect of risk methodologies and calculations. Risk management professionals feel it is their responsibility to educate the business to think about risk and about how risk measures are calculated. Since its introduction in the mid-1990s, VaR has become the standard-bearer of risk metrics. However, VaR by its textbook definition isn’t suitable for all circumstances.
The textbook definition of VaR misses the intent of the concept. As a member of the JP Morgan team that introduced RiskMetrics and VaR to the world, I have had many lively conversations with those who view VaR as having limited application to business activities off the trading floor.
My counterpoint has always been that it was introduced to address a much broader business challenge than what occurs on a trading floor. The primary objective behind the RiskMetrics concept was to provide a mechanism for introducing risk thinking into the vernacular of companies, whether they were actively trading in the markets or not. The goal was to encourage the mindset that a range of outcomes is possible, and that we can use rigorous VaR-like analysis to determine that range and improve decision making.
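For readers who want to see the "range of outcomes" idea made concrete, the sketch below estimates a one-day 95% VaR by historical simulation. The P&L history, the distribution, and all parameters are invented for illustration; this is a toy, not the RiskMetrics methodology itself.

```python
# Toy illustration of "VaR think": build a range of outcomes from a
# sample of past daily P&L changes, then read off the loss level that
# is exceeded only 5% of the time (the one-day 95% VaR).
# The data below is simulated purely for illustration.
import random

random.seed(42)
# Hypothetical daily P&L history (in $ thousands) over 500 past days.
pnl_history = [random.gauss(0, 100) for _ in range(500)]

def historical_var(pnl, confidence=0.95):
    """One-day VaR by historical simulation: sort losses and take
    the percentile corresponding to the chosen confidence level."""
    losses = sorted(-x for x in pnl)           # losses as positive numbers
    idx = int(confidence * len(losses))        # index of the 95th percentile
    return losses[min(idx, len(losses) - 1)]

var_95 = historical_var(pnl_history, 0.95)
print(f"1-day 95% VaR: ${var_95:,.0f}k")
```

The single number matters less than the exercise: management sees that outcomes form a distribution, and can ask what drives its tail.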
The objective of a risk professional should not be to introduce a new risk management vocabulary to business managers, but to apply risk management principles so they can make better decisions using the performance metrics they already think about and measure performance against. As an example, I have worked with many regulated and municipally owned utilities where a common concern is the impact on rates to their customers or constituents. Their goal is to provide the lowest possible rates with stability. While the traditional VaR measure might be a useful metric to control trading and hedging activity, it should also be applied in a way that provides insight into how trading activities and hedging decisions impact rates. In this instance, a metric that reflects the impact of decisions on rates, e.g., rate at risk, would be more useful.
Less is More: Keeping Risk Relevant
Lord Kelvin, in addition to being the father of the Kelvin scale of temperature, once said, “When you can measure what you are speaking about, and express it in numbers, you know something about it; when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.” In other words, you can’t manage what you can’t measure.
For the most part, risk managers have been very effective at measuring things. However, to add to Lord Kelvin, I would say that sometimes you can’t manage things when you over-measure them. I have seen many an Enterprise Risk Management (ERM) program that prides itself on the dozens, if not more, of metrics and ways it measures risk. I do not dispute the benefits of detailed metrics that help business line managers understand and manage the issues they encounter day to day. What is required to manage certain risks, e.g., operational or regulatory, at a granular level can be quite different from what is needed to aggregate risk at the enterprise level. However, without measures that are meaningful across business activities, the enterprise risk program is lacking.
In some cases, too much information, even when managing a specific risk in a business, can produce the same inability to provide basic insight. As an example, back in my banking days at JP Morgan, the bank was transitioning from a commercial bank that made a lot of loans to an investment bank that did a lot of trading. This transition made it obvious that thinking about credit exposure needed to evolve to reflect the new dynamics of the business. Initially, the derivative business was reporting credit exposure on a notional basis, bucketed into various maturities, as was the norm in the lending business. It became very apparent that this left us unable to answer some basic questions posed by senior management regarding credit exposures.
The fusillade of metrics grouped into numerous maturity buckets across different businesses could not provide clear answers to basic management questions such as: where are our largest counterparty exposures, or has our exposure to a particular customer or segment increased or decreased? The solution to this conundrum was the creation of a VaR-like measure, which we initially called Peak Loan Equivalent (PLE), to calculate exposure for the derivative business in a way that was compatible with the traditional notional lending exposure measures. This metric eventually became popularized within the industry as Potential Future Exposure (PFE), and it provided management with clear answers to those basic inquiries.
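To make the PFE idea concrete, here is a minimal sketch of the standard construction: simulate future mark-to-market paths for a derivative, keep only positive values (what the counterparty would owe us if it defaulted), and take a high percentile of exposure at each future date. The random-walk model, the parameters, and the function names are illustrative assumptions, not the original PLE methodology described above.

```python
# Hedged sketch of Potential Future Exposure (PFE).
# A derivative's value can move for or against us over time; credit
# exposure is only the positive part. PFE summarizes the simulated
# exposure distribution as a high percentile per future date, and the
# peak over dates gives one loan-equivalent number per trade.
import random

random.seed(7)

def simulate_exposure_paths(n_paths=2000, n_steps=12, vol=0.5):
    """Illustrative random-walk mark-to-market paths starting at zero."""
    paths = []
    for _ in range(n_paths):
        value, path = 0.0, []
        for _ in range(n_steps):
            value += random.gauss(0, vol)
            path.append(max(value, 0.0))   # exposure = positive value only
        paths.append(path)
    return paths

def pfe_profile(paths, quantile=0.95):
    """95th-percentile exposure at each future time step."""
    profile = []
    for t in range(len(paths[0])):
        exposures = sorted(p[t] for p in paths)
        profile.append(exposures[int(quantile * len(exposures)) - 1])
    return profile

profile = pfe_profile(simulate_exposure_paths())
peak = max(profile)   # the "peak loan equivalent" for this trade
print(f"Peak PFE: {peak:.2f}")
```

The point of the construction is comparability: the peak number can sit alongside a loan's notional in the same counterparty report, which is exactly the question the maturity buckets could not answer.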
Understanding What’s Behind the Numbers
One of the big rubs regarding risk metrics and their associated calculations has been the view that the methodologies are complex and that, while they provide information, it isn’t necessary to fully understand what goes into them. This attitude leads to two potentially bad outcomes: either risk measurement is rejected because of its complexity, or the numbers are relied on unquestioningly, without transparency into their vulnerabilities. I have argued in the past that the latter was a prime cause of the 2008 financial crisis and the collapse of the CDO and credit markets.
For any risk metric to be most useful, it is critical to understand the assumptions behind that number and its inherent weaknesses. This transparency enables management to apply experience and judgment to make informed decisions based on the available information. It should never reduce to a mechanical rule: if the model output is 10 we do A, whereas if it is 9 we do B. Abject reliance on a metric for decision making is probably worse than having no metric at all. In the development of any ERM program and a subsequent risk-based budgeting approach, the use of a limited set of business-meaningful metrics is critical to the program’s success. The metrics used can be based on a “VaR think” stochastic methodology where applicable, but need not be. They should not be specific to any particular business activity, but applicable across all business activities and risks. Finally, they should provide insight into what the range of outcomes might be and how performance against those potential outcomes can be measured, allowing management to make more informed decisions.
Bob Young, Partner, TechCXO, New York
Bob Young is a New York-based partner at TechCXO, and leads TechCXO’s risk management practice.