Traditional models of risk management are based on determining the risk profile of a given population by understanding the statistical characteristics of a sample. For centuries, we have focused on improving the performance of our models by adding new variables, studying correlations, and expanding sample sizes to reach higher confidence levels. Uncertainty, a function of our ability to process and comprehend information, bound us to large, monolithic, compounded models. And while those models have served us well, we are now entering a phase in which the fundamental paradigm of risk management must change. This paradigm change is the result of a recent and revolutionary change in our information processing ability, one that now enables us to manage risk at an atomic level within a given population. In effect, it enhances our ability to manage risk from two directions: top-down (the traditional approach) and bottom-up (the new approach). We call it mass customization of risk management, and this revolutionary change is being enabled by Big Data.
Big Data and Risk Management
Let us first understand what big data is. Big data is: accessing and processing massive amounts of information to develop new, insightful abstractions that give a competitive advantage to an entity.
Due to various interdependent advances in technology, human civilization suddenly finds itself at a point where its ability to process and comprehend information has been greatly expanded. This was not an evolutionary, step-wise change in the otherwise normal progression of information technology. It is a revolutionary change that can be considered an inflexion point, one that will fundamentally redefine how we manage business. And this revolutionary change is now impacting how we manage risk.
The key issue is that as we develop our models, we try to simulate the behavior of the population as a whole, not necessarily the behavior of its atomic components. What we get is a snapshot of the composite behavior of the population. When we manage risk at this compounded level, we run into many landmines. We encounter modeling errors when the representative world in our models loses its connection to the actual world. We fail to capture all, or even most, of the critical variables. We work with limited data and are sometimes unable to establish statistically valid correlations. In many cases, the communication channel between populations and our models gets clogged, resulting in an unfavorable signal-to-noise ratio. Decision latency follows, which often hampers our ability to act preemptively on the information. Finally, it sometimes forces us into undesirable risk zones (e.g., black swan events, fat tails, or outliers) that catch us by surprise.
These problems make our well-thought-out risk management strategies less efficient and less cost effective. The mismatch between a composite model and actual reality implies a “leap of statistics,” and it is across that gap that the excellence of our models is defined.
Atomic Level Mass Customization
Going to a more granular level of the population can alleviate some of these problems – and going to the atomic level can change the paradigm completely.
Progressive Insurance offers its customers the option to install a device in their cars. It uses that device to capture each customer's driving data at an atomic level. The insurance company can now collect data at the individual level, model the behavior of each customer, develop a customized package for each customer, and build a far more granular risk management strategy. This is an example of mass customization of risk management.
Notice the difference. Instead of operating at a 30,000-foot level and developing a model that captures only certain features of the population, Progressive can penetrate to the atomic level of the population and close the uncertainty gap by modeling risk at the level of individual behavior.
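To make the idea concrete, the individual-level approach can be sketched as a toy model. The field names, weights, and pricing band below are hypothetical illustrations, not Progressive's actual model:

```python
# Toy sketch of per-driver telematics scoring (illustrative only;
# fields and weights are hypothetical, not any insurer's real model).

def driver_risk_score(trips):
    """Score one driver from a list of trip records (dicts)."""
    miles = sum(t["miles"] for t in trips)
    if miles == 0:
        return 0.0
    hard_brakes = sum(t["hard_brakes"] for t in trips)
    night_miles = sum(t["miles"] for t in trips if t["night"])
    # Hard-brake events per 100 miles and share of night driving,
    # combined with assumed weights into a 0-1 risk score.
    brake_rate = hard_brakes / miles * 100
    night_share = night_miles / miles
    return min(1.0, 0.1 * brake_rate + 0.5 * night_share)

def customized_premium(base_premium, trips):
    """Adjust the base premium by the individual's score (+/-20% band)."""
    return base_premium * (0.8 + 0.4 * driver_risk_score(trips))

trips = [
    {"miles": 120, "hard_brakes": 2, "night": False},
    {"miles": 30, "hard_brakes": 1, "night": True},
]
print(round(customized_premium(1000.0, trips), 2))  # prints 920.0
```

The point of the sketch is the shift in granularity: the premium is a function of this driver's observed behavior, not of the sample statistics of a population the driver happens to resemble.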
Example: Applications in the Mortgage Finance Industry
We have developed proprietary (patent pending) models for managing risk in the mortgage finance industry. Our solutions focus on the prepayment, interest rate, credit, and counterparty risk management practice areas.
For example, in prepayment risk management, our models enable a company to study risk at the individual borrower level. By gathering critical information about the life, communications, and decisions of an individual, we enable our clients to tie a risk management strategy to each individual borrower. Our model enables mass customization of risk management tailored to the borrower level.
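The shape of borrower-level scoring can be illustrated with a minimal sketch. The features and coefficients below are assumptions chosen for illustration; they are not the proprietary model described above:

```python
import math

# Minimal sketch of borrower-level prepayment scoring (illustrative only;
# features and coefficients are hypothetical, not the proprietary model).

def prepay_probability(rate_incentive, months_on_book, recently_listed_home):
    """Logistic score: probability this borrower prepays next period.

    rate_incentive: borrower's note rate minus prevailing market rate
    (percentage points); months_on_book captures loan seasoning; the
    listing flag stands in for an individual behavioral signal.
    """
    z = (-3.0
         + 1.2 * rate_incentive                        # refi incentive
         + 0.01 * months_on_book                       # seasoning (assumed)
         + 1.5 * (1 if recently_listed_home else 0))   # behavioral signal
    return 1 / (1 + math.exp(-z))

# Two borrowers with identical loans can carry very different risk
# once individual behavior (a home listing) enters the model.
for name, inc, mob, listed in [("A", 2.0, 36, False), ("B", 2.0, 36, True)]:
    print(name, round(prepay_probability(inc, mob, listed), 3))
```

A pooled model would assign both borrowers the same score; the borrower-level model separates them on an individual signal, which is exactly the customization the text describes.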
Another example of our proprietary modeling is in the domain of counterparty risk management. The problem with counterparty management is that by the time you find out that something bad has happened to a given counterparty, it is often too late to do anything; the information has spread and closed the arbitrage opportunity. In our counterparty risk management (CPRM) model, we acquire, process, and manage critical information about counterparties. This information is obtained from non-traditional sources and is processed using advanced analytical techniques (e.g., semantic analysis). Our model then builds upon these signals and systematically assigns values to develop a risk profile for each counterparty.
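The signal-to-profile step can be sketched in simplified form. The naive keyword matcher below stands in for semantic analysis, and the term lists and scoring are invented for illustration; a production system would use real NLP models and many more sources:

```python
# Illustrative sketch only: a naive keyword-based "semantic" scorer for
# counterparty monitoring. Term lists and weights are hypothetical.

NEGATIVE = {"default", "lawsuit", "downgrade", "layoffs", "fraud"}
POSITIVE = {"upgrade", "profit", "expansion", "hiring"}

def score_item(text):
    """Score one news item: negative terms raise risk, positive lower it."""
    words = set(text.lower().split())
    return len(words & NEGATIVE) - len(words & POSITIVE)

def risk_profile(items_by_counterparty):
    """Aggregate per-item scores into one risk value per counterparty."""
    return {cp: sum(score_item(t) for t in texts)
            for cp, texts in items_by_counterparty.items()}

news = {
    "AcmeBank": ["regulator announces downgrade after lawsuit"],
    "BetaCorp": ["profit up on expansion"],
}
print(risk_profile(news))  # {'AcmeBank': 2, 'BetaCorp': -2}
```

The design point is timeliness: by scoring raw, non-traditional text as it arrives, a counterparty's profile moves before the bad news is fully priced in through traditional channels.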
Mass Customization Is Not Replacing Existing Models
It is important to understand that, at this juncture, mass customization of risk management is being used as an additional barometer; it is not replacing current models. This additional barometer enables companies to develop a new approach to risk management.