

Financial Architecture Series, Part 10: Model Subtleties
By Russell Goyder PhD | April 1, 2014

This is the third of three posts on the topic of Models. In the first, I argued that common usage of the word "model" is too narrow, and defined a Model as:

Model: The set of modelling assumptions required for a valuation.

In the second, I made this idea more concrete, describing what you should expect from a Model implementation and giving some example Models.

In this post, I'm going to throw some light on two dark and dusty corners of the Model concept.

A Model of Market Data

In my last post I explained that a Model contains a snapshot of market data. This may strike you as slightly weird. Perhaps you are thinking "curves and model parameters, that makes sense, but how can market data belong in a Model?". Market quotes certainly have to be involved at some point, because curves are built from them and model parameters are calibrated to them. But can't they be supplied as part of the calibration step? Do they have to live in the Model? Are we forced to put them in a Model merely as an engineering trade-off? In other words, even if market data must be part of a Model for practical reasons, does it belong there in principle?

Let's start with the practical reasons; we'll get to the philosophical ones soon. Remember that a Model manages the relationships between model parameters, curves and the market data to which they are calibrated, both to remain self-consistent even when things get complicated and to be computationally efficient. Market quotes are part of the dependency trees that encode such relationships (they are the leaves!), so anything that claims to manage those trees needs to know just as much about market data as it does about curves and parameters. That's practical reason 1.
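To make the dependency-tree idea concrete, here is a minimal sketch in Python. The classes and names are hypothetical illustrations, not FINCAD's API: quotes sit at the leaves, calibrated artifacts such as curves are internal nodes, and changing a quote marks everything downstream as needing recalibration.

```python
# Hypothetical sketch of the dependency tree a Model manages. Market quotes
# are the leaves; curves and model parameters are internal nodes built from
# them. When a quote ticks, its dependents are flagged for rebuilding.

class Node:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)   # nodes this one is built from
        self.children = []             # nodes built from this one
        for p in self.parents:
            p.children.append(self)
        self.dirty = True

    def invalidate(self):
        # Mark this node and everything downstream as needing recalibration.
        self.dirty = True
        for c in self.children:
            c.invalidate()

class Quote(Node):
    """A leaf of the tree: a single market quote."""
    def __init__(self, name, value):
        super().__init__(name)
        self.value = value
        self.dirty = False

    def set(self, value):
        self.value = value
        for c in self.children:
            c.invalidate()

# Leaves: two swap quotes. Internal node: a curve bootstrapped from them.
q2y = Quote("swap_2y", 0.010)
q5y = Quote("swap_5y", 0.012)
curve = Node("discount_curve", parents=[q2y, q5y])
curve.dirty = False            # pretend the curve has been bootstrapped

q5y.set(0.013)                 # a new 5y quote arrives ...
print(curve.dirty)             # ... and the curve knows it is stale: True
```

Because the Model holds the quotes themselves, it can propagate a change through exactly the artifacts that depend on it and no others.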

Practical reason 2 is, perhaps surprisingly, the computation of first-order risk. Any advanced analytics capability calculates sensitivities to market data analytically, without bumping. Without tracking the full relationship - every link in the chain - between a portfolio's value and the market, we'd have to resort to a finite-difference approach (bumping), which is approximate and many orders of magnitude slower.
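A toy example makes the contrast clear. This is an illustration only, not FINCAD's implementation: a present value depends on a zero rate, which in turn is built from a deposit quote, and the analytic sensitivity follows the quote-to-curve-to-price chain by the chain rule, while bumping reprices the whole chain numerically.

```python
import math

# Toy quote -> curve -> price chain. Analytic risk applies the chain rule
# once; bumping reprices everything per quote, and only approximately.

def zero_rate(quote):
    # "Bootstrap" step: here the 1y zero rate is simply the deposit quote.
    return quote

def pv(quote, notional=1e6, t=1.0):
    # Discounted value of a unit cash flow at time t.
    return notional * math.exp(-zero_rate(quote) * t)

q, h = 0.02, 1e-6

# Chain rule: dV/dq = dV/dr * dr/dq, with dr/dq = 1 here.
analytic = -1e6 * 1.0 * math.exp(-q * 1.0)

# Finite-difference (bump) approximation to the same sensitivity.
bumped = (pv(q + h) - pv(q)) / h

print(abs(analytic - bumped) < 1.0)   # True: they agree to within the bump error
```

In a real library the chain has many links (quote to curve node to discount factor to price), but the principle is the same, and the analytic route avoids one full revaluation per quote.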

Now the philosophy. When we say "the 5-year swap rate is 1.2%", what do we really mean? The reality is that, at any one time, various market makers are willing to enter vanilla interest rate swap agreements at different, but close, fixed rates. Each quotes one rate for paying fixed and another for receiving it. The reality is detailed and messy. Broker-dealers aggregate this information, market data vendors provide summary numbers, and at some point an analytics library somewhere is given a nice, clean, single number, "the quote", from which it can bootstrap a curve point.

Single numbers like this, whether bid, ask or mid, form an abstract, schematic, conceptual representation of a reality that is a lot more complex. In other words, such a number is, by definition, a model of activity in a market. In the strictest sense, "the 5-year swap rate is 1.2%" is a modeling assumption, and so deserves its place in a Model.

Generic Calibration

What is calibration? I suspect that most of us would say something like "adjusting curves or model parameters such that prices calculated by the model match market quotes, to some tolerance". If that were a sufficient definition, then we could move on and I wouldn't need to write this section.
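Here is that familiar sense of calibration as a minimal, dependency-free sketch: adjust one model parameter (a Black vol) until the model price matches a market quote, to some tolerance. The helper functions are illustrative; a production library would use a proper solver rather than plain bisection.

```python
import math

# Calibration in the familiar, model-to-market sense: find the Black vol
# at which the model price reproduces the observed option price.

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(fwd, strike, vol, t, df=1.0):
    d1 = (math.log(fwd / strike) + 0.5 * vol * vol * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return df * (fwd * norm_cdf(d1) - strike * norm_cdf(d2))

def calibrate_vol(market_price, fwd, strike, t, lo=1e-4, hi=5.0, tol=1e-10):
    # The Black price is increasing in vol, so bisection converges.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if black_call(fwd, strike, mid, t) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Pretend the market quote came from a 25% vol, then recover it.
quote = black_call(100.0, 105.0, 0.25, 2.0)
vol = calibrate_vol(quote, 100.0, 105.0, 2.0)
print(round(vol, 6))   # 0.25
```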

It turns out that this idea of calibration is too restrictive. Calibrating a model to market data is what we need most of the time, not all the time. To make our definition of calibration sufficiently generic, we need to adjust it to the following: "adjusting curves or model parameters such that prices calculated by that Model match those calculated in another Model, to some tolerance".

Let's call the "other Model" the source, and let's call the Model whose parameters are adjusted the target. For European options, the source Model would specify Black pricing, based on a market vol. The target would be our model of choice for fitting the smile. For bootstrapping discount curves, the source Model would be one in which each swap values to par by construction, and the target Model would contain discounting, the expected value of Libor and other rates, sufficient for simple curve-based valuation.
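The source/target split can be sketched directly. In this illustrative example (not FINCAD's API), the source Model prices a European option under Black with a lognormal vol, the target Model uses normal (Bachelier) dynamics, and calibration adjusts the target's vol until the two Models agree on price, to some tolerance.

```python
import math

# Generic calibration: adjust the target Model's parameter until its price
# matches the source Model's price. The source here is Black; the target
# is Bachelier (normal dynamics).

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def black_call(fwd, k, vol, t):
    d1 = (math.log(fwd / k) + 0.5 * vol * vol * t) / (vol * math.sqrt(t))
    return fwd * norm_cdf(d1) - k * norm_cdf(d1 - vol * math.sqrt(t))

def bachelier_call(fwd, k, vol, t):
    d = (fwd - k) / (vol * math.sqrt(t))
    return (fwd - k) * norm_cdf(d) + vol * math.sqrt(t) * norm_pdf(d)

def calibrate(source_price, target_price_fn, lo=1e-6, hi=1.0, tol=1e-12):
    # Adjust the target parameter until target price = source price.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if target_price_fn(mid) < source_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fwd, k, t = 0.03, 0.03, 1.0
source = black_call(fwd, k, 0.20, t)   # source Model: Black at 20% lognormal vol
normal_vol = calibrate(source, lambda v: bachelier_call(fwd, k, v, t))
print(abs(bachelier_call(fwd, k, normal_vol, t) - source) < 1e-10)   # True
```

Nothing in `calibrate` knows whether `source_price` came from a market quote or from another Model, which is exactly the generality the broadened definition buys us.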

Calibrating models to models is an incredibly useful capability. It matters less for canonical, production applications, but it is critical for the quant work that gets you to such applications. Examples include calibrating a term-structure model to a SABR vol-cube, and finding the LMM equivalent of a collection of low-dimensional short-rate models - studying the effect of ramping up short-rate vol, say.

So calibration requires two Models. But this goes against the definition of a Model as the central tracker and manager of the relationship between calibrated model parameters and market data. A single Model cannot track the relationship between artifacts that live in two different Models. How do we reconcile this?

We reconcile this conflict by making the following concession: a Model can hold the parameters of more than one model for the dynamics of the same underlying. In one Model, we can associate both Heston and stochastic-local volatility parameters with IBM, say. On the face of it, this seems like a backdoor that allows inconsistency to creep in, when self-consistency has been the very watchword of Models all along. In fact it's OK, as long as we are self-consistent within any one valuation.

The ingredients of a valuation are a Product (what is being valued), a Model (assumptions about the underlyings) and a Valuation Methodology (the numerical approach, or Method for short), and it is to this last ingredient we can turn for help. Typical Methods are closed-form, Monte Carlo, PDE, and characteristic-function-based backward propagation. Let's introduce to the Method a table with two columns: in the first column we put the underlying (IBM) and in the second our choice of dynamics (SLV). Then we can perform calibration in the same, single Model, by delegating to the Method the job of selecting which dynamics each of the source and target valuations uses.
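The resolution step can be sketched as follows. The classes are hypothetical, not FINCAD's API: the Model stores several parameter sets per underlying, and the Method's two-column table (underlying to dynamics) decides which set a given valuation sees.

```python
# Hypothetical sketch: a Model may hold parameters for several dynamics of
# the same underlying; the Method's table picks which one a valuation uses.

class Model:
    def __init__(self):
        # underlying -> {dynamics name -> parameters}
        self._params = {}

    def add(self, underlying, dynamics, params):
        self._params.setdefault(underlying, {})[dynamics] = params

    def parameters(self, underlying, dynamics):
        return self._params[underlying][dynamics]

class Method:
    def __init__(self, scheme, dynamics_table):
        self.scheme = scheme                    # e.g. "monte-carlo", "pde"
        self.dynamics_table = dynamics_table    # underlying -> dynamics name

    def resolve(self, model, underlying):
        # The Method, not the Model, selects the dynamics for this valuation.
        return model.parameters(underlying, self.dynamics_table[underlying])

model = Model()
model.add("IBM", "heston", {"kappa": 2.0, "theta": 0.04, "vol_of_vol": 0.5})
model.add("IBM", "slv", {"leverage": "surface", "mixing": 0.7})

# Source and target valuations share one Model but carry different Methods.
source_method = Method("closed-form", {"IBM": "heston"})
target_method = Method("monte-carlo", {"IBM": "slv"})

print(target_method.resolve(model, "IBM")["mixing"])   # 0.7
```

Within any single valuation only one row of the table applies per underlying, so self-consistency is preserved even though the Model holds both parameter sets.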

This means that a Model, in the pure sense of the idea described so far - a strictly unique set of modeling assumptions - only comes into being in light of a Method. In other words, a Model is allowed to bend the rules a little and hold multiple specifications of the dynamics of the same underlying, and that's OK as long as we never use more than one of them in the same valuation - and you can't have a valuation without a Method.

This, however, is a subtlety. It only matters when we step outside the canonical form of calibration, of model to market, into the sophisticated world of model to model. So we can relax, safe in the knowledge that everything I wrote about the Model concept holds true without modification, unless we are verging on rocket-science territory - that is, doing something so sophisticated that, if we're in a position to get it right, we're also in a position to appreciate the subtleties that go with the territory of generic calibration.

About the author
Russell Goyder PhD
Director of Quantitative Research and Development | FINCAD

Russell Goyder, PhD, is the Director of Quantitative Research and Development at FINCAD. Before joining FINCAD’s quant team in 2006, he worked as a consultant at The MathWorks, solving a wide range of problems in various industries, particularly in the financial industry. In his current role, Russell manages FINCAD’s quant team and oversees the delivery of analytics functionality in FINCAD’s products, from initial research to the deployment of production code. Russell holds a PhD in Physics from the University of Cambridge.