

Financial Architecture Series, Part 9: What's in a Model?
By Russell Goyder PhD | December 12, 2013

In my last post, I asserted that common usage of the term model, in the context of derivative pricing, is too narrow and should be widened to include the full set of modelling assumptions relevant to a given valuation. When using the word in this sense, I will write Model with a capital M.

So what's in a Model? How do we interact with it? Let's start by getting something out of the way - some important information that a Model does not supply.

Monte Carlo?

A Model doesn't say anything about the valuation methodology. If we are valuing a European call, we probably want to compute the price in closed-form. If we are pricing an Asian option on IBM, we probably want to run a Monte Carlo simulation. If we are pricing a barrier structure on IBM, we probably want to apply some form of backward evolution approach. The same mathematical model for the world is used in each case, but the numerical method is different.

Choosing a valuation methodology is not the Model's responsibility. It is important enough to call out as an explicit, first-class concept, which I'll cover in a subsequent post.

Model Parameters

When a valuation requires model parameters, such as SABR alpha, beta, rho and nu, the Model provides the relevant values. Such values are usually time-dependent, forming curves. Other curves holding valuation-date expectations of quantities such as the time value of money (i.e. discounting), rates, equity forward prices and so on, are also provided by the Model. A Model is a provider of curves.

How does it build the curves it provides? Model parameters are calibrated to market data, and so a Model is also a manager of calibrations and needs to know about the market data on which calibrations are based. Curves depend, often through calibration, on other curves and market data, so there is a dependency structure to track - also the job of a Model. If, after a calibration is run, the market data on which the results depend are updated, then it is the Model's responsibility to invalidate the results of that calibration, and re-run it when necessary. Models can be large and complex, and so lazy evaluation and caching are useful behaviours in any implementation of the Model concept, as is a uniform labelling scheme to track the many curves and parameters contained within.

In summary, a Model provides curves. Curves are calibrated, so a Model manages calibrations and the associated market data snapshot. Calibration is expensive, so curves are evaluated lazily and cached. Caching requires tracking and dependency management, so Models do that too.
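These responsibilities - lazy calibration, caching, and invalidation driven by dependency tracking - can be sketched in a few lines. This is a minimal illustration only; the class and label names are hypothetical and not FINCAD's API.

```python
from typing import Callable, Dict, Set


class Model:
    """Minimal sketch of a Model: a lazy, caching provider of curves.

    Curves are registered as build (calibration) functions. Results are
    cached on first access and invalidated, along with their dependents,
    whenever the market data they were built from changes.
    """

    def __init__(self) -> None:
        self._builders: Dict[str, Callable[["Model"], object]] = {}
        self._deps: Dict[str, Set[str]] = {}   # curve label -> labels it depends on
        self._cache: Dict[str, object] = {}
        self._market_data: Dict[str, float] = {}

    def set_market_data(self, label: str, value: float) -> None:
        self._market_data[label] = value
        self._invalidate(label)               # stale calibrations must be dropped

    def register_curve(self, label, builder, depends_on=()) -> None:
        self._builders[label] = builder
        self._deps[label] = set(depends_on)

    def curve(self, label: str):
        """Return the curve, calibrating lazily on first access."""
        if label not in self._cache:
            self._cache[label] = self._builders[label](self)
        return self._cache[label]

    def _invalidate(self, label: str) -> None:
        # Drop cached results for anything that depends, transitively, on label.
        for curve, deps in self._deps.items():
            if label in deps and curve in self._cache:
                del self._cache[curve]
                self._invalidate(curve)


# Usage: a one-curve Model holding a USD discount curve built from a quoted rate.
m = Model()
m.set_market_data("USD.rate", 0.01)
m.register_curve(
    "USD.discount",
    # The rate is captured at build time, so a stale cache really would be wrong.
    lambda mdl: (lambda t, r=mdl._market_data["USD.rate"]: (1.0 + r) ** -t),
    depends_on=["USD.rate"],
)
df_5y = m.curve("USD.discount")(5.0)   # calibrated lazily on first access
```

Updating `USD.rate` invalidates the cached discount curve, so the next request for it triggers a rebuild - the behaviour described above.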

Example Models

Model 1 - discounting

For our first example Model, call it Model 1, suppose the USD discount rate is 1% compounded annually. That's it - it's a model of the time value of money in one currency. It is not a sophisticated model, but it is a Model nevertheless. It can be used to value any fixed future USD cash flow.
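Valuing a fixed cash flow under Model 1 is a one-liner. A sketch, assuming annual compounding and an illustrative USD 1,000,000 flow in 5 years:

```python
# Model 1: USD discounting at 1% compounded annually.
usd_rate = 0.01


def discount_factor(t_years: float) -> float:
    """Present value of 1 USD paid t_years from now."""
    return (1.0 + usd_rate) ** -t_years


# Value a fixed cash flow of USD 1,000,000 paid in 5 years.
pv = 1_000_000 * discount_factor(5.0)
```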

Model 2 - equity forward

Now suppose that the spot price of IBM stock is USD 200 and its effective funding rate (based on its repo rate and expected dividends) is 1.1%. We can now build a forward curve for IBM stock. Combined with our discount curve from Model 1, we can form Model 2, which can be used to value any linear payoff on IBM, such as a forward contract, in addition to fixed USD flows.
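Model 2's forward curve grows the spot at the effective funding rate, and a forward contract is then just the discounted difference between forward and strike. A sketch, again assuming annual compounding:

```python
# Model 2: IBM forward curve on top of Model 1's discounting.
spot_ibm = 200.0
funding_rate = 0.011   # effective funding: repo rate net of expected dividends
usd_rate = 0.01        # Model 1's discounting assumption


def forward_price(t_years: float) -> float:
    """Model 2's forward curve: spot grown at the effective funding rate."""
    return spot_ibm * (1.0 + funding_rate) ** t_years


def forward_contract_value(strike: float, t_years: float) -> float:
    """PV of a long forward on IBM: discounted (forward - strike)."""
    df = (1.0 + usd_rate) ** -t_years
    return (forward_price(t_years) - strike) * df
```

A forward struck at today's forward price has zero value, which is a useful sanity check on any linear-payoff Model.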

Model 3 - equity vol

Next, suppose that future IBM stock prices are distributed log-normally with a volatility of 25%. Combining this assumption with those from Model 2, we can form Model 3. We might call Model 3 "the Black-Scholes model for IBM". It is sufficiently rich to value any payoff function of IBM's stock price, including European calls and puts.
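With the log-normal assumption added, a European call can be priced in closed form. A sketch using the Black formula on Model 2's forward, with Model 1's discounting (annual compounding as before):

```python
from math import log, sqrt
from statistics import NormalDist

# Model 3: Black-Scholes for IBM, built on Models 1 and 2.
spot_ibm = 200.0
funding_rate = 0.011
usd_rate = 0.01
vol = 0.25             # log-normal volatility of future IBM prices


def black_scholes_call(strike: float, t_years: float) -> float:
    """European call on IBM under Model 3's log-normal assumption."""
    fwd = spot_ibm * (1.0 + funding_rate) ** t_years   # Model 2's forward curve
    df = (1.0 + usd_rate) ** -t_years                  # Model 1's discounting
    std = vol * sqrt(t_years)
    d1 = log(fwd / strike) / std + 0.5 * std
    d2 = d1 - std
    N = NormalDist().cdf
    return df * (fwd * N(d1) - strike * N(d2))
```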

Global Model?

We can continue in this vein, making assumptions about other equities, interest rates, bonds, credits and so on, until we have a sufficiently rich Model to value, or perform the required analysis on, our portfolio. We may need to encode correlations between these market observables to form the joint distribution for a collection of underlyings - a challenging hybrid modelling problem.

But I'm not saying that we need a "global model" - a single, self-consistent model of everything there is. Multiple Models are fine and do not have to be consistent with each other - in fact, such inconsistency is the very reason why they are different Models! It's fine to have, say, an equities Model and a rates Model. But it's not fine to make inconsistent assumptions within the same Model, because they would admit arbitrage. The Model idea is simply a tool to help manage distinct collections of self-consistent, arbitrage-free, modelling assumptions.

For example, consider Model 3 above, the Black-Scholes model for IBM. Suppose we form Model 1a, which says that the USD discount rate is 0.9% compounded annually (not 1% as in Model 1), then rebuild the Black-Scholes model using Model 1a as a base. We have now formed Model 3a, which is the same as Model 3 in every regard, except that the discount rate is different. It is not possible to say "the discount rate is 1%" and "the discount rate is 0.9%" simultaneously, in a single Model. These are inconsistent assumptions and so must live in different Models.

Other variations on Model 3 include:

  1. The IBM spot price is 180, not 200.
  2. The IBM implied vol is 40% not 25%.
  3. The distribution of IBM stock is not log-normal, but some other shape.

With items 1 and 2, we can see that the Model concept helps with scenario analysis - each scenario is a different Model. Item 3 touches on model risk, which, again, Models help organize and manage. For small portfolios and simple modelling assumptions it is usually possible to keep things together and consistent manually, but to scale up to large portfolios and complex collections of modelling assumptions, you need the idea of a Model.
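One way to picture this is to treat each Model as an immutable bundle of assumptions, with every scenario - Model 3a and variations 1 and 2 above - formed as a fresh copy that differs in exactly one assumption. The names below are illustrative, not FINCAD's API:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class BlackScholesModel:
    """One self-consistent bundle of modelling assumptions."""
    usd_rate: float      # Model 1's discounting assumption
    spot: float          # IBM spot price
    funding_rate: float  # effective funding: repo net of expected dividends
    vol: float           # log-normal volatility


model_3 = BlackScholesModel(usd_rate=0.01, spot=200.0, funding_rate=0.011, vol=0.25)

# Each scenario is a distinct Model; the original is never mutated, so
# inconsistent assumptions can never end up inside one Model.
model_3a   = replace(model_3, usd_rate=0.009)   # rebuilt on Model 1a's base
spot_shock = replace(model_3, spot=180.0)       # variation 1
vol_shock  = replace(model_3, vol=0.40)         # variation 2
```

Freezing the dataclass enforces the self-consistency rule in code: to change an assumption, you must form a new Model.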

About the author
Russell Goyder PhD
Director of Quantitative Research and Development | FINCAD

Russell Goyder, PhD, is the Director of Quantitative Research and Development at FINCAD. Before joining FINCAD’s quant team in 2006, he worked as a consultant at The MathWorks, solving a wide range of problems in various industries, particularly in the financial industry. In his current role, Russell manages FINCAD’s quant team and oversees the delivery of analytics functionality in FINCAD’s products, from initial research to the deployment of production code. Russell holds a PhD in Physics from the University of Cambridge.