FINCAD Analytics Suite offers a powerful differential evolution algorithm for the calibration of financial models. If run long enough, the newly introduced algorithm is virtually guaranteed to find the model parameters that lead to the best description of available market data.
For a free trial or a customized demonstration of the software, contact a FINCAD Representative today.
The pricing of a financial derivative often relies on a model that describes the evolution of the underlying process. Commonly used models for pricing interest rate derivatives, for example, include the Black-Karasinski model and the LIBOR market model, whereas the Heston model may be used for the pricing of equity derivatives. These models depend on a set of parameters that has to be chosen such that the model reproduces known market data as closely as possible. The process of finding the best model parameters is carried out by a calibration procedure. Only a successfully calibrated model will allow the calculation of reliable fair prices of the desired instruments. Model calibration is therefore the basis of any meaningful derivative valuation that applies a parameter-dependent model.
FINCAD Analytics Suite and FINCAD Analytics Suite Developer provide calibration functions for interest rate models (Hull-White, Ho-Lee, Black-Karasinski, Black-Derman-Toy, Two-Additive-Factor-Gaussian, the LIBOR Market Model) and for the Heston model. The calibration of these models to available market data is accomplished by minimizing the difference between the prices predicted by the given model and the prices quoted in the market. The success of the calibration depends crucially on the choice of the algorithm used for the minimization. FINCAD Analytics Suite provides a choice of three such algorithms: downhill simplex, Levenberg-Marquardt and differential evolution, the latter being a new addition in 2008.
The goal of the calibration routine is to find the set of model parameters that minimizes the difference between the model predictions and available market data. We therefore need to select an "error metric" that measures this difference. For example, we might compare the prices in the market with those predicted by the model and sum these differences to obtain an overall measure of the degree to which the model matches the data. The commonly used and mathematically well founded chi-squared metric implements such a comparison:

chi²(a) = Σᵢ (yᵢ − fᵢ(a))² / σᵢ²

where the sum runs over the instruments i in the market data set, and the numerator is the squared difference between the market price yᵢ of an instrument i and the price fᵢ(a) predicted by the model for a given set of model parameters (the single parameter a in this example). The uncertainty σᵢ measures the confidence with which the market price is known: a larger σᵢ implies less confidence. The chi-squared metric therefore quantifies the difference between the market data and the model prediction such that data with small associated uncertainties are given more weight than data with large uncertainties. If the model described the data perfectly, the chi-squared metric would be zero; in practice the metric will be positive and non-zero. The FINCAD Analytics Suite calibration functions implement the chi-squared error metric, as well as two other error metrics (a weighted L-1 metric and a weighted L-infinity metric).
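As a minimal sketch of the metric just described (the model function and "market" data below are hypothetical illustrations, not FINCAD's API or real quotes), the chi-squared sum for a one-parameter model can be computed as:

```python
# Chi-squared error metric for a one-parameter model (illustrative only).
# y[i]        : market price of instrument i
# sigma[i]    : uncertainty of that market price
# model(i, a) : hypothetical model price of instrument i for parameter a

def chi_squared(a, y, sigma, model):
    """Sum of squared, uncertainty-weighted price differences."""
    return sum((yi - model(i, a)) ** 2 / si ** 2
               for i, (yi, si) in enumerate(zip(y, sigma)))

# Toy example: the model price is linear in the parameter a.
def model(i, a):
    return a * (i + 1)

y = [2.1, 3.9, 6.2]      # hypothetical "market" prices
sigma = [0.1, 0.1, 0.2]  # associated price uncertainties

print(chi_squared(2.0, y, sigma, model))  # → 3.0
```

Note how the third instrument's larger uncertainty reduces its contribution: its price difference is twice as large as the others', yet all three terms contribute equally.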
Following our example, the calibration algorithm seeks to find the parameter a for which the error metric has the smallest possible value. An example of a chi-squared metric as a function of a is shown in the following graph.
The metric is smallest near a=5.8. A robust minimization algorithm should find this global minimum, despite the local minimum near a=1.
Levenberg-Marquardt and downhill simplex are two commonly used minimization algorithms. Both begin their search for the best parameter a from some initial value. Roughly speaking, the algorithms find a better value by attempting to shift the current parameter towards smaller values of the error metric – the parameter is shifted "downhill". The algorithms terminate when a downhill shift cannot be achieved from the current parameter. In our example, the search would correctly terminate at the global minimum if it were started at a = 5 or a = 7. However, the result is very dependent on the initial value: the Levenberg-Marquardt algorithm would never find the global minimum if the initial value is less than about 3. The downhill simplex algorithm may find the global minimum for any initial value, but there is no guarantee that the search does not terminate at the local minimum.
The result of a calibration with the Levenberg-Marquardt or the downhill simplex algorithms therefore depends on the initial guess – a good initial guess for the parameter will generally lead to a successful calibration, whereas a calibration with an uneducated guess may fail.
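This dependence on the initial guess is easy to reproduce with a crude local "downhill" search (a deliberately simple stand-in for Levenberg-Marquardt or downhill simplex, not FINCAD's implementation) on a toy metric constructed to have a local minimum near a = 1 and the global minimum near a = 5.8, mimicking the graph above:

```python
def error_metric(a):
    """Toy error metric with a local minimum near a = 1 and the
    global minimum near a = 5.8 (illustrative, not real market data)."""
    return (a * a - 6.8 * a + 5.8) ** 2 - a

def downhill_search(f, a, step=0.5, tol=1e-8):
    """Crude local minimizer: shift the parameter while the metric
    decreases, halving the step size when no downhill shift exists."""
    while step > tol:
        if f(a + step) < f(a):
            a += step
        elif f(a - step) < f(a):
            a -= step
        else:
            step /= 2.0
    return a

print(downhill_search(error_metric, 0.5))  # trapped at the local minimum near a = 1
print(downhill_search(error_metric, 5.0))  # finds the global minimum near a = 5.8
```

Started at a = 0.5 the search terminates at the local minimum; started at a = 5.0 it reaches the global one. The algorithm itself cannot tell the two outcomes apart.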
Differential evolution on the other hand belongs to a class of evolution strategy optimizers and does not depend on an initial guess. Instead of a single initial value, the algorithm evolves many trial values (the population) in parallel. Thus differential evolution samples the entire parameter space and – if run long enough – is virtually guaranteed to find the global minimum.
This is illustrated in the figure below. Starting at the top, the rows of triangles show the iteration steps (every second step is shown). The algorithm sets up the initial population at random across the interval 0 < a < 7. Each trial vector in the population is then "evolved" through a mutation and crossover process.¹ The resulting parameter replaces the original if its associated error metric is smaller. The distribution of triangles in each row illustrates that the population converges toward the minimum chi-squared value as the optimization progresses.
Within the population at each iteration the parameter with the smallest error metric is stored. In the graph, the evolution of the best value a is shown by the blue line. Clearly, the differential evolution algorithm finds the optimal value for a as the best parameter in the population.
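A minimal one-parameter implementation of the classic DE/rand/1 scheme conveys the idea (a sketch only, not FINCAD's code; the population size, scale factor F and crossover probability CR below are assumed illustrative values, and the toy metric again has its global minimum near a = 5.8):

```python
import random

def error_metric(a):
    """Toy error metric with a local minimum near a = 1 and the
    global minimum near a = 5.8 (illustrative only)."""
    return (a * a - 6.8 * a + 5.8) ** 2 - a

def differential_evolution(f, lo, hi, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    rng = random.Random(seed)
    # Initial population: random trial values across the whole interval.
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: difference of two members, scaled by F,
            # added to a third member.
            r1, r2, r3 = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = r3 + F * (r1 - r2)
            # Crossover (degenerate in one dimension): keep the
            # mutated parameter with probability CR.
            trial = mutant if rng.random() < CR else pop[i]
            trial = min(max(trial, lo), hi)  # keep within bounds
            # Selection: replace the member only if the trial is better.
            if f(trial) < f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(error_metric, 0.0, 7.0)
print(best)  # converges near the global minimum a ≈ 5.8
```

Because the initial population is spread over the entire interval, some members start in the basin of the global minimum, and the selection step ensures the population never loses ground – which is why a sufficiently long run is virtually guaranteed to escape the local minimum near a = 1.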
The required runtime is the only potential drawback of the differential evolution algorithm. In general it requires more iterations than the Levenberg-Marquardt or downhill simplex algorithms, and in each iteration step the metric has to be evaluated for every value in the population.
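A back-of-the-envelope count of metric evaluations makes the point; the numbers below are assumed for illustration and say nothing about FINCAD's internals:

```python
# Illustrative evaluation-count estimate (assumed numbers).
pop_size = 50       # members of the DE population
generations = 1000  # DE iteration steps
de_evaluations = pop_size * generations  # one metric call per member per step

local_evaluations = 200  # typical order for a local method (assumption)

print(de_evaluations, local_evaluations)  # → 50000 200
```

Since each metric evaluation requires repricing the whole calibration instrument set, the factor of a few hundred between the two counts translates directly into runtime.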
The FINCAD Analytics Suite calibration functions implement these three optimization algorithms. Due to the significant amount of time potentially required by the differential evolution algorithm, the functions also allow the user to compute an order-of-magnitude estimate of the time required for the function to finish. The use of these functions is illustrated by the following example, which shows how the LIBOR Market Model may be calibrated to market cap data.
The FINCAD Analytics Suite calibration functions allow the user to calibrate a variety of financial models to available market data. Three calibration algorithms are provided. The newly introduced differential evolution algorithm is virtually guaranteed to find the model parameters that lead to the best description of the market data.
Storn, R. and Price, K. (1997). Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, Vol. 11, pp. 341-359.
1 A new parameter is mutated by forming the scaled difference between two trial vectors and adding this difference to a third vector. With a certain crossover probability, the mutated parameter is kept as the new trial parameter.
Your use of the information in this article is at your own risk. The information in this article is provided on an "as is" basis and without any representation, obligation, or warranty from FINCAD of any kind, whether express or implied. We hope that such information will assist you, but it should not be used or relied upon as a substitute for your own independent research.
For more information or a customized demonstration of the software, contact a FINCAD Representative.