Is the use of a valuation tool, like our UnRisk, merely a means to insert ourselves into the market, recalibrate a model against the market, and then repeat the procedure? When it comes to options, I let Ayache speak:

*... It is only because the dynamic trader has to be hedging continuously his position in the option and therefore is following the option continuously; it is only because of this that he is basically entitled to compute something we call implied volatility - and implied volatility is the simplest instance of recalibration ... recalibrating the Black Scholes model to the option market price ....*
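To make "the simplest instance of recalibration" concrete, here is a minimal Python sketch (not UnRisk code; the function names and the bisection scheme are illustrative) that inverts the Black-Scholes formula for the volatility implied by an observed option price:

```python
import math


def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)


def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Recalibrate sigma to a market price by bisection.

    Works because the call price is monotonically increasing in sigma.
    """
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid  # model price too low -> volatility must be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Feeding a Black-Scholes price back into `implied_vol` recovers the volatility that produced it; feeding it a fresh market quote is exactly the recalibration step Ayache describes.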

Traders do not price because of models; they gain insight into the context that has been written into the blank contingency and that led to the price. A Black Swan, then, is only a context-switching event of the same kind, and a regime-switching model is "just" another recalibration approach.

In the picture above we see the Heston model, an extension of the Black-Scholes model with stochastic volatility, in PDE form. Calibrating, and frequently recalibrating, such a model is a real challenge.
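For readers who cannot see the picture, one common form of the Heston pricing PDE (with the market price of volatility risk absorbed into the parameters; $v$ is the instantaneous variance, $\kappa$ the mean-reversion speed, $\theta$ the long-run variance, $\sigma$ the volatility of variance, $\rho$ the correlation) reads:

```latex
\frac{\partial V}{\partial t}
+ \tfrac{1}{2}\, v S^2 \frac{\partial^2 V}{\partial S^2}
+ \rho \sigma v S \frac{\partial^2 V}{\partial S\, \partial v}
+ \tfrac{1}{2}\, \sigma^2 v \frac{\partial^2 V}{\partial v^2}
+ r S \frac{\partial V}{\partial S}
+ \kappa (\theta - v) \frac{\partial V}{\partial v}
- r V = 0
```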

You need to do in seconds what might take hours with traditional environments and implementations.

We at UnRisk do this with ultrafast numerical schemes, principal component analysis, grid computing, CUDA support, ... all managed and linked by Mathematica, now in version 8.

How could we achieve this? We had met Blank Swans before we entered quant finance.

Let us take a numerically controlled sheet metal forming machine. It "trades" a shape, and the return pays in the accuracy of that shape. To implement the control, you need to understand the framework of elastoplasticity theory and the complexity and limits of its mathematics, derived from the mathematics of continuum mechanics. Deformation is decomposed into an elastic and a plastic part, and, for simplicity, this decomposition determines the stress and the kinematical quantities.
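A minimal small-strain sketch of that decomposition (generic textbook symbols, not the machine's actual model): the total strain $\varepsilon$ splits additively, the stress is driven by the elastic part through the elasticity tensor $\mathsf{C}$, and plastic flow is constrained by a yield condition $f$:

```latex
\varepsilon = \varepsilon^{e} + \varepsilon^{p}, \qquad
\sigma = \mathsf{C} : \varepsilon^{e}, \qquad
f(\sigma) \le 0, \qquad
\dot{\varepsilon}^{p} = \dot{\gamma}\, \frac{\partial f}{\partial \sigma}
```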

The resulting PDE system can be solved by advanced numerical schemes.

They "only" need to be calibrated to the physical properties of the material, which depend on recipes, and to properties that result from the process? And then they work as predictive models for the final shapes? NO.

If you buy a metal sheet you order it by standardized "names". And this is where the headache begins.

Standardized mass-steel specifications of one material instance allow ranges of physical properties of up to 15%. But each single sheet has its own metallurgical fingerprint.

It is impossible to measure the properties before bending, because you would need to destroy part of the sheet.

So you need to recalibrate your models during the forming process ("continuously"), say by observing force-shape trajectories over the process, in order to explain the quantified elasto-plastic behavior at each step. The closer you come to the final shape, the more your system knows about the concrete material, and the better it can explain, and predict a little into the future.
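As a toy illustration of such "continuous" recalibration (purely hypothetical code, not mlf or UnRisk; the linear force-displacement law and the stiffness parameter `k` stand in for the real implied material parameters), a running least-squares estimate sharpens as the process yields more observations:

```python
import random

random.seed(0)


class OnlineStiffness:
    """Running least-squares estimate of an implied stiffness k from
    observed (displacement, force) pairs, assuming force ~ k * displacement.
    A deliberately simplified stand-in for implied material parameters."""

    def __init__(self):
        self.sfd = 0.0  # running sum of force * displacement
        self.sdd = 0.0  # running sum of displacement^2

    def observe(self, displacement, force):
        self.sfd += force * displacement
        self.sdd += displacement * displacement

    @property
    def k(self):
        # Least-squares slope through the origin, given the data so far
        return self.sfd / self.sdd if self.sdd > 0 else float("nan")


k_true = 210.0  # the "metallurgical fingerprint" of this particular sheet
est = OnlineStiffness()
for step in range(1, 101):
    d = 0.01 * step                       # displacement at this process step
    f = k_true * d + random.gauss(0.0, 0.5)  # noisy force reading
    est.observe(d, f)                     # recalibrate after every observation
```

The estimate `est.k` converges toward the true stiffness of this particular sheet as observations accumulate, mirroring the claim that the system knows more the closer it gets to the final shape.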

Calibration here is often the result of a machine learning application with our mlf, and recalibration uses the extracted knowledge (instead of volatility, we compute implied material parameters).

This is a paradigm for all kinds of Blank Swan processes, where modeling, ironically, is not for prediction but for explanation and the computation of implied features.

Mathematica is the perfect environment for this. It lets you create understandable and computational models, it supports cross-model testing, and it integrates your ultrafast solvers seamlessly.