### Talk Titles and Abstracts

**Liming Feng (Illinois)**

*Hilbert transform approach to options valuation*

A Hilbert transform approach to options valuation will be presented
in this talk. For many popular option pricing models with known
analytic characteristic functions for the underlying driving stochastic
processes, the Hilbert transform approach exhibits remarkable
speed and accuracy, with errors decaying exponentially in terms
of the computational cost. The pricing of discrete barrier, lookback
and Bermudan options will be illustrated. Applications in applied
probability will also be discussed.

_______________________________________________________

**Peter Forsyth (Waterloo)**

*Analysis of a Penalty Method for Pricing a Guaranteed Minimum
Withdrawal Benefit (GMWB)*

The no arbitrage pricing of Guaranteed Minimum Withdrawal Benefits
(GMWB) contracts results in a singular stochastic control problem
which can be formulated as a Hamilton-Jacobi-Bellman (HJB) Variational
Inequality (VI). Recently, a penalty method has been suggested
for the solution of this HJB variational inequality (Dai et al., 2008).
This method is very simple to implement. In this talk, we present
a rigorous proof of convergence of the penalty method to the viscosity
solution of the HJB VI. Numerical tests of the penalty method
are presented which show the experimental rates of convergence,
and a discussion of the choice of the penalty parameter is also
included. A comparison with an impulse control formulation of
the same problem, in terms of generality and computational complexity,
is also presented.
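As an illustrative companion to the abstract: the penalty idea is easy to convey on a simpler obstacle problem. The sketch below prices an American put under Black-Scholes by an implicit finite-difference scheme, replacing the constraint V ≥ payoff with a large penalty term and iterating until the active set settles. All parameter values and function names are our own illustrative choices; the GMWB problem in the talk involves a withdrawal control and is substantially more involved.

```python
import numpy as np

def american_put_penalty(S_max=200.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                         M=200, N=100, rho=1e6, tol=1e-6):
    """American put via implicit finite differences with a penalty term
    enforcing the obstacle condition V >= max(K - S, 0)."""
    S = np.linspace(0.0, S_max, M + 1)
    dt = T / N
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()

    i = np.arange(1, M)                        # interior grid indices, S_i = i * dS
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)   # sub-diagonal weight
    b = dt * (sigma**2 * i**2 + r)             # diagonal weight
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)   # super-diagonal weight

    for _ in range(N):                         # march backward in time
        rhs = V[1:M].copy()
        W = V[1:M].copy()
        for _ in range(50):                    # penalty (semi-smooth Newton) iteration
            pen = rho * (W < payoff[1:M])      # penalty active where constraint binds
            A = np.diag(1.0 + b + pen)
            A -= np.diag(a[1:], k=-1)
            A -= np.diag(c[:-1], k=1)
            d = rhs + pen * payoff[1:M]
            d[0] += a[0] * K                   # boundary condition V(0, t) = K
            W_new = np.linalg.solve(A, d)
            if np.max(np.abs(W_new - W)) < tol:
                W = W_new
                break
            W = W_new
        V[1:M] = W
        V[0], V[-1] = K, 0.0                   # boundary values
    return S, V, payoff
```

At convergence the penalty term forces V above the payoff up to an O(1/rho) violation, which is the mechanism whose convergence to the viscosity solution the talk analyzes rigorously.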

________________________________________________________

**Jim Gatheral (Merrill Lynch)**

*Optimal order execution*

In this talk, we review the models of Almgren and Chriss, Obizhaeva
and Wang, and Alfonsi, Fruth and Schied. We use variational calculus
to derive optimal execution strategies in these models, and show
that static strategies are dynamically optimal, in some cases
by explicitly solving the HJB equation. We present general conditions
under which there is no price manipulation in models with linear
market impact. Finally, we present some new generalizations of
the Obizhaeva and Wang model given in a recent paper by Gatheral,
Schied and Slynko, again deriving explicit closed-form optimal
execution strategies.

This is partially joint work with Alexander Schied and Alla Slynko.

_______________________________________________________

**Kay Giesecke (Stanford)**

*Asymptotically Optimal Importance Sampling For Dynamic Portfolio
Credit Risk*

Dynamic intensity-based point process models, in which a firm
default is governed by a stochastic intensity process, are widely
used to model portfolio credit risk. In the context of these models,
this paper develops, analyzes and evaluates an importance sampling
scheme for estimating the probability of large portfolio losses,
portfolio risk measures including value at risk and expected shortfall,
and the sensitivities of these quantities with respect to the
portfolio constituent names. The scheme is shown to be asymptotically
optimal. Numerical experiments demonstrate the advantages of the
algorithm for several standard model specifications.
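To illustrate the change-of-measure mechanics in a much simpler static analogue (the talk's scheme operates on dynamic intensity-based models and is considerably more elaborate): the sketch below estimates a tail probability of the loss from independent Bernoulli defaults by exponential twisting, with the tilt chosen so that the expected loss under the twisted measure equals the threshold. All names and parameters here are illustrative assumptions, not from the paper.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def twisted_p(p, theta):
    """Default probabilities under the exponentially twisted measure."""
    return p * np.exp(theta) / (1.0 + p * (np.exp(theta) - 1.0))

def solve_theta(p, x, lo=0.0, hi=20.0, iters=80):
    """Pick the tilt theta so the expected twisted loss equals x (bisection)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if twisted_p(p, mid).sum() < x else (lo, mid)
    return 0.5 * (lo + hi)

def is_tail_prob(p, x, n_paths=20000):
    """Importance-sampling estimate of P(L >= x), L = number of defaults."""
    theta = solve_theta(p, x)
    q = twisted_p(p, theta)
    psi = np.log1p(p * (np.exp(theta) - 1.0)).sum()  # log MGF of L at theta
    defaults = rng.random((n_paths, p.size)) < q     # sample under the twist
    L = defaults.sum(axis=1)
    lr = np.exp(-theta * L + psi)                    # likelihood ratio dP/dQ
    return float(np.mean(lr * (L >= x)))
```

Unweighted simulation would almost never see ten defaults out of a hundred names with 1% default probabilities; under the twist the event is common, and the likelihood ratio restores unbiasedness.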

________________________________________________________

**Mike Giles (Oxford)**

*Progress with multilevel Monte Carlo methods*

The multilevel Monte Carlo path simulation method combines simulations
with different levels of resolution to reduce the computational
cost for achieving a prescribed Mean Square Error.

In this talk I will describe the latest progress with this technique,
with new applications to jump-diffusion models, multi-dimensional SDEs,
the calculation of Greeks, and a stochastic PDE arising in credit
modelling. I will also outline joint work with Kristian Debrabant
and Andreas Rossler on the numerical analysis of the multilevel
method using the Milstein discretisation.
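A minimal sketch of the telescoping multilevel estimator, on a European call under geometric Brownian motion with an Euler-Maruyama discretisation (the parameters and level/sample choices are illustrative assumptions, not from the talk; a production implementation would choose the number of samples per level adaptively from estimated variances):

```python
import numpy as np

rng = np.random.default_rng(0)

def level_estimator(l, n, S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0):
    """Sample mean of the level-l correction E[P_l - P_{l-1}] for a European
    call under GBM, Euler-Maruyama with 2^l time steps on the fine level."""
    steps = 2**l
    dt = T / steps
    dW = rng.normal(0.0, np.sqrt(dt), (n, steps))
    S_f = np.full(n, S0)
    for k in range(steps):                      # fine path
        S_f = S_f + r * S_f * dt + sigma * S_f * dW[:, k]
    P_f = np.exp(-r * T) * np.maximum(S_f - K, 0.0)
    if l == 0:
        return P_f.mean()
    S_c = np.full(n, S0)
    dW_c = dW[:, 0::2] + dW[:, 1::2]            # same Brownian path, coarse increments
    for k in range(steps // 2):                 # coarse path, step 2*dt
        S_c = S_c + r * S_c * (2 * dt) + sigma * S_c * dW_c[:, k]
    P_c = np.exp(-r * T) * np.maximum(S_c - K, 0.0)
    return (P_f - P_c).mean()

def mlmc_price(L=5, n=50000):
    """Telescoping sum: equals the level-L estimator in expectation, but the
    corrections have small variance, so coarse levels carry most of the work."""
    return sum(level_estimator(l, n) for l in range(L + 1))
```

Because the fine and coarse paths share the same Brownian increments, the corrections P_l - P_{l-1} shrink with l, which is the source of the cost reduction.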

________________________________________________________

**Garud Iyengar (Columbia)**

*A behavioral finance based tick-by-tick model for price
and volume*

We propose a model for jointly predicting stock price and volume
at the tick-by-tick level. We model the investor preferences by
a random utility model that incorporates several important behavioral
biases such as the status quo bias, the disposition effect, and
loss-aversion. The resulting model is a logistic regression model
with incomplete information; consequently, we are unable to use
the maximum likelihood estimation method and have to resort to
Markov Chain Monte Carlo (MCMC) to estimate the model parameters.
Moreover, the constraint that the volume predicted by the MCMC
model exactly match observed volume introduces serial correlation
in the stock price; consequently, standard MCMC techniques for
calibrating parameters do not work well. We develop new modifications
of the Metropolis-within-Gibbs method to estimate the parameters
in our model. Our primary goal in developing this model is to
predict the market impact function and VWAP (volume weighted average
price) of individual stocks.

________________________________________________________

**Petter Kolm (Mathematics in Finance M.S. Program, Courant Institute,
New York University)**

*Algorithmic Trading: A Buy-Side Perspective*

The traditional view of portfolio construction, risk analysis,
and execution holds that these three functions of money management
are separable. Portfolios are constructed without incorporating
the costs of execution, and execution is conducted without considering
portfolio level risk. With the explosive growth of algorithmic
trading, several mathematical and computational methodologies
have been proposed for unifying and improving traditional money
management functions. This presentation addresses some important
developments in this area, including incorporating market impact
costs into portfolio optimization, multi-period dynamic portfolio
analysis, and high-frequency simulation for dynamic portfolio
analysis.

________________________________________________________

**Ralf Korn (TU Kaiserslautern)**

*Recent advances in option pricing via binomial trees*

A survey on some new results obtained in joint work with S. Mueller
is given. In particular, we present an optimized 1-D-scheme (the
optimal drift model) that is based on overlaying a given binomial
scheme with an additional drift process and that attains a higher
order of convergence than advanced schemes such as the Tian or the
Chang-Palmer approach.
Further, we introduce the orthogonal decoupling approach to solve
n-D-valuation problems. This approach is based on a non-linear
transformation of the state space, always results in well-defined
probabilities in the approximating n-D binomial tree, and admits
a regular convergence behaviour.
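For concreteness, here is the plain Cox-Ross-Rubinstein baseline that drift-adjusted schemes of this kind refine; this is a sketch of the standard textbook tree, not the optimal drift model itself.

```python
import numpy as np

def crr_call(S0, K, r, sigma, T, n):
    """European call via the plain Cox-Ross-Rubinstein binomial tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up-probability
    disc = np.exp(-r * dt)
    # terminal stock prices S0 * u^(n-j) * d^j, j = 0..n
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(S - K, 0.0)
    for _ in range(n):                          # backward induction
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
    return float(V[0])
```

The CRR price oscillates around the Black-Scholes value as n grows; positioning the tree relative to the strike (as the drift-adjusted variants do) is what smooths and accelerates this convergence.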

________________________________________________________

**Ciamac Moallemi (Columbia)**

*A multiclass queueing model of limit order book dynamics*

We model the limit order book as a system of two coupled multiclass
queues. Specifically, each side of the book is modeled as a single-server
multiclass queue operating under a strict priority rule
defined by the prices associated with each limit order. We describe
the transient dynamics of this system, and formulate and solve
the optimal execution problem for a block of shares over a short
time horizon.

This is joint work with Costis Maglaras.

__________________________________________________

**Kumar Muthuraman (UT Austin)**

*Moving boundary approaches for solving free-boundary problems*

Free-boundary problems arise when the solution of a PDE and the
domain over which the PDE must be solved are to be determined
simultaneously. Three classes of stochastic control problems (optimal
stopping, singular and impulse control) reduce to such free-boundary
problems. Several classical examples including American option
pricing and portfolio optimization with transaction costs belong
to these classes. This talk describes a computational method that
solves free-boundary problems by converting them into a sequence
of fixed-boundary problems that are much easier to solve. We
will illustrate the application on a set of classical problems of
increasing difficulty, and will also see how the method can be
adapted to efficiently handle high-dimensional problems.

______________________________________________________

**Philip Protter (Cornell)**

*Absolutely Continuous Compensators*

Often in applications (for example Survival Analysis and Credit
Risk) one begins with a totally inaccessible stopping time, and
then one assumes the compensator has absolutely continuous paths.
This gives an interpretation in terms of a ``hazard function''
process. Ethier and Kurtz have given sufficient conditions for
a given stopping time to have an absolutely continuous compensator,
and this condition was extended by Yan Zeng to a necessary and
sufficient condition.
We take a different approach and make a simple hypothesis on the
filtration under which all totally inaccessible stopping times
have absolutely continuous compensators. We show such a property
is stable under changes of measure, and under the expansion of
filtrations; and we detail its limited stability under filtration
shrinkage. The talk is based on research performed with Sokhna
M'Baye and Svante Janson.

_______________________________________________________

**Chris Rogers (Cambridge)**

*Convex regression and optimal stopping*

There are many examples, particularly in finance, of optimal
stopping problems where the state variable is some point in Euclidean
space, and the value function is convex in the state variable.
This then permits approximation of the value function as the maximum
of a sequence of linear functionals, an approach which has various
advantages. The purpose of this paper is to present the methodology
and explore its consequences.
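The core device is that a convex value function can be approximated from below by the maximum of affine functions (tangent hyperplanes at sampled states). The one-dimensional toy below illustrates this representation; it is our own illustrative sketch, not the paper's regression algorithm.

```python
import numpy as np

def max_affine(xs, f, fprime):
    """Lower approximation of a convex f as the max of its tangent lines
    at the points xs: f_hat(x) = max_i [ f(x_i) + f'(x_i) * (x - x_i) ]."""
    xs = np.asarray(xs, dtype=float)
    slopes = fprime(xs)                      # tangent slopes at the knots
    intercepts = f(xs) - slopes * xs         # tangent intercepts
    def f_hat(x):
        x = np.asarray(x, dtype=float)
        return np.max(slopes * x[..., None] + intercepts, axis=-1)
    return f_hat

f = lambda x: x**2                           # a convex "value function" stand-in
fprime = lambda x: 2.0 * x
f_hat = max_affine(np.linspace(-2.0, 2.0, 9), f, fprime)
```

By convexity, every tangent lies below f, so the maximum is a minorant that is exact at the knots; in the optimal stopping setting this yields guaranteed lower approximations to the value function.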

______________________________________________________

**Birgit Rudloff (Princeton)**

*Hedging and Risk Measurement under Transaction Costs*

We consider a market with proportional transaction costs and
want to hedge a claim by trading in the underlying assets. The
superhedging problem is to find the set of d-dimensional vectors
of initial capital that allow one to superhedge the claim. We will
show that in analogy to the frictionless case, the superhedging
price in a market with proportional transaction costs is a (set-valued)
coherent risk measure, where the supremum in the dual representation
is taken w.r.t. the set of equivalent martingale measures. To
do so, we extend the notion of set-valued risk measure to the
case of random solvency cones. Connections to recent results about
efficient use of capital when there are multiple eligible assets
are drawn. When starting with a vector of initial capital that
does not allow one to superhedge, a shortfall at maturity is possible.
For an investor who finds a hedging error that is 'small enough'
still acceptable, good-deal-bounds under transaction costs can
be defined.

_______________________________________________________

**Georgios Skoulakis (Maryland)**

*Solving Consumption and Portfolio Choice Problems: The State
Variable Decomposition Method*

This paper develops a new solution method for a broad class of
discrete-time dynamic portfolio choice problems. The method efficiently
approximates conditional expectations of the value function by
using (i) a decomposition of the state variables into a component
observable by the investor and a stochastic deviation; and (ii)
a Taylor expansion of the value function. The outcome of this
State Variable Decomposition (SVD) is an approximate problem in
which conditional expectations can be computed efficiently without
sacrificing precision. We illustrate the accuracy of the SVD method
in handling several realistic features of portfolio choice problems
such as intermediate consumption, multiple risky assets, multiple
state variables, portfolio constraints, non-time-separable preferences,
and nonredundant endogenous state variables. We finally use
the SVD method to solve a realistic large-scale life-cycle portfolio
choice and consumption problem with predictable expected returns
and recursive preferences.

_______________________________________________________

**Jeremy Staum (Northwestern)**

*Déjà Vu All Over Again: Efficiency when Financial
Simulations are Repeated*

Many computationally intensive financial simulation problems
involve running the same simulation model repeatedly with different
values of its inputs. Such tasks include pricing exotic options
of the same type but of different strikes and maturities, valuation
of options given different values of the model’s parameters
during calibration, and measuring a portfolio’s risk as the
markets move. The basic approach is to run the simulation model
using each of the input values in which one is interested. In
this talk, we explore generic methods for solving a suite of repeated
simulation problems more efficiently, by estimating the answer
given one value of the inputs using information generated while
running the simulation model with different values of the inputs.

________________________________________________________

**Nizar Touzi (Ecole Polytechnique)**

*A Probabilistic Numerical Method for Fully Nonlinear Parabolic
PDEs *

We suggest a probabilistic numerical scheme for fully nonlinear
PDEs, and show that it can be introduced naturally as a combination
of Monte Carlo and finite difference schemes without appealing
to the theory of backward stochastic differential equations. Our
first main result provides the convergence of the discrete-time
approximation and derives a bound on the discretization error
in terms of the time step. An explicitly implementable scheme requires
the approximation of the conditional expectation operators involved
in the discretization. This induces a further Monte Carlo error.
Our second main result is to prove the convergence of the latter
approximation scheme, and to derive an upper bound on the approximation
error. Numerical experiments are performed for two- and five-dimensional
(plus time) fully-nonlinear Hamilton-Jacobi-Bellman equations
arising in the theory of portfolio optimization in financial mathematics.

_________________________________________________________
