12th European Summer School in Financial Mathematics
Padova, September 2 - 6, 2019




Main Lecture Courses

The summer school will be structured around two main topics:

  • High Frequency Data. Minicourses will be given by Mark Podolskij (Aarhus University) and Roberto Renò (University of Verona).
  • Numerical Probability. Minicourses will be given by Gilles Pagès and Benedikt Wilbertz (Sorbonne University, Paris).


High Frequency Data.

  • Mark Podolskij (Aarhus University) slides
    Abstract: Since the seminal work of Jean Jacod in the mid-1990s, high frequency statistics has gained tremendous popularity in financial applications. This is due to the availability of (ultra) high frequency data in financial markets, which is much more informative than the classical daily observation scheme. One of the main focuses of financial econometrics is the volatility process, a measure of variability that can be statistically recovered from high frequency observations of an asset price. This lecture aims at demonstrating and explaining the limit theory for high frequency observations of semimartingales (a standard model in finance), presenting the classical estimation and testing procedures, and covering some future challenges. In the first part of the lecture we will learn the notion of stable convergence and derive some asymptotic results for high frequency statistics of semimartingales in the purely continuous and jump settings. In the second part we will concentrate on classical testing and estimation methods, including estimation of integrated or local volatility, testing for jumps, and estimation of the asymptotic variances, among other problems. Finally, we will discuss some new challenges in high frequency statistics, with a particular focus on high dimensional data and the non-standard nature of statistical testing/estimation methods in the high frequency setting. (A small simulation sketch of two of these classical estimators is given after the course outlines below.)
    1. Limit theorems for high frequency statistics
    2. Classical tests and estimation problems in high frequency
    3. New challenges for high frequency data
  • Roberto Renò (University of Verona):
    1. High-Frequency modeling with zeros slides
      The analysis of a flexible stochastic volatility model (including jumps) is of central importance for asset pricing. However, at very high frequency, traditional statistical models do not appear to be rich enough. The lecture discusses a generalization of the omnipresent Itô semimartingale which incorporates the possibility of zero returns. We show that this has a huge impact on the measurement of volatility and jumps, and important implications for asset pricing. We complement the statistical model with an economic model in which agents with different levels of information interact in a market with costly transactions. The model is able to generate the zero returns observed in the data. We discuss some testable implications of the model linking zeros to trading volume, volatility and transaction costs.
    2. Flash crashes slides
      While the literature on high-frequency data has concentrated almost exclusively on variation measures (volatility and jumps), flash crashes require a drift that dominates the volatility, that is, an exploding drift. The lecture will discuss the "Drift Burst Hypothesis" and its consequences for arbitrage. With the drift burst hypothesis in place and the corresponding Itô semimartingale price process specified, we develop an effective identification strategy for the on-line detection of drift burst sample paths from intraday noisy high-frequency data. The method is nonparametric and can be viewed as a type of t-test that aims to establish whether the observed price movement is more likely generated by the drift than by diffusive volatility. We discuss the implications of the empirical findings for the economic process of price formation. We also discuss and analyze the state of the art of the literature on flash crashes, and in particular the role played by high-frequency traders in contemporary financial markets. (A stylised version of such a test statistic is sketched below.)
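
As a toy illustration of the estimation and testing problems in the Podolskij course, the following sketch simulates one day of high frequency returns from a constant-volatility diffusion, adds a few jumps, and compares realized variance with bipower variation, whose ratio is the basis of the classical jump tests. It is not part of the course material; the model, parameter values and function names are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # One trading day of 1-second log-returns from a constant-volatility diffusion
    # (sigma is an annualised volatility; the model is purely illustrative).
    n, sigma, T = 23_400, 0.2, 1.0 / 252
    dt = T / n
    returns = sigma * np.sqrt(dt) * rng.standard_normal(n)

    # Add two rare jumps to obtain a discontinuous semimartingale path.
    returns_with_jumps = returns.copy()
    jump_idx = rng.choice(n, size=2, replace=False)
    returns_with_jumps[jump_idx] += rng.choice([-1.0, 1.0], size=2) * 0.005

    def realized_variance(r):
        # Consistent for integrated variance plus the sum of squared jumps.
        return np.sum(r ** 2)

    def bipower_variation(r):
        # Barndorff-Nielsen/Shephard bipower variation: robust to finite-activity jumps.
        mu1 = np.sqrt(2.0 / np.pi)          # E|N(0,1)|
        return np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2

    for label, r in (("no jumps", returns), ("with jumps", returns_with_jumps)):
        rv, bv = realized_variance(r), bipower_variation(r)
        # RV/BV close to 1 suggests a continuous path; well above 1 points to jumps.
        print(f"{label:10s}  RV={rv:.3e}  BV={bv:.3e}  RV/BV={rv / bv:.2f}")

Formal jump tests studentise the difference (or the logarithm of the ratio) of the two statistics using an estimate of their asymptotic variance, which is one of the estimation problems covered in the second part of the course.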
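
A stylised version of the kind of t-statistic described in the second Renò lecture is sketched below: a kernel-weighted local drift estimate divided by a kernel-weighted local volatility estimate, computed from past returns only. This is a simplified illustration rather than the exact estimator of the drift-burst literature; the exponential kernel, the bandwidths and all names are assumptions made for the example.

    import numpy as np

    def drift_burst_tstat(returns, times, t0, h_mu=300.0, h_sigma=1500.0):
        # Left-sided exponential kernel: only returns observed before t0 are used.
        lags = t0 - times
        keep = lags >= 0.0
        lags, r = lags[keep], returns[keep]
        k_mu = np.exp(-lags / h_mu)
        k_sigma = np.exp(-lags / h_sigma)
        mu_hat = np.sum(k_mu * r) / h_mu                 # local drift estimate
        sigma2_hat = np.sum(k_sigma * r ** 2) / h_sigma  # local variance estimate
        k2 = 0.5                                         # integral of the squared exponential kernel
        return np.sqrt(h_mu / k2) * mu_hat / np.sqrt(sigma2_hat)

    # Toy comparison: a pure diffusion versus a path with a 10-minute drift explosion.
    rng = np.random.default_rng(1)
    n = 6 * 3600                                         # 1-second returns over six hours
    times = np.arange(1.0, n + 1.0)
    sigma = 0.02 / np.sqrt(n)                            # roughly 2% volatility over the window
    diffusion = sigma * rng.standard_normal(n)
    burst = diffusion.copy()
    burst[-600:] += 5.0 * sigma                          # strong positive drift in the last 10 minutes

    for label, r in (("diffusion", diffusion), ("drift burst", burst)):
        print(f"{label:12s}  t-stat = {drift_burst_tstat(r, times, t0=times[-1]):.1f}")

Absent a drift burst the statistic is approximately standard normal, so large values flag candidate flash-crash episodes; handling microstructure noise and choosing the kernels and bandwidths are discussed in the lecture.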

Numerical Probability.

  • Gilles Pagès (Sorbonne University, Paris) slides
    Introduction to Recursive Stochastic Optimization
    1. Optimization and inverse problems:
      • The Strong Law of Large Numbers as a recursive procedure
      • Towards the paradigm of recursive stochastic algorithms
      • Simulation versus data sets: old and new.
      • A few examples.
    2. Stochastic gradient descent (SGD) and pseudo-gradient:
      • A.s. and L^2 convergence (Robbins-Monro and Robbins-Siegmund theorems)
      • Rates: weak (Central Limit Theorem) and strong (L^2).
      • Ruppert-Polyak averaging principle
      • Martingale methods versus ODE methods
    3. Applications to Finance
      • Stochastic implicitation (volatility, correlation, etc.)
      • Adaptive variance reduction
      • VaR/CVaR computation (see the Robbins-Monro sketch after the course outlines)
      • Stochastic control
      • Bandit algorithms and optimal asset allocation
    4. From optimal quantization to unsupervised learning
      • Optimal quantization: a Wasserstein approach
      • Cubature formula
      • Quantization as a spatial discretization scheme
      • How to get optimal quantization: K-means and Competitive Learning Vector Quantization
  • Benedikt Wilbertz (Sorbonne University, Paris):
    Stochastic Optimization: the machine learning viewpoint
    1. Supervised machine learning:
      • Problem formulation
      • Fundamental error decomposition
      • Bias-variance trade-off
    2. Deep Learning: from its history to the current state of the art
    3. Stochastic gradient descent (SGD) as a fundamental tool for optimizing deep neural networks (a minimal minibatch SGD loop is sketched after this outline)
      • Learning rate tuning for SGD
      • Challenges in scaling SGD to hundreds of GPUs
      • On flat and wide minima: the role of the batch size in SGD
    4. Progress in second-order methods and why they still have a hard time beating SGD
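
As a toy companion to the "VaR/CVaR computation" item of the Pagès course (see Bardou, Frikha, Pagès '09 in the bibliography), the sketch below runs a Robbins-Monro recursion on the Rockafellar-Uryasev criterion to estimate the Value-at-Risk of a loss, together with a running average that converges to the Conditional Value-at-Risk. The step-size schedule, the toy Gaussian loss and the function names are illustrative assumptions, and no variance reduction is applied.

    import numpy as np

    def var_cvar_robbins_monro(sample_loss, alpha=0.95, n_iter=200_000, seed=0):
        # Robbins-Monro recursion for the alpha-quantile (VaR) of the loss,
        # with a companion averaging recursion for the CVaR.
        rng = np.random.default_rng(seed)
        xi, c = 0.0, 0.0                        # running VaR and CVaR estimates
        for n in range(1, n_iter + 1):
            x = sample_loss(rng)                # one i.i.d. draw of the loss
            gamma = 1.0 / n ** 0.75             # steps: sum infinite, sum of squares finite
            # Stochastic gradient (in xi) of the Rockafellar-Uryasev criterion
            # xi + E[(X - xi)^+] / (1 - alpha).
            xi -= gamma * (1.0 - float(x > xi) / (1.0 - alpha))
            # Running average of xi + (X - xi)^+ / (1 - alpha): converges to the CVaR.
            c -= (c - (xi + max(x - xi, 0.0) / (1.0 - alpha))) / n
        return xi, c

    # Toy check against the closed form for a standard normal loss:
    # VaR_95% = 1.645 and CVaR_95% = phi(1.645) / 0.05 = 2.063.
    var_hat, cvar_hat = var_cvar_robbins_monro(lambda rng: rng.standard_normal())
    print(f"VaR_95%  ~ {var_hat:.3f}   CVaR_95% ~ {cvar_hat:.3f}")

In the reference above, the same recursion is combined with adaptive unconstrained importance sampling to control the variance when the confidence level is close to 1.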
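
For the machine-learning viewpoint of the Wilbertz course, the sketch below is a minimal minibatch SGD loop in which the learning rate and the batch size appear as the explicit tuning knobs discussed in the outline above. The model (least-squares linear regression), the geometric learning-rate decay, the toy data and the function name are illustrative assumptions, not course material.

    import numpy as np

    def minibatch_sgd(X, y, lr=0.1, batch_size=32, epochs=20, seed=0):
        # Plain minibatch SGD for least-squares linear regression.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for epoch in range(epochs):
            perm = rng.permutation(n)
            for start in range(0, n, batch_size):
                idx = perm[start:start + batch_size]
                # Minibatch gradient of the mean squared error (up to a factor 1/2).
                grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
                w -= lr * grad
            lr *= 0.9                           # simple geometric learning-rate decay
        return w

    # Toy data: noisy linear model with known ground-truth weights.
    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -1.0, 0.5])
    X = rng.standard_normal((2000, 3))
    y = X @ w_true + 0.1 * rng.standard_normal(2000)
    for bs in (8, 256):
        w_hat = minibatch_sgd(X, y, batch_size=bs)
        print(f"batch_size={bs:3d}  w_hat={np.round(w_hat, 3)}")

Larger batches give less noisy gradients per step but fewer updates per epoch, which is the trade-off behind the batch-size discussion in the course.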

    Bibliography.

    On part A (G. Pagès's course).
    • Bardou, O., Frikha, N., Pagès, G. '09: Computing VaR and CVaR using stochastic approximation and adaptive unconstrained importance sampling. Monte Carlo Methods Appl. 15 (2009), no. 3, 173-210.
    • Benaïm M. '99: Dynamics of stochastic approximation algorithms. Séminaire de Probabilités, XXXIII, 1-68, Lecture Notes in Math., 1709, Springer, Berlin, 1999.
    • Callegaro G., Fiorin L., Grasselli M. '17: Pricing via recursive quantization in stochastic volatility models. Quant. Finance 17 (2017), no. 6, 855-872.
    • Duflo, M. '97: Random iterative models. Applications of Mathematics (New York), 34. Springer-Verlag, Berlin, 1997. xviii+385 pp.
    • Kushner H. J., Yin G. '03: Stochastic approximation and recursive algorithms and applications. 2nd edition. Applications of Mathematics (New York), 35. Stochastic Modelling and Applied Probability. Springer-Verlag, New York, 2003. xxii+474 pp.
    • Lamberton D., Pagès G., Tarrès P. '04: When can the two-armed bandit algorithm be trusted? Ann. Appl. Probab. 14 (2004), no. 3, 1424-1454.
    • Lemaire V., Pagès, G. '10: Unconstrained recursive importance sampling. Ann. Appl. Probab. 20 (2010), no. 3, 1029-1067.
    • Pagès G. '18: Numerical probability. An introduction with applications to finance. Universitext. Springer, Cham, 2018. xxi+579 pp.
    • Pagès, G. '13: Introduction to vector quantization and its applications for numerics. CEMRACS 2013-modelling and simulation of complex systems: stochastic and deterministic approaches, 29-79, ESAIM Proc. Surveys, 48, EDP Sci., Les Ulis.
    • Pagès G., Sagna A. '15: Recursive marginal quantization of the Euler scheme of a diffusion process. Appl. Math. Finance 22 (2015), no. 5, 463-498.
    On part B (B. Wilbertz's course).
    • Bollapragada et al. '18: A Progressive Batching L-BFGS Method for Machine Learning, https://arxiv.org/pdf/1802.05374.pdf
    • Bottou et al. '16: Optimization Methods for Large-Scale Machine Learning, https://arxiv.org/abs/1606.04838
    • Goodfellow, Bengio and Courville '16: Deep Learning, https://www.deeplearningbook.org/
    • Goyal et al. '17: Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, https://arxiv.org/abs/1706.02677
    • Hochreiter and Schmidhuber '97: Flat Minima, Neural Computation 9(1), 1-42.
    • Keskar et al. '16: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, https://arxiv.org/abs/1609.04836
    • Lafond et al. '17: Diagonal Rescaling For Neural Networks, https://arxiv.org/abs/1705.09319
    • Smith et al. '17: Don't Decay the Learning Rate, Increase the Batch Size, https://arxiv.org/abs/1711.00489
    • Smith and Le '17: A Bayesian Perspective on Generalization and Stochastic Gradient Descent, https://arxiv.org/abs/1710.0645

Contact: essfm19@math.unipd.it