Summary
The aim of this PhD project is to explore the interface between Statistical Modelling and Machine Learning, in order to understand how these approaches can be combined in the extreme value domain and potentially achieve better predictive power.
Full description
The Statistical Modelling (SM) approach is based on choosing a "suitable" model (e.g. linear regression, time series, etc.), fitting it to the data and then using it to predict the future. The Machine Learning (ML) approach is based on searching algorithmically for "typical" patterns in the data (e.g. via Random Forests, Neural Networks, Deep Learning, etc.) and then using such patterns to predict the future. SM allows a better interpretation of results, but the choice of a model may be subjective and disputable. On the other hand, ML methods often have better predictive power but act as a "black box": we may be able to make a fairly good prediction but cannot explain why it is so.
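As a purely illustrative sketch of this contrast (not part of the project itself, and assuming Python with numpy and scikit-learn), one can compare an interpretable linear model with a random forest on the same simulated data:

```python
# Illustrative sketch: an SM-style interpretable model vs an ML-style "black box".
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)  # known signal + noise

# SM-style: fitted coefficients can be read off and interpreted directly
lm = LinearRegression().fit(X, y)
print("Linear model coefficients:", lm.coef_)

# ML-style: often predicts well, but the fitted mechanism is hard to interpret
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("Random forest prediction at (0.5, 0.5, 0.5):", rf.predict([[0.5, 0.5, 0.5]]))
```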
There are ongoing discussions across these two communities about which approach is preferable, with early ideas in favour of their "convergence" dating back to the 1980s and advocated by prominent statisticians such as Leo Breiman [1]. More recently, with the advent of Reinforcement Learning [3,4], probabilistic concepts have begun to play a more significant part in ML algorithms, which now focus on predicting the distribution of a variable using iterative updates from the data (so-called training). This is reminiscent of the Bayesian approach in Statistics and is worth exploring further. In this regard, the analysis of extreme values raises interesting methodological questions. Extreme values are rare, yet predicting them is both important and challenging because of their potentially high cost and undesirable impact. While there is a well-documented statistical theory for this purpose (see e.g. [2]), it is less clear whether (and how) ML technology can be used there. The aim of this PhD project is to look at the interface of these two approaches, to try and understand how to combine them and potentially achieve better predictive power.
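For a flavour of the statistical theory referred to above, a minimal peaks-over-threshold (POT) sketch in the spirit of [2] is given below, assuming Python with numpy and scipy; the data, threshold and levels are purely illustrative, not prescribed by the project.

```python
# Minimal peaks-over-threshold (POT) sketch: fit a generalised Pareto distribution
# to exceedances over a high threshold and estimate the chance of a future extreme.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gumbel(loc=10.0, scale=2.0, size=5000)    # synthetic "observations"

u = np.quantile(data, 0.95)                          # high threshold
exceedances = data[data > u] - u                     # peaks over the threshold

# Fit a generalised Pareto distribution to the exceedances (location fixed at 0)
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# Estimated probability of exceeding a level x > u:
# P(X > x) ~ P(X > u) * P(GPD exceedance > x - u)
x = u + 5.0
p_exceed_u = np.mean(data > u)
p_x = p_exceed_u * stats.genpareto.sf(x - u, shape, loc=loc, scale=scale)
print(f"threshold u = {u:.2f}, estimated P(X > {x:.2f}) = {p_x:.5f}")
```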
References
- Breiman, L. Statistical modeling: the two cultures. Statistical Science, 16 (2001), 199–231; https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726
- Gyarmati-Szabó, J., Bogachev, L.V., and Chen, H. Nonstationary POT modelling of air pollution concentrations: Statistical analysis of the traffic and meteorological impact. Environmetrics, 28 (2017), e2449; doi:10.1002/env.2449
- Ha, D. and Schmidhuber, J. World models. Zenodo (online), 2018; doi:10.5281/zenodo.1207631
- Kingma, D.P. and Welling, M. Auto-encoding variational Bayes. In: Proceedings of the 2nd International Conference on Learning Representations (ICLR, 2014); arXiv:1312.6114 (2013).
