Using a Bivariate Reinforced Urn Process (B-RUP), a novel way of modeling the dependence of coupled lifetimes is introduced, with application to the pricing of joint and survivor annuities. In line with the machine learning paradigm, the model is able to improve its performance over time, but it also allows for the use of a priori information, such as experts' judgement, to complement the empirical data. On a well-known Canadian data set, the performance of the B-RUP is studied and compared with the existing literature.
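The abstract does not describe the B-RUP construction itself. Purely for orientation, the minimal Python sketch below (the two-colour setup and all names are hypothetical, not taken from the paper) illustrates the Pólya-type reinforcement idea behind reinforced urn models: the urn is initialized with prior counts encoding the a priori information, and every observation adds a ball of the observed colour, so the predictive probabilities adapt as data accrue.

import random

def reinforce_with_observation(urn, observed_colour):
    # Learning from data: each observed outcome adds one ball of its colour.
    urn[observed_colour] += 1

def draw_and_reinforce(urn, rng=random):
    # Draw a colour with probability proportional to its ball count, then
    # return the ball plus one extra ball of the same colour (Polya-type
    # reinforcement), and report which colour was drawn.
    colours = list(urn)
    colour = rng.choices(colours, weights=[urn[c] for c in colours], k=1)[0]
    urn[colour] += 1
    return colour

# Hypothetical two-colour urn for a one-period outcome: an expert prior of
# roughly 90% survival, followed by three observed outcomes.
urn = {"survive": 9, "die": 1}
for outcome in ("survive", "survive", "die"):
    reinforce_with_observation(urn, outcome)
print(urn["survive"] / sum(urn.values()))  # updated predictive survival probability
print(draw_and_reinforce(urn))             # simulate one further outcome

The more prior balls the urn starts with, the more weight the expert judgement keeps relative to the incoming data, which is the trade-off the abstract alludes to.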
ESR1 Luis Souto
Reinforced Urn Processes (RUPs) represent a flexible class of Bayesian nonparametric models suitable for dealing with possibly right-censored and left-truncated observations. A reliable estimation of their hyper-parameters is, however, missing in the literature. We therefore propose an extension of the Expectation-Maximization (EM) algorithm for RUPs, in both the univariate and the bivariate case. Furthermore, a new methodology combining EM and the prior elicitation mechanism of RUPs is developed: the Expectation-Reinforcement algorithm. Numerical results showing the performance of both algorithms are presented for several analytical examples as well as for a large data set of Canadian annuities.
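The E- and M-steps of these algorithms are not spelled out in the abstract. As a generic illustration of how EM copes with right-censored lifetimes, the sketch below uses a simple exponential model with hypothetical data, not the RUP likelihood of the paper: each iteration imputes the expected residual lifetime of every censored record (E-step) and then re-maximizes the complete-data likelihood (M-step).

def em_exponential_censored(times, observed, n_iter=100, lam=1.0):
    # times:    death time if observed, censoring time otherwise
    # observed: 1 if the death was observed, 0 if the record is right-censored
    # Illustrative stand-in for EM on censored data, not the RUP-specific algorithm.
    n = len(times)
    for _ in range(n_iter):
        # E-step: expected complete lifetime; by memorylessness of the
        # exponential, a record censored at t has E[T | T > t] = t + 1/lam.
        expected = [t if d else t + 1.0 / lam for t, d in zip(times, observed)]
        # M-step: maximum-likelihood rate for the completed data.
        lam = n / sum(expected)
    return lam

# Hypothetical data: three observed lifetimes and two right-censored records.
print(em_exponential_censored([2.0, 3.5, 1.2, 4.0, 5.0], [1, 1, 1, 0, 0]))

For this toy model the iteration converges to the usual censored-data estimate, namely the number of observed deaths divided by the total exposure time.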
We introduce a novel way of modeling the dependence of coupled lifetimes for the pricing of joint and survivor annuities. Using a well-known Canadian data set, we analyze our results and compare them with the existing literature, which mainly relies on copulas. Based on urn processes and a one-factor construction, the proposed model is able to improve its performance over time, in line with the machine learning paradigm, and it also allows for the use of experts' judgements to complement the empirical data.
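For context, and using standard actuarial notation that is not taken from the abstract, the expected present value of a last-survivor annuity-due on two lives (x) and (y), paying one unit at the start of each year while at least one of the two is alive, can be written as

\[
\ddot{a}_{\overline{xy}}
= \sum_{t=0}^{\infty} v^{t}\,\Pr\!\big(\max(T_x, T_y) > t\big)
= \ddot{a}_{x} + \ddot{a}_{y} - \ddot{a}_{xy},
\qquad
\ddot{a}_{xy} = \sum_{t=0}^{\infty} v^{t}\,{}_{t}p_{xy},
\]

where v is the annual discount factor, T_x and T_y are the two lifetimes, and {}_{t}p_{xy} = \Pr(T_x > t, T_y > t) is the joint survival probability that the dependence model, here the B-RUP rather than a copula, has to supply. Contract variants with reduced payments after the first death change the weights in the sum but not where the joint survival probabilities enter; this is a standard identity, not a formula quoted from the paper.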
We propose a new jump-diffusion process, the Heston-Queue-Hawkes (HQH) model, combining the well-known Heston model and the recently introduced Queue-Hawkes (Q-Hawkes) jump process.
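The abstract only names the two building blocks. For orientation, the well-known Heston stochastic-volatility dynamics, augmented with a generic jump term whose notation is an assumption of this sketch (the compensating drift correction is omitted), read

\[
\frac{dS_t}{S_{t^-}} = r\,dt + \sqrt{v_t}\,dW_t^{S} + \big(e^{J}-1\big)\,dN_t,
\qquad
dv_t = \kappa\,(\bar v - v_t)\,dt + \gamma\,\sqrt{v_t}\,dW_t^{v},
\qquad
dW_t^{S}\,dW_t^{v} = \rho\,dt,
\]

where S_t is the asset price, v_t the variance process, r the risk-free rate, J the jump size, and N_t a counting process of jump arrivals. In the Q-Hawkes construction, described here as an assumption of this sketch rather than from the abstract, the arrival intensity is self-exciting and queue-based: each jump raises the intensity, and each such excitation is removed again after an exponentially distributed activation period. The exact specification adopted in the HQH model is not given in this abstract.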