American-type options. Volume 1, Stochastic approximation methods

Bibliographic Details
Main Author: Silʹvestrov, D. S. (Dmitriĭ Sergeevich)
Corporate Author: Ebooks Corporation
Format: Electronic eBook
Language: English
Published: Berlin ; Boston : De Gruyter, c2014.
Series: De Gruyter studies in mathematics ; 56.
Online Access: Connect to this title online (unlimited simultaneous users allowed; 325 uses per year)
Table of Contents:
  • 1. Multivariate modulated Markov log-price processes (LPP)
  • 1.1. Markov LPP
  • 1.2. LPP represented by random walks
  • 1.3. Autoregressive LPP
  • 1.4. Autoregressive stochastic volatility LPP
  • 2. American-type options
  • 2.1. American-type options
  • 2.2. Pay-off functions
  • 2.3. Reward and log-reward functions
  • 2.4. Optimal stopping times
  • 2.5. American-type knockout options
  • 3. Backward recurrence reward algorithms
  • 3.1. Binomial tree reward algorithms
  • 3.2. Trinomial tree reward algorithms
  • 3.3. Random walk reward algorithms
  • 3.4. Markov chain reward algorithms
  • 4. Upper bounds for option rewards
  • 4.1. Markov LPP with bounded characteristics
  • 4.2. LPP represented by random walks
  • 4.3. Markov LPP with unbounded characteristics
  • 4.4. Univariate Markov Gaussian LPP
  • 4.5. Multivariate modulated Markov Gaussian LPP
  • 5. Convergence of option rewards I
  • 5.1. Asymptotically uniform upper bounds for rewards I
  • 5.2. Modulated Markov LPP with bounded characteristics
  • 5.3. LPP represented by modulated random walks
  • 6. Convergence of option rewards II
  • 6.1. Asymptotically uniform upper bounds for rewards II
  • 6.2. Univariate modulated LPP with unbounded characteristics
  • 6.3. Asymptotically uniform upper bounds for rewards III
  • 6.4. Multivariate modulated LPP with unbounded characteristics
  • 6.5. Conditions of convergence for Markov price processes
  • 7. Space-skeleton reward approximations
  • 7.1. Atomic approximation models
  • 7.2. Univariate Markov LPP with bounded characteristics
  • 7.3. Multivariate Markov LPP with bounded characteristics
  • 7.4. LPP represented by multivariate modulated random walks
  • 7.5. Multivariate Markov LPP with unbounded characteristics
  • 8. Convergence of rewards for Markov Gaussian LPP
  • 8.1. Univariate Markov Gaussian LPP
  • 8.2. Multivariate modulated Markov Gaussian LPP
  • 8.3. Markov Gaussian LPP with estimated characteristics
  • 8.4. Skeleton reward approximations for Markov Gaussian LPP
  • 8.5. LPP represented by Gaussian random walks
  • 9. Tree-type approximations for Markov Gaussian LPP
  • 9.1. Univariate binomial tree approximations
  • 9.2. Multivariate binomial tree approximations
  • 9.3. Multivariate trinomial tree approximations
  • 9.4. Inhomogeneous in space binomial approximations
  • 9.5. Inhomogeneous in time and space trinomial approximations
  • 10. Convergence of tree-type reward approximations
  • 10.1. Univariate binomial tree approximation models
  • 10.2. Multivariate homogeneous in space tree models
  • 10.3. Univariate inhomogeneous in space tree models
  • 10.4. Multivariate inhomogeneous in space tree models.