
Approximative Policy Iteration for Exit Time Feedback Control Problems driven by Stochastic Differential Equations using Tensor Train format

  • We consider a stochastic optimal exit time feedback control problem. The Bellman equation is solved approximately via the Policy Iteration algorithm on a polynomial ansatz space, reducing it to a sequence of linear equations. Since multivariate polynomials of high degree are needed, the corresponding equations suffer from the curse of dimensionality even in moderate dimensions. We employ tensor-train methods to mitigate this problem. The approximation step within the Policy Iteration is performed via a least-squares ansatz, and the integration is carried out via Monte Carlo methods. Numerical evidence is given for the (multidimensional) double-well potential and a three-hole potential.
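For orientation, here is a minimal sketch of the generic scheme the abstract describes: policy iteration where each policy is evaluated by Monte Carlo rollouts of a controlled SDE stopped at exit, the value function is fitted by least squares on a polynomial ansatz, and the policy is then improved greedily. Everything concrete below is an assumption for illustration: a toy 1D double-well drift, the running cost |u|^2/2 + 1, the exit domain (-2, 2), and a monomial basis in place of the paper's tensor-train ansatz. It is not the authors' implementation.

```python
# Illustrative policy iteration with a polynomial value-function ansatz,
# least-squares fitting, and Monte Carlo evaluation. Toy 1D setting only;
# the paper's tensor-train machinery for high dimensions is not shown.
import numpy as np

rng = np.random.default_rng(0)

def features(x, degree=6):
    """Monomial basis 1, x, x^2, ..., x^degree (the polynomial ansatz)."""
    return np.vander(x, degree + 1, increasing=True)

def rollout_cost(x0, policy, dt=1e-2, horizon=500, sigma=0.5):
    """Monte Carlo cost-to-go of `policy` for dX = (-V'(X) + u) dt + sigma dW
    with hypothetical double-well V(x) = (x^2 - 1)^2, stopped on exit from
    (-2, 2). Running cost |u|^2/2 + 1 accrues only while the path is alive."""
    x, cost = x0.copy(), np.zeros_like(x0)
    alive = np.ones_like(x0, dtype=bool)
    for _ in range(horizon):
        u = policy(x)
        drift = -4.0 * x * (x**2 - 1.0) + u
        cost += alive * (0.5 * u**2 + 1.0) * dt
        x = x + alive * (drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape))
        alive &= np.abs(x) < 2.0
    return cost

# Policy iteration: evaluate the current policy, fit V by least squares,
# then improve the policy from the fitted value function.
policy = lambda x: np.zeros_like(x)            # start from the zero control
for it in range(5):
    xs = rng.uniform(-2.0, 2.0, size=2000)     # sample points in the domain
    vs = rollout_cost(xs, policy)              # policy evaluation (Monte Carlo)
    coef, *_ = np.linalg.lstsq(features(xs), vs, rcond=None)  # linear LS fit
    dcoef = np.polynomial.polynomial.polyder(coef)            # coefficients of V'
    # Greedy improvement: for control cost |u|^2/2 the minimizer of the
    # Hamiltonian is u = -V'(x).
    policy = lambda x, c=dcoef: -np.polynomial.polynomial.polyval(x, c)
```

Each policy-evaluation step is a linear least-squares problem in the ansatz coefficients, which is why the overall iteration amounts to a sequence of linear equations; replacing the monomial basis by a tensor-train parametrization is what makes this tractable in higher dimensions.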

Metadata
Authors: Konstantin Fackeldey, Mathias Oster, Leon Sallandt, Reinhold Schneider
Document Type: Article
Parent Title (English): SIAM Journal on Multiscale Modeling and Simulation
Volume: 20
Issue: 1
First Page: 379
Last Page: 403
Year of first publication: 2022
ArXiv Id: http://arxiv.org/abs/2010.04465
DOI: https://doi.org/10.1137/20M1372500