Library

  • 1
    Publication Date: 2020-03-20
    Description: Markov decision processes (MDPs) or partially observable MDPs (POMDPs) are used for modelling situations in which the evolution of a process is partly random and partly controllable. These MDP theories allow for computing the optimal control policy for processes that can continuously or frequently be observed, even if only partially. However, they cannot be applied if state observation is very costly and therefore rare in time. We present a novel MDP theory for rare, costly observations and derive the corresponding Bellman equation. In the new theory, state information can be derived for a particular cost after rather long time intervals. The resulting information costs enter into the total cost and thus into the optimization criterion. This approach applies to many real-world problems, particularly in the medical context, where the medical condition is examined rather rarely because examination costs are high. At the same time, the approach allows for efficient numerical realization. We demonstrate the usefulness of the novel theory by determining, from the national economic perspective, optimal therapeutic policies for the treatment of the human immunodeficiency virus (HIV) in resource-rich and resource-poor settings. Based on the developed theory and models, we discover that available drugs may not be utilized efficiently in resource-poor settings due to exorbitant diagnostic costs.
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
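The rare-observation theory above extends the classical Bellman recursion for fully observed MDPs. As background, here is a minimal value-iteration sketch for that classical setting; the transition tensor, costs, and discount factor are hypothetical toy data, not taken from the paper.

```python
import numpy as np

# Toy fully observed MDP: P[a, s, s'] transition probabilities, c[a, s] costs.
n_states, n_actions, gamma = 3, 2, 0.95
rng = np.random.default_rng(0)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)          # make rows stochastic
c = rng.random((n_actions, n_states))      # immediate costs

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman operator: V(s) = min_a [ c(a,s) + gamma * sum_s' P(s'|s,a) V(s') ]
    Q = c + gamma * P @ V
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmin(axis=0)                  # greedy policy w.r.t. the converged V
```

The paper's contribution is to augment this recursion so that each state observation carries its own cost and the observation times themselves become decision variables.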
  • 2
    Publication Date: 2020-03-20
    Description: We present the theory of “Markov decision processes (MDP) with rare state observation” and apply it to optimal treatment scheduling and diagnostic testing to mitigate HIV-1 drug resistance development in resource-poor countries. The developed theory assumes that the state of the process is hidden and can only be determined by making an examination. Each examination produces costs which enter into the considered cost functional, so that the resulting optimization problem includes finding optimal examination times. This is a realistic ansatz: In many real-world applications, like HIV-1 treatment scheduling, the information about the disease evolution involves substantial costs, such that examination and control are intimately connected. However, perfect compliance with the optimal strategy can rarely be achieved. This may be particularly true for HIV-1 resistance testing in resource-constrained countries. In the present work, we therefore analyze the sensitivity of the costs with respect to deviations from the optimal examination times, both analytically and for the considered application. We discover continuity of the cost functional with respect to the examination times. For the HIV application, moreover, sensitivity towards small deviations from the optimal examination rule depends on the disease state. Furthermore, we compare the optimal rare-control strategy to (i) constant control strategies (one action for the remaining time) and to (ii) the permanent control of the original, fully observed MDP. This comparison is done in terms of expected costs and in terms of life-prolongation. The proposed rare-control strategy offers a clear benefit over a constant control, stressing the usefulness of medical testing and informed decision making. This indicates that lower-priced medical tests could improve HIV treatment in resource-constrained settings and warrants further investigation.
    Language: English
    Type: article , doc-type:article
  • 3
    Publication Date: 2020-03-20
    Description: Well-mixed stochastic chemical kinetics are properly modelled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are required. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed.
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
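The two modelling levels contrasted in this abstract can be illustrated on the simplest birth-death gene expression model: stochastic jumps simulated exactly with Gillespie's algorithm versus the deterministic rate equation dn/dt = k - g*n with steady state n* = k/g. This is a hedged sketch with illustrative rates, not a model from the paper.

```python
import numpy as np

def gillespie_birth_death(k=10.0, g=1.0, t_end=500.0, seed=1):
    """Exact stochastic simulation of production (rate k) and decay (rate g*n)."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    time_weighted = 0.0
    while t < t_end:
        a1, a2 = k, g * n                  # propensities: birth, death
        a0 = a1 + a2
        tau = rng.exponential(1.0 / a0)    # waiting time to the next reaction
        time_weighted += n * min(tau, t_end - t)
        t += tau
        if t >= t_end:
            break
        n += 1 if rng.random() < a1 / a0 else -1
    return time_weighted / t_end           # time-averaged copy number

mean_n = gillespie_birth_death()
ode_steady_state = 10.0 / 1.0              # k/g from the deterministic rate equation
```

For large copy numbers the stochastic time average concentrates around the deterministic value, which is exactly the regime where the hybrid approaches surveyed above switch a species from discrete to continuous description.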
  • 4
    Publication Date: 2020-03-20
    Description: Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are required. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small volume approximations.
    Language: English
    Type: article , doc-type:article
  • 5
    Publication Date: 2020-03-20
    Description: This paper investigates the criterion of long-term average costs for a Markov decision process (MDP) which is not permanently observable. Each observation of the process incurs a fixed information cost, which enters the considered performance criterion and precludes arbitrarily frequent state testing. Choosing the rare observation times is part of the control procedure. In contrast to the theory of partially observable Markov decision processes, we consider an arbitrary continuous-time Markov process on a finite state space without further restrictions on the dynamics or the type of interaction. Based on the original Markov control theory, we redefine the control model and the average cost criterion for the setting of information costs. We analyze the average-cost constant for the case of ergodic dynamics and present an optimality equation which characterizes the optimal choice of control actions and observation times. For this purpose, we construct an equivalent freely observable MDP and translate the well-known results from the original theory to the new setting.
    Language: English
    Type: article , doc-type:article
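The classical baseline that this information-cost model reworks is the long-run average-cost criterion for a fully observed finite MDP, typically solved by relative value iteration. Below is a minimal sketch of that baseline with hypothetical toy transition and cost data, not taken from the paper.

```python
import numpy as np

# Toy fully observed MDP for the average-cost criterion.
n_states, n_actions = 3, 2
rng = np.random.default_rng(5)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)          # row-stochastic transitions
c = rng.random((n_actions, n_states))      # immediate costs

h = np.zeros(n_states)                     # relative value (bias) function
for _ in range(5000):
    T = (c + P @ h).min(axis=0)            # average-cost Bellman operator
    g = T[0]                               # normalize at a reference state
    h_new = T - g
    if np.max(np.abs(h_new - h)) < 1e-12:
        break
    h = h_new
# g now approximates the optimal long-run average cost per stage
```

The paper's equivalent "freely observable" MDP construction makes it possible to apply exactly this kind of classical machinery even when every state observation carries a cost.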
  • 6
    Publication Date: 2020-03-20
    Language: English
    Type: doctoralthesis , doc-type:doctoralThesis
  • 7
    Publication Date: 2024-05-06
    Description: The multi-grid reaction-diffusion master equation (mgRDME) provides a generalization of stochastic compartment-based reaction-diffusion modelling described by the standard reaction-diffusion master equation (RDME). By enabling different resolutions on lattices for biochemical species with different diffusion constants, the mgRDME approach improves both accuracy and efficiency of compartment-based reaction-diffusion simulations. The mgRDME framework is examined through its application to morphogen gradient formation in stochastic reaction-diffusion scenarios, using both an analytically tractable first-order reaction network and a model with a second-order reaction. The results obtained by the mgRDME modelling are compared with the standard RDME model and with the (more detailed) particle-based Brownian dynamics simulations. The dependence of error and numerical cost on the compartment sizes is defined and investigated through a multi-objective optimization problem.
    Language: English
    Type: article , doc-type:article
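In standard RDME-style compartment models, a molecule hops between adjacent compartments of width h at rate d = D / h²; the mgRDME idea of assigning coarser lattices to faster-diffusing species amounts to choosing a species-specific h. The sketch below illustrates that hop-rate scaling with a single-particle simulation; all numbers are illustrative, not from the paper.

```python
import numpy as np

def hop_rate(D, h):
    # RDME diffusion jump rate between neighbouring compartments of width h
    return D / h**2

def simulate_hops(D=1.0, h=0.1, n_comp=50, t_end=1.0, seed=2):
    """Single particle hopping on a 1D compartment lattice, reflecting boundaries."""
    rng = np.random.default_rng(seed)
    d = hop_rate(D, h)
    pos, t = n_comp // 2, 0.0              # start in the middle compartment
    while True:
        # propensities for a left and a right hop (zero at the boundaries)
        rates = [d if pos > 0 else 0.0, d if pos < n_comp - 1 else 0.0]
        total = sum(rates)
        t += rng.exponential(1.0 / total)  # exponential waiting time
        if t > t_end:
            return pos
        pos += -1 if rng.random() < rates[0] / total else 1

final = simulate_hops()
```

Doubling h quarters the hop rate, which is the accuracy-versus-cost trade-off that the multi-objective optimization in the abstract quantifies per species.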
  • 8
    Publication Date: 2024-06-11
    Description: This article addresses reaction networks in which spatial and stochastic effects are of crucial importance. For such systems, particle-based models allow us to describe all microscopic details with high accuracy. However, they suffer from computational inefficiency if particle numbers and density become too large. Alternative coarse-grained-resolution models reduce computational effort tremendously, e.g., by replacing the particle distribution by a continuous concentration field governed by reaction-diffusion PDEs. We demonstrate how models on the different resolution levels can be combined into hybrid models that seamlessly merge the best of both worlds, describing molecular species with large copy numbers by macroscopic equations with spatial resolution while keeping the stochastic-spatial particle-based resolution level for the species with low copy numbers. To this end, we introduce a simple particle-based model for the binding dynamics of ions and vesicles at the heart of the neurotransmission process. Within this framework, we derive a novel hybrid model and present results from numerical experiments which demonstrate that the hybrid model allows for an accurate approximation of the full particle-based model in realistic scenarios.
    Language: English
    Type: article , doc-type:article
  • 9
    Publication Date: 2024-06-11
    Description: This work explores a synchronization-like phenomenon induced by common noise for continuous-time Markov jump processes given by chemical reaction networks. Based on Gillespie’s stochastic simulation algorithm, a corresponding random dynamical system is formulated in a two-step procedure, at first for the states of the embedded discrete-time Markov chain and then for the augmented Markov chain including random jump times. We uncover a time-shifted synchronization in the sense that—after some initial waiting time—one trajectory exactly replicates another one with a certain time delay. Whether or not such a synchronization behavior occurs depends on the combination of the initial states. We prove this partial time-shifted synchronization for the special setting of a birth-death process by analyzing the corresponding two-point motion of the embedded Markov chain and determine the structure of the associated random attractor. In this context, we also provide general results on existence and form of random attractors for discrete-time, discrete-space random dynamical systems.
    Language: English
    Type: article , doc-type:article
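The common-noise mechanism in this abstract can be illustrated on the embedded jump chain of a birth-death process: two copies started in different states consume the same stream of uniform random numbers, and because the birth probability k / (k + g*n) is monotone in n, the gap between the copies can only shrink until they coalesce and move identically ever after. This simplified two-point-motion demo (plain coalescence rather than the paper's time-shifted synchronization, with illustrative rates) sketches the idea.

```python
import numpy as np

def step(n, u, k=5.0, g=1.0):
    # embedded jump chain of a birth-death process:
    # move up with probability k / (k + g*n), reflect at zero
    if n == 0 or u < k / (k + g * n):
        return n + 1
    return n - 1

rng = np.random.default_rng(3)
n1, n2 = 0, 20                             # two copies, different initial states
met = None
for i in range(10_000):
    u = rng.random()                       # common noise drives BOTH copies
    n1, n2 = step(n1, u), step(n2, u)
    if n1 == n2 and met is None:
        met = i                            # first time the copies coalesce
```

Once the copies agree, applying the same map to the same state with the same noise keeps them identical, which is the deterministic heart of noise-induced synchronization.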
  • 10
    Publication Date: 2024-06-10
    Description: This paper explores memory mechanisms in complex socio-technical systems, using a mobility demand model as an example case. We simplify a large-scale agent-based mobility model into a Markov process and discover that the mobility decision process is non-Markovian. This is due to its dependence on the system’s history, including social structure and local infrastructure, which evolve based on prior mobility decisions. To make the process Markovian, we extend the state space by incorporating two history-dependent components. Although our model is a greatly reduced version of the original one, it remains too complex for standard analytic methods. Instead, we employ simulations to examine the functionalities of the two history-dependent components. We think that the structure of the analyzed stochastic process is exemplary for many socio-technical, socio-economic, and socio-ecological systems. Additionally, it exhibits analogies with the framework of extended evolution, which has previously been used to study cultural evolution.
    Language: English
    Type: article , doc-type:article
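The state-space extension trick used in this abstract has a textbook miniature: a persistent random walker whose step distribution depends on its previous step is non-Markovian in position alone, but becomes Markovian in the extended state (position, last_step). A hedged toy sketch, with an illustrative persistence parameter unrelated to the paper's model:

```python
import numpy as np

def step(state, rng, p=0.8):
    """One move of a persistent walker; Markovian in (position, last_step)."""
    pos, last = state
    # with probability p repeat the previous direction, otherwise reverse it
    d = last if rng.random() < p else -last
    return (pos + d, d)

rng = np.random.default_rng(4)
state = (0, 1)                             # extended state: (position, last_step)
for _ in range(100):
    state = step(state, rng)
```

Knowing the position alone is not enough to predict the next move; carrying the one-step history in the state restores the Markov property, exactly as the paper's two history-dependent components do at much larger scale.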