Library


  • 1
    Publication Date: 2020-03-20
    Language: English
    Type: article , doc-type:article
  • 2
    Publication Date: 2020-03-20
    Description: Markov Decision Processes (MDP) or Partially Observable MDPs (POMDP) are used for modelling situations in which the evolution of a process is partly random and partly controllable. These MDP theories allow for computing the optimal control policy for processes that can continuously or frequently be observed, even if only partially. However, they cannot be applied if state observation is very costly and therefore rare (in time). We present a novel MDP theory for rare, costly observations and derive the corresponding Bellman equation. In the new theory, state information can be derived for a particular cost after certain, rather long time intervals. The resulting information costs enter into the total cost and thus into the optimization criterion. This approach applies to many real world problems, particularly in the medical context, where the medical condition is examined rather rarely because examination costs are high. At the same time, the approach allows for efficient numerical realization. We demonstrate the usefulness of the novel theory by determining, from the national economic perspective, optimal therapeutic policies for the treatment of the human immunodeficiency virus (HIV) in resource-rich and resource-poor settings. Based on the developed theory and models, we discover that available drugs may not be utilized efficiently in resource-poor settings due to exorbitant diagnostic costs.
    Language: English
    Type: reportzib , doc-type:preprint
    Format: application/pdf
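The rare-observation Bellman equation described in the abstract above can be sketched in a few lines of value iteration. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the controller jointly picks an action and an inter-observation interval, and the observation cost enters the criterion each time the state is inspected. The transition matrices, costs, and candidate intervals are made-up examples.

```python
import numpy as np

# Sketch (assumption-laden, not the papers' code): value iteration for an MDP
# where the state is only observed, at cost c_obs, every tau steps, and one
# action is held between observations.

def bellman_rare_obs(P, cost, c_obs, gamma, taus, n_iter=300):
    """Value iteration over (action, inter-observation interval) pairs.

    P[a]    : (S, S) one-step transition matrix under action a
    cost[a] : (S,) one-step running cost under action a
    c_obs   : cost paid per state observation
    gamma   : per-step discount factor
    taus    : candidate observation intervals, in steps
    """
    A, S = len(P), P[0].shape[0]
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = np.full((A, len(taus), S), np.inf)
        for a in range(A):
            for k, tau in enumerate(taus):
                # Discounted running cost of holding action a for tau steps,
                # then paying the observation cost and continuing optimally.
                Pt = np.eye(S)
                run = np.zeros(S)
                for t in range(tau):
                    run += (gamma ** t) * Pt @ cost[a]
                    Pt = Pt @ P[a]
                Q[a, k] = run + (gamma ** tau) * (c_obs + Pt @ V)
        V = Q.min(axis=(0, 1))
    return V
```

As the abstract describes, the information cost `c_obs` is part of the total cost, so cheaper diagnostics directly lower the optimal value.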
  • 3
    Publication Date: 2020-03-20
    Description: We present the theory of “Markov decision processes (MDP) with rare state observation” and apply it to optimal treatment scheduling and diagnostic testing to mitigate HIV-1 drug resistance development in resource-poor countries. The developed theory assumes that the state of the process is hidden and can only be determined by making an examination. Each examination produces costs which enter into the considered cost functional, so that the resulting optimization problem includes finding optimal examination times. This is a realistic ansatz: in many real-world applications, like HIV-1 treatment scheduling, information about the disease evolution involves substantial costs, such that examination and control are intimately connected. However, perfect compliance with the optimal strategy can rarely be achieved. This may be particularly true for HIV-1 resistance testing in resource-constrained countries. In the present work, we therefore analyze the sensitivity of the costs with respect to deviations from the optimal examination times, both analytically and for the considered application. We discover continuity of the cost functional with respect to the examination times. For the HIV application, moreover, sensitivity towards small deviations from the optimal examination rule depends on the disease state. Furthermore, we compare the optimal rare-control strategy to (i) constant control strategies (one action for the remaining time) and to (ii) the permanent control of the original, fully observed MDP. This comparison is done in terms of expected costs and in terms of life-prolongation. The proposed rare-control strategy offers a clear benefit over a constant control, stressing the usefulness of medical testing and informed decision making. This indicates that lower-priced medical tests could improve HIV treatment in resource-constrained settings and warrants further investigation.
    Language: English
    Type: article , doc-type:article
  • 4
    Publication Date: 2020-03-20
    Description: Markov Decision Processes (MDP) or Partially Observable MDPs (POMDP) are used for modelling situations in which the evolution of a process is partly random and partly controllable. These MDP theories allow for computing the optimal control policy for processes that can continuously or frequently be observed, even if only partially. However, they cannot be applied if state observation is very costly and therefore rare (in time). We present a novel MDP theory for rare, costly observations and derive the corresponding Bellman equation. In the new theory, state information can be derived for a particular cost after certain, rather long time intervals. The resulting information costs enter into the total cost and thus into the optimization criterion. This approach applies to many real world problems, particularly in the medical context, where the medical condition is examined rather rarely because examination costs are high. At the same time, the approach allows for efficient numerical realization. We demonstrate the usefulness of the novel theory by determining, from the national economic perspective, optimal therapeutic policies for the treatment of the human immunodeficiency virus (HIV) in resource-rich and resource-poor settings. Based on the developed theory and models, we discover that available drugs may not be utilized efficiently in resource-poor settings due to exorbitant diagnostic costs.
    Language: English
    Type: article , doc-type:article
  • 5
    Publication Date: 2020-03-20
    Language: English
    Type: doctoralthesis , doc-type:doctoralThesis
  • 6
    Publication Date: 2022-02-10
    Description: Many real-world processes can naturally be modeled as systems of interacting agents. However, the long-term simulation of such agent-based models is often intractable when the system becomes too large. In this paper, starting from a stochastic spatio-temporal agent-based model (ABM), we present a reduced model in terms of stochastic PDEs that describes the evolution of agent number densities for large populations. We discuss the algorithmic details of both approaches; regarding the SPDE model, we apply Finite Element discretization in space which not only ensures efficient simulation but also serves as a regularization of the SPDE. Illustrative examples for the spreading of an innovation among agents are given and used for comparing ABM and SPDE models.
    Language: English
    Type: article , doc-type:article
  • 7
    Publication Date: 2022-10-07
    Description: Agent-based models (ABMs) are a useful tool for modeling spatio-temporal population dynamics, where many details can be included in the model description. Their computational cost, however, is very high, and for stochastic ABMs many individual simulations are required to sample quantities of interest. In particular, large numbers of agents render sampling infeasible. Model reduction to a metapopulation model leads to a significant gain in computational efficiency, while preserving important dynamical properties. Based on a precise mathematical description of spatio-temporal ABMs, we present two different metapopulation approaches (stochastic and piecewise deterministic) and discuss the approximation steps between the different models within this framework. In particular, we show how the stochastic metapopulation model results from a Galerkin projection of the underlying ABM onto a finite-dimensional ansatz space. Finally, we utilize our modeling framework to provide a conceptual model for the spreading of COVID-19 that can be scaled to real-world scenarios.
    Language: English
    Type: article , doc-type:article
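The computational gain described in the two abstracts above comes from aggregating individual agents into patch-level counts. The toy model below illustrates that idea only: the SI dynamics, the binomial infection rule, and uniform random migration are all assumptions for illustration, not the papers' metapopulation construction.

```python
import numpy as np

# Toy stochastic metapopulation sketch (illustrative assumptions throughout):
# instead of tracking every agent, we evolve per-patch counts, which is the
# model-reduction step the abstracts describe.

def simulate_metapop(S, I, beta, move, steps, rng):
    """Discrete-time SI dynamics on patches with random migration.

    S, I  : integer arrays of susceptible/infected counts per patch
    beta  : within-patch transmission probability per contact per step
    move  : probability an agent relocates to a uniformly random patch
    """
    S, I = S.copy(), I.copy()
    n_patch = len(S)
    for _ in range(steps):
        N = S + I
        # Each susceptible is infected with probability 1 - (1 - beta)^(I/N)
        # (a rough frequency-dependent assumption).
        p_inf = np.where(N > 0, 1.0 - (1.0 - beta) ** (I / np.maximum(N, 1)), 0.0)
        new_inf = rng.binomial(S, p_inf)
        S -= new_inf
        I += new_inf
        # Migration: a binomial share of each compartment redistributes
        # uniformly over patches, conserving the total population.
        for X in (S, I):
            movers = rng.binomial(X, move)
            X -= movers
            X += rng.multinomial(movers.sum(), np.ones(n_patch) / n_patch)
    return S, I
```

The state is a handful of integers per patch regardless of how many agents they represent, which is exactly why the reduced description scales where the full ABM does not.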
  • 8
    Publication Date: 2022-11-28
    Language: English
    Type: book , doc-type:book
  • 9
    Publication Date: 2023-01-06
    Description: Spatiotemporal signal shaping in G protein-coupled receptor (GPCR) signaling is now a well-established and accepted notion to explain how signaling specificity can be achieved by a superfamily sharing only a handful of downstream second messengers. Dozens of Gs-coupled GPCR signals ultimately converge on the production of cAMP, a ubiquitous second messenger. This idea is almost always framed in terms of local concentrations, the differences in which are maintained by means of spatial separation. However, given the dynamic nature of the reaction-diffusion processes at hand, the dynamics, in particular the local diffusional properties of the receptors and their cognate G proteins, are also important. By combining some first-principle considerations, simulated data, and experimental data of the receptors diffusing on the membranes of living cells, we offer a short perspective on the modulatory role of local membrane diffusion in regulating GPCR-mediated cell signaling. Our analysis points to a diffusion-limited regime where the effective production rate of activated G protein scales linearly with the receptor–G protein complex’s relative diffusion rate, and to an interesting role played by the membrane geometry in modulating the efficiency of coupling.
    Language: English
    Type: article , doc-type:article
  • 10
    Publication Date: 2023-01-03
    Description: The urea-urease clock reaction is a pH switch from acidic to basic that can turn into a pH oscillator if it occurs inside a suitable open reactor. We numerically study the confinement of the reaction to lipid vesicles, which permit exchange with an external reservoir by differential transport, enabling the recovery of the pH level and yielding a constant supply of urea molecules. For microscopically small vesicles, the discreteness of the number of molecules requires a stochastic treatment of the reaction dynamics. Our analysis shows that intrinsic noise induces a significant statistical variation of the oscillation period, which increases as the vesicles become smaller. The mean period, however, is found to be remarkably robust for vesicle sizes down to approximately 200 nm, but the periodicity of the rhythm is gradually destroyed for smaller vesicles. The observed oscillations are explained as a canard-like limit cycle that differs from the wide class of conventional feedback oscillators.
    Language: English
    Type: article , doc-type:article
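The stochastic treatment that the last abstract calls for at small molecule numbers is commonly done with Gillespie's direct method; a generic sketch follows. This is an assumption-level illustration of the technique, not the paper's simulation: the decay network in the usage below is a placeholder, not the urea-urease chemistry.

```python
import numpy as np

# Generic Gillespie direct-method sketch. At small copy numbers each random
# firing matters, which is why quantities such as oscillation periods acquire
# intrinsic statistical variation in tiny vesicles.

def gillespie(x0, stoich, propensity, t_max, rng):
    """Exact stochastic simulation of a reaction network.

    x0         : initial copy numbers (1-D int sequence)
    stoich     : (n_reactions, n_species) state change per reaction firing
    propensity : function x -> array of reaction rates at state x
    """
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:
            break  # no reaction can fire any more
        t += rng.exponential(1.0 / a0)        # waiting time to next event
        r = rng.choice(len(a), p=a / a0)      # which reaction fires
        x = x + stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```

For example, a pure decay `A -> 0` with propensity `0.5 * A` run from 30 copies eventually reaches zero, with trajectory-to-trajectory timing spread that grows as the initial copy number shrinks.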