Library

Filter
  • neuro-dynamic programming  (2)
  • VC dimension  (1)
  • communication networks  (1)
  • 1
    Electronic Resource
    Springer
    Queueing Systems 32 (1999), pp. 319-349
    ISSN: 1572-9443
    Keywords: large deviations ; communication networks
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract In this paper we consider a stochastic server (modeling a multiclass communication switch) fed by a set of parallel buffers. The dynamics of the system evolve in discrete-time and the generalized processor sharing (GPS) scheduling policy of [25] is implemented. The arrival process in each buffer is an arbitrary, and possibly autocorrelated, stochastic process. We obtain a large deviations asymptotic for the buffer overflow probability at each buffer. In the standard large deviations methodology, we provide a lower and a matching (up to first degree in the exponent) upper bound on the buffer overflow probabilities. We view the problem of finding a most likely sample path that leads to an overflow as an optimal control problem. Using ideas from convex optimization we analytically solve the control problem to obtain both the asymptotic exponent of the overflow probability and a characterization of most likely modes of overflow. These results have important implications for traffic management of high-speed networks. They extend the deterministic, worst-case analysis of [25] to the case where a detailed statistical model of the input traffic is available and can be used as a basis for an admission control mechanism.
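    (A rough single-queue numerical illustration of this kind of overflow asymptotic is sketched after the result list below.)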
    Type of Medium: Electronic Resource
  • 2
    Electronic Resource
    Springer
    Machine Learning 35 (1999), pp. 225-245
    ISSN: 0885-6125
    Keywords: nonparametric regression ; VC dimension ; convex optimization
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We propose a convex optimization approach to solving the nonparametric regression estimation problem when the underlying regression function is Lipschitz continuous. This approach is based on the minimization of the sum of empirical squared errors, subject to the constraints implied by Lipschitz continuity. The resulting optimization problem has a convex objective function and linear constraints, and as a result, is efficiently solvable. The estimated function computed by this technique is proven to converge to the underlying regression function uniformly and almost surely as the sample size grows to infinity, thus providing a very strong form of consistency. We also propose a convex optimization approach to the maximum likelihood estimation of unknown parameters in statistical models, where the parameters depend continuously on some observable input variables. For a number of classical distributional forms, the objective function in the underlying optimization problem is convex and the constraints are linear. These problems are, therefore, also efficiently solvable.
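    (A minimal sketch of this constrained least-squares formulation, under stated assumptions, is given after the result list below.)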
    Type of Medium: Electronic Resource
  • 3
    Electronic Resource
    Springer
    Machine Learning 22 (1996), pp. 59-94
    ISSN: 0885-6125
    Keywords: Compact representation ; curse of dimensionality ; dynamic programming ; features ; function approximation ; neuro-dynamic programming ; reinforcement learning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We develop a methodological framework and present a few different ways in which dynamic programming and compact representations can be combined to solve large scale stochastic control problems. In particular, we develop algorithms that employ two types of feature-based compact representations; that is, representations that involve feature extraction and a relatively simple approximation architecture. We prove the convergence of these algorithms and provide bounds on the approximation error. As an example, one of these algorithms is used to generate a strategy for the game of Tetris. Furthermore, we provide a counter-example illustrating the difficulties of integrating compact representations with dynamic programming, which exemplifies the shortcomings of certain simple approaches.
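    (A toy sketch of feature-based value iteration in this spirit, under stated assumptions, is given after the result list below.)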
    Type of Medium: Electronic Resource
  • 4
    Electronic Resource
    Electronic Resource
    Springer
    Machine learning 22 (1996), S. 59-94 
    ISSN: 0885-6125
    Keywords: Compact representation ; curse of dimensionality ; dynamic programming ; features ; function approximation ; neuro-dynamic programming ; reinforcement learning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We develop a methodological framework and present a few different ways in which dynamic programming and compact representations can be combined to solve large scale stochastic control problems. In particular, we develop algorithms that employ two types of feature-based compact representations; that is, representations that involve feature extraction and a relatively simple approximation architecture. We prove the convergence of these algorithms and provide bounds on the approximation error. As an example, one of these algorithms is used to generate a strategy for the game of Tetris. Furthermore, we provide a counter-example illustrating the difficulties of integrating compact representations with dynamic programming, which exemplifies the shortcomings of certain simple approaches.
    Type of Medium: Electronic Resource
    Library Location Call Number Volume/Issue/Year Availability
    BibTip Others were also interested in ...
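
As a rough illustration of the kind of result summarized in record 1 above (which treats the much harder multiclass GPS case), the Python sketch below computes the standard single-buffer large deviations exponent: for a stable discrete-time queue with i.i.d. arrivals whose log moment generating function is Lambda(theta), served at constant rate c, the overflow probability behaves like P(Q > b) ~ exp(-theta* b), where theta* is the largest theta > 0 with Lambda(theta) <= theta * c. The Bernoulli arrival model, the parameters p and c, and the use of SciPy's brentq root finder are assumptions made only for this illustration and are not taken from the paper.

# Hedged sketch: single-buffer overflow exponent, not the paper's GPS analysis.
import math
from scipy.optimize import brentq

def log_mgf_bernoulli(theta, p):
    """Log moment generating function of a Bernoulli(p) arrival in one slot."""
    return math.log(1.0 - p + p * math.exp(theta))

def overflow_exponent(p, c):
    """Largest theta > 0 with Lambda(theta) = theta * c, i.e. the decay rate of P(Q > b)."""
    g = lambda theta: log_mgf_bernoulli(theta, p) - theta * c
    # For a stable queue (p < c < 1), g(0) = 0, g starts out negative and grows
    # without bound, so the positive root exists and can be bracketed by doubling.
    hi = 1.0
    while g(hi) < 0.0:
        hi *= 2.0
    return brentq(g, 1e-9, hi)

theta_star = overflow_exponent(p=0.3, c=0.5)   # assumed traffic and service parameters
print("decay rate theta* =", round(theta_star, 4))
print("P(Q > 20) is roughly exp(-20 * theta*) =", "%.2e" % math.exp(-20.0 * theta_star))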
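
Record 2 above describes estimating a Lipschitz regression function by minimizing the sum of empirical squared errors subject to the constraints implied by Lipschitz continuity. The sketch below sets up that kind of convex program for one-dimensional synthetic data; the data, the Lipschitz constant L, and the choice of cvxpy as the modeling tool are assumptions made for this illustration rather than details taken from the paper.

# Hedged sketch: least squares fit subject to Lipschitz constraints (1-D case).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 60
L = 7.0                                       # assumed Lipschitz bound
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(n)   # synthetic noisy samples

f = cp.Variable(n)                            # fitted values at the sample points
objective = cp.Minimize(cp.sum_squares(f - y))
# With sorted one-dimensional inputs, consecutive-pair constraints already imply
# |f_i - f_j| <= L * |x_i - x_j| for every pair, by the triangle inequality.
constraints = [cp.abs(f[1:] - f[:-1]) <= L * np.diff(x)]
cp.Problem(objective, constraints).solve()

print("optimal sum of squared errors:", round(float(objective.value), 4))
fitted_values = f.value                       # Lipschitz-consistent estimate at the sample points

Without the one-dimensional ordering, the Lipschitz constraints would be imposed over all pairs of sample points; the problem remains convex with linear constraints, which is the property the abstract emphasizes.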
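
Record 3 above combines dynamic programming with feature-based compact representations. The toy sketch below runs a projected value iteration on a small random MDP: the value function is stored as k feature weights instead of one number per state, and each pass applies the exact Bellman backup and then projects the result back onto the span of the features by least squares. The random MDP, the random features, the discount factor and the stopping rule are assumptions made for this illustration; this naive scheme is not one of the paper's algorithms and carries no convergence guarantee, which is in line with the difficulties that the abstract's counter-example points to.

# Hedged sketch: value iteration with a linear, feature-based value function.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, k, gamma = 50, 4, 8, 0.9

# Random MDP: P[a, s, :] is a distribution over next states, R[s, a] a reward.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

Phi = rng.standard_normal((n_states, k))      # feature matrix (the compact representation)

def bellman_backup(V):
    """Exact Bellman optimality backup applied to a value estimate V."""
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    return Q.max(axis=1)

w = np.zeros(k)
for it in range(200):                          # cap iterations: convergence is not guaranteed
    V = Phi @ w
    target = bellman_backup(V)
    w_new, *_ = np.linalg.lstsq(Phi, target, rcond=None)   # project T(V) onto span(Phi)
    if np.max(np.abs(Phi @ w_new - V)) < 1e-6:
        w = w_new
        break
    w = w_new

print("iterations used:", it + 1)
print("max Bellman residual of the fitted values:",
      float(np.max(np.abs(bellman_backup(Phi @ w) - Phi @ w))))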