Library

  • 1
    Electronic Resource
    Springer
    Machine learning 33 (1998), pp. 87-104
    ISSN: 0885-6125
    Keywords: mistake-bound learning ; self-directed learning ; VC-dimension ; teacher-directed learning
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We study the self-directed (SD) learning model. In this model a learner chooses examples, guesses their classification and receives immediate feedback indicating the correctness of its guesses. We consider several fundamental questions concerning this model: the parameters of a task that determine the cost of learning, the computational complexity of a student, and the relationship between this model and the teacher-directed (TD) learning model. We answer the open problem of relating the cost of self-directed learning to the VC-dimension by showing that no such relation exists. Furthermore, we refute the conjecture that, for the intersection-closed case, the cost of self-directed learning is bounded by the VC-dimension. We also show that the cost of SD learning may be arbitrarily higher than that of TD learning. Finally, we discuss the number of queries needed for learning in this model and its relationship to the number of mistakes the student incurs. We prove a trade-off formula showing that an algorithm that makes fewer queries throughout its learning process necessarily suffers a higher number of mistakes. (An illustrative sketch of the SD protocol follows this record.)
    Type of Medium: Electronic Resource
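    A minimal sketch of the SD protocol described above, assuming a toy threshold class and a greedy majority-vote query strategy; all names and choices here are illustrative, not the paper's constructions.

        # Self-directed learning: the learner itself picks the next example,
        # guesses its label, and receives immediate feedback.
        def sd_learn(instances, concept_class, target):
            version_space = list(concept_class)
            remaining = set(instances)
            mistakes = 0
            while remaining:
                # Greedily query the point on which the surviving concepts
                # agree most, so the majority guess is rarely wrong.
                x = max(remaining, key=lambda p: abs(
                    2 * sum(c(p) for c in version_space) - len(version_space)))
                guess = 2 * sum(c(x) for c in version_space) >= len(version_space)
                truth = target(x)                    # immediate feedback
                mistakes += guess != truth
                version_space = [c for c in version_space if c(x) == truth]
                remaining.remove(x)
            return mistakes

        # Toy run: threshold concepts c_t(x) = (x >= t) on {0, ..., 9}.
        concepts = [lambda x, t=t: x >= t for t in range(11)]
        print(sd_learn(range(10), concepts, lambda x: x >= 4))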
  • 2
    Electronic Resource
    Springer
    Machine learning 41 (2000), pp. 153-174
    ISSN: 0885-6125
    Keywords: computational learning theory ; sample complexity ; concept drift ; estimation ; tracking
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract This paper examines learning problems in which the target function is allowed to change. The learner sees a sequence of random examples, labelled according to a sequence of functions, and must provide an accurate estimate of the target function sequence. We consider a variety of restrictions on how the target function is allowed to change, including infrequent but arbitrary changes, sequences that correspond to slow walks on a graph whose nodes are functions, and changes that are small on average, as measured by the probability of disagreement between consecutive functions. We first study estimation, in which the learner sees a batch of examples and is then required to give an accurate estimate of the function sequence. Our results provide bounds on the sample complexity and allowable drift rate for these problems. We also study prediction, in which the learner must produce a hypothesis online after each labelled example, and the average misclassification probability over this hypothesis sequence should be small. Using a deterministic analysis in a general metric space setting, we provide a technique for constructing a successful prediction algorithm given a successful estimation algorithm. This leads to sample complexity and drift rate bounds for the prediction of changing concepts. (An illustrative tracking sketch follows this record.)
    Type of Medium: Electronic Resource
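    As a rough illustration of the tracking problem above: the sketch below follows a slowly drifting threshold and re-estimates it from a recent window only, since older examples may be labelled by outdated target functions. The window size, drift step, and midpoint estimator are assumptions for illustration, not the paper's algorithm.

        import random

        # Track a drifting threshold: predict with an estimate fitted to the
        # most recent examples, then let the true threshold move slightly.
        def track_drift(T=5000, window=80, step=0.001, seed=1):
            rng = random.Random(seed)
            theta, est = 0.5, 0.5        # true (moving) / estimated threshold
            recent, mistakes = [], 0
            for _ in range(T):
                x = rng.random()
                mistakes += (x >= est) != (x >= theta)   # prediction error
                recent = (recent + [(x, x >= theta)])[-window:]
                below = [u for u, y in recent if not y]
                above = [u for u, y in recent if y]
                if below and above:                      # midpoint estimate
                    est = (max(below) + min(above)) / 2
                theta = min(1.0, max(0.0, theta + rng.uniform(-step, step)))
            return mistakes / T

        print(track_drift())   # average misclassification rate stays small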
  • 3
    Electronic Resource
    Springer
    Machine learning 29 (1997), pp. 45-63
    ISSN: 0885-6125
    Keywords: On-Line Learning ; Mistake-Bound ; Rank of Trees
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract We present an off-line variant of the mistake-bound model of learning. This is an intermediate model between the on-line learning model (Littlestone, 1988; Littlestone, 1989) and the self-directed learning model (Goldman, Rivest & Schapire, 1993; Goldman & Sloan, 1994). Just as in the other two models, a learner in the off-line model has to learn an unknown concept from a sequence of elements of the instance space on which it makes “guess and test” trials. In all models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. On the other hand, the learner is weaker than the self-directed learner, which is allowed to adaptively choose the sequence of elements presented to it. We study some of the fundamental properties of the off-line model. In particular, we compare the number of mistakes made by the off-line learner on certain concept classes to those made by the on-line and self-directed learners. We give bounds on the possible gaps between the various models and show examples proving that our bounds are tight. Another contribution of this paper is the extension of the combinatorial tool of labeled trees to a unified approach that captures the various mistake-bound measures of all the models discussed. We believe that this tool will prove useful for further study of models of incremental learning. (An illustrative sketch of the off-line game follows this record.)
    Type of Medium: Electronic Resource
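    A small sketch of the off-line game described above, assuming a finite toy class: because the whole presentation sequence is known in advance, the learner can compute by backward induction the guess at each point that minimizes its worst-case number of mistakes against an adversary answering consistently with the surviving version space. This brute-force solver is an illustration, not the paper's construction.

        from functools import lru_cache

        # Optimal worst-case mistakes in the off-line model on a known sequence.
        def offline_mistake_bound(seq, concepts):
            @lru_cache(maxsize=None)
            def solve(i, vs):
                if i == len(seq) or len(vs) <= 1:
                    return 0
                vs0 = frozenset(c for c in vs if not concepts[c](seq[i]))
                vs1 = vs - vs0
                if not vs0:                 # label is forced: no mistake risk
                    return solve(i + 1, vs1)
                if not vs1:
                    return solve(i + 1, vs0)
                # A mistake occurs exactly when the adversary's answer
                # contradicts the guess; the learner picks the better guess.
                guess0 = max(solve(i + 1, vs0), 1 + solve(i + 1, vs1))
                guess1 = max(1 + solve(i + 1, vs0), solve(i + 1, vs1))
                return min(guess0, guess1)
            return solve(0, frozenset(range(len(concepts))))

        # Toy run: threshold concepts on {0, ..., 5} in increasing order.
        cls = [lambda x, t=t: x >= t for t in range(7)]
        print(offline_mistake_bound(tuple(range(6)), cls))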
  • 4
    Electronic Resource
    Springer
    Machine learning 32 (1998), pp. 207-224
    ISSN: 0885-6125
    Keywords: learning theory ; PAC ; Vapnik-Chervonenkis dimension ; localization ; identification ; recognition ; computer vision
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract How difficult is it to find the position of a known object using random samples? We study this question, which is central to computer vision and robotics, in a formal way. We compare the information complexity of two types of tasks: the task of identification of an unknown object from labeled examples, and the task of localization, in which the identity of the target is known and its location in some background scene has to be determined. We carry out the comparison of these tasks using two measuring rods for the complexity of classes of sets: the Vapnik-Chervonenkis dimension and the ε-entropy of the relevant classes. The VC-dimension analysis yields bounds on the sample complexity of performing these tasks in the PAC-learning scenario, whereas the ε-entropy parameter reflects the complexity of the relevant learning tasks when the examples are generated by the uniform distribution (over the background scene). Our analysis provides mathematical grounding for the intuition that localization is indeed much easier than identification. Our upper bounds on the hardness of localization are established by applying a new, general, algebraic-geometry-based tool for the calculation of the VC-dimension of classes of algebraically defined objects. This technique was independently discovered by Goldberg and Jerrum. We believe that our techniques will prove useful for further VC-dimension estimation problems. (The standard VC-based sample bound is recalled after this record.)
    Type of Medium: Electronic Resource
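    For context, the classical PAC sample-complexity bound shows how a VC-dimension estimate translates into a sample size; this is the standard form of the bound, not necessarily this paper's sharpened statement:

        \[
          m(\epsilon, \delta) \;=\;
          O\!\left(\frac{1}{\epsilon}
            \left(d \log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right)
        \]

    Here d is the VC-dimension of the relevant class, so a lower VC-dimension for localization than for identification translates directly into fewer required samples.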
  • 5
    Electronic Resource
    Springer
    Journal of mathematical imaging and vision 10 (1999), pp. 27-49
    ISSN: 1573-7683
    Keywords: learning theory ; PAC ; Vapnik-Chervonenkis dimension ; localization ; model-based recognition ; computer vision
    Source: Springer Online Journal Archives 1860-2000
    Topics: Mathematics
    Notes: Abstract We analyze the amount of data needed to carry out various model-based recognition tasks in the context of a probabilistic data-collection model. We focus on objects that may be described as semi-algebraic subsets of a Euclidean space. This is a very rich class that includes polynomially described bodies, as well as polygonal objects, as special cases. The class of object transformations considered is wide, and includes perspective and affine transformations of 2D objects and perspective projections of 3D objects. We derive upper bounds on the number of data features (associated with non-zero spatial error) which provably suffice for drawing reliable conclusions. Our bounds are based on a quantitative analysis of the complexity of the hypothesis class that one has to choose from. Our central tool is the VC-dimension, a well-studied parameter measuring the combinatorial complexity of families of sets. It turns out that these bounds grow linearly with the task complexity, measured via the VC-dimension of the class of objects one deals with. We show that this VC-dimension is at most logarithmic in the algebraic complexity of the objects and in the cardinality of the model library. Our approach borrows from computational learning theory. Both learning and recognition use evidence to infer hypotheses, but as far as we know this similarity has not been exploited previously. We draw close relations between recognition tasks and a certain learnability framework and then apply basic techniques of learnability theory to derive our sample-size upper bounds. We believe that other relations between learning procedures and visual tasks exist and hope that this work will trigger further fruitful study along these lines. (A schematic form of these bounds follows this record.)
    Type of Medium: Electronic Resource
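    Schematically, the chain of bounds stated in this abstract can be written as follows; the symbols (k for the algebraic complexity of the objects, N for the cardinality of the model library, m for the number of data features) are illustrative notation with constants suppressed:

        \[
          d_{\mathrm{VC}} \;=\; O(\log k + \log N),
          \qquad
          m \;=\; O\!\left(d_{\mathrm{VC}}\right)
        \]

    That is, the required number of data features grows only linearly with a VC-dimension that is itself logarithmic in object and library complexity.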
  • 6
    Book
    New York, NY: Cambridge Univ. Press
    Title: Understanding machine learning : from theory to algorithms
    Author: Shalev-Shwartz, Shai
    Contributor: Ben-David, Shai
    Edition: 1st publ.
    Publisher: New York, NY: Cambridge Univ. Press
    Year of publication: 2014
    Pages: XVI, 397 pp., illustrations, diagrams
    ISBN: 978-1-107-05713-5
    Type of Medium: Book
    Language: English