ISSN:
1467-8640
Source:
Blackwell Publishing Journal Backfiles 1879-2005
Topics:
Computer Science
Notes:
Much research in machine learning has been focused on the problem of symbol-level learning (SLL), or learning to improve the performance of a program given examples of its behavior on typical inputs. A common approach to symbol-level learning is to use some sort of mechanism for saving and later reusing the solution paths used to solve previous search problems. Examples of such mechanisms are macro-operator learning, explanation-based learning, and chunking. However, experimental evidence that these mechanisms actually improve performance is inconclusive. This paper presents a formal framework for analysis of symbol-level learning programs, and then uses this framework to investigate a series of solution-path caching mechanisms which provably improve performance. The analysis of these mechanisms is illuminating in many respects; in particular, in order to obtain positive results, it is necessary to use a novel representation for a set of solution paths, and also to apply certain unusual optimizations to a set of solution paths. Several of the predictions made by the model have been confirmed by recently published experiments.
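The solution-path caching idea the abstract describes can be sketched in a few lines. The graph, state names, and caching policy below are illustrative assumptions, not the paper's actual mechanism or representation: a breadth-first search saves each solved path (and its suffixes) so that later queries over the same search space can reuse prior solutions instead of searching again.

```python
from collections import deque

# Hypothetical toy search space: a small directed graph (an assumption
# for illustration; the paper's domains and representations differ).
GRAPH = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

# Cache mapping (state, goal) -> a previously found solution path.
path_cache = {}

def solve(state, goal):
    """BFS that reuses cached solution paths when one is available."""
    if (state, goal) in path_cache:
        return path_cache[(state, goal)]  # reuse a saved solution path
    frontier = deque([[state]])
    visited = {state}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            # Save the path and every suffix of it, so intermediate
            # states along the solution also benefit from the cache.
            for i in range(len(path)):
                path_cache[(path[i], goal)] = path[i:]
            return path
        for nxt in GRAPH.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path exists
```

After one call to `solve("start", "goal")`, a later query such as `solve("b", "goal")` is answered from the cache with no search. The paper's point is that naive versions of exactly this kind of mechanism do not provably help, and that a different path representation and certain optimizations are needed for guaranteed speedup.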
Type of Medium:
Electronic Resource
URL:
http://dx.doi.org/10.1111/j.1467-8640.1992.tb00370.x