ISSN:
1434-6036
Keywords:
87.10.+e
;
02.50.+s
;
05.20.-y
Source:
Springer Online Journal Archives 1860-2000
Topics:
Physics
Notes:
Abstract: We study the generalization ability g_Q of Q-state Clock-model perceptrons for (i) Hebbian and for certain non-Hebbian learning procedures, namely (ii) learning with maximal stability, (iii) zero stability, and (iv) optimal generalization, for the case of random training sets. Among other results we find that g_Q behaves quite differently in the Hebbian and in the non-Hebbian cases in the limit Q→∞. E.g., in the Hebbian case for finite α, g_Q always vanishes ∝ 1/Q, whereas in the non-Hebbian cases considered, g_Q converges for Q→∞ to a non-trivial continuous function g_∞(α), which vanishes for α < 2 but increases rapidly for α > 2. This means that for (ii), (iii) and (iv), as a function of α at Q = ∞, there is a second-order phase transition from a non-generalizing phase for α ≤ 2 to a generalizing phase for α > 2. Different behaviour of the Hebbian and non-Hebbian cases, respectively, is also observed for the information gain obtained through learning. For the particular case of AdaTron learning, which is identical to case (ii), we find a geometrical formulation for g_Q(α), which is applicable to more general models.
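For orientation only, a minimal numerical sketch of Hebbian learning with a random teacher for the ordinary binary perceptron (the Q=2 case): the student weight vector is the label-weighted sum of the training patterns, and generalization is read off from the teacher-student overlap. All names and parameter values (N, alpha, the overlap-based error formula) are standard textbook choices for this illustration, not taken from the paper itself.

```python
import numpy as np

# Hedged sketch: Hebbian learning for a Q=2 perceptron with a random teacher.
# The training set consists of P = alpha * N random Gaussian patterns labeled
# by the teacher; this is an illustration, not the paper's Q-state construction.

rng = np.random.default_rng(0)

N = 500                                  # input dimension (illustrative choice)
alpha = 2.0                              # training-set size per weight, P/N
P = int(alpha * N)

teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)       # normalized random teacher vector

X = rng.standard_normal((P, N))          # random training inputs
y = np.sign(X @ teacher)                 # teacher-generated labels

# Hebbian rule: the student is the sum of label-weighted training patterns
student = (y[:, None] * X).sum(axis=0)
student /= np.linalg.norm(student)

R = float(student @ teacher)             # teacher-student overlap
eps = np.arccos(R) / np.pi               # generalization error for a binary perceptron

print(f"overlap R = {R:.3f}, generalization error = {eps:.3f}")
```

With these settings the overlap R is well below 1, i.e. Hebbian learning at finite α leaves a finite generalization error, consistent with the abstract's observation that the Hebbian case behaves very differently from the non-Hebbian ones.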
Type of Medium:
Electronic Resource
URL:
http://dx.doi.org/10.1007/BF01313295