Abstract
This paper demonstrates that recurrent neural networks can effectively estimate unknown, complicated nonlinear dynamics. The emphasis is on the distinctive properties of dynamics at the edge of chaos, i.e., at the boundary between ordered and chaotic behavior. We introduce new stochastic parameters, defined as combinations of standard parameters, and reveal relations between these parameters and the complexity of the network dynamics through simulation experiments. We then propose a novel learning method whose core keeps the complexity of the network dynamics within the phase identified by formulations of these experimental relations. In this method, the standard parameters of the neurons are adjusted both by this core part and according to a global error measure computed by the well-known back-propagation algorithm. Simulation studies show that the core part is effective for recurrent neural network learning and suggest excellent learning ability at the edge of chaos.
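The ordered and chaotic phases the abstract distinguishes can be located numerically by estimating the largest Lyapunov exponent of a recurrent network as a weight-gain parameter varies. The sketch below is illustrative only, not the authors' method: the tanh network model, the gain parameter `g`, the network size, and the perturbation-tracking estimate are all assumptions introduced here for demonstration.

```python
import numpy as np

def lyapunov_estimate(g, n=100, steps=500, eps=1e-8, seed=0):
    """Estimate the largest Lyapunov exponent of a random recurrent
    network x_{t+1} = tanh(g * W @ x_t) by tracking how fast a tiny
    perturbation grows or shrinks along the trajectory."""
    rng = np.random.default_rng(seed)
    # Random weights scaled so the spectral radius of W is near 1;
    # the gain g then controls the effective coupling strength.
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    x = rng.normal(size=n)
    d = rng.normal(size=n)
    y = x + d * (eps / np.linalg.norm(d))  # perturbed copy at distance eps
    total = 0.0
    for _ in range(steps):
        x = np.tanh(g * W @ x)
        y = np.tanh(g * W @ y)
        dist = np.linalg.norm(y - x)
        total += np.log(dist / eps)          # log growth rate this step
        y = x + (y - x) * (eps / dist)       # renormalize perturbation
    return total / steps

# A negative exponent indicates the ordered phase, a positive one the
# chaotic phase; the edge of chaos is where the estimate crosses zero.
```

Sweeping `g` and finding the zero crossing of this estimate gives a simple numerical criterion for "keeping the dynamics at the edge of chaos" in the spirit of the method the abstract describes.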
Honma, N., Kitagawa, K. & Abe, K. Effect of complexity on learning ability of recurrent neural networks. Artificial Life and Robotics 2, 97–101 (1998). https://doi.org/10.1007/BF02471163