
Effect of complexity on learning ability of recurrent neural networks

  • Original Article
  • Artificial Life and Robotics

Abstract

This paper demonstrates that recurrent neural networks can be used effectively to estimate unknown, complicated nonlinear dynamics. The emphasis is on the distinguishable properties of dynamics at the edge of chaos, i.e., at the boundary between ordered and chaotic behavior. We introduce new stochastic parameters, defined as combinations of standard parameters, and reveal relations between these parameters and the complexity of the network dynamics through simulation experiments. We then propose a novel learning method whose core part keeps the complexity of the network dynamics within the phase identified by formulating these experimental relations. In this method, the standard parameters of the neurons are changed both by the core part and according to a global error measure calculated by the well-known back-propagation algorithm. Simulation studies show that the core part is effective for recurrent neural network learning and suggest excellent learning ability at the edge of chaos.
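As a rough illustration only (not the authors' algorithm, which the abstract does not spell out), the following Python sketch shows one way such a "core part" could be combined with error-driven learning: estimate the largest Lyapunov exponent of a small recurrent network as a complexity measure and nudge a global gain parameter so the dynamics stay near the ordered/chaotic boundary, while a simple delta rule trains a linear readout. Every name here (`gain`, `lyapunov_estimate`, the learning rates, the sine target) is an illustrative assumption.

```python
# Hypothetical sketch, not the paper's method: hold an RNN's dynamics near
# the edge of chaos (largest Lyapunov exponent ~ 0) while a readout learns.
import numpy as np

rng = np.random.default_rng(0)
N = 20                                            # recurrent neurons
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))     # fixed recurrent weights
w_out = np.zeros(N)                               # trainable linear readout
gain = 1.0                                        # the "standard parameter" the core part adapts

def step(x, g):
    # One update of the recurrent dynamics: x_{t+1} = tanh(g * W @ x_t)
    return np.tanh(g * (W @ x))

def lyapunov_estimate(g, T=200, eps=1e-8):
    # Largest-Lyapunov-exponent estimate from the divergence rate of two
    # nearby trajectories, renormalized after every step.
    x = rng.normal(0.0, 0.1, N)
    y = x + eps * rng.normal(0.0, 1.0, N)
    total = 0.0
    for _ in range(T):
        x, y = step(x, g), step(y, g)
        d = np.linalg.norm(y - x) + 1e-300
        total += np.log(d / eps)
        y = x + (eps / d) * (y - x)               # renormalize perturbation
    return total / T

# Simple stand-in for the "unknown, complicated nonlinear dynamics".
target = np.sin(0.3 * np.arange(1000))

x = rng.normal(0.0, 0.1, N)
eta, kappa, lam_target = 0.01, 0.05, 0.0
for t in range(len(target)):
    x = step(x, gain)
    err = target[t] - w_out @ x
    w_out += eta * err * x                        # error-driven readout update (delta rule)
    if t % 100 == 0:
        # "Core part" (assumed form): pull the complexity measure back
        # toward the edge of chaos (Lyapunov exponent near zero).
        lam = lyapunov_estimate(gain)
        gain -= kappa * (lam - lam_target)

print(f"final gain {gain:.3f}, Lyapunov estimate {lyapunov_estimate(gain):.3f}")
```

Holding the exponent near zero is one plausible reading of "keeping the complexity at the edge of chaos"; the paper's actual core part is defined through its stochastic-parameter formulation, and its error-driven updates use back-propagation on the recurrent weights rather than a readout-only delta rule.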



Author information

Correspondence to N. Honma.

About this article

Cite this article

Honma, N., Kitagawa, K. & Abe, K. Effect of complexity on learning ability of recurrent neural networks. Artificial Life and Robotics 2, 97–101 (1998). https://doi.org/10.1007/BF02471163


