
Iterative pruning in second-order recurrent neural networks


Abstract

An iterative pruning method for second-order recurrent neural networks is presented. Each step consists of eliminating a unit and adjusting the remaining weights so that network performance over the training set does not worsen. The pruning step amounts to solving a linear system of equations in the least-squares sense. The algorithm also provides a criterion, found to work well in practice, for choosing which units to remove. Initial experimental results demonstrate the effectiveness of the proposed approach on high-order architectures.
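To make the per-step computation concrete, here is a minimal NumPy sketch of one pruning step. It assumes a second-order network whose net input to state unit i is sum over j, k of W[i, j, k] * s_j(t) * I_k(t), and it adjusts the surviving weights so that each remaining unit's net input is preserved, in the least-squares sense, over recorded training activations. The function name prune_unit, the tensor shapes, and the choice of net inputs as the quantity to preserve are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def prune_unit(W, states, inputs, h):
    """One pruning step (sketch): remove state unit h and adjust the
    remaining second-order weights in the least-squares sense, so that
    each surviving unit's net input is approximately preserved over the
    training set. Shapes and names are illustrative assumptions.

    W      : (N, N, K) weight tensor; net input to unit i at time t is
             sum_{j,k} W[i, j, k] * states[t, j] * inputs[t, k]
    states : (T, N) state activations recorded on the training set
    inputs : (T, K) corresponding input vectors
    h      : index of the state unit to eliminate
    """
    T, N = states.shape
    keep = [j for j in range(N) if j != h]

    # Design matrix of the surviving second-order terms s_j(t) * I_k(t),
    # flattened to shape (T, (N-1)*K).
    A = (states[:, keep, None] * inputs[:, None, :]).reshape(T, -1)

    # Drop row i = h and slice j = h from the weight tensor.
    W_new = W[np.ix_(keep, keep)].copy()

    for row, i in enumerate(keep):
        # Contribution that unit h used to make to unit i's net input.
        b = states[:, h] * (inputs @ W[i, h, :])
        # Absorb that contribution into the surviving weights by solving
        # A @ delta ~ b in the least-squares sense.
        delta, *_ = np.linalg.lstsq(A, b, rcond=None)
        W_new[row] += delta.reshape(len(keep), -1)

    return W_new

The unit-selection criterion mentioned in the abstract could then rank candidate units, for example by the residual of this least-squares fit; that heuristic is an assumption here, since the paper's actual criterion is not reproduced on this page.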




Cite this article

Castellano, G., Fanelli, A.M. & Pelillo, M. Iterative pruning in second-order recurrent neural networks. Neural Process Lett 2, 5–8 (1995). https://doi.org/10.1007/BF02309008

