Abstract
An iterative pruning method for second-order recurrent neural networks is presented. Each step consists of eliminating a unit and adjusting the remaining weights so that network performance does not worsen over the training set. The pruning step amounts to solving a linear system of equations in the least-squares sense. The algorithm also provides a criterion for choosing which units to remove, which works well in practice. Initial experimental results demonstrate the effectiveness of the proposed approach on high-order architectures.
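The core idea of the abstract, removing a unit and compensating with a least-squares adjustment of the remaining weights, can be illustrated with a minimal sketch. This is an assumption-laden simplification: it operates on a single feed-forward layer rather than on the second-order recurrent architecture of the paper, and the selection criterion shown (smallest post-pruning residual) is a hypothetical stand-in for the criterion the authors actually propose.

```python
import numpy as np

def prune_unit(Y, W, h):
    """Remove hidden unit h and adjust the remaining outgoing weights
    by least squares so the layer's outputs are approximately preserved.

    Y : (P, H) activations of H hidden units over P training patterns
    W : (H, O) outgoing weights from hidden units to O output units
    Returns (Y_r, W_r) such that Y_r @ W_r approximates Y @ W.
    Illustrative sketch only; not the paper's exact formulation.
    """
    keep = [i for i in range(Y.shape[1]) if i != h]
    Y_r = Y[:, keep]                      # activations without unit h
    target = np.outer(Y[:, h], W[h])      # contribution of the removed unit
    # Least-squares correction Delta: minimize ||Y_r @ Delta - target||
    Delta, *_ = np.linalg.lstsq(Y_r, target, rcond=None)
    return Y_r, W[keep] + Delta

def select_unit(Y, W):
    """Hypothetical selection criterion: pick the unit whose removal
    yields the smallest least-squares residual over the training set."""
    errors = []
    for h in range(Y.shape[1]):
        Y_r, W_r = prune_unit(Y, W, h)
        errors.append(np.linalg.norm(Y_r @ W_r - Y @ W))
    return int(np.argmin(errors))
```

If a unit's activation is (nearly) a linear combination of the others, the residual is (nearly) zero and the unit can be removed with no loss over the training set, which is the intuition behind this family of pruning methods.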
Castellano, G., Fanelli, A.M. & Pelillo, M. Iterative pruning in second-order recurrent neural networks. Neural Process Lett 2, 5–8 (1995). https://doi.org/10.1007/BF02309008