
Comparing networks with differing neural-node functions using transputer-based genetic algorithms

Published in: Neural Computing & Applications

Abstract

Different neural-network node computational functions are compared using feedforward backpropagation networks. Three node types are examined: the standard model, ellipsoidal nodes and quadratic nodes. After preliminary experiments on simple small problems, in which quadratic nodes performed very well, networks of differing node types are applied to the 104-speaker E-task speech recognition problem using a fixed architecture. Ellipsoidal nodes were found to work well, but not as well as the standard model, and quadratic nodes did not perform well on this larger task. To facilitate an architecture-independent comparison, a transputer-based genetic algorithm is then used to compare ellipsoidal, and mixed ellipsoidal and standard, networks with the standard model. These experiments confirmed the earlier conclusion that ellipsoidal networks do not compare favourably with the standard model on the 104-speaker E-task. In an evolutionary search in which layer node types were free to adjust, ellipsoidal nodes tended to become extinct or barely survive; if the presence of ellipsoidal nodes was enforced, the resulting networks again performed poorly compared with the standard model.
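The abstract does not give the exact node equations, but the three families it compares can be sketched as follows. This is a minimal illustration, assuming the standard model is a sigmoid of a weighted sum, an ellipsoidal node responds to the input's position relative to an axis-aligned ellipsoid (centre and radii as parameters), and a quadratic node adds second-order product terms in the style of higher-order networks; the precise forms used in the paper may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def standard_node(x, w, b):
    # Standard model: sigmoid of a weighted sum plus bias.
    return sigmoid(np.dot(w, x) + b)

def ellipsoidal_node(x, centre, radii):
    # One plausible ellipsoidal form (an assumption here, not the
    # paper's definition): squared distance from the centre, scaled
    # per-axis by the radii, passed through a sigmoid so the output
    # is above 0.5 inside the ellipsoid and below 0.5 outside.
    d = np.sum(((x - centre) / radii) ** 2)
    return sigmoid(1.0 - d)

def quadratic_node(x, w1, W2, b):
    # Quadratic node: augments the linear sum with pairwise product
    # terms x_i * x_j, as in higher-order networks.
    return sigmoid(np.dot(w1, x) + x @ W2 @ x + b)
```

In an evolutionary search of the kind the abstract describes, the choice among such node functions for each layer would simply be one more gene in the genome, alongside the architecture parameters.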




Cite this article

Jones, A.J., Macfarlane, D. Comparing networks with differing neural-node functions using transputer-based genetic algorithms. Neural Comput & Applic 1, 256–267 (1993). https://doi.org/10.1007/BF02098744
