
Residue systolic implementations for neural networks

Article published in Neural Computing & Applications

Abstract

In this work we propose two techniques for improving VLSI implementations of artificial neural networks (ANNs). By using two kinds of processing elements (PEs), one dedicated to the basic operations (addition and multiplication) and the other to evaluating the activation function, the total time and cost of the VLSI array implementation of ANNs can be halved compared with previous work. By taking advantage of the residue number system (RNS), the efficiency of each PE can be increased further. Two RNS-based array processor designs are proposed: the first is built from look-up tables, and the second is constructed from binary adders combined with mixed-radix conversion (MRC), so that the hardware is simple and high-speed operation is obtained. The proposed techniques are general enough to be extended to cover other forms of loading and learning algorithms.
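To make the RNS idea in the abstract concrete, the sketch below shows why residue arithmetic suits parallel PEs: addition and multiplication decompose into small, carry-free operations per modulus, and a result is decoded with mixed-radix conversion. The moduli set and the weighted-sum example are hypothetical illustrations, not the moduli or PE design used in the paper.

```python
# Illustrative sketch of residue number system (RNS) arithmetic and
# mixed-radix conversion (MRC). The moduli are a hypothetical example,
# chosen pairwise coprime; dynamic range = 3 * 5 * 7 = 105.
MODULI = (3, 5, 7)

def to_rns(x):
    """Encode an integer as its tuple of residues."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Component-wise addition: no carries propagate between channels."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Component-wise multiplication, again carry-free across channels."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Decode residues to an integer via mixed-radix conversion (MRC)."""
    digits = list(r)
    for i in range(len(MODULI)):
        for j in range(i + 1, len(MODULI)):
            # Subtract the known digit, then divide by m_i modulo m_j.
            inv = pow(MODULI[i], -1, MODULI[j])
            digits[j] = ((digits[j] - digits[i]) * inv) % MODULI[j]
    # Recompose: value = d0 + d1*m0 + d2*m0*m1 + ...
    value, weight = 0, 1
    for d, m in zip(digits, MODULI):
        value += d * weight
        weight *= m
    return value

# A single weighted-sum step w*x + acc, as a multiply-add PE might compute it:
w, x, acc = 4, 9, 20
result = from_rns(rns_add(rns_mul(to_rns(w), to_rns(x)), to_rns(acc)))
```

Because each residue channel is independent, the per-channel add and multiply can run on separate small units (or small look-up tables, as in the paper's first design); only the final MRC decode reintroduces inter-channel dependence.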





Cite this article

Zhang, C.N., Wang, M. & Tseng, C.C. Residue systolic implementations for neural networks. Neural Comput & Applic 3, 149–156 (1995). https://doi.org/10.1007/BF01414076
