
On the Activation Space of ReLU Equipped Deep Neural Networks

  • Modern deep neural networks are getting wider and deeper in their architecture design. However, as the number of parameters grows, their decision mechanisms become more opaque. There is therefore a need to understand the structures arising in the hidden layers of deep neural networks. In this work, we present a new mathematical framework for describing the canonical polyhedral decomposition of the input space, and we introduce the notions of collapsing and preserving patches, which are pertinent to understanding the forward map and the activation space it induces. The activation space can be seen as the output of a layer, and in the particular case of ReLU activations, we prove that this output has the structure of a polyhedral complex.
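The polyhedral decomposition described in the abstract can be made concrete with a small experiment: over the regions of input space where the same set of ReLU units is active, the network is affine, so each distinct activation pattern corresponds to one polyhedral cell. The sketch below (an illustration only, not the paper's framework; the network weights and grid are arbitrary choices) enumerates the patterns of a tiny one-hidden-layer ReLU network on a 2-D grid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer ReLU network mapping R^2 -> R^3.
# Weights are random; the construction is purely illustrative.
W1 = rng.normal(size=(3, 2))
b1 = rng.normal(size=3)

def activation_pattern(x):
    """Binary pattern of which ReLU units are active at input x."""
    pre = W1 @ x + b1
    return tuple(pre > 0)

# Sample a grid of inputs. Each distinct pattern observed corresponds to
# one polyhedral cell of the decomposition of the input space: within a
# cell, the forward map is a single affine function.
xs = np.linspace(-2.0, 2.0, 50)
patterns = {activation_pattern(np.array([x, y])) for x in xs for y in xs}
print(f"{len(patterns)} distinct activation patterns (polyhedral cells) found")
```

With 3 hidden units, the 3 hyperplanes `W1 @ x + b1 = 0` can partition the plane into at most 7 regions, so the printed count is bounded accordingly.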

Metadata
Author: Mustafa Chaukair, Christof Schütte, Vikram Sunkara
Document Type: In Proceedings
Parent Title (English): Procedia Computer Science
Volume: 222
First Page: 624
Last Page: 635
Year of first publication: 2023
DOI: https://doi.org/10.1016/j.procs.2023.08.200