19–21 Sept 2023
Alte Mensa
Europe/Berlin timezone

A Measure of the Complexity of Neural Representations based on Partial Information Decomposition

20 Sept 2023, 14:45
20m
Emmy Noether Room (Alte Mensa)

Wilhelmsplatz 3, 37073 Göttingen
Joint Session Plenary

Speaker

David Alexander Ehrlich (Campus Institute for Dynamics of Biological Networks)

Description

Despite their widespread adoption, the inner workings of Deep Neural Networks (DNNs) remain largely unknown. One key aspect of DNN learning is how the hidden layers' activation patterns encode the ground-truth label that the network is supposed to predict. This task-relevant information is represented jointly by groups of neurons within a layer. However, the specific way in which this mutual information about the classification label is distributed among the individual neurons is not well understood: While parts of it may only be obtainable from specific single neurons, other parts are carried redundantly or synergistically by multiple neurons. A better understanding of this representation may in the future help to identify problems during the training process or guide the selection of a network architecture.

We show how Partial Information Decomposition (PID), a recent extension of information theory, can disentangle these different contributions. From this, we introduce the measure of "Representational Complexity", which quantifies the difficulty of accessing information spread across multiple neurons. The quantity reflects the average number of neurons that need to be considered jointly to retrieve a piece of information: a representational complexity of C=1 means that every piece of information can be obtained from single neurons, while C equals the total number of neurons if the information is encoded synergistically among all of them. We show how this complexity is directly computable for smaller layers. For larger layers, we propose subsampling and coarse-graining procedures and prove corresponding bounds on the latter.
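The averaging idea above can be sketched in a few lines: given the PID atoms of a layer, representational complexity is the information-weighted average of how many neurons each atom requires jointly. The function name and the toy atom values below are hypothetical illustrations (not output of an actual PID computation), shown only to make the definition concrete.

```python
# Toy sketch of the information-weighted average behind representational
# complexity. The atoms and their information values are invented for
# illustration; a real analysis would obtain them from a PID of the layer.

def representational_complexity(atoms):
    """Information-weighted average of the number of neurons that must be
    read out jointly per PID atom.

    `atoms` is a list of (joint_size, information) pairs, where joint_size
    is the number of neurons a source collection in that atom spans.
    """
    total_info = sum(info for _, info in atoms)
    return sum(size * info for size, info in atoms) / total_info

# Hypothetical 3-neuron layer: most label information is unique or redundant
# (readable from single neurons), a smaller part is synergistic.
atoms = [
    (1, 0.6),  # unique/redundant pieces: single neurons suffice
    (2, 0.3),  # pairwise synergy: two neurons needed jointly
    (3, 0.1),  # full synergy among all three neurons
]

print(representational_complexity(atoms))  # 1.5
```

If all information sat in size-1 atoms the average would be C=1; if everything were fully synergistic it would equal the layer size, matching the two extremes described above.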

Empirically, for quantized deep neural networks solving the MNIST and CIFAR-10 image classification tasks, we observe that representational complexity decreases both through successive hidden layers and over the course of training, and we compare the results to related measures. Overall, we propose representational complexity as a principled and interpretable summary statistic for analyzing the structure and evolution of neural representations and complex systems in general.

Primary authors

David Alexander Ehrlich (Campus Institute for Dynamics of Biological Networks)
Andreas Schneider (Max Planck Institute for Dynamics and Self-Organization)

Co-authors

Prof. Viola Priesemann (Max Planck Institute for Dynamics and Self-Organization)
Prof. Michael Wibral (Campus Institute for Dynamics of Biological Networks)
Dr Abdullah Makkeh (Campus Institute for Dynamics of Biological Networks)
