Neural networks rely on coordination among individual neurons to perform complex tasks, but in the brain, neurons must operate under the constraints of locality for both computation and learning. Our research uses an information-theoretic approach to better understand how locality shapes the structure and operation of neural networks. We employ Partial Information Decomposition (PID) to quantify the unique, redundant, and synergistic information that multiple groups of inputs contribute to a neuron's output. From this decomposition, we derive a general, parametric local learning rule. This rule allows us to construct networks of locally learning neurons that can perform supervised, unsupervised, and associative memory learning tasks. We have recently scaled our approach, demonstrating its potential as an alternative to deep neural networks. Our framework provides a powerful tool for investigating the information-theoretic principles underlying the operation of living neural networks and may facilitate the development of locally learning artificial neural networks that function more like the brain.
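As a rough sketch of the kind of decomposition and rule involved (the exact formulation may differ), consider a neuron whose output Y is driven by two groups of inputs, R and C. PID splits the joint mutual information into four atoms, and a weighted sum of these atoms defines a parametric goal function whose local gradient yields the learning rule:

\[
I(Y : R, C) = I_{\mathrm{unq}}(Y : R \setminus C) + I_{\mathrm{unq}}(Y : C \setminus R) + I_{\mathrm{red}}(Y : R; C) + I_{\mathrm{syn}}(Y : R; C)
\]
\[
G = \gamma_{1}\, I_{\mathrm{unq}}(Y : R \setminus C) + \gamma_{2}\, I_{\mathrm{unq}}(Y : C \setminus R) + \gamma_{3}\, I_{\mathrm{red}}(Y : R; C) + \gamma_{4}\, I_{\mathrm{syn}}(Y : R; C), \qquad \Delta w \propto \frac{\partial G}{\partial w}
\]

Different settings of the weights (gamma_1, ..., gamma_4) favour different coding objectives, for example extracting information redundant with one input group versus information only available synergistically across groups; this is one way a single parametric rule can cover the supervised, unsupervised, and associative memory settings mentioned above.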