Description
Training artificial neural networks with local learning rules remains an open challenge in biologically inspired machine learning. In contrast to standard machine learning, which typically relies on global error signals and centralized optimization, the brain operates under strong locality constraints: synaptic updates depend only on information available at the neuron or synapse. Many solutions have been proposed, but most have limitations: for instance, they are chosen heuristically or are hard to interpret at the neuron level.
Building on the work of Kay and Phillips on coherent infomax and of Makkeh on infomorphic networks, we propose a framework for training artificial spiking neural networks using local goal functions derived from information-theoretic principles. The model rests on the assumption that neurons process input from functionally distinct compartments, for example feedforward, feedback, and lateral inputs, each contributing differently to the neuron's output.
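As a minimal illustration of this setup in the two-compartment case (the notation here is ours and is meant only as a sketch), let Y denote the neuron's output, F the summed feedforward drive, and C the summed contextual drive from feedback and lateral inputs; the output is then a stochastic function of the two drives,

    Y \sim p(y \mid F, C),

and learning adjusts the synaptic weights that shape F and C.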
Applying partial information decomposition, we split the information each compartment provides about the neuron's output into unique, redundant, and synergistic components. This decomposition enables each neuron to optimize a local objective that selectively enhances or suppresses specific types of information.
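For two compartments F and C, this decomposition takes the standard form

    I(Y : F, C) = I_{unq}(Y : F) + I_{unq}(Y : C) + I_{red}(Y : F; C) + I_{syn}(Y : F; C),

and, in the spirit of infomorphic networks, a local goal function can be written as a weighted combination of these atoms (the coefficients \gamma below are illustrative, not the specific parametrization of the model),

    G = \gamma_{F} I_{unq}(Y : F) + \gamma_{C} I_{unq}(Y : C) + \gamma_{red} I_{red}(Y : F; C) + \gamma_{syn} I_{syn}(Y : F; C),

so that positive coefficients reward, and negative coefficients penalize, the corresponding type of information.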
To further increase biological plausibility, we model the neuron using leaky integrate-and-fire dynamics. The membrane potential serves as an additional compartment that acts as an intrinsic memory by integrating over past inputs. This extension pushes the framework toward a more realistic and interpretable model of local learning in spiking systems.
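In their standard form, these dynamics read

    \tau_{m} \, dV/dt = -(V - V_{rest}) + R \, I(t),

with a spike emitted and V reset whenever the membrane potential crosses the threshold V_{th} (the parameter names are the conventional ones, not those of a specific implementation); between spikes, V leaks and integrates past input, which is what lets it act as the memory compartment described above.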
This work unifies information-theoretic principles with realistic neuronal dynamics, advancing the development of locally trainable spiking networks.