A detailed architecture characterization of Forward-Forward Convolutional Neural Networks

18 Sept 2024, 16:50
1h 30m
Emmy-Noether-Saal

Speaker

Lira Yelemessova (Max Planck Institute for Multidisciplinary Sciences (MPINAT), Göttingen)

Description

Backpropagation, the standard algorithm for training Convolutional Neural Networks (CNNs), is not considered biologically plausible because it relies on a backward pass that propagates error signals through the entire network. In December 2022, Professor Geoffrey Hinton, a pioneer in the field, introduced the Forward-Forward (FF) algorithm as a potential alternative. FF avoids backpropagation entirely by replacing the forward and backward passes with two forward passes, one on positive (real) data and one on negative data, so that each hidden layer is trained with a purely local objective and receives no information from subsequent layers. In 2023, our team was the first to implement this innovative algorithm in CNNs [1].
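To illustrate the idea, the following minimal sketch (hypothetical code, not the implementation from [1]) trains a single fully connected FF layer: a "goodness" score, here the sum of squared activations, is pushed above a threshold theta for positive data and below it for negative data, using only gradients local to the layer.

```python
import numpy as np

def goodness(a):
    # Layer "goodness": sum of squared activations per sample.
    return (a ** 2).sum(axis=1)

class FFLayer:
    """One FF-trained layer; a minimal sketch, not the authors' implementation."""
    def __init__(self, n_in, n_out, lr=0.03, theta=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, n_in ** -0.5, (n_in, n_out))
        self.lr, self.theta = lr, theta

    def forward(self, x):
        # Length-normalize the input so the previous layer's goodness is hidden.
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return np.maximum(0.0, x @ self.W)  # ReLU activations

    def train_step(self, x_pos, x_neg):
        # Two forward passes: positive data pushes goodness above theta,
        # negative data pushes it below. No backward pass through other layers.
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            a = self.forward(x)
            margin = np.clip(sign * (goodness(a) - self.theta), -50.0, 50.0)
            p = 1.0 / (1.0 + np.exp(margin))  # gradient scale of logistic loss
            grad_a = (-sign * p)[:, None] * 2.0 * a  # local gradient w.r.t. activations
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            self.W -= self.lr * xn.T @ grad_a / len(x)
```

In a full network, each layer is trained this way on the (normalized) outputs of the layer below; in a CNN the dense product would be replaced by a convolution, but the local two-pass objective is the same.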

Here, we provide a comprehensive characterization of FF-based CNNs, demonstrating that their accuracy is comparable to that of standard backpropagation on benchmark datasets; on MNIST, for instance, our model achieved a test accuracy of 99.16%. Using explainable artificial intelligence tools such as Class Activation Maps, we show that FF-based CNNs base their classifications on meaningful spatial features. Additionally, we found that each layer contributes unique and valuable information to the final classification.

Building on Hinton’s findings, which highlighted the slower convergence of FF-trained fully connected networks compared to backpropagation, we introduced novel strategies to accelerate FF convergence in CNNs without sacrificing accuracy. Taking advantage of the versatility of the FF algorithm, we tuned the hyperparameters of each hidden layer independently, reducing the number of required epochs from 200 to 40 and cutting the training time by more than 50%.
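Because each FF layer optimizes only its own local objective, layers can be trained greedily with fully independent hyperparameters. The sketch below illustrates this with hypothetical per-layer settings and helper names (illustrative values only, not the authors' actual configuration):

```python
import numpy as np

def train_ff_layer(W, x_pos, x_neg, lr, theta, epochs):
    # Local FF update for one fully connected layer (minimal sketch):
    # raise goodness (sum of squared ReLU activations) above `theta`
    # for positive data and lower it below `theta` for negative data.
    for _ in range(epochs):
        for x, sign in ((x_pos, 1.0), (x_neg, -1.0)):
            a = np.maximum(0.0, x @ W)
            g = (a ** 2).sum(axis=1)
            p = 1.0 / (1.0 + np.exp(np.clip(sign * (g - theta), -50.0, 50.0)))
            W -= lr * x.T @ ((-sign * p)[:, None] * 2.0 * a) / len(x)
    return W

# Hypothetical per-layer hyperparameters: each layer gets its own learning
# rate, threshold, and epoch budget (illustrative numbers only).
configs = [dict(lr=0.05, theta=2.0, epochs=60),
           dict(lr=0.02, theta=1.0, epochs=30)]

rng = np.random.default_rng(0)
x_pos = rng.normal(1.0, 0.2, (32, 6))   # stand-in "positive" data
x_neg = rng.normal(-1.0, 0.2, (32, 6))  # stand-in "negative" data
for (n_in, n_out), cfg in zip([(6, 16), (16, 8)], configs):
    W = rng.normal(0.0, n_in ** -0.5, (n_in, n_out))
    W = train_ff_layer(W, x_pos, x_neg, **cfg)
    a_pos, a_neg = np.maximum(0.0, x_pos @ W), np.maximum(0.0, x_neg @ W)
    # Normalized activations feed the next layer, hiding this layer's goodness.
    x_pos = a_pos / (np.linalg.norm(a_pos, axis=1, keepdims=True) + 1e-8)
    x_neg = a_neg / (np.linalg.norm(a_neg, axis=1, keepdims=True) + 1e-8)
```

Since no gradient ever flows between layers, a layer that converges quickly can simply be assigned a smaller epoch budget, which is what makes per-layer tuning of this kind possible in the first place.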

FF's advantages, such as its biological plausibility and lower memory requirements, together with our findings and improvements, suggest that FF can be established as a viable alternative to backpropagation for image analysis. We hope our work encourages wider adoption of the FF algorithm and further exploration of this promising new paradigm.

[1] Scodellaro, R., Kulkarni, A., Alves, F., & Schröter, M. (2023). Training Convolutional Neural Networks with the Forward-Forward algorithm. arXiv preprint arXiv:2312.14924.

Primary authors

Lira Yelemessova (Max Planck Institute for Multidisciplinary Sciences (MPINAT), Göttingen)
Frauke Alves (Max Planck Institute for Multidisciplinary Sciences (MPINAT) / UMG Göttingen)
Matthias Schröter (Max Planck Institute for Multidisciplinary Sciences (MPINAT), UMG Göttingen, Max Planck Institute for Dynamics and Self-Organization (MDS))
Riccardo Scodellaro (Max Planck Institute for Multidisciplinary Sciences (MPINAT), Göttingen)

Presentation materials

There are no materials yet.