Dynamic Neural Fields as a Mathematical Framework to Model Cognitive Brain Functions

Presentation

Abstract
Dynamic Neural Fields (DNFs), formalized by nonlinear integro-differential equations, were originally introduced as a modeling framework for explaining basic principles of neural information processing in which the interactions of billions of neurons are treated as a continuum. The aim is to reduce the enormous complexity of neural interactions to simpler population-level properties that are tractable by mathematical analysis. More recently, complex models consisting of several connected DNFs have been developed to explain higher-level cognitive functions (e.g., working memory, decision making, prediction, and learning) and to implement these functionalities in artificial agents.
The goal of the workshop is to give an overview of the physiological motivation of DNFs, the mathematical analysis of their dynamic behavior, and their application in artificial cognitive agents. Starting with the seminal work by Amari [1], I will first discuss the existence and stability of self-sustained patterns of neural activity (or bumps) which are induced by a transient input to the network. Bump solutions have been discussed as a neural correlate of a working memory (WM) function. I will then highlight limitations of the classical “continuous bump attractor” approach to WM that have motivated new mathematical work.
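For orientation, one common way of writing the field equation analyzed in [1] is

\tau \, \partial_t u(x,t) = -u(x,t) + \int w(x - y)\, f(u(y,t))\, dy + S(x,t) + h,

where u(x,t) denotes the activity at field position x, w is the lateral-inhibition coupling kernel, f is a firing-rate nonlinearity (a Heaviside step function in [1]), S is the external input, and the constant h < 0 sets the resting level.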
In the application part of the workshop, I will explain how we exploit the mathematical insights into pattern formation in DNFs to endow artificial agents with the capacity to efficiently learn the serial order and timing of sequential events. Example studies include a robot jointly executing an assembly task with a human user, a robot learning a short musical sequence by observation, and an intelligent driver assistant learning daily traveling routines from GPS data.
For interested participants, example MATLAB code related to Amari’s paper can be found at https://github.com/w-wojtak/neural-fields-matlab
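As a flavor of such simulations, the following minimal sketch (not taken from that repository; the kernel shape and all parameter values are illustrative assumptions) integrates the field equation above with an explicit Euler scheme and shows how a transient Gaussian input leaves behind a self-sustained bump:

% Minimal illustrative sketch (not code from the repository above): explicit
% Euler simulation of the field equation given earlier on a 1D grid. All
% parameter values are assumptions chosen for illustration only.
N  = 201; L = 20;                           % grid points and field length
x  = linspace(-L/2, L/2, N); dx = x(2)-x(1);
dt = 0.01; tau = 1; h = -1;                 % time step, time constant, resting level (h < 0)
D  = x' - x;                                % pairwise distances between field positions
W  = 2*exp(-D.^2/2) - 0.5*exp(-D.^2/18);    % lateral-inhibition ("Mexican hat") kernel
f  = @(u) double(u > 0);                    % Heaviside firing-rate function, as in Amari (1977)
u  = h*ones(N,1);                           % field starts at the resting level
for t = 0:dt:20
    S = 4*exp(-x'.^2/2) .* (t < 2);         % transient Gaussian input, switched off at t = 2
    u = u + (dt/tau)*(-u + dx*(W*f(u)) + S + h);
end
plot(x, u); xlabel('x'); ylabel('u(x)');    % a self-sustained bump remains after the input is gone

The lateral interaction is implemented here with an explicit coupling matrix rather than a convolution, which keeps the sketch short at the cost of O(N^2) memory.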

References
[1] S.-I. Amari (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27(2), 77-87.
[2] W. Erlhagen, E. Bicho (2006). The dynamic neural field approach to cognitive robotics. Journal of Neural Engineering, 3(3), R36.
[3] F. Ferreira, W. Erlhagen, E. Bicho (2016). Multi-bump solutions in a neural field model with external inputs. Physica D: Nonlinear Phenomena, 326, 32-51.
[4] W. Wojtak, S. Coombes, D. Avitabile, E. Bicho, W. Erlhagen (2021). A dynamic neural field model of continuous input integration. Biological Cybernetics, 115(5), 451-471.
[5] F. Ferreira, W. Wojtak, E. Sousa, L. Louro, E. Bicho, W. Erlhagen (2021). Rapid learning of complex sequences with time constraints: A dynamic neural field model. IEEE Transactions on Cognitive and Developmental Systems, 13(4), 853-864.
[6] P. M. Lima, W. Erlhagen, M. V. Kulikova, G. Y. Kulikov (2022). Numerical solution of the stochastic neural field equation with applications to working memory. Physica A: Statistical Mechanics and its Applications, 596, 127166.