Random network models for synaptic plasticity

In the past decade, the deep-learning community has achieved revolutionary breakthroughs in image and speech recognition, natural language processing and reinforcement learning. Since deep learning relies on recognizing complex patterns by successively decomposing them into simpler structures, this mechanism is reminiscent of learning in the human brain. Despite these parallels, there remain fundamental differences in the way that artificial and real neural networks operate. This leads to the question:

What properties should a dynamic network have to support the learning of complex patterns?

A promising explanation is offered by reinforced processes on networks, which are closely related to the phenomenon of neuroplasticity: if a synaptic connection has proven useful in the past, it should be strengthened and preferentially selected in the future.

This idea motivates the paper "Strongly reinforced Pólya urns with graph-based competition" by R. van der Hofstad, M. Holmes, A. Kuznetsov, and W. Ruszel, who introduce a dynamical random network that provides a basic interacting-particle-based model for learning in neural networks. Neurons fire randomly, and the weight of one of the incident synapses is incremented. The probability of selecting a synapse is proportional to its weight raised to a power $\alpha > 0$. Depending on whether $\alpha$ is larger than, equal to, or smaller than 1, the process is in the superlinear, linear, or sublinear regime. In a follow-up paper with M. Holmes and V. Kleptsyn, we study the model on countable graphs and investigate whether it is possible to arrive at a percolating but sparse network of relevant edges.
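To make the reinforcement rule concrete, here is a minimal simulation sketch in Python. It works in discrete time (a uniformly chosen vertex fires at each step) and reinforces an incident edge with probability proportional to its weight raised to the power $\alpha$; the function and variable names, as well as the unit initial weights and the discrete-time firing rule, are illustrative assumptions rather than the paper's exact setup.

```python
import random

def reinforce(edges, alpha, steps, seed=0):
    """Discrete-time sketch of the graph-based Polya urn: at each step a
    uniformly chosen vertex fires and reinforces one incident edge with
    probability proportional to (edge weight)**alpha."""
    rng = random.Random(seed)
    weights = {e: 1.0 for e in edges}               # unit initial weights (assumption)
    incident = {}
    for u, v in edges:
        incident.setdefault(u, []).append((u, v))
        incident.setdefault(v, []).append((u, v))
    vertices = list(incident)
    for _ in range(steps):
        v = rng.choice(vertices)                    # the firing neuron
        probs = [weights[e] ** alpha for e in incident[v]]
        r, acc = rng.random() * sum(probs), 0.0
        for e, p in zip(incident[v], probs):
            acc += p
            if r <= acc:
                weights[e] += 1.0                   # reinforce the chosen synapse
                break
    return weights

# Superlinear regime (alpha > 1): weights tend to concentrate on few edges.
print(reinforce([(0, 1), (1, 2), (2, 0)], alpha=2.0, steps=10_000))
```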

Thanks to the seminal work of A. Galves and E. Löcherbach, there is by now a solid mathematical understanding of the long-time equilibrium behavior of a network of spiking neurons. However, this work starts from a given collection of synaptic weights. We consider a conceptually simplified random network model for the phenomenon of synaptic plasticity, where the synaptic weights form a stochastic process on a fixed graph of neurons interconnected by synapses. Additionally, each neuron carries a fitness.

Each neuron is equipped with a Poisson clock. When the clock of a neuron rings, that neuron fires and the weight of one of the incident synapses leading to the next layer increases by 1. The selection mechanism incorporates two effects. First, the fitter a neuron, the higher the probability that it is selected. Second, if a synapse was used frequently in the past, then it is considered important and should be selected again with higher probability in the future.
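The sketch below illustrates one way such dynamics could be simulated: each neuron carries a rate-1 Poisson clock, and when a neuron fires, a synapse to the next layer is chosen with probability proportional to the target's fitness times the synaptic weight raised to a power $\alpha$. The multiplicative form of the selection rule, and all names and parameters, are assumptions made purely for illustration.

```python
import random

def simulate(fitness, layer_edges, alpha, t_max, seed=1):
    """Poisson-clock sketch: when neuron u fires, one synapse (u, v) to the
    next layer is reinforced, chosen with probability proportional to
    fitness[v] * weight[(u, v)]**alpha (an assumed selection rule)."""
    rng = random.Random(seed)
    weight = {e: 1.0 for e in layer_edges}            # unit initial synaptic weights
    out = {}
    for u, v in layer_edges:
        out.setdefault(u, []).append(v)
    clocks = {u: rng.expovariate(1.0) for u in out}   # next ring time per neuron
    while True:
        u = min(clocks, key=clocks.get)               # neuron whose clock rings next
        t = clocks[u]
        if t > t_max:
            break
        scores = [fitness[v] * weight[(u, v)] ** alpha for v in out[u]]
        r, acc = rng.random() * sum(scores), 0.0
        for v, s in zip(out[u], scores):
            acc += s
            if r <= acc:
                weight[(u, v)] += 1.0                 # strengthen the used synapse
                break
        clocks[u] = t + rng.expovariate(1.0)          # schedule u's next firing
    return weight

# Two first-layer neurons feeding two second-layer neurons of different fitness.
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
print(simulate(fitness={2: 1.0, 3: 3.0}, layer_edges=edges, alpha=1.5, t_max=5000))
```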

Furthermore, in a paper with M. Heydenreich (with accompanying code), we develop a hierarchical network model reflecting the layered structure of learning architectures in biological and artificial neural networks. In this setting, we study small-world phenomena in the hierarchical network of relevant synapses, i.e., those that are reinforced a positive proportion of the time. In the initial model, we could establish the emergence of a small-world network in a setting where nodes are equipped with fitnesses, which act as powerful attractors aggregating potentially distant nodes.

However, this initial model relied on a strong a priori assumption on the structure of the underlying network. Therefore, in a follow-up paper (also with accompanying code), we start from a hierarchical network that is much closer to the idea of a tabula rasa and show that similar effects can still be achieved by letting the fitnesses also steer the scope over which links can emerge.

Christian Hirsch
Assistant Professor for Topological Data Analysis