Recent work on bats flying over long distances has revealed that single hippocampal cells have multiple place fields of different sizes. At the network level, a multi-scale, multi-field place cell code outperforms classical single-scale, single-field place codes, yet the performance boundaries of such a code remain an open question. In particular, it is unknown how general multi-field codes compare to a highly regular grid code, in which cells form distinct modules with different scales.
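To make the contrasted coding schemes concrete, here is a minimal Python sketch of the two kinds of one-dimensional tuning curves: a multi-scale, multi-field cell built as a sum of Gaussian fields of different widths, and a grid cell with a module-specific period. All parameters (track length, field widths, grid period) are illustrative choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 200.0                       # illustrative track length, not from the study
x = np.linspace(0.0, L, 2000)   # positions at which tuning curves are sampled

def multi_scale_multi_field_cell(n_fields=5, scales=(2.0, 8.0, 25.0)):
    """Toy multi-scale, multi-field cell: a sum of Gaussian fields whose
    widths are drawn independently, so a single cell mixes several scales."""
    centers = rng.uniform(0.0, L, n_fields)
    widths = rng.choice(scales, n_fields)
    return sum(np.exp(-0.5 * ((x - c) / w) ** 2) for c, w in zip(centers, widths))

def grid_cell(period=30.0, phase=0.0, width_frac=0.2):
    """Toy 1D grid cell: Gaussian bumps repeating with a module-specific period."""
    nearest = np.abs((x - phase + period / 2.0) % period - period / 2.0)
    return np.exp(-0.5 * (nearest / (width_frac * period)) ** 2)

multi_rate = multi_scale_multi_field_cell()
grid_rate = grid_cell(phase=rng.uniform(0.0, 30.0))
```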
In this work, we characterize the coding properties of theoretical spatial coding models through systematic analysis of comprehensive simulations. Starting from a multi-scale, multi-field network, we performed evolutionary optimization. The resulting multi-field networks sometimes retained the multi-scale property at the single-cell level but most often converged to a single scale, with all place fields of a given cell having the same size. We compared the results against a single-scale, single-field code and a one-dimensional grid code, focusing on two main characteristics: the performance of the code itself and the dynamics of the network generating it.
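As a rough illustration of the optimization step, the following sketch evolves the field centers and widths of a small population with a (1+1) evolution strategy, scoring candidates by the error of a simple template-matching decoder. The strategy, decoder, noise model, and all parameters are stand-ins assumed for the example; the study's actual objective and operators may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_cells, n_fields = 200.0, 50, 4        # illustrative sizes
x = np.linspace(0.0, L, 400)

def responses(params):
    """Population rate map; params[c] = (field centers, log field widths)."""
    rates = np.zeros((len(params), x.size))
    for c, (centers, log_w) in enumerate(params):
        widths = np.exp(log_w)
        rates[c] = sum(np.exp(-0.5 * ((x - m) / s) ** 2)
                       for m, s in zip(centers, widths))
    return rates

def decode_rmse(rates, noise=0.2, n_trials=200):
    """RMSE of a template-matching decoder under additive Gaussian noise."""
    idx = rng.integers(0, x.size, n_trials)
    obs = rates[:, idx] + noise * rng.standard_normal((rates.shape[0], n_trials))
    d2 = ((obs[:, None, :] - rates[:, :, None]) ** 2).sum(axis=0)
    est = x[d2.argmin(axis=0)]
    return np.sqrt(np.mean((est - x[idx]) ** 2))

def mutate(params):
    """Jitter every field's center and (log) width."""
    return [(c + rng.standard_normal(c.shape),
             w + 0.05 * rng.standard_normal(w.shape)) for c, w in params]

# (1+1) evolution strategy: accept a mutant only if it decodes better
# (fitness is re-estimated on fresh trials each generation)
params = [(rng.uniform(0.0, L, n_fields), np.log(rng.uniform(2.0, 20.0, n_fields)))
          for _ in range(n_cells)]
best = decode_rmse(responses(params))
for _ in range(200):
    cand = mutate(params)
    err = decode_rmse(responses(cand))
    if err < best:
        params, best = cand, err
```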
Our simulation experiments revealed that, in unperturbed networks, a regular grid code outperforms all other codes in decoding accuracy, achieving a given precision with fewer neurons and fields. In contrast, multi-field codes are more robust against noise and against lesions such as random drop-out of neurons, because their substantially larger number of fields provides redundancy. Contrary to our expectations, the network dynamics of all models, from the original multi-scale models before optimization to the multi-field models that resulted from optimization, failed to maintain activity bumps at their original locations once the position-specific external input was removed.
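The robustness claim can be probed with a toy lesion experiment: decode position from two hypothetical populations, one with a single field per cell and one with many fields per cell, while silencing a random fraction of the cells. Population sizes, field counts, and the noise level are assumptions for illustration, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 200.0, 400)

def cell(n_fields, width=5.0):
    centers = rng.uniform(0.0, 200.0, n_fields)
    return sum(np.exp(-0.5 * ((x - c) / width) ** 2) for c in centers)

single_field = np.array([cell(1) for _ in range(50)])   # classical place code
multi_field = np.array([cell(8) for _ in range(50)])    # multi-field code

def rmse_with_dropout(rates, p_drop, noise=0.2, n_trials=200):
    """Template-matching decoding with a random subset of cells silenced."""
    keep = rng.random(rates.shape[0]) > p_drop
    r = rates[keep]
    idx = rng.integers(0, x.size, n_trials)
    obs = r[:, idx] + noise * rng.standard_normal((r.shape[0], n_trials))
    d2 = ((obs[:, None, :] - r[:, :, None]) ** 2).sum(axis=0)
    return np.sqrt(np.mean((x[d2.argmin(axis=0)] - x[idx]) ** 2))

for p in (0.0, 0.3, 0.6):
    print(f"drop {p:.0%}: single-field {rmse_with_dropout(single_field, p):6.2f}  "
          f"multi-field {rmse_with_dropout(multi_field, p):6.2f}")
```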
Optimized multi-field codes thus appear to strike a compromise between a place code and a grid code, reflecting a trade-off between accurate positional encoding and robustness. Surprisingly, the recurrent neural network models we implemented and optimized for either multi-scale or single-scale multi-field codes did not intrinsically produce a persistent “memory” of attractor states; they were therefore not continuous attractor networks.
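The bump-persistence test behind this conclusion can be expressed as a short simulation: drive a rate network with a location-specific input, switch the input off, and track whether the activity bump stays at its location, drifts, or collapses. The connectivity profile (local excitation, broader inhibition) and all constants below are assumptions for illustration, not the study's network.

```python
import numpy as np

N = 200                                   # neurons tiling a 1D track
pos = np.linspace(0.0, 1.0, N)

# recurrent weights: local excitation minus broader inhibition
d = np.abs(pos[:, None] - pos[None, :])
W = 2.0 * np.exp(-0.5 * (d / 0.03) ** 2) - 0.5 * np.exp(-0.5 * (d / 0.15) ** 2)

def bump_location(r):
    """Position preferred by the most active neuron, or NaN if activity died."""
    return pos[r.argmax()] if r.max() > 1e-3 else np.nan

def simulate(T=400, input_off=200, dt=0.1, tau=1.0):
    """Drive a bump at pos = 0.5, remove the input at t = input_off, and
    record where (and whether) the bump survives on recurrence alone."""
    r = np.zeros(N)
    ext = np.exp(-0.5 * ((pos - 0.5) / 0.05) ** 2)
    trace = []
    for t in range(T):
        inp = ext if t < input_off else 0.0
        r += dt / tau * (-r + np.maximum(0.0, W @ r / N + inp))
        trace.append(bump_location(r))
    return trace

trace = simulate()
print("bump before input removal:", trace[199], " bump at end:", trace[-1])
```

A continuous attractor network would report the same location before and after input removal; a collapse to NaN (or a drift away from 0.5) indicates that the recurrent dynamics alone do not sustain the bump, which is the behavior the models above exhibited.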