AUTHOR=Dodds Eric McVoy, DeWeese Michael Robert TITLE=On the Sparse Structure of Natural Sounds and Natural Images: Similarities, Differences, and Implications for Neural Coding JOURNAL=Frontiers in Computational Neuroscience VOLUME=13 YEAR=2019 URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2019.00039 DOI=10.3389/fncom.2019.00039 ISSN=1662-5188 ABSTRACT=

Sparse coding models of natural images and sounds have been able to predict several response properties of neurons in the visual and auditory systems. While the success of these models suggests that the structure they capture is universal across domains to some degree, it is not yet clear which aspects of this structure are universal and which vary across sensory modalities. To address this, we fit complete and highly overcomplete sparse coding models to natural images and to spectrograms of speech and report on differences in the statistics learned by these models. We find several types of sparse features in natural images, all of which appear with similar, approximately Laplace distributions, whereas the many types of sparse features in speech exhibit a broad range of sparse distributions, many of which are highly asymmetric. Moreover, individual sparse coding units tend to exhibit higher lifetime sparseness for overcomplete models trained on images than for those trained on speech. Conversely, population sparseness tends to be greater for networks trained on speech than for sparse coding models of natural images. To illustrate the relevance of these findings to neural coding, we studied how they affect the representations learned by a biologically plausible sparse coding network in each sensory modality. In particular, a sparse coding network with synaptically local plasticity rules learns sparse features from speech data that differ from those found by more conventional sparse coding algorithms, whereas the features learned from natural images are qualitatively the same across these models.
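
The lifetime and population sparseness comparison described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' method: it assumes a generic dictionary-learning model (scikit-learn's MiniBatchDictionaryLearning rather than the paper's sparse coding or synaptically local network), random stand-in data in place of whitened image patches or speech spectrogram patches, placeholder parameter values (512 units, alpha=1.0), and the Treves-Rolls / Vinje-Gallant activity-fraction measure as one common definition of sparseness. Lifetime sparseness is computed per unit across stimuli; population sparseness is computed per stimulus across units.

# Illustrative sketch only; assumptions noted in the text above.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def activity_fraction_sparseness(a, axis):
    """Treves-Rolls / Vinje-Gallant sparseness in [0, 1]; higher = sparser.
    S = (1 - <|a|>^2 / <a^2>) / (1 - 1/N), with the average taken along `axis`."""
    a = np.abs(a)
    n = a.shape[axis]
    mean_abs = a.mean(axis=axis)
    mean_sq = (a ** 2).mean(axis=axis)
    eps = 1e-12  # guard against units/stimuli with all-zero activity
    return (1.0 - mean_abs ** 2 / (mean_sq + eps)) / (1.0 - 1.0 / n)

# X stands in for whitened image patches or spectrogram patches,
# shape (n_samples, n_input_dims); real data loading is assumed elsewhere.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 256))

# Overcomplete dictionary (512 units for 256 input dimensions); placeholder settings.
learner = MiniBatchDictionaryLearning(n_components=512, alpha=1.0, random_state=0)
codes = learner.fit(X).transform(X)  # sparse coefficients, shape (n_samples, n_units)

# Lifetime sparseness: one value per unit, computed across stimuli (axis 0).
lifetime = activity_fraction_sparseness(codes, axis=0)
# Population sparseness: one value per stimulus, computed across units (axis 1).
population = activity_fraction_sparseness(codes, axis=1)

print(f"mean lifetime sparseness:   {lifetime.mean():.3f}")
print(f"mean population sparseness: {population.mean():.3f}")

Running the same sketch once on image-patch data and once on spectrogram data, with the same overcompleteness, is the kind of comparison the abstract summarizes: the paper reports higher lifetime sparseness for image-trained overcomplete models and higher population sparseness for speech-trained ones.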