AUTHOR=Balduzzi David TITLE=Grammars for Games: A Gradient-Based, Game-Theoretic Framework for Optimization in Deep Learning JOURNAL=Frontiers in Robotics and AI VOLUME=2 YEAR=2016 URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2015.00039 DOI=10.3389/frobt.2015.00039 ISSN=2296-9144 ABSTRACT=

Deep learning is currently the subject of intensive study. However, fundamental concepts such as representations are not formally defined (researchers “know them when they see them”), and there is no common language for describing and analyzing algorithms. This essay proposes an abstract framework that identifies the essential features of current practice and may provide a foundation for future developments. The backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient computation distributed over a neural network. The main ingredients of the framework are thus, unsurprisingly: (i) game theory, to formalize distributed optimization; and (ii) communication protocols, to track the flow of zeroth- and first-order information. The framework allows natural definitions of semantics (as the meaning encoded in functions), representations (as functions whose semantics is chosen to optimize a criterion), and grammars (as communication protocols equipped with first-order convergence guarantees). Much of the essay is spent discussing examples taken from the literature. The ultimate aim is to develop a graphical language for describing the structure of deep learning algorithms, one that backgrounds the details of the optimization procedure and foregrounds how the components interact. Inspiration is taken from probabilistic graphical models and factor graphs, which capture the essential structural features of multivariate distributions.
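
The sketch below is a minimal illustration, in plain NumPy, of the abstract's description of backpropagation as a gradient computation distributed over a network: each layer is treated as a player that passes zeroth-order messages (activations) forward and first-order messages (gradients) backward, updating only its own parameters. The LinearPlayer class, the purely linear layers, the layer sizes, the learning rate, and the squared-error loss are illustrative assumptions, not constructs taken from the paper.

```python
# Backpropagation as a distributed gradient computation: each player
# exchanges zeroth-order information (activations) on the forward pass
# and first-order information (gradients) on the backward pass.
import numpy as np

class LinearPlayer:
    """A hypothetical 'player': one linear layer with a local update rule."""
    def __init__(self, n_in, n_out, lr=0.1, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.lr = lr

    def forward(self, x):
        self.x = x                      # cache zeroth-order input
        return x @ self.W

    def backward(self, grad_out):
        grad_in = grad_out @ self.W.T   # first-order message to upstream player
        self.W -= self.lr * self.x.T @ grad_out  # purely local parameter update
        return grad_in

# Two players composed into a network; the communication protocol passes
# activations forward and gradients backward through the chain.
rng = np.random.default_rng(0)
players = [LinearPlayer(3, 4, rng=rng), LinearPlayer(4, 1, rng=rng)]
x, y = rng.normal(size=(8, 3)), rng.normal(size=(8, 1))

for step in range(100):
    h = x
    for p in players:
        h = p.forward(h)
    grad = 2 * (h - y) / len(y)         # gradient of the mean squared error
    for p in reversed(players):
        grad = p.backward(grad)
```

Each player sees only its incoming and outgoing messages, never the global loss, which is the sense in which the optimization is distributed.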