EDITORIAL article

Front. Neurorobot., 06 January 2023
This article is part of the Research Topic Toward and Beyond Human-Level AI, Volume II.

Editorial: Toward and beyond human-level AI, volume II

Witali Dunin-Barkowski1* and Alexander Gorban2,3
  • 1Department of Neuroinformatics, Center for Optical Neural Technologies, Scientific Research Institute for System Analysis, Russian Academy of Sciences, Moscow, Russia
  • 2Department of Mathematics, University of Leicester, Leicester, United Kingdom
  • 3Scientific and Educational Mathematical Center “Mathematics of Future Technology,” Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia

Editorial on the Research Topic
Toward and beyond human-level AI, volume II

AI systems have surpassed human-level performance in many computational competencies, and the number of these competencies is growing almost daily. Still, modern AI systems are specialized tools aimed at solving particular problems or limited sets of problems. Universal, human-like intelligence remains unattained for these systems. In the opinion of many professionals, obtaining a practical solution to this general problem appears to be very hard (Zhang et al., 2022), and it is not clear when it can be achieved (Stein-Perlman et al., 2022). The task also appears daunting to the general public (Schmidt, 2022).

In the first paper of this Research Topic (Rivera, Popek et al.), the authors explore a novel framework, PICO, which detects the presence of novel behaviors and constructs a library of behavior primitives from unlabeled demonstrations. A gap in a trajectory is defined as a region where actions cannot be predicted with sufficiently high probability under the current behavior model; when such a gap occurs, a new behavior primitive is added. The approach is evaluated on a reach-grab-lift task with a robotic arm and shows better label and reconstruction accuracy than comparable approaches.
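
To make the gap-detection idea concrete, here is a minimal, hypothetical sketch in Python. The GaussianPrimitive class, the probability threshold, and the build_library routine are illustrative assumptions made for this editorial, not the PICO implementation of Rivera, Popek et al.

```python
import numpy as np

class GaussianPrimitive:
    """Toy behavior primitive: a linear state-to-action map with an isotropic
    Gaussian likelihood around its prediction (purely illustrative)."""
    def __init__(self, states, actions, sigma=0.5):
        # least-squares linear policy fitted to the transitions of one segment
        self.W, *_ = np.linalg.lstsq(states, actions, rcond=None)
        self.sigma = sigma

    def action_prob(self, state, action):
        d2 = float(np.sum((action - state @ self.W) ** 2))
        return float(np.exp(-d2 / (2.0 * self.sigma ** 2)))

def build_library(states, actions, library=None, threshold=0.1, min_len=5):
    """Walk along a demonstration; wherever no primitive in the library explains
    the observed action with probability >= threshold, accumulate a 'gap' and
    fit a new primitive on it once the gap closes."""
    library = list(library or [])
    gap_s, gap_a = [], []
    for s, a in zip(states, actions):
        best = max((p.action_prob(s, a) for p in library), default=0.0)
        if best < threshold:                      # unexplained transition: inside a gap
            gap_s.append(s)
            gap_a.append(a)
        else:                                     # explained transition: close any open gap
            if len(gap_s) >= min_len:
                library.append(GaussianPrimitive(np.array(gap_s), np.array(gap_a)))
            gap_s, gap_a = [], []
    if len(gap_s) >= min_len:                     # trailing gap at the end of the demonstration
        library.append(GaussianPrimitive(np.array(gap_s), np.array(gap_a)))
    return library

# Toy demonstration: the library already knows the first behavior (identity map);
# the second half of the trajectory follows a novel behavior (sign flip).
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 4))
A = np.vstack([S[:100], -S[100:]])
lib = build_library(S, A, library=[GaussianPrimitive(S[:100], A[:100])])
print(f"primitives in library: {len(lib)}")      # the novel behavior is captured by new primitive(s)
```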

The paper by Rivera, Staley et al. is devoted to multi-agent reinforcement learning in complex environments, such as dense urban defense-related scenarios. The authors introduce the AI Arena framework, in which different agents control tanks and fight each other in a 5 vs. 5 tank combat game. It is shown that agents on the same team converge to a cooperative set of behaviors; thus, the emergence of cooperative behavior is demonstrated in this work.

In the third paper (Limbacher and Legenstein), the authors demonstrate the emergence of clustering of temporally correlated inputs on dendritic branches in a setting that combines a generic stochastic rewiring principle with a simple synaptic plasticity rule. The mechanism is demonstrated in a computational model and supported by a thorough theoretical analysis. The authors hypothesize that such clustering might serve to protect memories from catastrophic forgetting on a medium time scale.
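
To give a flavor of the mechanism, the deliberately crude toy simulation below (not the model of Limbacher and Legenstein) lets synapses strengthen only through co-activation with other synapses on the same dendritic branch, while weak synapses are stochastically pruned and regrown on random branches. All constants, update rules, and the clustering score are illustrative assumptions; the intended outcome is merely that synapses carrying correlated inputs tend to end up on common branches, i.e., a score above the chance level of 1/G.

```python
import numpy as np

rng = np.random.default_rng(1)
N, B, G = 60, 6, 3                           # synapses, dendritic branches, input assemblies
assembly = np.repeat(np.arange(G), N // G)   # assembly membership of each input
branch = rng.integers(0, B, size=N)          # random initial branch of each synapse
w = np.full(N, 0.2)                          # synaptic weights

for step in range(3000):
    active = assembly == rng.integers(G)     # one (temporally correlated) assembly fires per step
    for b in range(B):
        on_branch = (branch == b) & active
        # branch-local Hebbian term: co-activation with other active synapses on the same branch
        coact = w[on_branch].sum() - w[on_branch]
        w[on_branch] += 0.01 * coact
    w = np.clip(w - 0.002, 0.0, 1.0)         # slow weight decay
    weak = w < 0.05                          # stochastic rewiring: prune weak synapses...
    branch[weak] = rng.integers(0, B, size=int(weak.sum()))
    w[weak] = 0.2                            # ...and regrow them on randomly chosen branches

# Rough clustering score: for each synapse, the fraction of synapses on its branch
# (itself included) that belong to the same input assembly.
score = np.mean([(assembly[branch == branch[i]] == assembly[i]).mean() for i in range(N)])
print(f"same-assembly fraction per branch: {score:.2f} (chance level ~ {1 / G:.2f})")
```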

Finally, Krauss and Maier give a brief review of theories of consciousness in neuroscience and AI. A general philosophical perspective on the problem is given, and the main directions in the philosophy of consciousness are described. Some noteworthy experiments in the neurophysiology of consciousness are reviewed, and the model of Schmidhuber (1991), as well as other popular theories of consciousness, is discussed. Overall, many interesting experiments, theories, and ideas are described in the paper.

Still, it is not clear what can be done to achieve the goal of AGI. One of the most significant obstacles on the way toward general intelligence is the speed of learning. In this respect, a topic of special importance is one- or few-shot learning. This kind of learning works in practice in both technical and biological systems, despite its apparent contradiction with classical statistical theory. It turns out to be possible to find a general approach to a modern mathematical formulation of the problem (Tyukin et al., 2021a). In some cases, one can establish a relationship between the intrinsic dimension of the transformed data and the probability of learning successfully from a few demonstrations (Gorban et al., 2021b; Tyukin et al., 2022).
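
The intrinsic dimension appearing in these estimates can be made concrete with a short sketch. Below is a minimal implementation of the well-known "two nearest neighbors" maximum-likelihood estimator; it is given only to illustrate the quantity involved and is not taken from the cited works (the scikit-dimension package of Bac et al., 2021, provides a broad collection of such estimators).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(X):
    """Two-nearest-neighbor intrinsic dimension estimate: the ratios
    mu_i = r2_i / r1_i of second- to first-neighbor distances follow a Pareto
    law with exponent equal to the intrinsic dimension d, so the maximum
    likelihood estimate is d = N / sum(log mu_i)."""
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]        # column 0 is the distance of a point to itself
    return len(X) / np.sum(np.log(r2 / r1))

# Sanity check: data lying on a 5-dimensional linear subspace embedded in 100 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5)) @ rng.normal(size=(5, 100))
print(f"estimated intrinsic dimension: {twonn_dimension(X):.1f}")   # close to 5, not 100
```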

The problem of learning from a small number of examples is closely related to the recently described blessing of dimensionality (Gorban et al., 2016), as opposed to the “curse of dimensionality” (Sutton et al., 2022). The connection between the two phenomena has a fundamental mathematical nature (Gorban and Tyukin, 2017). In particular, it is linked to Hilbert's sixth problem (Gorban and Tyukin, 2018) and to the surprising effectiveness of small neural ensembles in the high-dimensional brain (Gorban et al., 2019). These properties can be effectively applied to concrete features of the living brain's hippocampus (Tyukin et al., 2019).
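
A quick numerical illustration of the "blessing of dimensionality" behind these separation results: sample a moderate number of points uniformly from the unit ball and check how often a single point is separated from all the others by the simplest Fisher-type linear discriminant. The sampling scheme, the threshold alpha = 0.8, and the sample size below are arbitrary illustrative choices; the cited papers give the precise statements and probability bounds.

```python
import numpy as np

def fisher_separable_fraction(n, M, alpha=0.8, seed=0):
    """Fraction of M points, sampled uniformly from the n-dimensional unit ball,
    that are separated from all other points by the Fisher-type discriminant
    <x, y> <= alpha * <x, x>."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(M, n))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = rng.random(M) ** (1.0 / n)             # radial law for a uniform ball sample
    X = directions * radii[:, None]
    gram = X @ X.T                                 # all pairwise inner products <x_i, x_j>
    np.fill_diagonal(gram, -np.inf)                # exclude <x_i, x_i> from the maximum
    separable = gram.max(axis=1) <= alpha * np.sum(X * X, axis=1)
    return separable.mean()

for n in (2, 10, 50, 200):
    print(f"n = {n:3d}: separable fraction = {fisher_separable_fraction(n, M=2000):.3f}")
# The separable fraction climbs toward 1 as the dimension grows,
# even though the number of points stays the same.
```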

Further analysis reveals a fundamental tradeoff between complexity and simplicity in high-dimensional spaces (Gorban et al., 2020) and exploits the geometry of few-shot learning (Tyukin et al., 2021c). Among the important tasks for creating practical AI are the development of new methods (Akinduko et al., 2016; Mirkes et al., 2022; Zhou et al., 2022) and tools (Rybnikova et al., 2020, 2021; Bac et al., 2021) for building AI systems.
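
The same geometry suggests how a deployed system can be "corrected" with a single new example: work in a fixed feature space and separate the exemplar from the nominal data by a linear discriminant. The sketch below is a simplified, hypothetical illustration of this idea, not an implementation of the methods in the cited papers; the feature space, preprocessing, and threshold are arbitrary.

```python
import numpy as np

def one_shot_corrector(features, exemplar, alpha=0.8):
    """Build a one-shot 'corrector' in a fixed feature space: center and rescale
    using the nominal data, then flag any sample whose inner product with the
    normalized exemplar z exceeds alpha * <z, z>."""
    mean = features.mean(axis=0)
    scale = np.linalg.norm(features - mean, axis=1).mean()
    z = (exemplar - mean) / scale
    def flag(batch):
        zb = (np.asarray(batch) - mean) / scale
        return zb @ z > alpha * (z @ z)
    return flag

# Usage sketch: 'features' stands in for the outputs of a frozen feature extractor.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))            # nominal samples in feature space
exemplar = rng.normal(size=128) + 4.0              # a single example of a new situation
flag = one_shot_corrector(features, exemplar)
near_copies = exemplar + 0.1 * rng.normal(size=(100, 128))
print(flag(features).mean(), flag(near_copies).mean())
# Almost no nominal samples are flagged, while near-copies of the exemplar are.
```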

In the search for AGI, one of the most important clusters of problems concerns medical applications. Even seemingly routine problems, such as monitoring children's health (Roland et al., 2021), can contain inherent obstacles. In the more complicated case of psychological profiling in the context of preventing drug abuse, machine learning yields impressive results (Fehrman et al., 2015). General considerations of the adaptability of individuals and of technical complexes also provide useful hints for solving the problem of AGI (Gorban et al., 2021a). Finally, the safety of AI systems should undoubtedly be prioritized (Tyukin et al., 2021b).

A major trend in the modern development of AI systems is a return to attempts to understand the mechanisms of real brain function, as expressed in a recent publication in Nature (Mehonic and Kenyon, 2022) and in an appeal by prominent researchers in AI and computational neuroscience (Zador et al., 2022). They argue that much more study of natural neural intelligence is needed to obtain real general intelligence in technological systems. In this respect, the role of glia in information processing in the brain has been revealed in thorough studies (Gordleeva et al., 2019). More attention is also needed to possibly unduly overlooked ideas about the operating principles and functions of the cerebellum (Dunin-Barkowski and Wunsch, 2000; Shakirov, 2022). Besides motor control, the cerebellum deals with human emotions (Adamaszek et al., 2022), with the organism's rewards (Kostadinov and Häusser, 2022), and more. It is involved in many cognitive tasks, especially those connected with vision (Vaina et al., 2001), including creative tasks (Saggar et al., 2015). Unfortunately, even the basic details of cerebellar circuit operation remain a subject of disagreement between theoreticians (Willshaw et al., 2015) and experimentalists (Streng et al., 2018).

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Funding

This work was financially supported by the State Program of SRISA RAS No. FNEF-2022-0003.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Adamaszek, M., Manto, M., and Schutter, D. J. L. G. (2022). The Emotional Cerebellum. New York, NY: Springer.

Akinduko, A. A., Mirkes, E. M., and Gorban, A. N. (2016). SOM: Stochastic initialization versus principal components. Inf. Sci. 364–365, 213–221. doi: 10.1016/j.ins.2015.10.013

Bac, J., Mirkes, E. M., Gorban, A. N., Tyukin, I., and Zinovyev, A. (2021). Scikit-dimension: a Python package for intrinsic dimension estimation. Entropy. 23, 1368. doi: 10.3390/e23101368

Dunin-Barkowski, W. L., and Wunsch, D. C. (2000). Phase-based cerebellar learning of dynamic signals. Neurocomputing. 32–33, 709–725. doi: 10.1016/S0925-2312(00)00236-8

Fehrman, E., Muhammad, A. K., Mirkes, E. M., Egan, V., and Gorban, A. N. (2015). The five factor model of personality and evaluation of drug consumption risk. arXiv:1506.06297. 1–57. doi: 10.48550/arXiv.1506.06297

Gorban, A. N., Grechuk, B., Mirkes, E. M., Stasenko, S. V., and Tyukin, I. Y. (2021b). High-dimensional separability for one- and few-shot learning. Entropy. 23, 1090. doi: 10.3390/e23081090

Gorban, A. N., Makarov, V. A., and Tyukin, I. Y. (2019). The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys Life Reviews. 29, 55–88. doi: 10.1016/j.plrev.2018.09.005

Gorban, A. N., Makarov, V. A., and Tyukin, I. Y. (2020). High-dimensional brain in a high-dimensional world: blessing of dimensionality. Entropy. 22, 82. doi: 10.3390/e22010082

Gorban, A. N., and Tyukin, I. Y. (2017). Stochastic separation theorems. arXiv. doi: 10.1016/j.neunet.2017.07.014

Gorban, A. N., and Tyukin, I. Y. (2018). Blessing of dimensionality: mathematical foundations of the statistical physics of data. Phil. Trans. R. Soc. A. 376:20170237. doi: 10.1098/rsta.2017.0237

Gorban, A. N., Tyukin, I. Y., and Romanenko, I. (2016). The blessing of dimensionality: separation theorems in the thermodynamic limit. IFAC. 49–24, 64–69. doi: 10.1016/j.ifacol.2016.10.755

Gorban, A. N., Tyukina, T. A., Pokidysheva, L. I., and Smirnova, E. V. (2021a). Dynamic and thermodynamic models of adaptation. arXiv. doi: 10.1016/j.plrev.2021.03.001

Gordleeva, S. Y., Lotareva, Y. A., Krivonosov, M. I., Zaikin, A. A., Ivanchenko, M. V., and Gorban, A. N. (2019). Astrocytes Organize Associative Memory. Advances in Neural Computation, Machine Learning, and Cognitive Research III: Selected Papers from the XXI International Conference on Neuroinformatics, October 7–11, 2019, Dolgoprudny, Moscow Region, Russia. New York: Springer. p. 384–391. doi: 10.1007/978-3-030-30425-6_45

Kostadinov, D., and Häusser, M. (2022). Reward signals in the cerebellum: Origins, targets, and functional implications. Neuron. 110, 1290–1303. doi: 10.1016/j.neuron.2022.02.015

Mehonic, A., and Kenyon, A. J. (2022). Brain-inspired computing needs a master plan. Nature. 604, 255–260. doi: 10.1038/s41586-021-04362-w

Mirkes, E. M., Bac, J., Fouché, A, Stasenko, S. V., Zinovyev, A., and Gorban, A. N. (2022). Domain adaptation principal component analysis: base linear method for learning with out-of-distribution data. arXiv.

Roland, D., Suzen, N., Coats, T. J., Levesley, J., Gorban, A. N., and Mirkes, E. M. (2021). What can the randomness of missing values tell you about clinical practice in large data sets of children's vital signs? Pediatr Res. 89, 16–21. doi: 10.1038/s41390-020-0861-2

Rybnikova, N., Mirkes, E. M., and Gorban, A. N. (2021). CNN-based spectral super-resolution of panchromatic night-time light imagery: city-size-associated neighborhood effects. Sensors. 21, 7662. doi: 10.3390/s21227662

Rybnikova, N., Portnov, B. A., Mirkes, E. M., Zinovyev, A., Brook, A., and Gorban, A. N. (2020). Coloring panchromatic nighttime satellite images: comparing the performance of several machine learning methods. arXiv:2008.09303. 1–68. doi: 10.48550/arXiv.2008.09303

Saggar, M., Quintin, E., Kienitz, E., Bott, N. T., Sun, Z., Hong, W., et al. (2015). Pictionary-based fMRI paradigm to study the neural correlates of spontaneous improvisation and figural creativity. Sci. Rep. 5, 10894. doi: 10.1038/srep10894

Schmidhuber, J. (1991). “A Possibility for implementing curiosity and boredom in model-building neural controllers,” in Proceedings of the first international conference on simulation of adaptive behavior on From animals to animats. (Paris: MIT Press) p. 222–227.

Schmidt, H. (2022). Mit dem Roboter Optimus will Tesla wieder einmal die Welt retten. Available online at: https://www.nzz.ch/mobilitaet/tesla-humanoidroboter-optimus-soll-millionenfach-verkauft-werden-ld.1705371 (accessed October 31, 2022).

Shakirov, V. V. (2022). Advances in Neural Computation, Machine Learning, and Cognitive Research VI. Studies in Computational Intelligence. Cham: Springer.

Stein-Perlman, Z., Weinstein-Raun, B., and Grace, K. (2022). Expert Survey on Progress in AI. AI Impacts. Available online at: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai (accessed December 7, 2022).

Streng, M. L., Popa, L. S., and Ebner, T. J. (2018). Complex Spike Wars: a New Hope. The Cerebellum. 17, 735–746. doi: 10.1007/s12311-018-0960-3

Sutton, O. J., Gorban, A. N., and Tyukin, I. Y. (2022). Towards a mathematical understanding of learning from few examples with nonlinear feature maps. arXiv. 1–18. doi: 10.48550/arXiv.2211.03607

Tyukin, I. Y., Higham, D. J., Woldegeorgis, E., and Gorban, A. N. (2021b). The feasibility and inevitability of stealth attacks. arXiv. 1–26. doi: 10.48550/arXiv.2106.13997

Tyukin, I. Y., Gorban, A. N., Alkhudaydi, M. H., and Zhou, Q. (2021a). Demystification of few-shot and one-shot learning. arXiv. 1–7. doi: 10.1109/IJCNN52387.2021.9534395

Tyukin, I. Y., Gorban, A. N., Calvo, C., Makarova, J., and Makarov, V. A. (2019). High-dimensional brain: a tool for encoding and rapid learning of memories by single neurons. Bull. Math. Biol. 81, 4856–4888. doi: 10.1007/s11538-018-0415-5

Tyukin, I. Y., Gorban, A. N., McEwan, A. A., Meshkinfamfard, S., and Tang, L. (2021c). Blessing of dimensionality at the edge and geometry of few-shot learning. Inf. Sci. 564, 124–143. doi: 10.1016/j.ins.2021.01.022

Tyukin, I. Y., Sutton, O., and Gorban, A. N. (2022). Learning from few examples with nonlinear feature maps. arXiv. doi: 10.48550/arXiv.2203.16935

Vaina, L. M., Solomon, J., Chowdhury, S., Sinha, P., and Belliveau, J. W. (2001). Functional neuroanatomy of biological motion perception in humans. PNAS Biol. Sci. 98, 11656–11661. doi: 10.1073/pnas.191374198

Willshaw, D. J., Dayan, P., and Morris, R. G. M. (2015). Memory, modelling and Marr: a commentary on Marr (1971) ‘Simple memory: a theory of archicortex’. Phil. Trans. R. Soc. B 370, 20140383. doi: 10.1098/rstb.2014.0383

Zador, A., Richards, B., Ölveczky, B., Escola, S., Bengio, Y., Boahen, K., et al. (2022). Toward next-generation artificial intelligence: catalyzing the neuroAI revolution. arXiv. 1–11. doi: 10.48550/arXiv.2210.08340

Zhang, B., Dreksler, N., Anderljung, M., Kahn, L., Giattino, C., Dafoe, A., and Horowitz, M. C. (2022). Forecasting AI progress: evidence from a survey of machine learning researchers. arXiv:2206.04132.

Zhou, Q., Gorban, A. N., Mirkes, E. M., Bac, J., Zinovyev, A., and Tyukin, I. Y. (2022). Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation. arXiv. doi: 10.1109/IJCNN55064.2022.9892337

Keywords: artificial intelligence, strong AI, learning to learn, real time learning, sample-efficient learning

Citation: Dunin-Barkowski W and Gorban A (2023) Editorial: Toward and beyond human-level AI, volume II. Front. Neurorobot. 16:1120167. doi: 10.3389/fnbot.2022.1120167

Received: 09 December 2022; Accepted: 13 December 2022;
Published: 06 January 2023.

Edited and reviewed by: Alois C. Knoll, Technical University of Munich, Germany

Copyright © 2023 Dunin-Barkowski and Gorban. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Witali Dunin-Barkowski, wldbar@gmail.com
