In the past few decades, research in Machine Learning (ML) has exploded due to the overwhelming success of its methodologies on a wide variety of tasks, including playing Go, classification, image segmentation, and natural language processing, to name but a few. Nonetheless, there remains a significant gap between the success of machine learning methods in practice and our ability to understand them at a fundamental mathematical level.
This Research Topic is devoted to articles that provide insight into developing mathematical foundations for machine learning broadly. Rather than seeking new methods, we hope to (1) gain insight into current problems and methods, (2) find conceptual, understandable examples that elucidate the critical elements of machine learning’s success, (3) characterize why complexity and “big data” are essential to the success of neural networks, and (4) provide other results that bridge the gap between the utility of machine learning and the ability to describe mathematically how and why machine learning works. Authors are encouraged to make these articles readable by a wide audience by carefully defining common terms and emphasizing clarity of exposition.
Topics of interest include, but are not limited to, articles or review papers addressing:
· Approximation power of Neural Networks
· Structural analysis and design of Neural Networks
· Kernel methods in ML
· Signal Processing techniques in ML
· Interpretability of ML models
· Principles in scalable ML
· The role of optimization in ML