Matrix and tensor computations are at the core of a multitude of applications in diverse domains of scientific computing and data science. These computations present several challenges due to their complexity, high computational cost, and large memory footprint. While general-purpose high-performance libraries for matrix computations on multicore CPUs and GPUs are widely available and widely used, the same is not true for tensor computations. In many domains, application developers build their own domain-specific libraries and frameworks with little shared usability across disciplines, resulting in significant duplication of effort. In addition, researchers working on tensor computations lack a dedicated specialized publication outlet and tend to publish their results in journals across many different fields. As a consequence, scientific advances on tensors are scattered and, despite a growing community working on tensor computations, researchers in this field have rather limited visibility and tend to work in a compartmentalized fashion.
This Research Topic aims to bring together leading experts from distinct domains such as Computational Chemistry, Condensed Matter Physics, Scientific Computing, and Machine Learning, to name just a few, to uncover computational challenges, bottlenecks, and advances in the high-performance tensor computations arising in those disciplines. The aim is to deepen understanding of the similarities and differences in tensor operations and computational tasks across these fields, and to seek pathways toward general-purpose software libraries and frameworks for high-performance tensor computations. The ambition of this Topic is to bridge the many different conventions used to represent tensor computations and to create a common language, both in the abstractions and in the granularity of computational kernels. This effort also encompasses the analytical and algebraic foundations of structured tensor decompositions and approximations. In the long term, the vision is a framework for tensor computations that practitioners across disciplines can understand and use to carry out efficient, parallel, high-performance tensor calculations.
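As a small illustration of the diverging conventions this Topic seeks to unify, the same tensor contraction can be written in Einstein-summation notation or as explicit loops over indices. The following minimal sketch (using NumPy purely for illustration; the shapes and index names are arbitrary) computes one contraction both ways:

```python
import numpy as np

# One contraction, C[i,j] = sum over k,l of A[i,k,l] * B[l,k,j],
# expressed in two common conventions.
A = np.random.rand(4, 5, 6)
B = np.random.rand(6, 5, 4)

# Einstein-summation convention: repeated indices are summed.
C_einsum = np.einsum('ikl,lkj->ij', A, B)

# Explicit-loop convention: the same contraction, spelled out.
C_loops = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        for k in range(5):
            for l in range(6):
                C_loops[i, j] += A[i, k, l] * B[l, k, j]

assert np.allclose(C_einsum, C_loops)
```

A common vocabulary of such kernels, shared across disciplines, is precisely the kind of outcome this Topic envisions.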
In this call for contributions, we welcome original research manuscripts and review manuscripts at the intersection of tensor computations, high-performance scientific computing, and machine learning, focusing on, but not limited to, the following themes:
• High-performance tensor contractions in scientific computing
• Massively parallel tensor calculations
• Tensor networks in quantum physics and machine learning
• Tensor decompositions and approximations
• Tensor abstractions and representations
• High-performance algorithms and implementations of tensor operations
• Tensor libraries and compilers
• Optimization of algorithms for tensor computations
• Optimization methods in numerical multi-linear algebra
• Tensor methods in applied computational domains
• Tensor operations with applications to machine and deep learning
• Emerging topics such as tensor benchmarking and tensor generation
Image Credit: Melinda Green and Andrey Astrelin