HPC systems are powerful tools that speed up discoveries, optimize processes, and solve complex problems that would otherwise be time-consuming or infeasible. However, they are not immune to significant challenges.
One challenge for HPC is to design and implement efficient and scalable architectures that can handle the increasing demands of data-intensive and parallel workloads. This challenge has become even harder to address with the end of Moore's law (the end of transistor scaling). In recent years, several emerging technologies have arisen to address it, such as (i) moving computation to the growing cloud, (ii) moving computation closer to where data is produced (edge computing), or (iii) using custom and specialized accelerators (e.g., FPGAs, CGRAs, or ASICs) to accelerate key applications (e.g., AI).
These technologies are interrelated and complementary, and they offer new opportunities and challenges for HPC research and development. Cloud computing, for instance, offers benefits such as flexibility, scalability, cost-effectiveness, and security; it can also provide access to specialized hardware resources that are not available on premises or in local data centers. In conjunction with cloud and edge computing, AI accelerators play a crucial role in supporting machine learning algorithms and deep neural networks. These specialized hardware devices are designed to accelerate AI workloads and their complex computations, significantly increasing the speed and efficiency of AI applications and enabling faster decision-making and better insights.
Reconfigurable computing adds another layer of flexibility to this ecosystem. It encompasses the use of field-programmable gate arrays (FPGAs) and other similar technologies, allowing hardware to be reprogrammed after manufacturing. FPGAs can be tailored to specific tasks rapidly and efficiently, making them ideal for demanding workloads. By harnessing the power of reconfigurable computing, organizations can optimize the performance of their AI accelerators while adapting to evolving computational needs.
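As a purely illustrative sketch (not a requirement or prescription of this Research Topic), the fragment below shows what a small FPGA kernel might look like in HLS-style C++, assuming a Vitis-HLS-like toolflow; the function name, interface choices, and pragmas are illustrative assumptions, and re-synthesizing the same source with different optimizations is one way the hardware can be "reprogrammed after manufacturing".

    // Illustrative HLS-style C++ kernel (assumes a Vitis-HLS-like toolflow);
    // names and pragmas are examples only, not a reference implementation.
    extern "C" void scale(const float *in, float *out, float alpha, int n) {
    #pragma HLS INTERFACE m_axi port=in  bundle=gmem0   // read input from off-chip memory
    #pragma HLS INTERFACE m_axi port=out bundle=gmem1   // write results to off-chip memory
      for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1                           // request one result per clock cycle
        out[i] = alpha * in[i];                         // simple element-wise scaling
      }
    }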
This article collection aims to provide a comprehensive overview of these technologies, the current state of the art, future trends, and their potential impacts on HPC systems and applications. We invite original research papers on topics related to these technologies, such as:
• Design principles and methods for cloud-based HPC architectures.
• Performance evaluation and optimization techniques for edge-based HPC systems.
• Hardware-software integration and interoperability issues for AI accelerator-based HPC platforms, or the use of neuromorphic architectures.
• Runtime environments and tools for reconfigurable-computing-based HPC solutions.
We expect this Research Topic will stimulate further research and innovation in the field of high-performance computing, and foster collaboration among researchers, practitioners, and stakeholders from academia, industry, government, and society.
Keywords:
High performance computing, cloud computing, edge computing, accelerators, reconfigurable computing
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.