Efficient artificial intelligence (AI) is increasingly essential for high energy physics and particle astrophysics applications, especially multi-messenger astronomy, often in the context of accelerated AI for systems with real-time, low-latency, low-memory, or low-power requirements. AI efficiency may be quantified in different ways depending on the context: fewer model parameters, fewer operations during training or inference, or greater utilization of neural network layers. Methods for improving efficiency include, but are not limited to:
• utilization of specialized hardware
• custom neural network structures
• parameter pruning
• parameter quantization
• efficiency-aware training
• knowledge distillation
• physics-inspired models
• embedded symmetries or equivariance.
In this Research Topic, we are interested in case studies, applications, and new approaches exploring efficient AI in high energy physics and particle astrophysics, including computational, data, and conceptual efficiency.