About this Research Topic
In this Article Collection, we invite authors to contribute research offering potentially viable solutions to the trust, safety, and security issues faced by ML methods. Examples include the adversarial robustness of ML systems across domains (e.g., adversarial attacks, defenses, and property verification) and robust representation learning (e.g., adversarial losses for learning embeddings), to name a few.
Topics of interest include but are not limited to:
- Adversarial attacks (e.g., evasion, poisoning, and inversion) and defenses
- Robustness certification and specification verification techniques
- Representation learning, knowledge discovery, and model generalizability
- Interplay between model robustness and model compression (e.g., network pruning and quantization)
- Robust optimization methods and (computational) game theory
- Explainable and fair machine learning models via adversarial learning techniques
- Privacy and security in machine learning systems
- Trustworthy machine learning
We welcome diverse article types, including Original Research, Review, and Perspective articles.
Keywords: safe machine learning, trustworthy machine learning, adversarial attack, property verification, robust representation, robustness certification, specification verification, model robustness, model compression, robust optimization, game theory, fair machine learning, privacy, security, adversarial learning
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.