Machine learning (ML) provides incredible opportunities to answer some of the most important and difficult questions in a wide range of applications. However, ML systems often face a major challenge when applied in the real world: the conditions under which a system is deployed can differ from those under which it was developed. Recent examples have shown that ML methods are highly susceptible to minor changes in image orientation, minute amounts of adversarial corruption, or bias in the data. This susceptibility to test-time shift is a major hurdle to the broad acceptance of ML solutions in high-regret applications.
In this Article Collection, we encourage authors to contribute research that provides potentially viable solutions to the trust, safety, and security issues faced by ML methods. Examples include adversarial robustness of ML systems across domains (e.g., adversarial attacks, defenses, and property verification) and robust representation learning (e.g., adversarial losses for learning embeddings).
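To make the two ideas above concrete, the sketch below shows a minimal, purely illustrative example, assuming a PyTorch setting: an FGSM-style perturbation (a minute adversarial corruption of the input) and a training step that mixes a clean loss with an adversarial loss. The model, data, and hyperparameters are hypothetical placeholders, not methods prescribed by this collection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Craft a small adversarial perturbation with the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A minute, bounded change to the input that can flip the prediction.
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03, alpha=0.5):
    """One training step combining clean and adversarial losses (illustrative only)."""
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = alpha * F.cross_entropy(model(x), y) + \
           (1 - alpha) * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a hypothetical linear classifier on random data.
model = nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(adversarial_training_step(model, optimizer, x, y))
```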
Topics of interest include but are not limited to:
- Adversarial attacks (e.g., evasion, poisoning, and inversion) and defenses
- Robustness certification and specification verification techniques
- Representation learning, knowledge discovery and model generalizability
- Interplay between model robustness and model compression (e.g., network pruning and quantization)
- Robust optimization methods and (computational) game theory
- Explainable and fair machine learning models via adversarial learning techniques
- Privacy and security in machine learning systems
- Trustworthy machine learning
We welcome diverse article types, including Original Research, Review, and Perspective articles.