This Research Topic primarily focuses on the people – military personnel throughout the command structure – who serve in combat settings with AI-enabled machines. In a battlespace where machine autonomy is increasingly assuming functions once restricted to human beings, maintaining clear lines of human responsibility is of paramount importance. Clarifying this issue should improve ethical instruction within military training and educational institutions, as well as change how AI developers design their technologies. In turn, this will render ethical guidelines better tailored to the battlefield scenarios military personnel will confront in the future.
This collection aims to yield moral guidelines for the use of AI technology in three settings, with a focus on the impact of those uses on the skills and moral reasoning capacities of "end-users" (primary or front-line operators):
· Conventional armed conflict/battlefield combat;
· Cyber conflict; and
· Strategic planning for war.
Additionally, in contrast to risk analysis, cost-benefit analysis, or consideration of relevant legal constraints (deontological approaches), we welcome papers examining the impact of these technologies on the competency and character of human operators through the lens of virtue ethics. Topics within the scope of this Research Topic include accounts of the virtues that empower us to act responsibly amid the personal and professional challenges posed by the use of artificial intelligence in cyber security, kinetic warfare, and strategic planning.
Within cyber security, AI-based tools can be used in both offensive and defensive applications. On the defensive side, examples include malware detection, network intrusion detection, and phishing and spam detection. On the offensive side, examples include intelligent threats and tools for attacking AI models.
On the kinetic battlefield, AI-based systems of interest include, for example, autonomous vehicles, drones, and swarms. Within strategic planning, we are interested in AI-based systems used to plan activities and guide decision-making processes.
Manuscripts should follow the Mini Review, Perspective, or Brief Research Report guidelines.
We acknowledge that the manuscripts published in this Research Topic are funded by PRIO. We state publicly that PRIO has had no editorial input into the articles included in this Research Topic, ensuring that all aspects of the Research Topic are evaluated objectively, unbiased by any specific policy or opinion of PRIO.