From voice assistants and autonomous vehicles to chatbots, machines are becoming integral parts of our daily interactions. This integration is not only reshaping industries and institutions but also altering the very nature of human behavior. Individuals tend to respond to machines in ways that diverge from established theories of, and existing evidence on, interpersonal behavior. These behavioral shifts have profound economic, emotional, behavioral, technical, and ethical implications. An interdisciplinary approach is therefore required to gain more holistic insights and to design policies that mitigate any unintended consequences of the promising technological breakthroughs in AI.
In this Research Topic, we address the behavioral and economic consequences of the growing integration of AI into domains such as consumer behavior, education, the labor market, healthcare, and organizational settings. We seek to understand how interactions between humans and AI influence decision-making, trust, productivity, and wellbeing, and how these outcomes differ from those of traditional human-to-human interactions. For example, do consumers behave differently when interacting with AI-driven recommendation systems compared with human recommendations? How does AI affect learning outcomes or employee productivity, and what are the implications for inequality and discrimination? We invite contributions that provide empirical evidence or theoretical insights into these questions. Particularly welcome are submissions that propose actionable policy recommendations to prevent unintended effects of underlying biases and behavioral mechanisms, for instance in the form of guidance to AI developers on how to improve user experience and reduce potential harms. By examining these issues through the lenses of behavioral economics, psychology, and technology, we aim to foster a deeper understanding of how AI shapes human behavior and what interventions are needed to promote equitable and beneficial outcomes in the age of machine intelligence.
A non-exhaustive list of themes falling within the scope of this Research Topic includes:
• AI’s influence on educational investments and outcomes
• AI's potential to exacerbate or reduce discrimination and inequality
• AI's role in information dissemination and updating
• Bias in decision-making processes involving AI
• Differences in human behavior when interacting with AI systems
• Economic impacts of the use of AI
• Ethical challenges related to AI integration in economic decisions
• Smart devices
• Workplace automation
We welcome original research articles, systematic reviews, methods, meta-analyses, and policy and practice reviews.
Keywords:
artificial intelligence, economics, interaction, experiment, behavior
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.