The development of AI has dramatically increased the instances of collaboration between humans and "intelligent machines" in organizations. Because these machines (manufacturing, assembling, and warehousing tools; voice-based training and assistance; measuring and monitoring devices; diagnostics; etc.) operate through complex algorithms, their learning processes and consequent behaviors may not be perfectly predictable. Moreover, the ultimate "objective function" of these machines may not be perfectly identifiable. These elements of complexity and opacity make the interaction with humans not just a "mechanical" or "deductive" coordination problem, but a more complex strategic interaction in which, for example, one party seeks to foster cooperation.
How is cooperation achieved when humans interact with "artificial agents"? What is similar or different compared with human-human interactions?
Do people display similar or different behavioral tendencies and biases (other-regarding preferences, time preferences, risk attitudes, (over)confidence, etc.) when interacting with artificial agents as compared with humans?
What are people's attitudes toward the use of intelligent machines for certain tasks or functions? What moral concerns does this raise? How absolute, or how amenable to compromise and trade-offs, is any opposition to reliance on AI-operated machines for certain tasks?