Human computation and crowdsourcing methods bring together people with artificial intelligence (AI) and machine learning (ML) algorithms to solve “AI-hard” tasks in “the last mile” that remain beyond the capabilities of state-of-the-art AI and ML systems. This includes annotating data to train AI/ML systems and building hybrid human-in-the-loop systems that call on people at run time to perform computational tasks. Through collective intelligence, the “wisdom of crowds”, and crowd computing, groups of people and/or systems can collectively solve even harder problems, beyond the ability of any individual human or system.
This field of research is distinctive in the diversity of disciplines it draws upon and contributes to, including human-centered qualitative studies and HCI design, social computing, artificial intelligence, economics, computational social science, digital humanities, policy, and ethics. Our community promotes the exchange of advances in the state of the art and best practices not only among researchers but also among engineers and practitioners, encouraging dialogue across disciplines and communities of practice. Submissions may cover theory, studies, tools, and applications that present novel, interesting, and impactful interactions between people and computational systems, spanning a broad range of scenarios across human computation, the wisdom of crowds, crowdsourcing, and people-centric AI methods, systems, and applications.
We especially encourage work that generates new insights into the collaboration and interaction between humans and AI: work that deepens our understanding of hybrid human-in-the-loop and algorithm-in-the-loop systems and of human-AI interaction; algorithmic and interface techniques for augmenting human abilities with AI systems; and issues that affect how humans collaborate and interact with AI systems, such as bias, interpretability, usability, and trustworthiness. We welcome both system-centered and human-centered approaches to human+AI systems, considering humans as users and stakeholders or as active contributors and an integral part of the system.
This Research Topic also welcomes contributions presented at the AAAI Conference on Human Computation and Crowdsourcing (AAAI HCOMP) in the form of extended papers.
Topics of interest include (but are not limited to):
- Crowdsourcing applications and techniques, such as citizen science, collective action, collective intelligence, the wisdom of crowds, crowdsourcing contests, crowd creativity, crowdfunding, paid microtasks, crowd ideation, crowd sensing, and prediction markets.
- Techniques that enable and enhance human-in-the-loop systems, making them more efficient, accurate, and human-friendly, including task design, quality assurance, answer inference, biases and subjectivity, incentives, gamification, task allocation, complex workflows, and real-time crowdsourcing.
- Studies of how people perform tasks individually, in groups, or as a crowd, including those drawing on techniques from human-computer interaction, social computing, computer-supported cooperative work, design, the cognitive sciences, the behavioral sciences, and economics.
- Studies that inform our understanding about the future of work, distributed work, the freelancer economy, open innovation, and citizen-led innovation.
- Approaches to making crowd science FAIR (Findable, Accessible, Interoperable, Reusable) and studies assessing and commenting on the FAIRness of human computation and crowdsourcing practice.
- Studies into fairness, accountability, transparency, ethics, and policy implications for crowdsourcing and human computation.
- Studies into the replicability of crowdsourcing and human computation experiments.
- Studies of how people and intelligent systems interact and collaborate with each other, and studies revealing the influence and impact of intelligent systems on society.
- Methods that use human computation and crowdsourcing to build people-centric AI systems and applications, including topics such as reliability, interpretability, usability, and trustworthiness.
- Studies into the reliability and other quality aspects of human-annotated and -curated datasets, especially for AI systems.
- Conversational interfaces for human-AI collaboration.
- Crowdsourcing studies into the socio-technical aspects of AI systems, such as privacy, bias, and trust.