Over the last year, ChatGPT has been used by a growing number of people looking for answers to a wide variety of questions in fields such as education, science, law, finance, marketing, public administration, health and leisure. Since it became available to the general public, a steady flow of scientific studies has been published. Many of these studies use a SWOT analysis to describe its strengths, weaknesses, opportunities and threats. Others have analysed chatbots' conversational characteristics or have paid attention to ethical questions.
However, there are still almost no empirical studies focusing on the impact of this new digital tool on people. We do not know whether people find the responses of a chatbot such as ChatGPT useful (or not), how they then act upon those responses, and to what extent they perceive a chatbot as their friend or foe.
We welcome empirical contributions focusing on, but not limited to, the following questions:
● How useful are chatbots, such as ChatGPT, Google Bard and/or Perplexity AI, in the eyes of their users?
● For which tasks do people use chatbots, and how satisfied are they with their responses?
● How difficult is it to use the right prompts to get a specific response?
● To what extent do users trust chatbots?
● Are chatbots perceived as more or less reliable than human experts?
● How satisfied are users with chatbots' responses?
● What is the role of age, gender and/or education in user satisfaction?
● How do people act upon chatbots' responses?
● To what extent are there differences between users in various countries?
Keywords:
AI, LLMs, Chatbots, ChatGPT, Google Bard, Perplexity AI, Reliability
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.