Chatbots and mental health: Insights into the safety of generative AI

buir.contributor.author: Uğuralp, Ahmet Kaan
buir.contributor.author: Oğuz-Uğuralp, Zeliha
buir.contributor.orcid: Uğuralp, Ahmet Kaan | 0000-0002-9037-7172
buir.contributor.orcid: Oğuz-Uğuralp, Zeliha | 0000-0002-8884-4755
dc.citation.epage: 11
dc.citation.spage: 1
dc.contributor.author: De Freitas, Julian
dc.contributor.author: Uğuralp, Ahmet Kaan
dc.contributor.author: Oğuz-Uğuralp, Zeliha
dc.contributor.author: Puntoni, Stefano
dc.date.accessioned: 2024-03-10T09:43:14Z
dc.date.available: 2024-03-10T09:43:14Z
dc.date.issued: 2023-10-26
dc.department: Department of Computer Engineering
dc.department: Department of Psychology
dc.description.abstract: Chatbots are now able to engage in sophisticated conversations with consumers. Due to the “black box” nature of the algorithms, it is impossible to predict in advance how these conversations will unfold. Behavioral research provides little insight into potential safety issues emerging from the current rapid deployment of this technology at scale. We begin to address this urgent question by focusing on the context of mental health and “companion AI”: applications designed to provide consumers with synthetic interaction partners. Studies 1a and 1b present field evidence: actual consumer interactions with two different companion AIs. Study 2 reports an extensive performance test of several commercially available companion AIs. Study 3 is an experiment testing consumer reactions to risky and unhelpful chatbot responses. The findings show that (1) mental health crises are apparent in a nonnegligible minority of conversations with users; (2) companion AIs are often unable to recognize, and respond appropriately to, signs of distress; and (3) consumers display negative reactions to unhelpful and risky chatbot responses, highlighting emerging reputational risks for generative AI companies.
dc.description.provenance: Made available in DSpace on 2024-03-10T09:43:14Z (GMT). No. of bitstreams: 1. Chatbots_and_mental_health_Insights_into_the_safety_of_generative_AI.pdf: 5118639 bytes, checksum: de99c4bea427b5b465b856f71680b74d (MD5). Previous issue date: 2023-10-26
dc.identifier.doi: 10.1002/jcpy.1393
dc.identifier.eissn: 1532-7663
dc.identifier.issn: 1057-7408
dc.identifier.uri: https://hdl.handle.net/11693/114454
dc.language.iso: English
dc.publisher: John Wiley & Sons Ltd.
dc.relation.isversionof: https://dx.doi.org/10.1002/jcpy.1393
dc.rights: CC BY 4.0 DEED (Attribution 4.0 International)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source.title: Journal of Consumer Psychology
dc.subject: Artificial intelligence
dc.subject: Chatbots
dc.subject: Ethics
dc.subject: Generative AI
dc.subject: Large language models
dc.subject: Mental health
dc.title: Chatbots and mental health: Insights into the safety of generative AI
dc.type: Article

Files

Original bundle

Name: Chatbots_and_mental_health_Insights_into_the_safety_of_generative_AI.pdf
Size: 4.88 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.01 KB
Format: Item-specific license agreed upon to submission