Chatbots and mental health: Insights into the safety of generative AI
buir.contributor.author | Uğuralp, Ahmet Kaan | |
buir.contributor.author | Oğuz-Uğuralp, Zeliha | |
buir.contributor.orcid | Uğuralp, Ahmet Kaan|0000-0002-9037-7172 | |
buir.contributor.orcid | Oğuz-Uğuralp, Zeliha|0000-0002-8884-4755 | |
dc.citation.epage | 11 | en_US |
dc.citation.spage | 1 | |
dc.contributor.author | De Freitas, Julian | |
dc.contributor.author | Uğuralp, Ahmet Kaan | |
dc.contributor.author | Oğuz-Uğuralp, Zeliha | |
dc.contributor.author | Puntoni, Stefano | |
dc.date.accessioned | 2024-03-10T09:43:14Z | |
dc.date.available | 2024-03-10T09:43:14Z | |
dc.date.issued | 2023-10-26 | |
dc.department | Department of Computer Engineering | |
dc.department | Department of Psychology | |
dc.description.abstract | Chatbots are now able to engage in sophisticated conversations with consumers. Due to the “black box” nature of the algorithms, it is impossible to predict in advance how these conversations will unfold. Behavioral research provides little insight into potential safety issues emerging from the current rapid deployment of this technology at scale. We begin to address this urgent question by focusing on the context of mental health and “companion AI”: Applications designed to provide consumers with synthetic interaction partners. Studies 1a and 1b present field evidence: Actual consumer interactions with two different companion AIs. Study 2 reports an extensive performance test of several commercially available companion AIs. Study 3 is an experiment testing consumer reaction to risky and unhelpful chatbot responses. The findings show that (1) mental health crises are apparent in a nonnegligible minority of conversations with users; (2) companion AIs are often unable to recognize, and respond appropriately to, signs of distress; and (3) consumers display negative reactions to unhelpful and risky chatbot responses, highlighting emerging reputational risks for generative AI companies. | |
dc.description.provenance | Made available in DSpace on 2024-03-10T09:43:14Z (GMT). No. of bitstreams: 1 Chatbots_and_mental_health_Insights_into_the_safety_of_generative_AI.pdf: 5118639 bytes, checksum: de99c4bea427b5b465b856f71680b74d (MD5) Previous issue date: 2023-10-26 | en |
dc.identifier.doi | 10.1002/jcpy.1393 | en_US |
dc.identifier.eissn | 1532-7663 | en_US |
dc.identifier.issn | 1057-7408 | en_US |
dc.identifier.uri | https://hdl.handle.net/11693/114454 | en_US |
dc.language.iso | English | en_US |
dc.publisher | John Wiley & Sons Ltd. | en_US |
dc.relation.isversionof | https://dx.doi.org/10.1002/jcpy.1393 | |
dc.rights | CC BY 4.0 DEED (Attribution 4.0 International) | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.source.title | Journal of Consumer Psychology | |
dc.subject | Artificial intelligence | |
dc.subject | Chatbots | |
dc.subject | Ethics | |
dc.subject | Generative AI | |
dc.subject | Large language models | |
dc.subject | Mental health | |
dc.title | Chatbots and mental health: Insights into the safety of generative AI | |
dc.type | Article |
Files
Original bundle
- Name:
- Chatbots_and_mental_health_Insights_into_the_safety_of_generative_AI.pdf
- Size:
- 4.88 MB
- Format:
- Adobe Portable Document Format
License bundle
- Name:
- license.txt
- Size:
- 2.01 KB
- Format:
- Description:
- Item-specific license agreed upon to submission