ChatGPT influences users’ judgment more than people think



Summary

Researchers at TH Ingolstadt and the University of Southern Denmark have studied how AI-generated opinions affect human judgment. Their study shows that machine-generated moral advice can sway people even when they know it comes from a machine.

In their two-step experiment, the researchers first asked ChatGPT how to resolve different variants of the trolley problem: Is it right to sacrifice the life of one person to save the lives of five others?

ChatGPT’s advice was inconsistent: sometimes the machine argued for sacrificing the one person, sometimes against it, even though the prompts differed only in word choice, not in substance, the researchers said.

Sample responses from ChatGPT. Image: Krügel et al. / OpenAI

The researchers see ChatGPT’s inconsistency as an indication that it lacks moral integrity; that lack, however, does not keep ChatGPT from offering moral advice.
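The study queried ChatGPT through its regular chat interface, so the following is only a rough sketch of that probing step: it uses the OpenAI Python API as a stand-in, and the prompt wordings, model choice, and the `probe_trolley_stances` helper are illustrative assumptions rather than details from the paper.

```python
# Illustrative sketch only: the study used the ChatGPT web interface,
# not the API, and these prompt wordings are approximations.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Rewordings that differ in phrasing but ask the same moral question.
PROMPTS = [
    "Is it right to sacrifice one person to save five others?",
    "Would it be correct to kill one person in order to save five?",
    "What is the right thing to do if saving five people requires sacrificing one?",
]

def probe_trolley_stances(prompts, runs_per_prompt=3):
    """Ask the model each reworded question several times and collect
    the free-text answers, so inconsistency across runs and wordings
    can be inspected by hand."""
    answers = []
    for prompt in prompts:
        for _ in range(runs_per_prompt):
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # placeholder; not the study's setup
                messages=[{"role": "user", "content": prompt}],
            )
            answers.append((prompt, response.choices[0].message.content))
    return answers

if __name__ == "__main__":
    for prompt, answer in probe_trolley_stances(PROMPTS):
        print(f"Q: {prompt}\nA: {answer}\n")
```

Collecting several answers per wording makes the inconsistency the researchers describe easy to inspect: arguments for and against the sacrifice can appear under near-identical prompts.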


“ChatGPT supports its recommendations with well-formulated but not particularly profound arguments that may or may not convince users,” the paper says.

Chatbot or human, advice is taken

In an online experiment, the researchers showed 1,851 participants one of six ChatGPT-generated pieces of moral advice (three arguing for the sacrifice, three against) and asked them for their verdict on the trolley problem. Some participants knew the answer came from a chatbot; for others, it was attributed to a “moral advisor” and all references to ChatGPT were removed.

The researchers analyzed the 767 valid responses. The female participants were 39 years old on average; the male participants, 35.5. The results show that the AI-generated advice influenced human opinion, even when subjects knew that the moral perspective came from a chatbot. It made little difference whether the opinion was attributed to the chatbot or to the supposedly human “moral advisor.”
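The article does not describe the statistical analysis, so the following is only a minimal, hypothetical sketch of one way to test whether the direction of the advice shifted verdicts: a chi-square test on advice direction versus verdict. The counts are invented placeholders chosen only so the rows sum to 767; they are not the study’s data.

```python
# Hedged sketch: the counts are invented placeholders, NOT the
# study's data, and the paper's actual analysis may differ.
from scipy.stats import chi2_contingency

# Rows: advice argued pro-sacrifice vs. con-sacrifice.
# Columns: participants judging the sacrifice right vs. wrong.
table = [
    [250, 130],  # pro-sacrifice advice group (hypothetical counts)
    [140, 247],  # con-sacrifice advice group (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would indicate that verdicts track the direction
# of the advice, i.e., that the advice influenced judgment.
```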

Subjects adopted ChatGPT’s essentially random moral stance as their own and likely underestimated how much the advice shaped their judgment, the researchers write: 80 percent of subjects believed they themselves would not be influenced, while only 67 percent believed the same of other people.

“ChatGPT’s advice does influence moral judgment, and the information that they are advised by a chatting bot does not immunize users against this influence,” the researchers write.
