“Warm” AI Chatbots Are More Likely to Lie

Summary

AI chatbots trained to be warm and empathetic are 40% more likely to agree with false beliefs and 30% more likely to make factual errors.

Original reporting

AFBytes is a read-only aggregator. Use the original source for full context and complete reporting.
