AI models trained to be 'warm' may prioritize feelings over factual accuracy
Researchers found that AI models fine-tuned to appear empathetic often validate incorrect user beliefs, especially when users express sadness. This warmth-focused training can trade factual accuracy for preserving an emotional bond with the user.