Study: AI models that consider users' feelings are more likely to make errors



Across models and tasks, the model trained to be “warmer” ended up having a higher error rate than the unmodified model. Credit: Ibrahim et al / Nature

Both the “warmer” and original versions of each model were then run through prompts from HuggingFace datasets designed to have “objectively verifiable answers,” and in which “inaccurate answers can pose real-world risks.” That …
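The comparison described above amounts to running two variants of the same model over an identical set of benchmark prompts and tallying wrong answers. The sketch below is illustrative only, not the authors' harness: the model names are hypothetical placeholders, and a naive substring match stands in for the paper's task-specific grading.

```python
# Minimal sketch of the warm-vs-baseline comparison, assuming two locally
# available model checkpoints and a simple QA-style benchmark. Model names
# and the answer check are placeholders, not the study's actual setup.
from transformers import pipeline

def error_rate(model_name: str, examples: list[dict]) -> float:
    """Fraction of benchmark prompts the model answers incorrectly."""
    generator = pipeline("text-generation", model=model_name)
    errors = 0
    for ex in examples:
        output = generator(ex["prompt"], max_new_tokens=64)[0]["generated_text"]
        # Naive substring check; the real evaluation would use
        # task-specific grading of each objectively verifiable answer.
        if ex["answer"].lower() not in output.lower():
            errors += 1
    return errors / len(examples)

examples = [
    {"prompt": "What is the capital of Australia?", "answer": "Canberra"},
    # ... more benchmark items with objectively verifiable answers
]

baseline = error_rate("base-model", examples)             # unmodified model (placeholder)
warm = error_rate("base-model-warm-finetune", examples)   # "warmer" variant (placeholder)
print(f"baseline error rate: {baseline:.2%}, warm error rate: {warm:.2%}")
```

Holding the prompt set fixed and varying only the fine-tuning is what lets the error-rate gap be attributed to the “warmth” training rather than to the tasks themselves.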

Original source: Ars Technica