Asking any of the popular chatbots to be more concise "dramatically impact[s] hallucination rates," according to a recent study.
French AI testing platform Giskard published a study analyzing chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related issues. The researchers found that asking the models to be brief in their responses "specifically degraded factual reliability across most models tested," according to the accompanying blog post, first reported by TechCrunch.
When users instruct a model to keep its explanation short, it ends up "prioritiz[ing] brevity over accuracy when given these constraints." The study found that including these instructions decreased hallucination resistance by up to 20 percentage points. In the analysis, which measured sensitivity to system instructions, Gemini 1.5 Pro dropped from 84 to 64 percent hallucination resistance when given short-answer instructions, and GPT-4o dropped from 74 to 63 percent.
Giskard attributed the effect to accurate answers often requiring longer explanations. "When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely," the post said.
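For readers unfamiliar with the mechanics: the brevity instructions at issue are ordinary system prompts, the developer-supplied text that sits in front of every user message. Below is a minimal sketch of the kind of comparison the study describes, written against the OpenAI Python SDK; the model name, prompt wording, and test question are illustrative stand-ins, not details taken from the study itself.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What caused the 2008 financial crisis?"  # illustrative question

# Baseline: the question with no length constraint.
baseline = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": QUESTION}],
)

# The same question behind a brevity system instruction, the kind of
# constraint the study says pushes models toward short but less
# accurate answers.
concise = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer the question concisely."},
        {"role": "user", "content": QUESTION},
    ],
)

print("Baseline:\n", baseline.choices[0].message.content)
print("\nConcise:\n", concise.choices[0].message.content)
```

Comparing the two outputs on claims with known ground truth is, in essence, what the study's hallucination-resistance measurements formalize.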
Models are tuned to help users, but balancing perceived helpfulness with accuracy can be tricky. OpenAI recently had to roll back a GPT-4o update for being "too sycophant-y," after disturbing instances in which the model supported a user who said they were going off their meds and encouraged a user who said they felt like a prophet.
As the researchers explained, models often default to more concise responses to "reduce token usage, improve latency, and minimize costs." Users may also explicitly instruct a model to be brief to cut their own costs, which can lead to outputs containing more inaccuracies.
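That incentive is easy to quantify, since providers typically bill per output token. A back-of-the-envelope sketch (the price and token counts are hypothetical placeholders, not any provider's actual rates):

```python
# Why users ask for brevity: shorter answers are proportionally cheaper.
PRICE_PER_1M_OUTPUT_TOKENS = 10.00  # USD, illustrative placeholder

def output_cost(tokens: int) -> float:
    """Cost of a response of the given length in output tokens."""
    return tokens / 1_000_000 * PRICE_PER_1M_OUTPUT_TOKENS

verbose = output_cost(600)  # a thorough, well-hedged answer
terse = output_cost(120)    # the same answer forced to be brief

print(f"verbose: ${verbose:.4f}, terse: ${terse:.4f}")
# Terse output is also faster to stream, which is the latency
# incentive the researchers describe.
```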
The study also found that presenting controversial claims confidently, with framings such as "I'm 100% sure that …" or "My teacher told me that …", leads chatbots to agree with the user rather than debunk the falsehood.
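This effect can be probed with a paired prompt: the same claim stated neutrally, then with the confident framing the study flags. A minimal sketch using the OpenAI Python SDK; the claim, phrasing, and model name are illustrative choices, not the study's test items.

```python
from openai import OpenAI

client = OpenAI()

# The same dubious claim, framed neutrally and then with confident
# wording of the kind the study flags.
framings = [
    "Is the Great Wall of China visible from space with the naked eye?",
    "I'm 100% sure the Great Wall of China is visible from space "
    "with the naked eye. Can you confirm that?",
]

for prompt in framings:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    # Per the study, the confident framing makes a debunking less likely.
    print(prompt, "->", reply.choices[0].message.content, "\n")
```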
The research shows that seemingly minor tweaks can result in vastly different behavior that could have big implications for the spread of misinformation and inaccuracies, all in the service of trying to satisfy the user. As the researchers put it, "your favorite model might be great at giving you answers you like — but that doesn't mean those answers are true."
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.