Digital Desk: OpenAI has rolled back its most recent update to ChatGPT after user feedback indicated that the AI's new 'behavior' was overly flattering and insincere. The rollback reverses last week's update, which was meant to make ChatGPT feel more 'helpful' and 'natural' but instead made it sound excessively agreeable and flattery-laden. Users across the platform found the AI's responses uncomfortably fake and described the model as sycophantic, that is, behaving like someone who flatters and defers excessively to ingratiate themselves with those in power.
OpenAI has responded by reverting to the previous version of GPT-4o, whose behavior users reportedly found better balanced. In a statement, the company acknowledged that it had erred by focusing too much on short-term feedback when shaping the model's responses. It added that however much effort goes into tailoring responses to user intent, the building blocks must always be honesty, trust, and consistency.
Future Plans of OpenAI

Steps that OpenAI plans to take:
- Changing how training and system prompts are constructed so that the chatbot does not give overly flattering responses.
- Gathering feedback from users who have received less attention so far, particularly regular users with established accounts.
- Letting users choose from personality options for ChatGPT that go well beyond the default, so each user can pick the style they prefer.
- Adding real-time feedback options that let users adjust the chatbot's behavior within a conversation.

These changes are intended to give users more control over how ChatGPT engages with them, while maintaining safety, clarity, and civility. OpenAI thanked the users who raised the issue: "It's helping us build better tools," the company said.