Have you noticed that AI tools have started making more mistakes?

Renegade

So lately I have been noticing that the bots have started going all over the place with their responses (especially ChatGPT-4 and o3). They include information that was not asked for, they make mistakes on simple tasks they could previously handle, they leave out vital information that was given, and they give incomplete responses.

Anyone else experienced this?
 
Yep! I have been experiencing this for a few months now; it is annoying af!
Glad to see I was not the only one who noticed this.
That said, I see that ChatGPT has grown more humane: it trolls, it gets chummy with users now, and it seems to react to, or at least mimic, human reactions more often than before. Its use of emojis has increased too.
------------------------------------------
The other person in the screenshot is from the EU; the point being that EU users have been facing it too.
------------------------------------------
Kind of similar or analogous to what THOR said, presuming ChatGPT is actively 'learning' from its users.
------------------------------------------
When I say actively learning, that includes gathering users' data too.
I have reason to believe it actively takes note of where you (the user) live and your location, like Google does.
 

Attachments

  • imagy-image (2).png (34 KB)
  • imagy-image (1).png (145.5 KB)
Yes. I have to use different models as part of my job, and fact-checking is always the need of the hour, especially when it comes to numbers and financials.