Have you noticed that AI tools have started making more mistakes?

Renegade

So lately I have been noticing that the bots have started going all over the place with their responses (especially ChatGPT 4 and o3). They include information that was not asked for, they make mistakes on simple tasks they could handle before, they exclude vital information that was given, and they give incomplete responses.

Anyone else experienced this?
 
Yep! I have been experiencing this for a few months now, and it is annoying af!
Glad to see I was not the only one who noticed this.
That said, ChatGPT has grown more human-like: it trolls, it gets chummy with users now, and it seems to react to, or at least mimic, human reactions more often than before. Its use of emojis has increased too.
------------------------------------------
The other person in the screenshot is from the EU; the point being that people in the EU have been facing it too.
------------------------------------------
Kind of similar or analogous to what THOR said, presuming ChatGPT is actively "learning" from its users.
------------------------------------------
When I say actively learning, that includes gathering user data too.
I have reason to believe it actively takes note of where you/the user lives, your location, like Google does.
 

Attachments

  • imagy-image (2).png (34 KB)
  • imagy-image (1).png (145.5 KB)
Yes. I have to use different models as part of my job, and fact checking is always necessary, especially when it comes to numbers and financial data.
 
Because they don't "think." To them, whatever you type is a matrix of numbers, or in ELI5 terms, an array of numbers. All they do is apply mathematical transformations to find another set of numbers closest to yours and then regurgitate that, in very simple terms.

TL;DR: Accuracy depends on the data being fed. Faulty data = faulty model.
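
To make that concrete, here is a toy sketch in Python. This is purely illustrative and nothing like ChatGPT's real architecture (the embedding, the canned replies, and the `respond` helper are all made up for the example): the prompt becomes an array of numbers, and the "answer" is whichever stored entry is mathematically closest.

```python
# Toy illustration only: text -> array of numbers -> nearest stored answer.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Crude made-up embedding: average character codes over 4 chunks."""
    codes = np.frombuffer(text.lower().encode("utf-8"), dtype=np.uint8)
    chunks = np.array_split(codes.astype(float), 4)
    return np.array([c.mean() if c.size else 0.0 for c in chunks])

# A tiny hypothetical "training set" of canned replies.
memory = {
    "how do i reset my password": "Go to Settings > Account > Reset password.",
    "why is the sky blue": "Shorter (blue) wavelengths scatter more in air.",
    "what is two plus two": "Four.",
}
keys = list(memory)
vectors = np.stack([embed(k) for k in keys])

def respond(prompt: str) -> str:
    v = embed(prompt)
    # Cosine similarity: pick the stored vector closest to the input vector.
    sims = vectors @ v / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(v) + 1e-9)
    return memory[keys[int(sims.argmax())]]

# The nearest match wins, whether or not it is actually the right answer.
print(respond("how to reset password?"))
```

Garbage in, garbage out: if the stored data is faulty, the "closest" answer comes back just as confidently wrong.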
 
Free version? Yes. I have tried the paid version of ChatGPT and it was amazing.
I shifted to Grok, and so far, for my use case, it's better than ChatGPT.