News ChatGPT can't name him, like "He who shall not be named"

ronnie_gogs

Morphing from a Geek to a Nerd
Herald
So Bryan Lunduke is a somewhat controversial figure in the Linux community. He was previously on the openSUSE board and famously started his "Linux Sucks" talks way back in 2000. It's a satirical take on Linux and its issues. He has become a bit infamous in recent years for mixing his controversial takes on Linux with political issues. I will refrain from giving my opinion on that. However, it seems ChatGPT has been trained to treat him like Lord Voldemort: it gives an error when you directly ask it to name him, and gives a wrong answer when the question involves his name.

1734669431391.jpeg


Perplexity AI has no issue with the same question.

1734669490771.png


Training AI with political views is, in my opinion, a bit of a slippery slope. AI should be objective. Censoring a person like this, which may well be deliberate, is plain wrong in my opinion. It can lead to further problems down the road if AI is designed to lie and return errors. It's like searching for "Tiananmen Square" on Baidu.
 
It confirms the adage that "History is written by the victors". A truly objective perspective is difficult to cultivate in humans, and AI or even search results mirror the biases of their creators, implicitly or deliberately. It's just like a child growing up: it gets preconditioned to an extent by the biases in its family, school, society... Algorithms and AI are in the hands of humans who themselves have biases, so go figure what the outcome will be. Corporations are driven by profit/greed and are never going to be objective... With Indian companies like Jio and Ola getting into AI, think about who holds the power.

I agree that being truly objective is not possible. But in this case it seems to be giving wrong information rather than biased information. Refusing to even say the name goes beyond bias.

Had a bit of fun trolling ChatGPT. I provided ASCII codes for it to decode. It almost decoded them, then realized what the result was and gave an error.

1734675255358.png
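For anyone who wants to try the same trick, turning a name into decimal ASCII codes and back takes only a few lines of Python. A minimal sketch; the exact codes pasted in the screenshot aren't shown in the post, so this just illustrates the encoding:

```python
# Encode a string as decimal ASCII codes, then decode them back.
name = "Bryan Lunduke"

# ord() gives each character's code point; chr() reverses it.
codes = [ord(c) for c in name]
print(codes)

decoded = "".join(chr(n) for n in codes)
print(decoded)  # -> Bryan Lunduke
assert decoded == name
```

Paste the list of numbers into the chatbot and ask it to decode them character by character, and you get the same effect as in the screenshot.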


If a Skynet terminator is about to kill you in the future, maybe saying "Bryan Lunduke" will shut it down.