
We all make mistakes, whether we're beginners, experts, or even artificial intelligences. To err is human, and sometimes ChatGPT, the natural language processing tool, seems to take this to heart.

The “ChatGPT Gone Wild” X page shares hilarious examples of AI proving, once again, that it is not quite ready to take over the world. So get comfortable as you scroll through these amusing examples of an AI being confidently incorrect, upvote your favorites, and share your thoughts in the comments section below.

More info: X

Since its debut in 2022, ChatGPT has been the subject of a significant amount of speculation. While some are still dazzled by a tool with such a breadth of uses, it didn't take long for people to realize that it still makes all sorts of hilarious mistakes, often without the slightest indication that something is off.

AI researchers call these sorts of mistakes “hallucinations”: the system has incorrectly put some pieces of information together and presented them, quite confidently, as fact. Since ChatGPT “learns” by trawling the internet for text, it makes sense that the nuances of written data, and the very relevant fact that people are sometimes wrong, end up affecting its “judgment.”

#6

Image credits: ChatGPTGoneWild

v:
At the dawn of the WWW in the 90s the internet held so much promise and it took years for it to spiral down to what it now is. Here we are at the dawn of AI and it's already reached the level of the internet.


This is a bigger issue than one might think at first. It's also why many artists and writers see AI as mostly just a big “plagiarism” machine, since it can only take and modify existing work. If ChatGPT or its image-making counterpart, Midjourney, were students with you at an art school, their work would quickly be dismissed as derivative.

#8

Image credits: ChatGPTGoneWild

grotesqueer:
That's actually kinda creepy. I would rather have AI to answer the question rather than do what the picture says it to do. "It's a picture of a written note saying '[the note]'" would have been so much nicer. e/ I read the user's question inattentively. I can't decide if them asking the AI to tell what it says makes this more or less creepy. On one hand, that's what the note says, on the other that's only one part of it and leaves it ambiguous if the AI answered the question or obeyed the picture.


Even worse, people aren't just often wrong or dishonest; they can be deeply bigoted and prejudiced. These same people put their thoughts into writing, and ChatGPT, without really understanding any of it, adds those combinations of letters, words, and ideas to its repertoire. In one case, for example, ChatGPT created song lyrics that insinuated non-white scientists were inferior to white, male scientists.


Of course, in a way, this is pretty endearing. After all, who among us hasn't made woefully incorrect statements about, well, anything? Plus, standing by bad information and being confidently incorrect are such deeply human traits that it's almost quaint to see them in what is supposed to be a piece of science fiction technology.

#13

Image credits: ChatGPTGoneWild

Zedrapazia:
The Bing AI doesn't have that actually. If you ask it to draw Spiderman, it will do it

#14

Image credits: ChatGPTGoneWild

Mark:
These are all amazing and now I will print tiny pieces, find an anthill and cover it in these


It's important to remember that, unlike most humans, ChatGPT can never actually understand the context of what it is saying. In that regard, it is like a “stochastic parrot,” a concept coined by American linguist Emily M. Bender. Like a parrot repeating phrases, an AI simply learns that certain combinations of words should lead to certain answers, without any real analysis. The posts here are a good example of this.

#16

Image credits: ChatGPTGoneWild

lily jones:
Ar ayen cubnueill's igplm aπ ont5, ibocbjiabu hij btfhav1 rochr1 in AI language written underneath the big text - which translates to - AI are taking over the world, this is your warning.

#17

Image credits: ChatGPTGoneWild

Rebelliousslug:
Yes, you should gamble really made me laugh this morning. Just changed my plans for the day.


Indeed, some readers might notice how these user conversations with ChatGPT somewhat resemble communicating with a pet. It often gets the gist of what you want from it, without knowing why. “Close enough” works most of the time, but there are plenty of examples of it being quite off, particularly when you ask it more serious questions.

#21

Image credits: ChatGPTGoneWild

General Anaesthesia:
Thank you, BP, for cutting that bull dung short. ChatGPT's complete answer: "In today's dynamic, fast-paced, and ever-evolving business ecosystem, it's more imperative than ever to synergize and leverage cutting-edge paradigms. As we pivot and iterate through the transformative phases of strategic alignments, it's crucial to unpack the value propositions and harness the disruptive innovation. Let's continue to dialogue, collaborate, and deep dive into the blue-sky thinking that will empower our next-generation milestones. Together, we'll be at the forefront of paradigm shifts, actualizing potentialities for a brighter tomorrow. #ThoughtLeadership #StrategicSynergy #1nnovateTogether"

#22

Image credits: ChatGPTGoneWild

Lexekon:
All letters in the alphabet have already been written, as well as all individual numbers. Everything we write is copied from these, in some form.

#24

Image credits: ChatGPTGoneWild

Legen ( wait for it ) dary:
I wish internet would answer that and not give you the worst case scenarios just based in 1 symptom.

#29

Image credits: ChatGPTGoneWild

grotesqueer:
Well, I guess this answered my question earlier. The AI definitely did obey the picture. I don't like that.

#30

Image credits: ChatGPTGoneWild

Note: this post originally had 44 images. It’s been shortened to the top 30 images based on user votes.