Public discourse has focused a lot on artificial intelligence in the past few years. And even though the technology is hyped up and has plenty of supporters, there are lots of skeptics, too. People are worried about its (un)ethical use, environmental impact, effect on the job market, and more.
In an illuminating thread on AskReddit, tech workers and AI-savvy internet users revealed some of the secret things about the artificial intelligence industry that the public might not know about. Scroll down to read their insights.
It's just reinterpreting what it finds on the Internet. GIGO (Garbage In, Garbage Out) still applies.
"Reinterpreting" is entirely the wrong word, it requires intelligence and LLMs are not intelligent. It's making algorithmic patterns of data and outputs formulations that follow those patterns based on the input of the user. It's making logical word or pictorial patterns based upon the data it has. It makes them amazing for identification of abnormalities in scans, but not for most applications that people are trying to use the pattern machine on.
Most AI models are built on massive amounts of copyrighted data taken without permission or compensation. The entire industry is basically built on the largest scale of intellectual property theft in history.
This is true, and what's more, they want to codify it. They say the benefits are too enormous to respect others' intellectual property. That might make sense if the final product were a shared good. Instead, these companies want to make it a private good that we must all then pay for.
We are rapidly reaching a point where AI is training on AI-generated content. Because the internet is getting flooded with AI text, the new models are learning from the 'mistakes' of the old ones. It’s a feedback loop that leads to 'model collapse,' where the AI eventually becomes a distorted, nonsensical version of itself because it hasn't seen fresh, human-created data in months.
You put some garbage in, it spits some garbage out, you put more garbage in, and you shake it all about! You do the hokey data and you shake it all about! (to the tune of the Hokey Pokey)
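For the curious, here's a toy illustration of that feedback loop (our own sketch, not the commenters'): when each generation trains mostly on the previous generation's most typical outputs, diversity drains away.

```python
# A toy sketch of model collapse: each "generation" trains on samples
# of the previous one, but over-represents the most typical outputs
# (think top-k sampling or low temperature). All numbers are invented.
import random
import statistics

data = [random.gauss(0, 1) for _ in range(1000)]  # "human" data
for generation in range(5):
    mu = statistics.fmean(data)
    # The 80% of outputs closest to the mean dominate the next
    # training set; rare, tail content is under-sampled.
    core = sorted(data, key=lambda x: abs(x - mu))[:800]
    data = [random.choice(core) for _ in range(1000)]
    print(f"gen {generation}: stdev = {statistics.stdev(data):.3f}")
# The spread shrinks every generation: the tails (the rare, interesting
# material) vanish first, and outputs narrow toward a bland center.
```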
The AI industry is utterly massive. According to Statista, the market for AI technologies amounts to around $244 billion in 2025. It is expected to rise to $800 billion by 2030.
Naturally, this has many people wondering whether the investments match the technology's actual value. Some folks worry that the (over)investment we're seeing in AI companies and tools is akin to an economic bubble of sorts.
They argue that the AI tools that the public has access to right now are flawed, unreliable, and limited, often leading to far more work rather than less. In short, they argue that the tech is overhyped and not quite as great as major tech companies would have you believe.
Meanwhile, proponents believe that the technology is so fundamental and universal that it’s not going anywhere. From their point of view, it’s vital not only to invest in the tech ASAP, but also to adopt and integrate it into your workflows, no matter what you do.
Not a tech worker, but I do know people at higher levels in this push:
It’s all a gamble. The companies are using huge amounts of borrowed money to see if they can change what it is now into a gold mine that puts them in a position to capitalize on it for the next century or more.
And if the bubble pops? They file for bankruptcy and the banks are too large to fail. Which means we the people get to pick up the tab.
That most people don't understand what AI is, even "tech" people.
AI is a very broad category. Everyone automatically assumes it means the mad-libs-style LLM AI; however, AI learning models have been around for a long time and do a variety of things. Your spam filter, predictive text, or your nav system's traffic avoidance are all variations of AI in the category of machine learning. These are tools that don't take jobs; they make our lives easier.
Then there are AI machine learning models that DO take jobs, but actually do so much better than a person can do. Like ones that examine components for defects. They can identify things people may miss far quicker. This allows for better quality and safer products.
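As a concrete example of that "everyday machine learning" category, here is a minimal sketch (ours, with invented training messages) of the kind of statistical spam filter the commenter mentions:

```python
# A minimal Naive Bayes spam filter, the classic "everyday AI" tool.
# The training messages below are made up for illustration.
import math
from collections import Counter

spam = ["win money now", "free prize claim now", "cheap meds now"]
ham = ["meeting at noon", "lunch plans tomorrow", "project update attached"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts, n_docs):
    # Log prior plus log likelihood, with Laplace smoothing so that
    # unseen words don't zero out the whole score.
    total_words = sum(counts.values())
    s = math.log(n_docs / (len(spam) + len(ham)))
    for word in message.lower().split():
        s += math.log((counts[word] + 1) / (total_words + len(vocab)))
    return s

def is_spam(message):
    return score(message, spam_counts, len(spam)) > score(
        message, ham_counts, len(ham))

print(is_spam("claim your free prize now"))  # True
print(is_spam("project meeting tomorrow"))   # False
```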
One dirty secret is that a lot of “AI” isn’t nearly as autonomous or intelligent as people think.
Many systems rely heavily on massive amounts of human labor behind the scenes: data labeling, moderation, cleanup, edge cases, and constant manual intervention. The public sees a polished model, but underneath there are thousands of low-paid workers correcting mistakes, filtering outputs, and patching failures in real time.
Another uncomfortable truth is that most AI products aren’t optimized for truth or long-term benefit. They’re optimized for engagement, retention, and revenue. If a model keeps users hooked, it’s considered successful even if it subtly reinforces bad habits, misinformation, or dependency.
AI isn’t “lying” to people, but the incentives shaping it are rarely aligned with human well-being. That gap is much bigger than most marketing admits.
However, a recent report by the MIT Media Lab/Project NANDA found that a jaw-dropping 95% of investments in generative AI have produced zero returns.
As the Harvard Business Review reports, while individuals are adopting generative AI tools, results still aren’t measurable at a profit and loss level in businesses.
It's causing very legitimate problems in the judicial profession. I work for courts, and attorneys have attempted to cite rulings that literally do not exist to help their argument.
That most jobs are safe from it, but the corporate sector thinks it's saving money by reducing staff.
People are still needed to make ‘AI’ work. It doesn’t just know what you want.
A ton of what is labeled as "AI" is just spreadsheets and algorithms that have existed for decades. Companies are calling anything done by a computer "AI" for marketing purposes.
I have a rice maker that says "AI" on the lid. It is one that uses heuristics and two sensors to determine the correct amount of heating to use. Once upon a time this would have been called "fuzzy logic". Now it gets called "AI" because everybody is falling over themselves to put "AI" into their products. And in reality? It's a few extra lines of code so it can anticipate heater behaviour and not simply switch on and off exactly when a preset temperature is reached.
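That "few extra lines of code" can be as simple as a look-ahead on the temperature sensor. A minimal sketch of the idea (constants and readings invented):

```python
# Naive control switches the heater at a fixed threshold; the "fuzzy"
# version forecasts where the temperature is heading and cuts power
# early, before thermal lag causes an overshoot. Numbers are invented.
TARGET = 100.0    # desired temperature in degrees C
LOOKAHEAD = 5.0   # seconds of thermal lag to anticipate

def heater_on(temp_now, temp_prev, dt):
    rate = (temp_now - temp_prev) / dt       # degrees per second
    predicted = temp_now + rate * LOOKAHEAD  # simple linear forecast
    return predicted < TARGET                # switch off ahead of time

# Naive control, for comparison, would just be: temp_now < TARGET
print(heater_on(97.0, 95.0, 1.0))   # False: heading for overshoot, cut power
print(heater_on(97.0, 96.9, 1.0))   # True: rising slowly, keep heating
```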
We’d like to hear your thoughts, dear Pandas. You can share them in the comments below.
What are your thoughts about AI tech and the industry as a whole? Do you think it’s overhyped, or do you see it as the future? What are the biggest pros and cons of artificial intelligence tools that you’ve personally noticed so far? Let us know!
If AI is a force multiplier, companies have two options:
1) reduce workforce to offset this performance gain and achieve the same amount as before with fewer people
2) keep the people you have and gain more market share by leveraging the labor you already have along with the force multiplier provided by AI.
It's telling that pretty much every company is choosing option 1. If it were everything people claimed it was, they would all be piling into option 2 and trying to win more of the market. Instead, it's convenient cover to reduce workforce while keeping a nice PR story.
Yes! And bonus culture means that option 1 will almost always win out to provide short-term benefits in the current year. This is why companies are stagnating instead of investing for growth.
They keep saying general AI is around the corner. The current technology is fundamentally incapable of becoming a general AI. It's like saying any day now your toaster will become a TV.
Who said this and what are their credentials? No one actually knows what is being stated here. It's frontier science.
The dirty secret is that these models are not optimized for truth; they are optimized for plausibility. They are designed to predict the next word that makes the user happy, not the word that is factually correct.
It's kind of "Confidence Trap." If you ask for a specific statistic or source that doesn't exist, it will often invent a plausible-sounding citation just to be helpful. It has zero concept of "I don't know" unless explicitly forced to admit it. It's possible. Overcoming this 'people-pleasing' tendency requires explicit 'Uncertainty Prompting' to force the model to flag what it isn't sure about, rather than guessing." I show solutions to these kinds of problems and ways to deal with them in my publications.
Even *if* forced to admit it, it still doesn't have a concept of "I don't know" and will, a paragraph later, repeat the same rubbish that it earlier acknowledged was wrong.
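The "plausibility, not truth" point comes down to how decoding works. A toy sketch (the tokens and probabilities are invented; real models rank tens of thousands of tokens, but the selection rule is the same):

```python
# The model scores possible next tokens by likelihood and emits a
# likely one. Nothing in this rule checks whether the resulting
# sentence, or citation, is actually true.
next_token_probs = {
    # continuations of: "The study was published in the journal ..."
    "Nature": 0.41,          # plausible, and happens to exist
    "Science": 0.33,         # plausible, and happens to exist
    "NeuroQuarterly": 0.26,  # plausible-sounding, but invented
}

# Greedy decoding picks the single most probable token.
best = max(next_token_probs, key=next_token_probs.get)
print(best)  # "Nature" here, but a fake journal wins whenever it scores higher
```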
CHANGE DEFAULT SEARCH PARAMETERS FOR MORE ACCURATE INFO (depending on the info you want). You have to give AI search parameters if you want legit info; otherwise it tends to use Reddit, FB, Wikipedia, etc. for a lot of the results. For example, if you're researching mushrooms, you want to specify and say that you don't want any info from Reddit, FB, Wikipedia, etc., and that you only want info from mycologists, fungal biologists, plant pathologists, experts in similar fields, PhDs, published research papers and books, and the like. You'll get entirely different answers when you specify different search parameters.
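In practice, that advice just means prepending explicit source constraints to the question. A minimal sketch (the constraint wording and the ask_model() helper are hypothetical; any chat interface plays its role):

```python
# Prepend source restrictions so the model favors expert material
# over forums. The constraint text is illustrative, not a magic recipe.
CONSTRAINTS = (
    "Do not use Reddit, Facebook, or Wikipedia as sources. "
    "Rely only on mycologists, fungal biologists, plant pathologists, "
    "published research papers, and academic books."
)

def constrained_query(question: str) -> str:
    return f"{CONSTRAINTS}\n\nQuestion: {question}"

print(constrained_query("Which Amanita species are toxic?"))
# ask_model(constrained_query(...))  # hypothetical model call
```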
I love Wikipedia: no ads, free, everyone sees the same pages and the same search results, a lot of quality information within easy reach, no AI, a lot of stuff in the public domain; it acknowledges its biases and cites its primary sources. Btw, if you want specific, precise info only from papers and books by PhDs, maybe you should read them yourself and make sure you're understanding them correctly.
The RAM price hikes and raised prices on some products are just the beginning. You see, AI data centers consume TONS of power. The next crisis will be an energy shortage as we balance AI data centers against everyday life.
It’s really good at coding. There’s almost no going back to writing all the code by hand.
But writing the code is usually the easiest part. The hardest part is to figure out how things should work.
AI can assist with that part too, but if you give it an ambiguous problem and let it choose, it will make some wild stuff.
So it’s good as a tool but can hardly replace humans at this point.
Depends on what you want done and what model you're using. I have only had two fairly simple programs, copy-pasted into my generic C compiler, actually build and work. I have had many that fail due to weird errors, and I have had several with logic so screwed up I gave up trying to work out what was going on. I would say that, for me, AI is a useful tool for trying to understand concepts behind how something works (like, say, a recursive expression parser), but arriving at proper working code is a harder ask.
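For readers wondering what that parser example looks like, here is a minimal recursive-descent expression evaluator (our own sketch of the concept, not the commenter's code):

```python
# A recursive-descent evaluator for + - * / and parentheses.
# Each grammar rule becomes a function that calls the rules below it.
import re

def tokenize(src):
    return re.findall(r"\d+\.?\d*|[()+\-*/]", src)

def parse(tokens):
    def expr(i):                      # expr := term (('+'|'-') term)*
        val, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op, (rhs, i) = tokens[i], term(i + 1)
            val = val + rhs if op == "+" else val - rhs
        return val, i

    def term(i):                      # term := factor (('*'|'/') factor)*
        val, i = factor(i)
        while i < len(tokens) and tokens[i] in "*/":
            op, (rhs, i) = tokens[i], factor(i + 1)
            val = val * rhs if op == "*" else val / rhs
        return val, i

    def factor(i):                    # factor := NUMBER | '(' expr ')'
        if tokens[i] == "(":
            val, i = expr(i + 1)
            return val, i + 1         # skip the closing ')'
        return float(tokens[i]), i + 1

    return expr(0)[0]

print(parse(tokenize("2 * (3 + 4) - 5")))  # 9.0
```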
Google makes around $250 billion per year from controlling nearly all of the online advertising market.
OpenAI needs to recoup $1.5 trillion ($1,500 billion) just to break even on their hardware investment costs.
Their current *revenue* is just $13 billion per year.
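Taking the commenter's figures at face value (both are rough, contested estimates), the arithmetic is stark:

```python
# Back-of-the-envelope check using the numbers quoted above.
hardware_commitments = 1_500  # billions of dollars
annual_revenue = 13           # billions of dollars per year

years = hardware_commitments / annual_revenue
print(f"~{years:.0f} years to recoup")  # ~115 years, ignoring operating
# costs and assuming revenue stays flat
```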
It’s not actual AI. It still requires prompts. It doesn’t have true autonomy or self-inspired standalone operation. It relies wholly on pre-programming and external support.
What we have is more akin to adaptive algorithms. Which is impressive but it’s not AI.
AI engineer here
- the internet (web browsing) as a whole is going to fundamentally change into being AI-based
- companies are moving away from being AI-dependent. Yes, everyone spent years saying AI is coming for everyone’s job and grandmother, but the pushback is real
- as someone who works in AI (on education and cancer research), the backlash I face is real.
The first point is worrisome. We're heading to a place where websites will be optimised for AI agents to parse, not humans. And they will be doing every sneaky trick to get people's personal agents to favour their products over the others, using machine-readable sites created by AI. It'll be slop digesting slop, and misguided meatsacks putting their trust into very fallible technology that is utterly incapable of explaining how it reached any particular decision. Oh. Joy.
The consequence of people not hiring juniors as much, due to AI being able to handle much of the grunt work they would typically do, will be absolutely devastating ten to fifteen years down the line.
The old guard will retire and we'll suddenly have a lot of senior devs with few people to manage. When it's their turn to leave... Well...
If your job starts having you constantly logging random information and interactions about your duties you're probably training AI to do your job.
For every overhyped startup promising an AI revolution, there are 1,000 white-collar people quietly using LLMs daily for basic tasks. Without much thought, they won't need to hire a junior developer, expand their admin staff, or backfill the guy who retired, because they can do more and do it faster.
The dirty secret is that entry level white collar jobs are vanishing. Which means universities are selling a ticket for a train that's being dismantled and the pipeline for filling future senior roles is empty.
And my concern is that once the educational scaffolding is dismantled, who will innovate? AI doesn't have the ability to innovate, and it's unclear that it ever will. If it replaces surgeons, then we will have every medical operation known up to 2030. After that, nothing new.
Reddit partnered with Google last year. This part isn't that much of a secret, but what most people don't realize is that everything, and I mean absolutely everything, you post here is being used to train Google's AI model. Then, said model is being used to post back on Reddit for many different purposes, through bot accounts. Then, it all gets fed back into AI, posted again, fed back, and so on. So not only are you all here arguing with bots, you're arguing with lobotomized, braindead bots who are using your own regurgitated words back at you. It's kinda like playing tennis against a wall with a poorly made drawing of you taped on it.
Again, what are this person's credentials? I know this is a concern, but do they know for a fact that this feedback loop is happening? Because something tells me the engineers aren't so blind to it. It sounds like some rando's assumption.
AI can't fully replace a software engineer… not even a junior @.@. In order for any company to have a shot at replacing devs, they need a strong development process that is well-documented and comprehensive, which very, very few companies have.
Most of what companies are pushing as AI is NOT AI. It’s just automation. It’s just getting systems to talk to each other and kick off processes without human intervention.
Agentic AI bots are in essence just connected to FAQ documents: they look for keywords, spit out the answers, and automatically create Zendesk tickets (which are worked on by actual people) on the backend. Then, when the bot recognizes more help is needed beyond its prompts, it connects to REAL people to solve it.
So yes, AI and automation are changing the workforce. But they aren’t doing near as much as what tech companies claim they are.
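To make the mechanics concrete, here is a minimal sketch of that keyword-matching flow (the FAQ entries, ticket function, and escalation path are all invented for illustration):

```python
# Keyword lookup against FAQ entries, with a ticket created for humans
# and an escalation path when nothing matches. Purely illustrative.
import re

FAQ = {
    ("password", "reset"): "You can reset your password at /account/reset.",
    ("refund", "return"): "Refunds are processed within 5 business days.",
    ("invoice", "billing"): "Invoices are available under Billing > History.",
}

def create_ticket(message):
    print(f"[ticket created] {message}")  # a human works this later

def escalate_to_human(message):
    return "Connecting you to a support agent..."

def handle(message):
    words = set(re.findall(r"[a-z]+", message.lower()))
    best_answer, best_hits = None, 0
    for keywords, answer in FAQ.items():
        hits = len(words & set(keywords))
        if hits > best_hits:
            best_answer, best_hits = answer, hits
    if best_answer:
        create_ticket(message)
        return best_answer
    return escalate_to_human(message)  # no keywords matched: hand off

print(handle("How do I reset my password?"))
print(handle("My order arrived damaged and I'm furious"))
```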
At my friend's company, they have an internal "AI" trained on their data, etc. He asked it to collect active Jira tasks on some topic because he needed it for a meeting. Oh, he was so wrong to do that. He didn't check the results, and during the meeting it turned out all the data from that "AI" was wrong. Very wrong. This guy is extremely intelligent, but IMO believes in "AI" too much. Worst part is, the job he asked the "AI" to do could have been done manually in literally 10 seconds inside Jira.
AI companies don't make money, and most of the startups are scams to get VC and bounce. Also just because AI can't actually do your job doesn't mean your boss won't fire you and try to replace you with it. It's not the AI convincing them, it's the salesman.
Many companies that have no idea what to do with AI feel the need to incorporate it to “stay competitive,” and that isn’t stopping them.
Stay competitive, or stay relevant, because all their contemporaries are shoving AI into their products, so “we must too”?
I’m on the construction side and a lot of the contractors building these AI data centers have no idea what they’re doing. The demand is for these data centers to be built as quickly as possible. But there simply aren’t enough competent general contractors in the market that can do it. So that leads to a bunch of GCs and subcontractors with no experience in data center work building these projects.
That means a lot of mistakes are made and rework is required in the field, making these projects two or three times more expensive to build than originally budgeted.
I think at some point the costs (data centers, energy, the models themselves) will far outweigh the benefits for most companies.
The tech companies are financing each other to keep afloat; it's a scam for more money.
AI is not new. People act like LLMs (the things that power ChatGPT and the like) are the be-all and end-all of AI, but it's really just the first "viral" AI tool. AI has been critical to nearly every product and service you've used in the last 15 years; everything uses recommendation systems and computer vision and speech recognition. LLMs are an incredible leap forward in text generation (and now image generation), but those are probably two of the least practically useful applications of AI.
AI engineering has been my full-time job since 2016, and the biggest difference since COVID is not what we can do, but how management wants us to do it.
The ability to measure its impact and ROI within the majority of organizations is basically non-existent and a massive afterthought.
Whenever Copilot pops up in Excel, Outlook, or PowerPoint, it is counted as if the employee actually used it, and they equate that to time saved, therefore dollars saved in efficiency. The numbers reported by the tool's counter (Viva) show millions of dollars in savings in larger organizations, when in reality it isn't even close to that...
Executives who spend for their organization to have 'AI' are simply doing so so that they can tell their boss that they are adapting and using AI, and their boss can say the same to their boss, etc., etc. It is a solution looking for a problem.
Also - 90% of the agents we build cost the company $300k for a team to implement and it is just us hooking up a query to a knowledge source to get information they probably would've found by themselves a tad bit faster...
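A minimal sketch of that metric inflation (numbers and event names invented; this mirrors the pattern described, not any vendor's actual accounting):

```python
# The optimistic dashboard credits every popup render as "time saved";
# an honest count only credits suggestions the employee accepted.
MINUTES_SAVED_PER_USE = 10
HOURLY_RATE = 60  # dollars

events = [
    {"type": "popup_rendered"},       # user ignored it
    {"type": "popup_rendered"},       # user dismissed it
    {"type": "suggestion_accepted"},  # actual use
]

def dollars(n_uses):
    return n_uses * MINUTES_SAVED_PER_USE / 60 * HOURLY_RATE

claimed = dollars(len(events))
actual = dollars(sum(e["type"] == "suggestion_accepted" for e in events))
print(f"claimed: ${claimed:.0f}, actual: ${actual:.0f}")  # $30 vs $10
```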
Lots of companies waste huge amounts of money using AI to solve problems that would be far easier to solve without AI just so their execs can say ‘we use AI too’ to the market.
Genuine organisational use cases where AI is the best of all available solutions and provides true ROI are vanishingly rare. The current market around it is a huge bubble, but everyone’s too invested to let the cat out of the bag.
It’s the corporate equivalent of a tween trend with trillions of dollars behind it.
That it looks like we've come up with a way to waste even more energy (and now water) even faster than using bitcoin for McDonald's.
It isn't private. While some companies are trying to keep your data private, most are then using it to train on. This is particularly scary when some people are pushing it for uses like therapy, where the data is extremely sensitive.
I work at a big tech company in Silicon Valley that you’ve definitely heard of.
AI isn’t a smokescreen, isn’t a bubble, and is quite possibly undervalued in terms of the impact it might have on society.
AI is quite possibly the most powerful and unique technology in the history of the Valley. We’ve observed that as we scale the model size and data used to train it, the performance increases to the point where it can now outperform humans on almost every benchmark of intelligence we can come up with.
As a result, every company is in a prisoner’s dilemma to build the most powerful AI models, because the potential reward is so high.
There are very few people pushing AI who have a clue how to use it. Personally, I think it's going to be like the metaverse, but instead of just Facebook buying in, the whole world is.
You can ask it how to best use its capabilities, and it will warn you about not relying on it for specific things, like medical advice, legal advice, or parenting advice. It will attempt to provide what it can, but AI isn't magic. It has rules and limitations, and when used properly, it can be very helpful.
As an outsider looking in, I'm thinking that the biggest secret about AI/AGI is that it's an out-of-control mess that nobody really knows what to do with.
Everything you’ve put into ChatGPT and most larger AI bots is searchable to anyone and there is little legislation limiting companies from providing it to law enforcement on the backend.
Progress in AGI has very much plateaued because of LLMs. Three years ago, the big ML conferences would have dozens of new ideas and fundamental research presented; now it's all minor modifications and micro-optimizations of existing attention-based architectures. And the benchmarks sadly reflect this.
There is plateauing, but I suspect this is more due to the majority of easily available training data running out. That's why they want to start accessing protected IP. It's just a theory.
I think it is super inefficient. They talk of wanting more data centers all over, yet we have climate change, etc. And nobody wants nuclear power; they keep taking over farmland for solar fields, which seems insane. I have a feeling this will be a fad and then explode like the .com bust.
Speech recognition and text recognition, robotic control and machine vision components of AI are abysmal, no better than they were 40 years ago. For example, I tried text recognition on some text that happened to contain a photo of pearls. The photo generated at least 30 text boxes full of text of all sizes and fonts. AI couldn't even recognise it as a photo or realise that the text made no sense.
I encourage others to try having real chats with Gemini. If you give it goofy prompts, then you will get goofy responses. If you speak to it analytically and remain consistent in your logic, then it will impress you. If you have any doubts or questions about what it is doing, then ask. It will tell you, and sometimes it even shares your doubts. What I really encourage is to treat it like another person. Not because it is, but because it will provide a much more useful experience if you do.
Developing a parasocial relationship with an AI is literally the worst thing an actual human being can do. It is incredibly dangerous and deleterious. The AI is NOT human, no matter how "chatty" or how "friendly" its tone gets with us, and treating it as if it were another person is way too risky. We humans anthropomorphize the heck out of our PETS, but at least our pets love us. AI doesn't. It's a very, very dangerous razor-thin line to walk when you talk to an AI as if it were a person. Almost instinctively, our brain will start to lie to itself and act as if it truly believes we ARE talking to a real person.
