
Public discourse has focused a lot on artificial intelligence in the past few years. And even though the technology is hyped up and has plenty of supporters, there are lots of skeptics, too. People are worried about its (un)ethical use, environmental impact, effect on the job market, and more.

In an illuminating thread on AskReddit, tech workers and AI-savvy internet users revealed some of the secret things about the artificial intelligence industry that the public might not know about. Scroll down to read their insights.

#1

It's just reinterpreting what it finds on the Internet. GIGO (Garbage In, Garbage Out) still applies.

GEEK-IP, John / Unsplash (not the actual photo)

Kira Okah
Community Member
6 hours ago

"Reinterpreting" is entirely the wrong word; reinterpretation requires intelligence, and LLMs are not intelligent. They build algorithmic patterns from data and output formulations that follow those patterns based on the user's input. They make logical word or pictorial patterns based upon the data they have. That makes them amazing at identifying abnormalities in scans, but not for most of the applications people are trying to use the pattern machine for.

#2

Most AI models are built on massive amounts of copyrighted data taken without permission or compensation. The entire industry is basically built on the largest scale of intellectual property theft in history.

Weird_Ad6669, AnnaStills / Envato (not the actual photo)

Ryan Mercer
Community Member
9 hours ago (edited)

This is true, and what's more, they want to codify it. They say the benefits are too enormous to respect others' intellectual property. That might make sense if the final product were a shared good. Instead, these companies want to make it a private good that we must all then pay for.

#3

    We are rapidly reaching a point where AI is training on AI-generated content. Because the internet is getting flooded with AI text, the new models are learning from the 'mistakes' of the old ones. It’s a feedback loop that leads to 'model collapse,' where the AI eventually becomes a distorted, nonsensical version of itself because it hasn't seen fresh, human-created data in months.

Ok-Bathroom273

UnclePanda
Community Member
2 hours ago

You put some garbage in, it spits some garbage out, you put more garbage in, and you shake it all about! You do the hokey data and you shake it all about! (to the tune of the Hokey Pokey)


    The AI industry is utterly massive. According to Statista, the market for AI technologies amounts to around $244 billion in 2025. It is expected to rise to $800 billion by 2030.

Naturally, this has many people wondering whether the investments match the actual value being created. Some folks worry that the (over)investment we're seeing in AI companies and tools is akin to an economic bubble of sorts.

    They argue that the AI tools that the public has access to right now are flawed, unreliable, and limited, often leading to far more work rather than less. In short, they argue that the tech is overhyped and not quite as great as major tech companies would have you believe.

    Meanwhile, proponents believe that the technology is so fundamental and universal that it’s not going anywhere. From their point of view, it’s vital not only to invest in the tech ASAP, but also to adopt and integrate it into your workflows, no matter what you do.

    #4

Not a tech worker, but I do know people at higher levels in this push:

    It’s all a gamble. The companies are using huge amounts of borrowed money to see if they can change what it is now into a gold mine that puts them in a position to capitalize on it for the next century or more.

    And if the bubble pops? They file for bankruptcy and the banks are too large to fail. Which means we the people get to pick up the tab.

SciFi_MuffinMan, Celyn Kang / Unsplash (not the actual photo)

CD King
Community Member
10 hours ago

Again… people need to pick up the tab. And again, and again. That's capitalism, baby.

#5

That most people don't understand what AI is, even "tech" people.

AI is a very broad category. Everyone automatically assumes it means the mad-libs-style LLMs; however, AI learning models have been around for a long time and do a variety of things. Your spam filter, predictive text, or your nav system's traffic avoidance are all variations of AI in the category of machine learning. These are tools which don't take jobs; they make our lives easier.

    Then there are AI machine learning models that DO take jobs, but actually do so much better than a person can do. Like ones that examine components for defects. They can identify things people may miss far quicker. This allows for better quality and safer products.

KelhGrim, DragonImages / Envato (not the actual photo)

CK
Community Member
9 hours ago

    The best jobs for AI to steal are the ones that aren't being done because they're too tedious and therefore too expensive.

#6

One dirty secret is that a lot of “AI” isn’t nearly as autonomous or intelligent as people think.
    Many systems rely heavily on massive amounts of human labor behind the scenes: data labeling, moderation, cleanup, edge cases, and constant manual intervention. The public sees a polished model, but underneath there are thousands of low-paid workers correcting mistakes, filtering outputs, and patching failures in real time.
    Another uncomfortable truth is that most AI products aren’t optimized for truth or long-term benefit. They’re optimized for engagement, retention, and revenue. If a model keeps users hooked, it’s considered successful even if it subtly reinforces bad habits, misinformation, or dependency.
    AI isn’t “lying” to people, but the incentives shaping it are rarely aligned with human well-being. That gap is much bigger than most marketing admits.

EventNo9425, Flipsnack / Unsplash (not the actual photo)

Miki
Community Member
Premium
8 hours ago

    "polished model" 🤣🤣


    However, a recent report by the MIT Media Lab/Project NANDA found that a jaw-dropping 95% of investments in generative AI have produced zero returns.

    As the Harvard Business Review reports, while individuals are adopting generative AI tools, results still aren’t measurable at a profit and loss level in businesses.

    #7

It's causing very legitimate problems in the judicial profession. I work for courts, and attorneys have attempted to cite rulings that literally do not exist to support their arguments.

SnooPets1528, Getty Images / Unsplash (not the actual photo)

Ryan Mercer
Community Member
10 hours ago

    This isn't an AI problem. This is a people problem.

#8

That most jobs are safe from it, but the corporate sector thinks they are saving money by reducing staff.

    People are still needed to make ‘AI’ work. It doesn’t just know what you want.

nerdykronk, Andrej Lišakov / Unsplash (not the actual photo)

Miki
Community Member
Premium
8 hours ago

And the quality of said work is degrading rapidly.

#9

A ton of what is labeled as "AI" is just spreadsheets and algorithms that have existed for decades. Companies are calling anything done by a computer "AI" for marketing purposes.

RedditBugler, Rodrigo Rodrigues / Unsplash (not the actual photo)

Rick Murray
Community Member
8 hours ago

    I have a rice maker that says "AI" on the lid. It is one that uses heuristics and two sensors to determine the correct amount of heating to use. Once upon a time this would have been called "fuzzy logic". Now it gets called "AI" because everybody is falling over themselves to put "AI" into their products. And in reality? It's a few extra lines of code so it can anticipate heater behaviour and not simply switch on and off exactly when a preset temperature is reached.


    We’d like to hear your thoughts, dear Pandas. You can share them in the comments below.

    What are your thoughts about AI tech and the industry as a whole? Do you think it’s overhyped, or do you see it as the future? What are the biggest pros and cons of artificial intelligence tools that you’ve personally noticed so far? Let us know!

    #10

    If AI is a force multiplier, companies have two options:

1) reduce the workforce to offset this performance gain and achieve the same output as before with fewer people
    2) keep the people you have and gain more market share by leveraging the labor you already have along with the force multiplier provided by AI.

    It's telling that pretty much every company is choosing option 1. If it was everything people claimed it was, they would all be piling in to option 2 and trying to win more of the market. Instead, it's convenient cover to reduce workforce while keeping a nice PR story.

GrayestRock

Robert Millar
Community Member
8 hours ago

    Yes! And bonus culture means that option 1 will almost always win out to provide short term benefits in the current year. This is why companies are stagnating instead of investing for growth.

#11

They keep saying general AI is around the corner. The current technology is fundamentally incapable of becoming a general AI. It's like saying any day now your toaster will become a TV.

summonsays, DC_Studio / Envato (not the actual photo)

Ryan Mercer
Community Member
10 hours ago (edited)

Who said this, and what are their credentials? No one actually knows what is being stated here. It's frontier science.

#12

The dirty secret is that these models are not optimized for truth; they are optimized for plausibility. They are designed to predict the next word that makes the user happy, not the word that is factually correct.

It's a kind of "Confidence Trap." If you ask for a specific statistic or source that doesn't exist, it will often invent a plausible-sounding citation just to be helpful. It has zero concept of "I don't know" unless explicitly forced to admit it. Overcoming this 'people-pleasing' tendency requires explicit 'uncertainty prompting' to force the model to flag what it isn't sure about, rather than guessing. I show solutions to these kinds of problems and ways to deal with them in my publications.

Beginning-Law2392, Natalya / Envato (not the actual photo)

Rick Murray
Community Member
8 hours ago

    Even *if* forced to admit it, it still doesn't have a concept of "I don't know" and will, a paragraph later, repeat the same rubbish that it earlier acknowledged was wrong.

#13

CHANGE DEFAULT SEARCH PARAMETERS FOR MORE ACCURATE INFO (depending on the info you want). You have to give AI search parameters if you want legit info; otherwise it tends to use Reddit, FB, Wikipedia, etc. for a lot of the results. For example, if you're researching mushrooms, you want to specify that you don't want any info from Reddit, FB, Wikipedia, etc., and that you only want info from mycologists, fungal biologists, plant pathologists, experts in similar fields, PhDs, published research papers and books, and the like. You'll get entirely different answers when you specify different search parameters.

Iamtress1, Yunus Tuğ / Unsplash (not the actual photo)

Nads
Community Member
48 minutes ago (edited)

I love Wikipedia: no ads, free, everyone sees the same pages and the same search results, a lot of quality information within easy reach, no AI, a lot of material in the public domain; it acknowledges its biases and cites its primary sources. Btw, if you want specific, precise info, only from papers and books by PhDs, maybe you should read them yourself and make sure you are understanding them correctly.

#14

The RAM price hikes and raised prices on some products are just the beginning. You see, AI data centers consume TONS of power. The next crisis will be an energy shortage as we balance AI centers vs everyday life.

Celcius_87, Iyus sugiharto / Unsplash (not the actual photo)

Kira Okah
Community Member
5 hours ago

    And freshwater resources, since most datacentres are not set up to handle saltwater cooling.

#15

It’s really good at coding. There’s almost no going back to writing all the code by hand.

    But writing the code is usually the easiest part. The hardest part is to figure out how things should work.

    AI can assist with that part too but if you give AI an ambiguous problem and let it choose then AI will make some wild stuff.

    So it’s good as a tool but can hardly replace humans at this point.

Nizurai, Ilya Pavlov / Unsplash (not the actual photo)

Rick Murray
Community Member
7 hours ago

Depends on what you want done and what model you're using. I have only had two fairly simple programs, copy-pasted to my generic C compiler, actually build and work. I have had many that fail due to weird errors, and I have had several with logic so screwed up I gave up trying to work out what was going on. I would say that, for me, AI is a useful tool for trying to understand the concepts behind how something works (like, say, a recursive expression parser), but arriving at proper working code is a harder ask.

#16

Google makes around $250 billion per year from controlling nearly all of the online advertising market.

OpenAI needs to recoup $1.5 trillion ($1,500 billion) just to break even on its hardware investment costs.

    Their current *revenue* is just $13 billion per year.

queen-adreena, Getty Images

#17

It’s not actual AI. It still requires prompts. It doesn’t have true autonomy or self-inspired, standalone operation. It relies wholly on pre-programming and external support.

    What we have is more akin to adaptive algorithms. Which is impressive but it’s not AI.

Telrom_1, Impactphotography / Envato (not the actual photo)

    #18

AI engineer here:

- The internet (web browsing) as a whole is going to fundamentally change into being AI-based.

- Companies are moving away from being AI-dependent. Yes, everyone spent years saying AI is coming for everyone’s job and grandmother, but the pushback is real.

- As someone who works in AI (on education and cancer research), the backlash I face is real.

ToughAd5010, vukasinlj81 / Envato (not the actual photo)

Rick Murray
Community Member
8 hours ago

The first point is worrisome. We're heading to a place where websites will be optimised for AI agents to parse, not humans. And they will be using every sneaky trick to get people's personal agents to favour their products over the others, using machine-readable sites created by AI. It'll be slop digesting slop, and misguided meatsacks putting their trust in very fallible technology that is utterly incapable of explaining how it reached any particular decision. Oh. Joy.

#19

The consequence of people not hiring juniors as much, because AI can handle much of the grunt work they would typically do, will be absolutely devastating ten to fifteen years down the line.

The old guard will retire, and we'll suddenly have a lot of senior devs with few people to manage. When it's their turn to leave... Well...

TheCharalampos

    #20

If your job starts having you constantly log random information and interactions about your duties, you're probably training AI to do your job.

onmy40, WBMUL / Envato (not the actual photo)

#21

For every overhyped startup promising an AI revolution, there are 1,000 white-collar workers quietly using LLMs daily for basic tasks. Without much thought, they won't need to hire a junior developer, or expand their admin staff, or backfill the guy who retired, because they can do more and do it faster.

The dirty secret is that entry-level white-collar jobs are vanishing. Which means universities are selling tickets for a train that's being dismantled, and the pipeline for filling future senior roles is empty.

mechtonia

Ryan Mercer
Community Member
9 hours ago

And my concern is that once the educational scaffolding is dismantled, who will innovate? AI doesn't have the ability to innovate, and it is unclear whether it ever will. If it replaces surgeons, then we will have every medical operation known up to 2030. After that, nothing new.

#22

Reddit partnered with Google last year. This part isn't that much of a secret, but what most people don't realize is that everything, and I mean absolutely everything, you post here is being used to train Google's AI model. Then, said model is being used to post back on Reddit for many different purposes, through bot accounts. Then, it all gets fed back into AI, posted again, fed back, and so on. So not only are you all here arguing with bots, you're arguing with lobotomized, braindead bots who are using your own regurgitated words back at you. It's kinda like playing tennis against a wall with a poorly-made drawing of you taped on it.

BaltazarOdGilzvita, Brett Jordan / Unsplash (not the actual photo)

Ryan Mercer
Community Member
9 hours ago

Again, what are this person's credentials? I know this is a concern, but do they know for a fact that this feedback loop is happening? Because something tells me the engineers aren't so blind to it. It sounds like some rando's assumption.

    #23

AI can't fully replace a software engineer… not even a junior @.@. For any company to have a shot at replacing devs, they need a strong development process that is well documented and comprehensive, which very, very few companies have.

Mem0

#24

Most of what companies are pushing as AI is NOT AI. It’s just automation. It’s just getting systems to talk to each other and kick off processes without human intervention.

    Agentic AI bots are in essence just connected to FAQ documents which look for key words, and then spit out the answers and create zendesk tickets (which are worked on by actual people) on the backend automatically. Then when it recognizes more help is needed beyond its prompts, it connects to REAL people to solve it.

    So yes, AI and automation are changing the workforce. But they aren’t doing near as much as what tech companies claim they are.

JGonz1224

Miki
Community Member
Premium
7 hours ago (edited)

At my friend's company, they have an internal "AI" trained on their data, etc. He asked it to collect the active Jira tasks on some topic because he needed them for a meeting. Oh, he was so wrong to do that. He didn't check the results, and during the meeting it turned out all the data from that "AI" were wrong. Very wrong. This guy is extremely intelligent, but imo believes in "AI" too much. The worst part is that the job he asked the "AI" to do could have been done manually in literally 10 seconds inside Jira.

#25

    AI companies don't make money, and most of the startups are scams to get VC and bounce. Also just because AI can't actually do your job doesn't mean your boss won't fire you and try to replace you with it. It's not the AI convincing them, it's the salesman.

800Volts

Zephyr343
Community Member
5 hours ago

Create a product and then figure out afterwards how to make money from it... like Skype did.

    #26

Many companies feel the need to incorporate AI to “stay competitive” despite having no idea what to do with it, but that isn’t stopping them.

ActionCalhoun

Rick Murray
Community Member
7 hours ago

Stay competitive, or stay relevant because all their contemporaries are shoving AI into their products, "so we must too"?

#27

    I’m on the construction side and a lot of the contractors building these AI data centers have no idea what they’re doing. The demand is for these data centers to be built as quickly as possible. But there simply aren’t enough competent general contractors in the market that can do it. So that leads to a bunch of GCs and subcontractors with no experience in data center work building these projects.

That means a lot of mistakes, and rework is required in the field, making these projects two or three times more expensive to build than originally budgeted.

JonathanStat

    #28

I think at some point the costs (data centers, energy, the models themselves) will far outweigh the benefits for most companies.

gianlu_world, JuiceFlair / Envato (not the actual photo)

#29

The tech companies are financing each other to keep afloat; it's a scam for more money.

LostTiredWanderer

    #30

    AI is not new. People act like LLMs (the things that power ChatGPT and the like) are the be-all and end-all of AI, but it's really just the first "viral" AI tool. AI has been critical to nearly every product and service you've used in the last 15 years; everything uses recommendation systems and computer vision and speech recognition. LLMs are an incredible leap forward in text generation (and now image generation), but those are probably two of the least practically useful applications of AI.

    AI engineering has been my full-time job since 2016, and the biggest difference since COVID is not what we can do, but how management wants us to do it.

nowadaykid

Miki
Community Member
Premium
7 hours ago

Imo it's best at translation and sound generation.

#31

The ability to measure its impact and ROI within the majority of organizations is basically non-existent and a massive afterthought.

Whenever Copilot pops up in Excel, Outlook, or PowerPoint, it is counted as if the employee actually used it, and they equate that to time saved and therefore dollars saved in efficiency. The numbers reported by the tool's counter (Viva) show millions of dollars in savings in larger organizations, when in reality it isn't even close to that...

Executives who spend for their organization to have 'AI' are simply doing so so that they can tell their boss that they are adapting and using AI, and that boss can say the same to their boss, etc., etc. It is a solution looking for a problem.

Also - 90% of the agents we build cost the company $300k for a team to implement, and it is just us hooking up a query to a knowledge source to get information they probably would've found by themselves a tad bit faster...

FlaniganWackerMan

Bec
Community Member
2 hours ago

DocuSign added an AI helper. Now every time I go to use a template to send a form at work, I have to close the helpful/s AI offer to summarize the document for me. I'm the one who chose the document and is filling it out; W*F would I need AI to summarize it for me!?!

#32

    Lots of companies waste huge amounts of money using AI to solve problems that would be far easier to solve without AI just so their execs can say ‘we use AI too’ to the market.

Genuine organisational use cases where AI is the best of all available solutions and provides true ROI are vanishingly rare. The current market around it is a huge bubble, but everyone’s too invested to let the cat out of the bag.

    It’s the corporate equivalent of a tween trend with trillions of dollars behind it.

magicbellend

#33

    That it looks like we've come up with a way to waste even more energy (and now water) even faster than using bitcoin for McDonald's.

Dapper-Network-3863

    Ryan Mercer
    Community Member
    9 hours ago


    This appears to be a low value statement. No one in the industry said this.

#34

It isn't private. While some companies are trying to keep your data private, most are then using it to train on. This is particularly scary when some people are pushing it for uses like therapy, where the data is extremely sensitive.

ancalime9

    #35

I work at a big tech company in Silicon Valley that you’ve definitely heard of.

    AI isn’t a smokeshow, isn’t a bubble, and is quite possibly undervalued in terms of the impact it might have on society.

    AI is quite possibly the most powerful and unique technology in the history of the Valley. We’ve observed that as we scale the model size and data used to train it, the performance increases to the point where it can now outperform humans on almost every benchmark of intelligence we can come up with.

As a result, every company is in a prisoner's dilemma to build the most powerful AI models, because the potential reward is so high.

liqui_date_me, Carles Rabada / Unsplash (not the actual photo)

#36

There are very few people pushing AI who have a clue how to use it. Personally, I think it's going to be like the metaverse, but instead of just Facebook buying in, the whole world is.

Level_Macaroon2533

Ryan Mercer
Community Member
9 hours ago

You can ask it how to best use its capabilities, and it will warn you about not relying on it for specific things, like medical advice, legal advice, or parenting advice. It will attempt to provide what it can, but AI isn't magic. It has rules and limitations, and when used properly, it can be very helpful.

#37

    As an outsider looking in, I'm thinking that the biggest secret about AI/AGI is that it's an out-of-control mess that nobody really knows what to do with.

bobvagabond

    Ryan Mercer
    Community Member
    9 hours ago


    "As an outsider looking in..." Why is this even here?

#38

    Everything you’ve put into ChatGPT and most larger AI bots is searchable to anyone and there is little legislation limiting companies from providing it to law enforcement on the backend.

justsomedudeusa

    Ryan Mercer
    Community Member
    9 hours ago


    Oh no! It's bad for crime-ing! WTAF?

#39

Progress in AGI has very much plateaued because of LLMs. Three years ago, the big ML conferences would have dozens of new ideas and pieces of fundamental research presented; now it's all minor modifications and micro-optimizations of existing attention-based architectures. And the benchmarks sadly reflect this.

Active_Change9423

Ryan Mercer
Community Member
9 hours ago (edited)

    There is plateauing, but I suspect this is more due to the majority of easily available training data running out. That's why they want to start accessing protected IP. It's just a theory.

    #40

I think it is super inefficient. They talk of wanting more data centers all over, yet we have climate change, etc. And nobody wants nuclear power; the way they keep taking over farmland for solar fields seems insane. I have a feeling this will be a fad and then explode like the .com bust.

Network-King19

Andrew Keir
Community Member
41 minutes ago

    There seem to be plenty of ".com"s around still ... ?
