The majority of U.S. adults don’t believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released Tuesday.

  • ShadowRam@kbin.social

    The majority of U.S. adults don’t understand the technology well enough to make an informed decision on the matter.

    • GoodEye8@lemm.ee

      To be fair, even if you understand the tech, it’s kinda hard to see how it would benefit the average worker as opposed to CEOs and shareholders, who will use it as a cost-reduction method to make more money. Most of those workers will be laid off because of AI, so obviously it’s of no benefit to them.

      • treadful@lemmy.zip

        Efficiency and productivity aren’t bad things. Nobody likes doing bullshit work.

        Unemployment may become a huge issue, but IMO the solution isn’t busy work. Or at least come up with more useful government jobs programs.

        • GoodEye8@lemm.ee

          Of course, there’s nothing inherently wrong with using AI to get rid of bullshit work. The issue is who will benefit from using AI and it’s unlikely to be the people who currently do the bullshit work.

          • treadful@lemmy.zip

            But that’s literally everything in a capitalist economy. Value accrues to capital. It has nothing to do with AI.

        • credit crazy@lemmy.world

          You see, the problem with that is that AI, in the case of animation and art, isn’t removing menial labor; you’re removing hobbies that people get paid for taking part in.

      • rambaroo@lemmy.world

        Most of them? The vast majority of jobs cannot be replaced by LLMs. The CEOs who believe that are delusional.

    • Moobythegoldensock@lemm.ee

      If you look at the poll, the concerns raised are all valid. AI will most likely be used to automate cyberattacks, identity theft, and to spread misinformation. I think the benefits of the technology outweigh the risks, but these issues are very real possibilities.

    • meseek #2982@lemmy.ca

      Informed or not, they aren’t wrong. If there is an iota of a chance that something can be misused, it will be. Human nature. AI will be used against everyone. Its potential for good is equally as strong as its potential for evil.

      But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

      Just one likely incident when big brother knows all and can connect the dots using raw compute power.

      Having every little secret parcelled over the internet because we live in the digital age is not something humanity needs.

      I’m actually stunned that even here, among the tech nerds, you all still don’t realize how much digital espionage is being done on the daily. AI will only serve to help those in power grow bigger.

      • treadful@lemmy.zip

        But imagine this. You get laid off. At that moment, bots are contacting your bank, LinkedIn, and most of the financial lenders about the incident. Your credit is flagged as your income has dropped significantly. Your bank seizes the opportunity and jacks up your mortgage rates. Lenders are also making use of the opportunity to seize back their merchandise as you’ll likely not be able to make payments and they know it.

        None of this requires “AI.” At most AI is a tool to make this more efficient. But then you’re arguing about a tool and not the problem behavior of people.

    • ZzyzxRoad@lemm.ee

      Seeing technology consistently putting people out of work is enough for people to see it as a problem. You shouldn’t need to be an expert in it to be able to have an opinion when it’s being used to threaten your source of income.

      Teachers have to do more work and put in more time now because ChatGPT has affected education at every level. Educators already get paid dick to work insane hours of skilled labor, and students have enough on their plates without having to spend extra time in the classroom. It’s especially unfair when every student has to pay for the actions of the few dishonest ones. Pretty ironic how it’s set us back technologically, to the point where we can’t use the tech that’s been created and implemented to make our lives easier. We’re back to sitting at our desks with a pencil and paper for an extra hour a week.

      There are already AI “books” being sold to unsuspecting customers on Amazon. How long will it really be until researchers are competing with it? Students won’t be able to recognize the difference between real and fake academic articles. They’ll spread incorrect information after stealing pieces of real studies without the authors’ permission, then mash them together into some bullshit that sounds legitimate. You know there will be AP articles (written by AI) with headlines like “new study says xyz!” and people will just believe that shit.

      When the government can do its job and create fail safes like UBI to keep people’s lives/livelihoods from being ruined by AI and other tech, then people might be more open to it. But the lemmy narrative that overtakes every single post about AI, that says the average person is too dumb to be allowed to have an opinion, is not only, well, fucking dumb, but also tone deaf and willfully ignorant.

      Especially when this discussion can easily go the other way, by pointing out that tech bros are too dumb to understand the socioeconomic repercussions of AI.

      • ShadowRam@kbin.social

        I mean, NFTs are a ridiculous comparison, because those who understood that tech were exactly the ones who said it was ridiculous.

      • Echo Dot@feddit.uk

        Wasn’t it the ones who didn’t understand NFTs who were the fan boys? Everyone who knew what they were said they were bloody stupid from the get-go.

    • archon@sh.itjust.works

      You can make an observation that something is dangerous without intimate knowledge of its internal mechanisms.

      • ShadowRam@kbin.social

        Sure you can, but that doesn’t change the fact that you’re ignorant of whether it’s dangerous or not.

        And these people are making ‘observations’ without knowledge of even the external mechanisms.

        • archon@sh.itjust.works

          I’m sure I can name many examples of things I observed to be dangerous where the observation was correct. But sure, claim unilateral ignorance and dismiss anyone who doesn’t agree with your view.

  • Queen HawlSera@lemm.ee

    At first I was all on board for artificial intelligence in spite of being told how dangerous it was; now I feel the technology has no practical application aside from providing a way to get a lot of sloppy, half-assed, and heavily plagiarized work done, because anything is better than paying people an honest wage for honest work.

    • Franzia@lemmy.blahaj.zone

      This is basically how I feel about it. Capital is ruining the value this tech could have. But I don’t think it’s dangerous and I think the open source community will do awesome stuff with it, quietly, over time.

      Edit: where AI can be used to scan faces or identify where people are, yeah that’s a unique new danger that this tech can bring.

      • Alenalda@lemmy.world

        I’ve been watching a lot of GeoGuessr lately, and the number of people who can pinpoint a location given just a picture is staggering. Even for remote locations.

    • Chickenstalker@lemmy.world

      Dude. Drones and sexbots. Killing people and fucking (sexo) people have always been at the forefront of new tech. If you think AI is only for teh funni maymays, you’re in for a rude awakening.

      • mriormro@lemmy.world

        you think AI is only for teh funni maymays

        When did they state this? I’ve seen it used exactly as they have described. My inbox is littered with terribly written AI emails, I’m seeing graphics that are clearly AI-generated being delivered as “final and complete”, and that’s not to mention the homogeneous output of it all. It’s turning into nothing but noise.

  • DarkGamer@kbin.social

    “Can’t we just make other humans from lower socioeconomic classes toil their whole lives, instead?”

    The real risk of AI/automation is if we fail to adapt our society to it. It could free us from toil forever but we need to make sure the benefits of an automated society are spread somewhat evenly and not just among the robot-owning classes. Otherwise, consumers won’t be able to afford that which the robots produce, markets will dry up, and global capitalism will stop functioning.

  • gmtom@lemmy.world

    Most US adults couldn’t tell you what LLM stands for, never mind tell you how Stable Diffusion works. So there’s not much point in asking them, as they won’t understand the benefits and the risks.

  • bigkix@lemm.ee

    My opinion: the current state of AI is nothing special compared to what it could be. And when it gets close to all it can be, it will be used (as always happens) to generate even more money and no equality. The movie “Elysium” comes to mind.

  • Endorkend@kbin.social

    The problem is that there is no real discussion about what to do with AI.

    It’s being allowed to be developed without much of any restriction, and that’s what’s dangerous about it.

    Like how some places are starting to use AI to profile the public Minority Report style.

  • Echo Dot@feddit.uk

    The general public don’t understand what they’re talking about so it’s not worth asking them.

    What is the point of surveys like this? We don’t operate on direct democracy, so there’s literally no value in these things except to stir the pot.

  • orca@orcas.enjoying.yachts

    I work with AI and don’t necessarily see it as “dangerous”. CEOs and other greed-chasing assholes are the real danger. They’re going to do everything they can to keep filling human roles with AI so that they can maximize profits. That’s the real danger. That and AI writing eventually permeating and enshittifying everything.

    A hammer isn’t dangerous on its own, but becomes a weapon in the hands of a psychopath.

    • Mjpasta@kbin.social

      So, because of greed and endless profit seeking, expect all corporations to replace everything that can be replaced - with AI…?

      • orca@orcas.enjoying.yachts

        I mean, they’re already doing it. Not in every role because not every one of them can be filled by AI, but it’s happening.

  • vzq@lemmy.blahaj.zone

    The problem is that I’m pretty sure that whatever benefits AI brings, they are not going to trickle down to people like me. After all, all AI investments are coming from the digital landlords and are designed to keep their rent-seeking companies in the saddle for at least another generation.

    However, the drawbacks certainly are headed my way.

    So even if I’m optimistic about the possible uses of AI, I’m not optimistic about this particular strand of the future we’re headed toward.

  • Gabu@lemmy.world

    Most US adults don’t even know what AI is, and it’s a miracle they don’t drown in their own drool… This sort of “news” is beyond irrelevant.

  • uis@lemmy.world

    More likely they just don’t believe technofeudalism is a good thing. In the US it comes in a single package with AI.

  • balloflearning@midwest.social

    Generally, people are wary of disruptive technology. While this technology has potential to displace a plethora of jobs for the sake of increased productivity, companies won’t be able to move product if unemployment skyrockets.

    Regardless of what people think, the Pandora’s box of AI is opened and now the only way forward is to adapt.

  • j677XZ@feddit.de

    I don’t understand why people don’t have the imagination to picture all the possibilities in which AI could help us progress from the absolutely dismal state of the world we currently live in. Yes, there are risks, but I desperately want technology to progress, even if I myself live somewhat comfortably for now.

    • lloram239@feddit.de

      It’s easy to imagine how AI can be beneficial in the short term. The problem is imagining how it won’t go wrong in the long term.

      Even sci-fi has a hard time figuring that out. Star Trek just stops at a ChatGPT level of intelligence; that’s how smart the ship’s computer is, and it doesn’t get any smarter. Whenever there is something smarter, it’s always a unique one-off that can’t be replicated.

      Nobody knows what the world will look like when we have ubiquitous, smart, and cheap AI; not just ChatGPT-smart, but “smarter than the smartest human”-smart, and by a large margin. There is basically no realistic scenario where we won’t end up with AI that is far superior to us.

      • treadful@lemmy.zip

        Even sci-fi has a hard time figuring that out.

        Science fiction is just about entertainment. An AI that’s all but invisible and causes no problems isn’t really a character worth exploring.

        • lloram239@feddit.de

          An AI that’s all but invisible and causes no problems isn’t really a character worth exploring.

          Yeah, but don’t you see the problem in that by itself? Even in the best-case scenario we are heading into a future where humanity’s existence is so boring that it has no more stories worth telling.

          We see a precursor to that with smartphones in movies today. Writers always have to slap some lame excuse in there for the smartphones not to work, as otherwise there wouldn’t be a story. Hardly anybody can come up with ideas for an interesting story where the smartphones do work.

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net

        Whenever there is something smarter, it’s always a unique one-of that can’t be replicated.

        EMH Mark 1. They duplicated it and used it for cheap, menial labor, despite the fact that it was capable of real intelligence (see The Doctor). The show didn’t dive deeper than that; it was literally the ending scene of a single episode, one that simply left the audience thinking about the implications as well as showing a possible start to an uprising.