• filister@lemmy.world · ↑12 · 12 hours ago

    This is the wet dream of all fascist totalitarians: uneducated people are easier to suppress and brainwash, and they are more likely to vote for their policies. Between global warming, the rise of AI, and the global rise of the far right, the future is very bleak. We are leaving a really doomed world for our kids.

  • heavydust@sh.itjust.works · ↑79 · 19 hours ago

    Asking the machine to think for you makes you stupid. Incredible.

    And no, you can’t compare that to a calculator or any other program. A calculator will not do the whole reasoning for you.

      • ThePyroPython@lemmy.world · ↑12 ↓2 · 17 hours ago

        Nope, it’s just a black box’s best guess as to what the reasoning should look like.

        Sort of like how, in an exam, you give your best guess for an answer, then jot down some “working out” that you think looks sort-of correct, scraping together enough marks to pass.

        Now imagine you’re not just trying to pass one question in one test in one subject, but one question out of millions of possible questions across hundreds of thousands of possible subjects, AND you experience time 5 million times slower than the examiner, AND you had 3 years (in examiner time) to practice your guesswork.

        That’s it. That’s all this AI bullshit is doing. And people are racing to achieve the best monkey typewriter that requires the fewest bananas to work.

        • SpaceNoodle@lemmy.world · ↑10 ↓2 · 17 hours ago

          Not even that. It’s just a weighted model of what a sentence should look like, with no concept of factual correctness.

    • JustAnotherKay@lemmy.world · ↑10 · 18 hours ago

      To agree with you in different words: I would argue that you can compare it to a calculator. Without the reasoning, a calculator is basically useless. I can tell you that 1.1(22 * 12 * 3) = 871.2, but it’s impossible to know what that number means or why it’s important from that information alone. An LLM works the same way: I give it an equation (a “prompt”) and it does some math to give me a response, which is useless without context. It doesn’t actually answer the words in the prompt; at best, it does guesswork based on the “value” of the text.

  • RedFrank24@lemmy.world · ↑14 · edited · 18 hours ago

    I have certainly found that to be the case with developers working with me. They run into a small problem, so they instantly go to Copilot and just paste in whatever it says the answer is. Then, because they don’t know what they’re asking or fully understand the problem, they can’t comprehend the answer either. Later, they come to me having installed a library they don’t understand, with code hallucinated by Copilot, and ask me why it’s not working.

    A little bit of stepping back and going “What do I hope to achieve with this?” and “Why do I have to do it this way?” goes a long way. It stops you going down rabbit holes.

    Then again, isn’t that what people used to do with StackOverflow?

    • WhatsTheHoldup@lemmy.ml · ↑7 · edited · 17 hours ago

      Then again, isn’t that what people used to do with StackOverflow?

      Yes, one of the major issues with StackOverflow that answerers complained about a lot was the “XY problem.”

      https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem

      That’s where you’re trying to do X, but because you’re inexperienced you erroneously decide Y must be the solution even though it’s a dead end, and then you ask people how to do Y instead of X.

      ChatGPT turns that problem up to 11, because it has no problem enabling you to keep focusing on Y far longer than you should.
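      A quick sketch of the classic illustration from the linked meta.stackexchange question (the helper names here are just for show): someone wants a file’s extension (X), but asks how to grab the last three characters of a filename (Y), because every file they’ve seen so far ends in a three-character extension:

```python
import os

# Y: the question as asked -- "how do I get the last three characters
# of a filename?" This only looks right when the extension happens to
# be a dot plus two letters.
def last_three(filename):
    return filename[-3:]

# X: the actual goal -- get the file extension.
def extension(filename):
    return os.path.splitext(filename)[1]

print(last_three("archive.gz"))  # ".gz"  -- correct, by coincidence
print(extension("archive.gz"))   # ".gz"
print(last_three("notes.json"))  # "son"  -- the Y approach falls apart
print(extension("notes.json"))   # ".json"
```

      A human answerer would spot the dead end and redirect the asker toward X; the complaint above is that a chatbot will happily keep polishing the `last_three` approach instead.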

      • superglue@lemmy.dbzer0.com · ↑3 ↓1 · 17 hours ago

        I find that interesting, because sometimes AI actually does the opposite for me: it suggests approaches to a problem that I hadn’t even considered. But yes, if you push it in a certain direction, it will certainly lead you along.

  • DonutsRMeh@lemmy.world · ↑7 · 17 hours ago

    Is that why Microsoft is pushing Copilot so hard on people? They even made a dedicated keyboard button for it.

  • Telorand@reddthat.com · ↑10 · 19 hours ago

    The intersection of those people and the people who just stick whatever the AI writes into their Production codebase is a circle.

  • orclev@lemmy.world · ↑7 · 18 hours ago

    The “and at least one AI model seems to agree” on the end has got to be peak irony.

  • vinnymac@lemmy.world · ↑4 · 19 hours ago

    I would like them to do this study, but looking only at people who have too much work on their plates and are spread so thin that they’re approaching burnout.

    Then do it for the lowest performing employees, and compare the results.