• xxd@discuss.tchncs.de · 1 month ago

    The algorithm team must have been working overtime to get passable results with 85% of the data missing!

    Also, it must feel absolutely horrifying to hear Neuralink decline a surgery to fix your implant. I guess they’re still used to the “try, fail, abandon” strategy from their animal tests?

    • Etterra@lemmy.world · 1 month ago

      Funny, I think that’s how Elon tests everything. Teslas, especially that cybertruck, Twitter, rockets, his children…

  • Constant Pain@lemmy.world · 1 month ago

    Presumably, the person who volunteered knew all the risks and implications, so you can shit all you want on their decisions, but that’s how trials work. There’s no promise of coming out with a functional product.

  • MajorHavoc@programming.dev · 1 month ago

    “asked if Neuralink would perform another surgery to fix or replace the implant, but the company declined”

    Evidence of whether the company saw them as a person, or felt any ethical obligation…

    It’s an interesting era when an organization can have a single user, choose to leave that single user with 85% of the promised functionality no longer functional, and still happily pursue its second user.

    • golli@lemm.ee · 1 month ago

      with 85% of the promised functionality no longer functional

      To be fair, 85% of the threads retracting doesn’t seem to translate to an equal loss of function. The article mentions:

      Neuralink was quick to note that it was able to adjust the algorithm used for decoding those neuronal signals to compensate for the lost electrode data. The adjustments were effective enough to regain and then exceed performance on at least one metric—the bits-per-second (BPS) rate used to measure how quickly and accurately a patient with an implant can control a computer cursor.

      I think it will be impossible for us to assess how much it actually impacts function in real-world use.
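
      For reference, cursor-control BCI studies commonly report bits-per-second as a Wolpaw-style information transfer rate over discrete target selections. The article doesn’t say exactly how Neuralink computes its BPS figure, so the following is only a minimal sketch of that conventional metric, with hypothetical example numbers:

      ```python
      import math

      def wolpaw_bitrate(num_targets: int, accuracy: float, selections_per_second: float) -> float:
          """Wolpaw-style information transfer rate (bits/second) for a
          discrete-target cursor task. A standard BCI benchmark, not
          necessarily the exact BPS metric Neuralink reports."""
          n, p = num_targets, accuracy
          if p >= 1.0:
              bits_per_selection = math.log2(n)  # perfect accuracy
          elif p <= 0.0:
              bits_per_selection = 0.0           # no information transferred
          else:
              bits_per_selection = (
                  math.log2(n)
                  + p * math.log2(p)
                  + (1 - p) * math.log2((1 - p) / (n - 1))
              )
          return bits_per_selection * selections_per_second

      # Hypothetical example: 8 targets, 90% accuracy, one selection every 1.5 s
      print(round(wolpaw_bitrate(8, 0.90, 1 / 1.5), 2))  # ~1.5 bits/second
      ```

      The point being: a refit decoder can recover a headline number like this even with fewer electrodes, which is exactly why the BPS figure alone tells us little about real-world impact.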

      It seems clear that this is a case of learning by trial and error, which, considering the stakes, doesn’t seem like the right approach.

      The question this article doesn’t answer is whether they have learned anything at all, or if they are just proceeding to do the same thing again. And if they have learned something, is there anything preventing it from being applied to the first patient?

      • MajorHavoc@programming.dev · 1 month ago

        if they have learned something, is there anything preventing it from being applied to the first patient?

        That’s part of what makes me see this as a really bad look.

        “Install it deeper” isn’t rocket science, and it sounds like their first volunteer is willing.

        They just want the extra data from leaving their first volunteer where they landed.

        Human subject experiments are supposed to carry more long-term obligation than this.

    • Kraiden@kbin.run · 1 month ago

      I prefer flipping that number on its head: 15%. They delivered 15% of what they promised and are now saying “fuck it.”

      It’s the equivalent of writing your name on the exam, and then sitting there doodling for the rest of the time.