Today, Thorn, a prominent child safety organization, in partnership with Hive, a cloud-based AI solutions provider, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology built to detect unreported CSAM at scale.

  • floofloof@lemmy.ca · 3 days ago

    This seems like it could be an actual good use of AI. Can’t have been much fun to train it, though.

    And is there any risk of people turning these kinds of models around and using them to generate images?

    • Hoimo@ani.social · 2 days ago

      Available image generators are already capable of generating those images, and they weren’t even trained on them. Once a neural network can detect or generate two separate concepts, it can detect or generate their overlap. The result won’t be as fine-tuned, obviously, but it can still be scarily accurate.

    • Jimbabwe@lemmy.world · 3 days ago

      If AI were reliable, maybe. MAYBE. But guess what? It turns out that “advanced autocomplete” does a shitty job at most things, and I bet the false positives will be numerous.
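
      To put rough numbers on that false-positive worry: when the flagged material is a tiny fraction of all uploads, even a very accurate classifier yields mostly false alarms. Below is a minimal back-of-the-envelope sketch in Python; every rate in it is assumed purely for illustration and is not a Thorn or Hive figure.

          # Base-rate arithmetic for an upload scanner.
          # All numbers below are hypothetical assumptions for illustration.
          prevalence = 1e-6            # assumed fraction of uploads that are actually CSAM
          sensitivity = 0.99           # assumed true-positive rate of the classifier
          false_positive_rate = 1e-3   # assumed rate of flagging innocent uploads
          uploads = 1_000_000_000      # one billion uploads scanned

          true_positives = uploads * prevalence * sensitivity
          false_positives = uploads * (1 - prevalence) * false_positive_rate
          precision = true_positives / (true_positives + false_positives)

          print(f"true positives:  {true_positives:,.0f}")   # ~990
          print(f"false positives: {false_positives:,.0f}")  # ~1,000,000
          print(f"precision:       {precision:.2%}")         # ~0.10%

      Under these assumed rates, roughly a thousand false flags accompany each true detection, which is why systems like this typically route flags to human review rather than acting on them automatically.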