Microsoft and OpenAI announced they’re offering a select group of media outlets up to $10 million in total ($2.5 million in cash plus $2.5 million worth of “software and enterprise credits” from each company) to try out AI tools in the newsroom.

The first round of funding will go to Newsday, The Minnesota Star Tribune, The Philadelphia Inquirer, Chicago Public Media, and The Seattle Times.

These outlets will receive a grant to hire a two-year fellow who will work to develop and implement AI tools using Microsoft Azure and OpenAI credits. The program is part of a collaboration between Microsoft, OpenAI, and the Lenfest Institute for Journalism, which aims to promote local media.

This news comes while the two companies are still facing a slew of copyright lawsuits, including from The New York Times, The Intercept, Raw Story, AlterNet, the Center for Investigative Reporting, and the Alden Global Capital-owned New York Daily News and Chicago Tribune. Those have continued despite licensing deals reached with many media outlets, including The Verge’s parent company, Vox Media.

  • Lvxferre@mander.xyz · 27 points · 29 days ago

    Any outlet accepting the deal should immediately be put on a list of sites spreading disinformation and potentially harmful content.

    • sunzu2@thebrainbin.org · 3 points · 29 days ago

      Ain’t all commercial news just fake news run for the benefit of an oligarch or the regime as a whole?

      • Storksforlegs@beehaw.org · 9 points · 29 days ago

        no?

        Journalistic standards exist. But for those to matter, you need qualified human journalists who give a crap about that kind of thing.

      • Lvxferre@mander.xyz · 5 points · 29 days ago

        Kind of. @storksforlegs@beehaw.org is right that journalistic standards prevent too much meddling. Plus, commercial news outlets defending someone’s interests have a better tool for manipulation than lying: they pick which true pieces of info to present as relevant, and paint them one way or another.

        For example: let’s say that Alice insults Bob, and Bob slaps Alice in return. Someone defending Alice would say she was the victim of aggression, while someone defending Bob would say he was reacting to Alice’s verbal abuse. Neither claim is false, but neither gives the full picture. LLM-style “AI” bullshit, on the other hand, would come out with something like “Alice picked up a puppy and beat it to death with Bob’s face”.

  • myliltoehurts@lemm.ee · 12 points · 29 days ago

    “If you get sued for the lies our AI pumped onto your website that we paid you for, it’s on you and nothing to do with us. gl hf.”