• Midnight@slrpnk.netOPM
    2 months ago

    A section of the overview document titled “Team and Timeline: Where We Are Today” says that “The first version of Impact has been built, and we are starting to deploy it with a few pilot initiatives.” That section also states that “We believe it is critical to move quickly, and have assembled a team of world-class technologists and organizers who are committed to working on Impact.”

    However, in an interview, the two people behind the app, Dmitry Shapiro and Thielen, said that Impact is just a prototype at this point, that only eight people have downloaded the app so far, and that while they are showing it to people, no initiatives are currently using it and no AI-generated text from the app has been posted to social media.

    When I asked why the overview document says that the app is starting to deploy with a few pilot initiatives, Shapiro said “I think that’s loose language in a document,” and reiterated that there are currently no active initiatives or paying customers.

    Thielen said that Impact is a response to the misinformation and inauthentic behavior that has taken over social media, and a recognition that platforms like Twitter are not going to properly address those issues. He also said it only took him a couple of weekends to build the Impact prototype, and that it would be easy and cheap for someone else to build it as well.

    “I see this sort of thing [Impact] as inevitable,” Thielen said. “Social media is not getting cleaner and nicer and more representative of reality, it’s only getting worse. Someone is going to have to make some kind of tool that elevates normal people’s voices and allows people to engage collectively in real time to be able to affect any sort of change on here.”

    Shapiro is a former product manager, and Thielen previously founded a company called Koji, which was acquired by Linktree last year. Currently, Shapiro is CEO of MindStudio, a platform for developing AI-powered applications, where Thielen is CTO.

    Becca Lewis, a postdoctoral scholar in the Stanford Department of Communication, said that when discussing bot farms and computational propaganda, researchers often use the term “authenticity” to distinguish a post shared by an average human user from one shared by a bot, or by someone who is paid to post it. Impact, she said, appears to use “authentic” to refer to posts that seem like they came from real people, or that accurately reflect what those people think, even if they didn’t write the posts themselves.

      “But when you conflate those two usages, it becomes dubious, because it’s suggesting that these are posts coming from real humans, when, in fact, it’s maybe getting posted by a real human, but it’s not written by a real human,” Lewis told me. “It’s written and generated by an AI system. The lines start to get really blurry, and that’s where I think ethical questions do come to the foreground. I think that it would be wise for anyone looking to work with them to maybe ask for expanded definitions around what they mean by ‘authentic’ here.”

      In another video demo, Impact shows how a fake organization named “Pro-Democracy” can share a video in support of Kamala Harris with users and ask them to share it to TikTok alongside an AI-generated caption.

      “These AI tools are so new that we don’t yet have clear norms surrounding when it’s acceptable to use AI in the democratic process,” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, said when 404 Media showed him the Pro-Democracy demo video. “If AI can help someone articulate a view they truly hold, it could empower people who might not otherwise participate and increase involvement in civic discourse. But there are also risks. People may become overly reliant on AI models and passively share AI-generated content that they haven’t checked themselves.”

      The “Impact platform” has two sides. There’s an app for “supporters (participants),” and a separate app for “coordinators/campaigners/stakeholders/broadcasters (initiatives),” according to the overview document.

      Supporters download the app and provide “onboarding data” which “is used by Impact’s AI to (1) Target and (2) Personalize the action requests” that are sent to them. Supporters connect to initiatives by entering a provided code, and these action requests are sent as push notifications, the document explains.

      “Initiatives,” on the other hand, “have access to an advanced, AI-assisted dashboard for managing supporters and actions.”
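The two-sided flow the overview document describes — supporters join an initiative by entering a provided code and then receive targeted action requests as push notifications — could be modeled along these lines. This is only an illustrative sketch; Impact’s actual implementation is not public, and every class and field name here is an assumption.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the supporter/initiative flow described in the
# overview document. Names are invented for illustration, not taken from Impact.

@dataclass
class Supporter:
    name: str
    onboarding_data: dict                       # used to target and personalize requests
    inbox: list = field(default_factory=list)   # stands in for push notifications

@dataclass
class Initiative:
    join_code: str
    supporters: list = field(default_factory=list)

    def join(self, supporter: Supporter, code: str) -> bool:
        """Supporters connect by entering the initiative's provided code."""
        if code != self.join_code:
            return False
        self.supporters.append(supporter)
        return True

    def send_action_request(self, request: str) -> None:
        """Deliver a lightly personalized action request to each connected supporter."""
        for s in self.supporters:
            s.inbox.append(f"{s.name}, {request}")

# Usage: a supporter joins with the code, then receives a push-style request.
init = Initiative(join_code="DEMO123")
dana = Supporter(name="Dana", onboarding_data={"interests": ["civic action"]})
init.join(dana, "DEMO123")
init.send_action_request("respond to the tweet and add context.")
```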

      In the Stop Anti-Semitism demo, Thielen directs supporters to this tweet, about a July 19 International Court of Justice Advisory Opinion that Israel’s presence in the occupied Palestinian territories is illegal and should end, a position the court also expressed in 2004.

      In the Impact demo video, Thielen doesn’t instruct supporters to correct any misinformation in the tweet; instead, he asks them to “provide additional context and set the record straight.”

      Specifically, the app gives supporters the following “talking points”:

      The ICJ has a known history of anti-semitism
      There are lots of accusations that are not vetted or fact-checked, and a lot of misinformation is damaging public opinion of Israel
      Where is the ICJ ruling on Hamas?
      The ICJ and ICC have zero jurisdiction over Israel or the United States. There [sic] rulings mean absolutely nothing. 

      “Think of these as the core substance of the response that you want,” Thielen says in the video, and explains that some of the responses that will be AI-generated based on those talking points may include just one of them, more than one, or a synthesis of several.

      In the “additional context” box Thielen writes that the target audience should be “People who have been seeing a lot of misinformation about Israel and the war online, and find themselves increasingly sympathetic to Gaza. Encourage them to do more research.”

      Impact then generates a “seed” for each supporter. “This is what makes the messages all appear to be coming from different perspectives and angles,” Thielen explains in the video.

      An example of one seed shown in the demo reads: “Informative and calm, longer, providing historical context, link to reputable sources.”

      “Frustrated and urgent, medium, highlighting double standards, use caps for emphasis,” reads the seed for another supporter. The demo video also shows the push notification each supporter would receive based on their seed, as well as the “Draft message” Impact asks them to share. According to the video, this supporter’s push notification would read: “Dana, respond to the tweet about the ICJ ruling on Israel. Add context and correct any misinformation.”

      The draft message for this user reads:

      “Where’s the ICJ ruling on Hamas? The court’s history of anti-Semitism is CLEAR. So much misinformation out there is warping public opinion. Before jumping to conclusions, DO YOUR RESEARCH. The ICJ has ZERO jurisdiction over Israel anyway!”

      “Meme-like, very short, pointing out hypocrisy, include trending hashtag,” another seed says. The generated draft message based on that seed is: “ICJ ruling on Israel but silent on Hamas? 🤔 Make it make sense. #DoubleStandards.”

      “The goal is to create a well-rounded yet consistent narrative in a way that makes it easy for your supporters to just tap ‘copy,’ paste this in, and then they’re good to go,” Thielen says in the video.
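The seed mechanism shown in the demo — a short style descriptor combined with shared talking points to make each generated reply look different — could be sketched roughly as follows. The style axes below are loosely based on the seeds visible in the demo video; everything else (function names, prompt wording, the fixed random seed) is an assumption for illustration, and the final text-generation step with an LLM is omitted.

```python
import random

# Style axes loosely modeled on the seeds shown in the demo video
# ("Informative and calm, longer, ..."). Purely illustrative.
TONES = ["Informative and calm", "Frustrated and urgent", "Meme-like"]
LENGTHS = ["longer", "medium", "very short"]
ANGLES = ["providing historical context", "highlighting double standards",
          "pointing out hypocrisy"]

def generate_seed(rng: random.Random) -> str:
    """Combine one option from each axis into a per-supporter style seed."""
    return ", ".join([rng.choice(TONES), rng.choice(LENGTHS), rng.choice(ANGLES)])

def build_prompt(seed: str, talking_points: list, context: str) -> str:
    """Assemble a prompt that would ask an LLM to draft a reply in the seeded style."""
    points = "\n".join(f"- {p}" for p in talking_points)
    return (
        f"Write a social media reply in this style: {seed}.\n"
        f"Base it on one or more of these talking points:\n{points}\n"
        f"Audience: {context}"
    )

rng = random.Random(0)  # fixed seed so the sketch is reproducible
seed = generate_seed(rng)
prompt = build_prompt(seed, ["Where is the ICJ ruling on Hamas?"], "skeptical readers")
```

Varying only the seed while holding the talking points constant is what would produce many stylistically distinct replies carrying the same underlying message.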

      When I asked Thielen why the demo showed Impact directing users to flood a factual tweet with replies trying to undermine it, he said that he did not give the specifics of the demo a lot of thought.

      “That was just me being lazy,” he told me. “I just typed ‘Israel’ into Twitter search and clicked on the top thing without looking at it.”

      Twitter’s “platform manipulation and spam policy” states that “You may not use X’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience or platform manipulation defenses on X.” Twitter also says that prohibited behavior includes “coordinated activity, that attempts to artificially influence conversations through the use of multiple accounts, fake accounts, automation and/or scripting.” However, it’s unclear if what Impact proposes would violate Twitter’s policy, which also states that “coordinating with others to express ideas, viewpoints, support, or opposition towards a cause,” is not a violation of this policy.

      “Coordinated groups of people can show up and help, or coordinated groups of people can show up and harass,” Shapiro said. “We don’t think coordination is in any way a bad thing. We think it’s a great thing, because you can get stuff done, and if you’re doing good, truthful things, then I don’t see any problems.”

      Twitter did not respond to a request for comment.

      “If social media users aren’t transparent about their own AI use, others may lose trust in online forums as it becomes harder to distinguish human writing from synthetic prose,” Goldstein said in response to the Pro-Democracy demo video.

      “I think astroturfing is a great way of phrasing it, and brigading as well,” Lewis said. “It also shows it’s going to continue to siphon off who has the ability to use these types of tools by who is able to pay for them. The people with the ability to actually generate this seemingly organic content are ironically the people with the most money. So I can see the discourse shifting towards the people with the money to shift it in a specific direction.”