The “good guy” from OpenAI, Ilya Sutskever (the Ash Ketchum to Sam Altman’s Gary Oak), left a while back to create a new lab/company called Safe Superintelligence. Here is a recent WSJ article about them: https://archive.is/mooDM

This is from their mission statement:

Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

Some factoids about them:

  • They have raised $2 billion.
  • They have about 20 employees.
  • Candidates who secure an in-person interview are instructed to leave their phone in a Faraday cage, a container that blocks cellular and Wi-Fi signals, before entering SSI’s offices, one of the knowledgeable people said.
  • SSI operates as secretly as it can out of offices in Silicon Valley and Tel Aviv.
  • [Employees] are discouraged from mentioning SSI on their LinkedIn profiles.
  • They have not released anything: no research, no product, no nothing.
  • Sutskever has told associates he isn’t developing advanced AI using the same methods he and colleagues used at OpenAI. He has said he has instead identified a “different mountain to climb” that is showing early signs of promise, according to people close to the company. [No, you cannot see it.]

Ponzi scheme or Mossad front?

  • 陆船。@lemmygrad.ml · 22 hours ago

    Imo it’s whites who are afraid the computer will do to them what they do to everyone else. I’ve written this comment elsewhere about these “safe AI” and “AI extinction risk” freaks.

    Given the mostly white, bourgeois preoccupation with “x-AI risk” (existential/extinction risk), I think the real “risk” is that the self-legitimating myths of capitalism will fall on muted microphones. Even 10 years ago, when AI was still called machine learning and was much less impressive (its outputs were exclusively categorizations of inputs), and when it would have required decades of breakthroughs, plus being hooked up to every input in society and multiplexed with every output, to do anything “harmful,” the x-AI risk people were running around crying. This holds true today of LLMs and other statistically-likely-to-exist content emitters.

    The pitch is always that the AI will decide the needs of the many outweigh the needs (private property rights) of the few. This is only scary if you are among that few. Even property-rights-obsessed liberals don’t think themselves among the few who will be exproprAIted, but they are outraged by the expropriation itself. It’s a boogeyman spewed by the people who are the problem, and we’re asked to share their fear. Ridiculous.

    Unlike other private property and artifacts of capital accumulation, which are inert (the workers may organize against you, but the steel mill itself won’t), the AI their capital gives birth to might, in several decades’ time, maybe organize against you (but not really).