Ilya Sutskever, the “good guy” from OpenAI, the Ash Ketchum to Sam Altman’s Gary Oak, left OpenAI a while back to create a new lab/company called Safe Superintelligence (SSI). Here is a recent WSJ article about them: https://archive.is/mooDM
This is from their mission statement:
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
Some factoids about them:
- They have raised $2 billion.
- They have about 20 employees.
- “Candidates who secure an in-person interview are instructed to leave their phone in a Faraday cage, a container that blocks cellular and Wi-Fi signals, before entering SSI’s offices, one of the knowledgeable people said.”
- “SSI operates as secretly as it can out of offices in Silicon Valley and Tel Aviv.”
- “[Employees] are discouraged from mentioning SSI on their LinkedIn profiles.”
- They have not released anything. No research, no product, no nothing.
- “Sutskever has told associates he isn’t developing advanced AI using the same methods he and colleagues used at OpenAI. He has said he has instead identified a ‘different mountain to climb’ that is showing early signs of promise, according to people close to the company.” [No, you cannot see it.]
Ponzi scheme or Mossad front?
Imo it’s whites who are afraid the computer will do to them what they do to everyone else. I’ve written this comment elsewhere about these “safe AI” and “AI extinction risk” freaks.