#sutskever

2025-05-28

An OpenAI co-founder wants to build a bunker before releasing AGI. Ilya Sutskever publicly admitted he was terrified of what’s coming next.
news-cafe.eu/?go=news&n=13666
#AI #sutskever #openai #altman #technology #artificialintelligence #chatgpt #AGI

Chuck Darwin cdarwin@c.im
2024-12-14

“We’ve achieved peak data and there’ll be no more.”

OpenAI’s cofounder and former chief scientist,
#Ilya #Sutskever, made headlines earlier this year after he left to start his own AI lab called
Safe Superintelligence Inc.

He has avoided the limelight since his departure but made a rare public appearance in Vancouver on Friday at the
Conference on Neural Information Processing Systems (NeurIPS).

“Pre-training as we know it will unquestionably end,” Sutskever said onstage.

This refers to the first phase of AI model development,
when a large language model learns patterns from vast amounts of unlabeled data
— typically text from the internet, books, and other sources.
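
The "learning patterns from unlabeled data" idea can be illustrated with a deliberately tiny stand-in: a bigram counter over raw text. Real pre-training trains a neural network on essentially the same objective, predicting the next token, at internet scale; everything below (the toy corpus included) is illustrative only.

```python
from collections import Counter, defaultdict

# Unlabeled "training data": no human annotations, just raw text.
corpus = "the cat sat on the mat and the cat slept".split()

# "Pre-training": record which token follows which (bigram statistics).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Most likely continuation under the learned statistics.
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" twice, "mat" once)
```

The point of the toy: all the "supervision" comes from the text itself, which is exactly why running out of fresh text caps this phase of training.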

During his NeurIPS talk, Sutskever said that,
while he believes existing data can still take AI development farther,
the industry is tapping out on new data to train on.

This dynamic will, he said, eventually force a shift away from the way models are trained today.

He compared the situation to fossil fuels:
just as oil is a finite resource,
the internet contains a finite amount of human-generated content.

“We’ve achieved peak data and there’ll be no more,” according to Sutskever.

“We have to deal with the data that we have. There’s only one internet.”

Next-generation models, he predicted, are going to “be agentic in real ways.”

Agents have become a real buzzword in the AI field.

While Sutskever didn’t define them during his talk, an agent is commonly understood to be an autonomous AI system that performs tasks, makes decisions,
and interacts with software on its own.
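
That loosely defined pattern — observe state, decide, act on software, repeat — can be sketched in a few lines. Every name here (`decide`, `act`, the counter task) is a hypothetical stand-in, not any real agent framework; in practice the `decide` step would be a model call.

```python
# Minimal "agent" loop: the system picks its own next action until done.

def decide(state):
    # Stand-in for a model call: choose the next action from the state.
    if state["count"] < 3:
        return "increment"
    return "stop"

def act(action, state):
    # The agent acting on software by itself (here, mutating a counter).
    if action == "increment":
        state["count"] += 1
    return state

state = {"count": 0}
while True:
    action = decide(state)
    if action == "stop":
        break
    state = act(action, state)

print(state["count"])  # -> 3
```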

Along with being “agentic,” he said future systems will also be able to reason.

Unlike today’s AI, which mostly pattern-matches based on what a model has seen before,
future AI systems will be able to work things out step-by-step in a way that is more comparable to thinking.

The more a system reasons, “the more unpredictable it becomes,” according to Sutskever.

He compared the unpredictability of “truly reasoning systems” to how advanced AIs that play chess “are unpredictable to the best human chess players.”

“They will understand things from limited data,” he said.

“They will not get confused.”

On stage, he drew a comparison between the scaling of AI systems and evolutionary biology,
citing research that shows the relationship between brain and body mass across species.

He noted that while most mammals follow one scaling pattern, hominids (human ancestors) show a distinctly different slope in their brain-to-body mass ratio on logarithmic scales.

He suggested that, just as evolution found a new scaling pattern for hominid brains,
AI might similarly discover new approaches to scaling beyond how pre-training works today.
theverge.com/2024/12/13/243208
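
The allometric point is that a power law brain = a · body^k appears as a straight line with slope k on log-log axes, so "a distinctly different slope" means a different scaling exponent. A minimal sketch, using made-up masses (not real allometric data) purely to show how such slopes are read off:

```python
import numpy as np

# Illustrative (not real) body masses in kg. The claim being illustrated:
# brain = a * body**k looks linear with slope k in log-log coordinates.
body = np.array([3.0, 30.0, 300.0, 3000.0])
brain_mammal = 0.01 * body ** 0.75   # hypothetical "typical mammal" exponent
brain_hominid = 0.01 * body ** 1.10  # hypothetical steeper hominid exponent

# Fitting a line to the logged data recovers each exponent as the slope.
slope_mammal = np.polyfit(np.log(body), np.log(brain_mammal), 1)[0]
slope_hominid = np.polyfit(np.log(body), np.log(brain_hominid), 1)[0]

print(slope_mammal, slope_hominid)
```

Sutskever's analogy is that AI could likewise move onto a new scaling line, not just extend the current one.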

2024-06-20

Beyond #OpenAI, #Sutskever takes up the challenge of creating an #IA (AI) that surpasses humans... safely ⤵️⤵️

innovationisland.it/openai-sut

John Leonard
2024-06-20

Ilya Sutskever forms new AI startup to pursue 'safe superintelligence'

Former OpenAI chief scientist promises efforts will be insulated from commercial pressures

computing.co.uk/news/4325343/i

🔘 G◍M◍◍T 🔘 gomoot@mastodon.uno
2024-06-20

💡 Ilya Sutskever, co-founder of OpenAI, founds Safe Superintelligence Inc. ( @ssi )

gomoot.com/ilya-sutskever-co-f

#AI #blog #ia #news #OpenAI #ssi #Sutskever #tech #tecnologia @ilyasut

ComputerBase
2024-06-20

Ex-OpenAI chief scientist: Ilya Sutskever founds AI startup for safe superintelligence computerbase.de/2024-06/ex-ope

Chuck Darwin cdarwin@c.im
2024-06-14

OpenAI has appointed Paul M. Nakasone,
a retired general of the US Army and a former head of the National Security Agency ( #NSA ),
to its board of directors, the company announced on Thursday.

OpenAI says Nakasone will join its Safety and Security Committee, which was announced in May and is led by CEO Sam Altman, “as a first priority.”

Nakasone will “also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats.”

#Nakasone was nominated to lead the NSA by former President Donald Trump, and directed the agency from 2018 until February of this year.

Before Nakasone left the NSA, he wrote an op-ed supporting the renewal of Section 702 of the Foreign Intelligence Surveillance Act, the surveillance program that was ultimately reauthorized by Congress in April.

OpenAI board chair Bret Taylor said in a statement: “General Nakasone’s unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity.”

Recent departures tied to safety at OpenAI include co-founder and chief scientist Ilya #Sutskever, who played a key role in Sam Altman’s November firing and eventual un-firing,
and Jan #Leike, who said on X that “safety culture and processes have taken a backseat to shiny products.”
theverge.com/2024/6/13/2417807

John Leonard
2024-05-15

Chief Scientist and superalignment lead Ilya Sutskever parts ways with OpenAI

Superalignment co-lead Jan Leike follows hours later

computing.co.uk/news/4208378/c

ComputerBase
2024-05-15
Norobiik @Norobiik@noc.social
2024-05-15

#Sutskever played a key role in #Altman’s dramatic firing and rehiring in November last year. At the time, Sutskever was on the board of #OpenAI and helped to orchestrate Altman’s firing.

Sutskever has long been a prominent researcher in the #AI field. He started his career working with #GeoffreyHinton, one of the so-called “godfathers of AI”.

OpenAI cofounder Ilya Sutskever departs #ChatGPT maker
rappler.com/technology/openai-

OpenAI co-founder and chief scientist Ilya Sutskever is leaving the startup at the center of today’s artificial intelligence boom.
2023-12-18

Lichess' puzzle database (CC0 licensed! see: database.lichess.org/#puzzles) has a "cameo" in a pre-print (arxiv.org/pdf/2312.09390.pdf) about supervising stronger LLMs with weaker ones.
The pre-print is authored by #OpenAI 's Ilya #Sutskever and his superalignment team.

#chess #lichess #opendata #LLM #AI

[Image: pre-print extract, a paragraph titled “Chess puzzles” with a link to Lichess.]
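
The pre-print's topic, supervising a stronger model with a weaker one, can be sketched with a toy: a strong student trained only on a weak supervisor's noisy labels can end up more accurate than the supervisor itself. Everything below (synthetic data, the logistic-regression student) is an illustrative stand-in, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: the true label depends on two features.
n = 2000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# "Weak supervisor": right only ~70% of the time (labels flipped at random).
flip = rng.random(n) < 0.3
weak_labels = np.where(flip, 1 - y, y)
weak_acc = (weak_labels == y).mean()

# "Strong student": logistic regression trained on the weak labels only,
# via plain gradient descent. Because the label noise is independent of
# the features, the student can still recover the true decision boundary.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - weak_labels
    w -= 0.5 * (X.T @ g) / n
    b -= 0.5 * g.mean()

strong_acc = (((X @ w + b) > 0).astype(float) == y).mean()
print(f"weak supervisor: {weak_acc:.2f}  strong student: {strong_acc:.2f}")
```

In this toy the student generalizes past its supervisor because its inductive bias averages out the label noise; the pre-print studies when the analogous effect holds for LLMs.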
2023-11-22

… has passed the …. :birdsite: Now stronger than ever. The … has kicked itself out of the game. That was a shot in its own knee. What about …? Leaving the firm?

meta_blum
2023-11-22

The board is in turmoil: the CEO is fired, comes back after four days, and the board is replaced. Who actually decides what here? What is the function of the supervisory board, and who appoints it? Does anyone know more about this?

2023-11-21

It was evidently not clear to … what he would set off with his … coup. What will be left is the skeleton of a company that can no longer find investors and will at most do a bit of research. The rest will sit in a new subsidiary at Microsoft, celebrating with … and ….

HistoPol (#HP) 🏴 🇺🇸 🏴
2023-11-20

@TheGuardian

(4/n)

...letter’s signatories are “unable to work for or with people that lack competence, judgment and care for our mission and employees.”

Technology journalist … has posted the letter on X (formerly Twitter) and points out that OpenAI’s chief scientist, …, has signed it, even though he is a member of the board that fired Altman.

As flagged earlier, … has posted on … today that “I deeply regret...

twitter.com/i/status/172660301

2023-11-19

There has been a hiatus in my Sentient Syllabus Writing, while I was lecturing and thinking through things.

But I have just posted an analysis on the #Sutskever / #Altman schism at #OpenAI and hope you find it enlightening.

Enjoy!

#ChatGPT #GPT4 #HigherEd #AI #AGI #ASI #generativeAI #Bostrom #AIethics #Education #University #Academia

sentientsyllabus.substack.com/

GripNews
2023-11-19

🌗 Chat highlights of Ilya Sutskever and Jensen Huang: AI today and the vision for the future - YouTube
➤ The state of AI today and the vision for the future
youtube.com/watch?v=GI4Tpi48DlA
This is a condensed version of “Chat highlights: Ilya Sutskever and Jensen Huang: AI today and the vision for the future (March 2023)”. In this video, we hear Sutskever and Huang discuss the current state of AI and their vision for its future.
+ Looking forward to hearing Sutskever’s and Huang’s insights.
+ This will be an exciting topic; progress in AI is always interesting.

Erik Jonker
2023-11-18

More details about the departure of …, the role he played; he apparently wanted to slow things down. Also a good explanation of the peculiar company structure in this blog.

arstechnica.com/information-te

OpenAI company structure
Erik Jonker
2023-11-18

Interesting that the Chief Scientist of …, Ilya Sutskever, still works at …. Some suggest he is behind Sam Altman’s departure. Read the interesting article by @marcoderksen
koneksa-mondo.nl/2023/11/18/il
