https://winbuzzer.com/2026/02/11/openai-disbanded-mission-alignment-team-16-months-xcxwbn/
OpenAI Disbanded Its Mission Alignment Team After Just 16 Months
#AI #OpenAI #AGI #AISafety #AIGovernance #AIEthics #ResponsibleAI #JoshuaAchiam
Silicon Valley billionaires, such as Elon Musk, Jeff Bezos, and Sam Altman, promise salvation through space colonization, immortality, superintelligent AI, and endless growth. Adam Becker, astrophysicist and author of More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, debunks these profoundly immoral and biophysically impossible delusions, and explains why resisting them through collective action is essential. Highlights include:
0:00 Introduction
4:44 Motivations to write the book
9:23 Confusing science fiction for reality
12:37 Musk and Bezos reasons for off-planet plans
15:55 Musk Mars plans are delusional
20:33 Bezos space station plans are delusional
23:34 Kurzweil’s AGI dream
29:27 End of Moore’s Law
34:38 Limits of large language models
36:37 Longtermism perversion of ethics
44:58 Effective accelerationists
48:42 Malcolm and Simone Collins
50:32 Fear of death and technological salvation
53:57 Meaningful democratic resistance
Shake-up at OpenAI – The "Mission Alignment" team responsible for AI safety has been dissolved
OpenAI has confirmed the dissolution of its "Mission Alignment" team, the internal group responsible for ensuring its artificial intelligence models are developed safely, reliably, and ethically. The move, first reported by Platformer and confirmed by TechCrunch on February 11, 2026, marks a new stage in the company's restructuring toward a more commercial and agent-focused approach (source: Platformer).
The Mission Alignment team was created in September 2024 with a clear mandate: to serve as OpenAI's "organizational conscience." Its job was to ensure that increasingly powerful AI systems followed human intentions even in adverse situations. In an official statement, however, a company spokesperson described the dissolution as a "routine reorganization" within a fast-moving company, insisting that safety principles will now be integrated directly into the product and platform teams.
Key leadership changes:
This is not the first safety team to disappear at OpenAI. In 2024 the company dissolved its well-known "Superalignment" team, led at the time by Ilya Sutskever and Jan Leike, after deep disagreements over the company's priorities (safety versus launch speed). The disappearance of Mission Alignment reinforces the perception that OpenAI is prioritizing agentic and commercial features, such as the recent launch of GPT-5.3-Codex, over external oversight structures.
For industry experts, the dispersal of these specialists suggests a shift in philosophy: instead of having a safety "referee," OpenAI wants safety to be part of the ordinary engineering process. It remains an open question, however, whether this decentralized model will have the clout to challenge product launches if deep ethical or social risks are detected in the future.
#AGI #arielmcorg #ÉticaDigital #ciberseguridad #infosertec #innovación #InteligenciaArtificial #JoshAchiam #openai #PORTADA #SeguridadIA #TechNews #tecnología
Bindu Reddy (@bindureddy)
A tweet arguing that the open-source AGI ecosystem is accelerating faster than the closed one. The author says they now use open models such as Kimi K2.5, GLM 5, and the upcoming DeepSeek for more of their work, and recommends low-cost models for simple tasks and large models for core SOTA work.
GMI Cloud (@gmi_cloud)
GLM-5 is available on GMI Cloud from day zero. Announced by Zai_org, the model has 744B parameters (40B active), was pretrained on 28.5T tokens, and is presented as designed to be the strongest AGI-oriented open-source model. The release targets complex workloads such as agents and long-horizon planning.
AI Leaks and News (@AILeaksAndNews)
An announcement that Joshua Achiam has been appointed OpenAI's 'Chief Futurist'. In the new role he is to study the societal impact of advanced AI, AGI, and ASI, working to reduce risks and maximize benefits, underscoring that OpenAI is preparing for AGI.
"...he who uses a machine does his work mechanically. He who does his work mechanically ends up with the heart of a machine, and he who carries the heart of a machine in his breast loses his simplicity. He who has lost his simplicity becomes uncertain in the movements of his soul. Uncertainty in the movements of the soul is contrary to honesty. It is not that I do not know of the things you speak of: I would be ashamed to use them."
#agi
Do you have any plans for Tuesday, July 18, 2034?
1/n Notes on #UAP Discussions: And now the unenviable need to return to an idea previously introduced in humor. Bear with me...
Hegemons would very much like you to confuse the ideas of "Super #AI", #AGI, or superintelligent AI with the notion of naturally evolved biological #NHI and temporally evolved biological superintelligent #NHI. To simplify a bit, those last two represent real biological intelligence that had its beginnings long before humans arrived on the scene.
The low level scam is…
It's fairly obvious to me that LLMs will not lead to animal-level intelligence. If anything, quantum computing will crack that. Animal neurons are not bitwise operations; they are complex interactions of thousands of chemicals inside the cell, most of which have roles we still don't understand.
“Well, #AGI still hasn’t come (even though they keep issuing the same promises, year after year). #LLMs still hallucinate and continue to make boneheaded errors.
And #reasoning is still one of the core issues.” open.substack.com/pub/garymarc...
BREAKING: LLM “reasoning” cont...
Mark Gadala-Maria (@markgadala)
A tweet mentioning Seedance 2.0 and reporting very impressive results on the 'Will Smith spaghetti test'; on that basis the author claims AGI has been achieved. (The claim is likely exaggerated.)
The #singularity will occur on a Tuesday - Specifically Tuesday, July 18, 2034. Here’s the mathematical proof… What do you think? #AI #AGI #ArtificialIntelligence
Grok 5 leaked and AGI panic: why is OpenAI afraid? https://tehisarukas.ee/grok-5-lekkinud-ja-agi-paanika-miks-openai-kardab/?utm_source=dlvr.it&utm_medium=mastodon #Grok5 #AGI #tehisintellekt #OpenAI #ElonMusk
The planet didn't get to this (image) because of the kids. It was US driving to get burgers, flying for holidays and denouncing measures to contain emissions as "tree hugging shit" and "climate alarmism".
We handed "the kids" a steaming pile of smoking shit. Demanding they stop doing what WE did with cavalier impunity is only going to get ironic dismissal.
I'm far from claiming, like the broligarchs do, that #AI is going to save us from #climatecatastrophe
But the "Inconvenient truth" (! Sic) is that we have crossed 6 out of 9 planetary boundaries (https://www.cnn.com/2023/09/13/world/planetary-boundaries-humanity-climate/index.html)
At this rate, ecosystem collapses may be starting as early as 2027. Billions of humans will die from catastrophic events and resource wars...
... Worrying about an extra 1% in carbon emissions when the whole shitshow is about to come down around us is rearranging deckchairs on the Titanic.
And when you can't get a good data analyst because the last one was eaten by the wasteland reavers, a shitty one from an #AI engine is a pretty good substitute. Which is what the broligarchs in their bunkers will get.
I don't mean to sound so negative, but for far too long we have been blasé about #climatechange and now it's very likely too late.
Pushing forward with the tech is probably the only way forward... I'd rather get the big brother #AGI sorting out the Apes... Relying on "free market" and #oligarchs got us here...
... And I can't see #peoplepower fixing shit.
/steps off soapbox
Richard Amador (@acuriocabinet)
The author argues that society could be upended by current-level LLMs alone, and that actual AGI isn't strictly necessary. He warns that LLMs can already perform most entry-level job tasks better than new hires, and that the labor market has not yet absorbed this change.