#agi

2026-02-12

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity | Adam Becker (interview)

Silicon Valley billionaires, such as Elon Musk, Jeff Bezos, and Sam Altman, promise salvation through space colonization, immortality, superintelligent AI, and endless growth. Adam Becker, astrophysicist and author of More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, debunks these profoundly immoral and biophysically impossible delusions, and explains why resisting them through collective action is essential. Highlights include:

  • How tech billionaires confuse science fiction for reality and why their fantasies of space colonization are biophysically impossible;
  • Why Artificial General Intelligence (AGI) remains an ill-defined concept resting on the false assumption that humans’ evolved brains work like computing machines;
  • Why large language models (LLMs), the dominant form of AI, are neither creative nor accurate enough to achieve the dreamed-of leap in machine intelligence;
  • What the end of Moore’s Law tells us about diminishing returns to technological complexity and the expectation of endless technological growth;
  • Why longtermism is a dangerous ideology of technological salvation and endless growth, prioritizing hypothetical future populations while excusing present-day social injustice and ecological destruction;
  • How the fear of death underlies techno-utopian off-planet and transhumanist fantasies;
  • Why resisting their oligarchic visions requires calling out the ridiculousness of their ideas and organizing collectively to push back both politically and economically.

0:00 Introduction

4:44 Motivations to write the book

9:23 Confusing science fiction for reality

12:37 Musk's and Bezos's reasons for off-planet plans

15:55 Musk's Mars plans are delusional

20:33 Bezos's space station plans are delusional

23:34 Kurzweil’s AGI dream

29:27 End of Moore’s Law

34:38 Limits of large language models

36:37 Longtermism's perversion of ethics

44:58 Effective accelerationists

48:42 Malcolm and Simone Collins

50:32 Fear of death and technological salvation

53:57 Meaningful democratic resistance

Ω 🌍 Gus Posey (@Gustodon@mas.to)
2026-02-12

Is there a performance level for machine intelligence where we have to modify our rules for acceptance because doing anything else would simply be discrimination and bigotry?

#AGI #AI #ML

2026-02-12
The actual Singularity is the point in time when claims that we're approaching the Singularity are made so frequently that no one is able to understand or assess them.

#singularity #TheSingularity #RayKurzweil #AGI #MostPhotographedBarn

Shake-up at OpenAI – the "Mission Alignment" team responsible for AI safety is disbanded

OpenAI has confirmed the dissolution of its "Mission Alignment" team, the internal group responsible for ensuring that its artificial intelligence models are developed safely, reliably, and ethically. The move, first reported by Platformer and confirmed by TechCrunch on February 11, 2026, marks a new stage in the company's restructuring toward a more commercial, agentic focus (source: Platformer).

The Mission Alignment team was created in September 2024 with a clear mandate: to be OpenAI's "organizational conscience". Its job was to ensure that increasingly powerful AI systems followed human intentions even in adverse situations. In an official statement, however, a company spokesperson described the dissolution as a "routine reorganization" within a fast-moving company, asserting that safety principles will now be integrated directly into the product and platform teams.

Key leadership changes:

  • Josh Achiam, who led the team and was one of the most critical and respected voices on safety inside the company, will not leave OpenAI. Instead, he will take on the new role of Chief Futurist. In that role, Achiam will focus on researching the long-term impact of Artificial General Intelligence (AGI) and will work with physicists and engineers to chart the company's roadmap toward technology that "benefits all of humanity".
  • Staff reassignment: the team's other six or seven members have been redistributed across different areas of the company, where they will supposedly continue working on model robustness and auditing, but no longer as an independent oversight unit.

This is not the first safety team to disappear at OpenAI. In 2024 the company disbanded the famous "Superalignment" team, led at the time by Ilya Sutskever and Jan Leike, after deep disagreements over the company's priorities (safety versus launch speed). The disappearance of Mission Alignment reinforces the perception that OpenAI is prioritizing the integration of agentic and commercial features, such as the recent launch of GPT-5.3-Codex, over external control structures.

For industry experts, the dispersal of these specialists suggests a change of philosophy: rather than having a safety "referee", OpenAI wants safety to be part of the ordinary engineering process. It remains an open question, however, whether this decentralized model will have the clout to challenge product launches if deep ethical or social risks are detected in the future.

#AGI #arielmcorg #ÉticaDigital #ciberseguridad #infosertec #innovación #InteligenciaArtificial #JoshAchiam #openai #PORTADA #SeguridadIA #TechNews #tecnología

Bindu Reddy (@bindureddy)

A tweet arguing that the open-source AGI ecosystem is accelerating faster than the closed one. The author says they are using open models such as Kimi K2.5, GLM 5, and the soon-to-arrive DeepSeek more and more for work, and recommends low-cost models for simple tasks and large models for core SOTA work.

x.com/bindureddy/status/202172

#opensource #glm5 #kimik2.5 #deepseek #agi

GMI Cloud (@gmi_cloud)

GLM-5 has had a Day-0 release on GMI Cloud. Announced by Zai_org, the model has 744B parameters (40B active), was pretrained on 28.5T tokens, and is described as designed to be the most capable AGI-oriented open-source model. The post announces a new model aimed at complex tasks such as agents and long-horizon planning.

x.com/gmi_cloud/status/2021641

#glm5 #gmicloud #opensource #zai_org #agi

Hiroya Iizuka (@0317_hiroya)

The author remarks that AGI already seems to have arrived, mentions Opus 4.6 and Codex 5.3, and hints at a discussion of AI agent teams. A short, admiring comment on the new model versions and the agent-centric trend in development.

x.com/0317_hiroya/status/20215

#opus #codex #agi #ai #agents

AI Leaks and News (@AILeaksAndNews)

An announcement that Joshua Achiam has been appointed OpenAI's "Chief Futurist". In the new role he is expected to study the societal impact of advanced AI, AGI, and ASI, working to reduce risks and maximize benefits; the post stresses that OpenAI is gearing up for AGI.

x.com/AILeaksAndNews/status/20

#openai #leadership #agi #asi

2026-02-11

"...he who uses a machine does his work mechanically. He who does his work mechanically ends up with the heart of a machine, and he who carries the heart of a machine in his breast loses his simplicity. He who has lost his simplicity becomes uncertain in the stirrings of his soul. Uncertainty in the stirrings of the soul is a thing incompatible with honesty. It is not that I am unaware of the things you speak of; I would be ashamed to use them."
#agi

2026-02-11

Do you have any plans for Tuesday, July 18, 2034?

campedersen.com/singularity

#singularity #agi

KilleansRow 🇺🇲 🇺🇦🍀 (@KilleansRow@mastodon.online)
2026-02-11

1/n Notes on #UAP Discussions: And now the unenviable need to return to an idea previously introduced in humor. Bear with me...
Hegemons would very much like you to confuse the ideas of "Super #AI", #AGI, or superintelligent AI with the notion of naturally evolved biological #NHI and temporally evolved biological superintelligent #NHI. To simplify a bit, those last two represent real biological intelligence that had its beginnings long before humans arrived on the scene.

The low level scam is…

Glob God (@glob_god)
2026-02-11

It's fairly obvious to me that LLMs will not lead to animal-level intelligence. If anything, quantum computing will crack that. Animal neurons are not bitwise operations; they are complex interactions of thousands of chemicals inside the cell, most of whose roles we still don't understand.

2026-02-10

#agi and #mars called off for now. disappointed.

oh well, I'll just take a spin in the robotaxi.

Don Curren 🇨🇦🇺🇦 (@dbcurren.bsky.social@bsky.brid.gy)
2026-02-10

“Well, #AGI still hasn’t come (even though they keep issuing the same promises, year after year). #LLMs still hallucinate and continue to make boneheaded errors. And #reasoning is still one of the core issues.” open.substack.com/pub/garymarc...

BREAKING: LLM “reasoning” cont...

Mark Gadala-Maria (@markgadala)

A tweet noting that Seedance 2.0 produced very impressive results on the "Will Smith spaghetti test", on which basis the author claims AGI has been achieved. (The claim may be exaggerated.)

x.com/markgadala/status/202131

#seedance #seedance2.0 #agi #benchmark #ai

2026-02-10

The #singularity will occur on a Tuesday - Specifically Tuesday, July 18, 2034. Here’s the mathematical proof… What do you think? #AI #AGI #ArtificialIntelligence

campedersen.com/singularity
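Whatever one makes of the post's "mathematical proof", the weekday claim itself is easy to verify: July 18, 2034 really does fall on a Tuesday. A minimal check with Python's standard library:

```python
from datetime import date

# The post pins the Singularity to Tuesday, July 18, 2034.
# Whatever the "proof" is worth, the weekday checks out:
claimed = date(2034, 7, 18)
print(claimed.strftime("%A"))  # -> Tuesday
```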

Wulfy—Speaker to the machines (@n_dimension@infosec.exchange)
2026-02-09

@ai6yr @knowprose

The planet didn't get to this (image) because of the kids. It was US driving to get burgers, flying for holidays, and denouncing measures to contain emissions as "tree hugging shit" and "climate alarmism".

We handed "the kids" a steaming pile of smoking shit. Demanding they stop doing what WE did with cavalier impunity is only going to get ironic dismissal.

I'm far from claiming, like the broligarchs do, that #AI is going to save us from #climatecatastrophe

But the "inconvenient truth" (sic) is that we have crossed 6 out of 9 planetary boundaries (cnn.com/2023/09/13/world/plane)

At this rate, ecosystem collapses may be starting as early as 2027. Billions of humans will die from catastrophic events and resource wars...

... Worrying about an extra 1% in carbon emissions when the whole shitshow is about to come down around us is rearranging deckchairs on the Titanic.

And when you can't get a good data analyst because the last one was eaten by the wasteland reavers, a shitty one from an #AI engine is a pretty good substitute. Which is what the broligarchs in their bunkers will get.

I don't mean to sound so negative, but for far too long we have been blasé about #climatechange and now it's very likely too late.

Pushing forward with the tech is probably the only way forward... I'd rather get the big brother #AGI sorting out the Apes... Relying on "free market" and #oligarchs got us here...
... And I can't see #peoplepower fixing shit.

/steps off soapbox

Richard Amador (@acuriocabinet)

The author argues that LLMs at their current level could be enough to upend society, with no actual AGI required. They warn that LLMs can already do most entry-level job tasks better than new hires, and that the labor market has not yet absorbed this shift.

x.com/acuriocabinet/status/202

#agi #llm #automation #labor #ai
