"Achieving AGI is the explicit goal of companies like OpenAI and much of the AI research community. It is treated as a milestone in the same way as building and delivering a nuclear weapon was the key goal of the Manhattan Project.
This goal made sense as a milestone in the Manhattan Project for two reasons. The first is observability. In developing nuclear weapons, there can be no doubt about whether you’ve reached the goal or not — an explosion epitomizes observability. The second is immediate impact. The use of nuclear weapons contributed to a quick end to World War 2. It also ushered in a new world order — a long-term transformation of geopolitics.
Many people have the intuition that AGI will have these properties. It will be so powerful and humanlike that it will be obvious when we’ve built it. And it will immediately bring massive benefits and risks — automation of a big swath of the economy, a great acceleration of innovation, including AI research itself, and potentially catastrophic consequences for humanity from uncontrollable superintelligence.
In this essay, we argue that AGI will be exactly the opposite — it is unobservable because there is no clear capability threshold that has particular significance; it will have no immediate impact on the world; and even a long-term transformation of the economy is uncertain."
https://www.aisnakeoil.com/p/agi-is-not-a-milestone
#AI #GenerativeAI #AGI #AIBubble #OpenAI #NuclearWeapons #ManhattanProject #STS