#ExistentialRisk

2025-05-21

I just discovered Melanie Mitchell today by watching the interesting Munk Debate on AI existential risk that she had with Yoshua Bengio, Max Tegmark, and Yann LeCun.

youtube.com/watch?v=144uOfr4SY

She has very interesting texts linked on her website, the most recent being this great review of the book “These Strange New Minds: How AI Learned to Talk and What It Means” by Chris Summerfield.

melaniemitchell.me/EssaysConte

#MelanieMitchell #AI #ExistentialRisk

2025-05-15

The AI control problem is paramount: ensuring future advanced AI remains beneficial and aligned with human values, avoiding potential existential risks. #AIControl #ExistentialRisk

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2025-05-13

Good news!

National, bipartisan #policy plans like the U.S. "Combating Antibiotic-Resistant Bacteria" were linked to desired decreases in #antibioticResistance across 73 countries (pre v. post 2008)!

Cheers to those reducing #existentialRisk!

doi.org/10.1371/journal.pgph.0

#medicine

[Image: slide summary of the paper: data, variables, and filtering; main results, with evidence that all aspects of national plans (including education and awareness) are associated with the desired outcomes; discussion (e.g., for national public health policy, foreign policy, and health system policy).]

Mr Tech King (@mrtechking)
2025-05-05

Forget just living forever. Bryan Johnson's Don't Die movement is now a religion aimed at a bigger threat: AI. He wants to unite humanity and align AI with our survival before it's too late.

Bryan Johnson Wants a New Religion Where Your Body is God

10Billion.org (@10billion)
2025-04-24

You are the universe’s winning ticket, against odds of 1 in 10¹²⁰.

Dive into the Anthropic Principle, our evolutionary journey, and why understanding this cosmic miracle is vital to tackling climate change, AI risk, and global crises.

🔗 open.substack.com/pub/10billio

10Billion.org (@10billion)
2025-04-21

🆕 Essay: “Leaving the Nest Late—Why Evolution Never Trained Us for Planet‑Scale Survival.”

We’re primates trying to run an 8‑billion‑node civilisation while juggling nukes, CRISPR, AGI & a heating planet.

We sketch a realist, actionable agenda:
• De‑alert nuclear arsenals
• Universal DNA‑screening
• Frontier‑AI regs
• Tax the outrage feed
…and more.

Read & boost if it resonates 👉

open.substack.com/pub/10billio

10Billion.org (@10billion)
2025-04-13

We have god-like technology yet struggle with the global cooperation needed for survival (climate, pandemics, WMDs). We argue for focusing on common ground NOW. Future generations depend on our choices today.

Read on why this moment is critical:

open.substack.com/pub/10billio

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2025-03-16

The two most recent episodes of #BioUnethical with David Thorstad, Emily Largent, and @GovindPersad were very (very!) good.

Hosts Leah Pierson and Sophie Gibert may be doing reflective discussion better than anyone — outstanding stage setting, questioning, improvising, etc.

biounethical.com

#bioethics #appliedEthics #Philosophy #medicine #health #biology #economics #law #psychology #epistemology #science #longtermism #effectiveAltruism #xRisk #existentialRisk

Episode 18: David Thorstad: Evidence, uncertainty, and existential risk
... We discuss existential risks (threats that could permanently destroy or drastically curtail humanity's future) and how we should reason about these risks under significant uncertainty.

Episode 19: Emily Largent and Govind Persad: Is bioethics ok?
...we consider critiques of bioethics coming from inside and outside of the field. In light of our recent survey of US academic bioethicists, we discuss who bioethicists are, how they are trained, and how they can better promote ethical decision-making in medicine, science, and public health.

2025-01-29

If we believe them, #OpenAI has been unable to prevent #DeepSeek from gaining unwanted #access to train their models 🤭
How on earth do they want to contain #AGI upon its arrival? ☠️ #ineptitude #aisafety #existentialrisk

Mix Mistress Alice💄 (@MixMistressAlice@todon.eu)
2024-12-20

"Everyone's talking about AI, how it will change the world, and even suggesting it might end humanity as we know it. Dave Troy is joined by Dr. Timnit Gebru and Émile Torres, two prominent critics of AI doomerism, to cut through the noise, and look at where these ideas really came from, and offer suggestions on how we might look at these problems differently. And they also offer a picture of the darker side of these ideas and how they connect to Eugenics and other ideologies historically.

Together Émile and Timnit coined an acronym called TESCREAL, which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism — and yeah, that's a lot of -isms. But it ties into other topics that we have covered in this series, including Russian Cosmism and Longtermism.

Dr. Gebru came to prominence in 2020 after she was fired from Google for speaking up about the company's lack of ethical guardrails in its AI development work. Émile Torres studies existential risk and has been a critic of the "longtermist" movement for several years." —Dave Troy @davetroy

pod.co/dave-troy/understanding

#podcast #interview #TESCREAL #longtermism #technocracy #eugenics #elitism #AI #ethics #existentialrisk

2024-12-13

This is deeply disconcerting. I hope the warnings of such esteemed Nobel laureates are properly heeded, though sadly experience suggests they may not be if/when substantial commercial incentives to just press ahead remain in place.

amp.theguardian.com/science/20

#syntheticbiology #ExistentialRisk

2024-12-04

Happy to share our new #preprint—the first-ever #SystematicReview on global catastrophic risk. 🌍

It explores the growing field of #GlobalCatastrophicRisk and #ExistentialRisk, which focuses on global threats like #NuclearWar. This bibliometric analysis shows how the field has expanded and diversified over the past 20 years and has made substantial contributions to understanding and preparing for #humanity's biggest risks.

eartharxiv.org/repository/view

[Image: visual summary of the preprint, highlighting how the methods were done and what the main findings are; it contains the same information as the abstract of the preprint.]

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2024-11-10

At-home medical examination devices seemed to increase antibiotic use compared with people using the same #telemedicine platform without the at-home device: doi.org/10.1016/j.jhealeco.202

It's not clear whether the additional antibiotic prescriptions were appropriate or counter-guideline (e.g., for non-bacterial infections).

#medicine #antibioticResistance #pharmacy #tech #ethics #existentialRisk

[Images: the paper's title page and Figures 2, 3, and 5.]

Jonathan Kamens 86 47 (@jik@federate.social)
2024-09-24

This essay from @JuliusGoat is worth reading. My favorite quote: "This means that every 2 years or so the main choice we're making is whether or not we ever get to make choices again, which doesn't seem sustainable, probably because it isn't sustainable." That, exactly.
I sometimes find A.R. Moxon's writing a bit too loquacious, but this essay calls out to me from start to finish. Perhaps it will for you as well.
#politics #USPol #democracy #existentialRisk
the-reframe.com/the-rot-goes-t

2024-07-29

AI researcher Sayash Kapoor, interviewed on [Machine Learning Street Talk], doesn't buy into the #AIhype, is wary of applying a purely utilitarian Pascal's-wager approach to #ExistentialRisk, dismisses exponential-growth arguments, and explains different ways in which AI agent capability metrics can be misleading
youtu.be/BGvQmHd4QPE
#LargeLanguageModels #ChatGPT #LLM #GenerativeAI #Llama3 #AIagents

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-07-27

#AI #AGI #ExistentialRisk #HumanExtinction: "How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don’t worry too much about x-risk from alien invasions.

This is the first in a series of essays laying out an evidence-based approach for policymakers concerned about AI x-risk, an approach that stays grounded in reality while acknowledging that there are “unknown unknowns”.

In this first essay, we look at one type of evidence: probability estimates. The AI safety community relies heavily on forecasting the probability of human extinction due to AI (in a given timeframe) in order to inform decision making and policy. An estimate of 10% over a few decades, for example, would obviously be high enough for the issue to be a top priority for society.

Our central claim is that AI x-risk forecasts are far too unreliable to be useful for policy, and in fact highly misleading."

aisnakeoil.com/p/ai-existentia

Austin Huang ❤️ (@austin@mstdn.party)
2024-06-17

Sure, better terminology is needed for #AI, as it is with any other novel phenomenon, yet I can't help but think #ExistentialRisk (and lashing out at #DEI instead of improving it) is there to extinguish meaningful efforts to ensure that AI is safe and equitable for everyone in AI-based decisions that matter to them, *right now*.

semafor.com/article/03/08/2024

2024-06-07

Are We Doomed? Here’s How to Think About It
Climate change, artificial intelligence, nuclear annihilation, biological warfare—the field of existential risk is a way to reason through the dizzying, terrifying headlines.

Here's how I do it...Take one day at a time. I remember being terrified of a nuclear war when I was only ten years old. I am now 71. I finally realized that every moment of everyday, somebody's world ends.

#existentialrisk #worry #embracenow

newyorker.com/magazine/2024/06

Karthik Srinivasan (@skarthik@neuromatch.social)
2024-04-29

Good riddance to what was a colossal waste of money, energy, resources, and any sane person's time, intellect, and attention. To even call these exploratory projects is a disservice to human endeavor.

"Future of humanity", it seems. These guys can't even predict their next bowel movement, but somehow prognosticate about the long term future of humanity, singularity blah blah. This is what "philosophy" has come to with silicon valley and its money power: demented behavior is incentivized, douchery is rationalized, while reason is jettisoned.

theguardian.com/technology/202

#Longtermism #EffectiveAltruism #Futurism #Philosophy #ExistentialRisk #ArtificialIntelligence
