#artificialintelligence

2026-02-24

#Firefox has finally released their #AI Controls section.

firefox.com/en-US/firefox/148.

Update your install now to block their in-browser #ArtificialIntelligence features more easily!

#tech #technology #browser #browsers

2026-02-24

“The economic system is, in effect, a mere function of social organization”*…

A statue in the likeness of a police officer stands watch over a smart highway in Jinan, China, on April 18, 2024

The AI race is, of course, afoot. But while most headlines focus on the new capabilities and benchmarks achieved by competing developers, Jeremy Shapiro reminds us that the winners in this race won’t necessarily be the most objectively capable, but rather the players who most effectively integrate the technology into their organizations, economies, and societies…

Artificial intelligence has rapidly become a central arena of geopolitical competition. The United States government frames AI as a strategic asset on par with energy or defense and seeks to press its apparent lead in developing the technology. The European Union lags in platform power but seeks influence over AI through regulation, labor protections, and rule-setting. China is racing to catch up and to deploy AI at scale, combining heavy state investment with administrative control and surveillance.

Each of these rivals fears falling behind. Losing the AI race is widely understood to mean slower growth, military disadvantage, technological dependence, and diminished global influence. As a result, governments are pouring money into chips, data centers, and national AI champions, while tightening export controls and treating compute capacity as a strategic resource. But this familiar race narrative obscures a deeper danger. AI is not just another general-purpose technology. It is a force capable of reshaping the very meaning of work, income, and social status. The states that lose control of these social effects may find that technological leadership offers little geopolitical advantage.

History suggests that societies unable to absorb disruptive economic change become politically volatile, strategically erratic, and ultimately weaker competitors. The central question, then, is not only who builds the most powerful AI systems, but who can integrate them into society without triggering a societal backlash or an institutional breakdown.

Karl Polanyi’s The Great Transformation, published in 1944, explains why the capacity to “socially embed” new market forces determines national strength. By “embeddedness,” Polanyi meant that markets have historically been subordinate to social and political institutions, rather than governing them. The nineteenth-century idea of what he called a “self-regulating market” was historically novel precisely because it sought to “disembed” the economy from society and organize social life around price and competition rather than social obligation. As Polanyi put it in his most succinct formulation, “instead of economy being embedded in social relations, social relations are embedded in the economic system.”

Writing in the shadow of the Great Depression, Polanyi argued that the attempt in the nineteenth century to create a self-regulating market society that treated labor, land, and money as commodities generated social dislocation so severe that it provoked authoritarian backlash and geopolitical collapse. Stable orders, he insisted, required markets to be re-embedded in social and political institutions. Where they were not, societies sought protection by other means, which often translated into support for fascist or communist regimes that promised to tame the market. Today, it often means electing populist leaders who promise to break the entire existing order, both domestic and international.

Polanyi insisted that the idea of a “self-adjusting market implied a stark utopia” because such a system could not exist “for any length of time without annihilating the human and natural substance of society.” The interwar gold standard, for example, disciplined states in the name of efficiency, but it did so by transmitting economic shocks directly into social life. When democratic governments proved unable to shield their populations, they either abandoned the liberal economic order or turned authoritarian (or both)…

[Shapiro considers the history of the 20th century, in particular the rise of Nazi Germany; sketches the state of play in the AI arena; considers the challenge of embedding the changes that AI will bring in the U.S., Europe, and China; then teases out the ways in which this “industrial revolution” differs from its predecessors (in particular, the mobility of capital, the services-heavy (as opposed to manufacturing-heavy) character of employment today, and the accelerating pace of tech development). He concludes…]

… Geopolitical competition in the AI age will not take place solely in clean rooms or data centers. It will also involve the less visible realm of social institutions: labor markets, communities, social protections, and political legitimacy. Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing, often in rather spectacular fashion.

The West’s success in the Cold War owed much to its ability to reconcile capitalism with social protection. If the AI age is another “great transformation,” the same lesson applies. Chips matter. Data matters. But the ultimate source of power may be the capacity to re-embed technological change in society without sacrificing cohesion.

That is not a liberal-progressive distraction from geopolitical competition. It is its hidden core.

“The Next Great Transformation,” from @jyshapiro.bsky.social and @open-society.bsky.social.

For a complementary perspective (with special focus on the interaction between labor and the supply side of the economy) pair with: “Brave New World – a third industrial divide?” from @thunen.bsky.social in @phenomenalworld.bsky.social.

And see also: “AI and the Futures of Work,” from Johannes Kleske (@jkleske.bsky.social). A response to dramatic predictions of AI’s impact – most recently, Matt Shumer’s viral “Something Big Is Happening”: it’s a possible future, Kleske suggests, but only one possible future – and one that, while plausible, isn’t likely (at least outside the rarefied atmosphere of coding, in which Shumer operates). In a way that echoes Shapiro’s piece above, Kleske suggests that individuals need to better understand the technology in order to retain/regain some agency, and societies need the same kind of rekindled resistance to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around… Resonant with the thinking of Tim O’Reilly and Mike Loukides featured here before: “The best way to predict the future is to invent it”; and with Ted Chiang’s “ChatGPT Is a Blurry JPEG of the Web” and “Will A.I. Become the New McKinsey?” And then there’s the ever-illuminating Rusty Foster (riffing on Gideon Lewis-Kraus’ recent New Yorker piece): “A.I. Isn’t People.”

For a look at a high-value, trust-based use case for AI that seems to avoid the objections to AGI (and speak to Shapiro’s points), see “The Middle Game: Routers at the Edge,” from Byrne Hobart.

But back to AGI… as Nicholas Carr observes, we might understand Bostrom’s “paperclip maximizer” “not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?”

###

As we digest these developments, we might recall that it was on this date in 1962 that an early precondition for the revolution underway was first achieved: telephone and television signals were first relayed in space via the communications satellite Echo 1 – basically a big metallic balloon that simply bounced radio signals off its surface. Simple, but effective.

Forty thousand pounds (18,144 kg) of air would have been required to inflate the sphere on the ground, so it was inflated in space, where only a few pounds of gas sufficed to keep it inflated.

Fun fact: Echo 1 was built for NASA by Gilmore Schjeldahl, a Minnesota inventor probably better remembered as the creator of the plastic-lined airsickness bag.

source

#AI #artificialIntelligence #communications #communicationsSatellite #culture #Echo1 #GilmoreSchjeldahl #history #IndustrialRevolution #industrialRevolutions #KarlPolanyi #Polanyi #Science #society #Technology #TheGreatTransformation
2026-02-19

Interesting post by @simon on StrongDM’s approach to AI coding.

Teams using AI often “feel” faster, but measured delivery actually gets slower.

In their approach, instead of humans reviewing code, they validate the app’s behaviour at scale by extensively testing well-defined scenarios.

“Time saved by AI” is still a bad metric. Measuring end-to-end results is what matters, regardless of who is writing the code.

simonwillison.net/2026/Feb/7/s

#ai #artificialintelligence #vibecoding

2026-02-14

This is how your fugly genAI pictures and plagiarized articles are made: illegal, massively polluting, carcinogenic data centers built in poor neighborhoods. #genAI #AI #artificialIntelligence #LLM #environment #Musk www.theguardian.com/environment/...

‘A different set of rules’: th...

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report – AI (artificial intelligence) – The Guardian


Annual review highlights growing capabilities of AI models, while examining issues from cyber-attacks to job disruption

By Dan Milmo, Global technology editor, Tue 3 Feb 2026 00.00 EST

The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month.

Anthropic has released models with heightened safety measures. Photograph: Dado Ruvić/Reuters
1. The capabilities of AI models are improving

A host of new AI models – the technology that underpins tools like chatbots – were released last year, including OpenAI’s GPT-5, Anthropic’s Claude Opus 4.5 and Google’s Gemini 3. The report points to new “reasoning systems” – which solve problems by breaking them down into smaller steps – showing improved performance in maths, coding and science. Bengio said there has been a “very significant jump” in AI reasoning. Last year, systems developed by Google and OpenAI achieved gold-level performance in the International Mathematical Olympiad – a first for AI.

However, the report says AI capabilities remain “jagged”: systems display astonishing prowess in some areas but not in others. While advanced AI systems are impressive at maths, science, coding and creating images, they remain prone to making false statements, or “hallucinations”, and cannot carry out lengthy projects autonomously.

Nonetheless, the report cites a study showing that AI systems are rapidly improving at certain software engineering tasks, with the duration of tasks they can complete doubling every seven months. If that rate of progress continues, AI systems could complete tasks lasting several hours by 2027 and several days by 2030. This is the scenario under which AI becomes a real threat to jobs. But for now, says the report, “reliable automation of long or complex tasks remains infeasible”.
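The doubling-time claim can be sanity-checked with a quick back-of-the-envelope extrapolation. A minimal sketch, assuming (illustratively – the baseline is not a figure from the report) a task horizon of roughly one hour in early 2026:

```python
# Exponential extrapolation of AI task-length horizon, per the report's cited
# study: the duration of completable software tasks doubles every 7 months.
# The ~1-hour early-2026 baseline is an assumption for illustration only.

def task_horizon_hours(months_elapsed, baseline_hours=1.0, doubling_months=7.0):
    """Task-length horizon (hours) after `months_elapsed` of steady doubling."""
    return baseline_hours * 2 ** (months_elapsed / doubling_months)

for label, months in [("2027 (+12 months)", 12), ("2030 (+48 months)", 48)]:
    h = task_horizon_hours(months)
    print(f"{label}: ~{h:.1f} hours (~{h / 24:.1f} days)")
```

Under that assumed baseline the curve lands at roughly three hours by 2027 and around five days by 2030 – consistent with the report’s “several hours by 2027 and several days by 2030” scenario.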

Continue/Read Original Article Here: ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report | AI (artificial intelligence) | The Guardian

Tags: AI, Annual Report, Artificial Intelligence, Dan Milmo, International AI Safety Report, Risks, Safety, Safety Report, Senior Advisors, Summit, Survey, Tech Progress, The Guardian
#AI #AnnualReport #ArtificialIntelligence #DanMilmo #InternationalAISafetyReport #Risks #Safety #SafetyReport #SeniorAdvisors #Summit #Survey #TechProgress #TheGuardian
2026-02-13

This is exactly the kind of crap that just gets ignored because people are hell-bent on using AI for everything, at all costs. The world is going to hell in an AI handbasket.

#AI #ArtificialIntelligence

youtube.com/watch?v=LF4o4Z01Q0

Daniel Maurer (@Danma_md)
2026-02-13

A very informative video (not only) about the history of …
Very useful for pedagogical purposes.
youtu.be/BTvWy4Vti38?si=IXYdoJ
