#AIWarfare

Dr. Thompson (@rogt_x1997)
2025-06-07

🛫 AI jets now make kill decisions in under 0.3 seconds.
From DRL-powered maneuvers to predictive maintenance, aerial combat will never be the same.
💥 The future doesn’t fly — it calculates.
Read now:
medium.com/@rogt.x1997/the-alg

Ryan Hite (@religiousryan)
2025-05-28

Startups > defense giants?
AI, drones, and Silicon Valley are reshaping the battlefield.
Palantir, Anduril, and Helsing aren’t just tech firms—they’re the future of war.
Full story → ryanjhite.com/2025/05/28/the-n

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-05-24

"Big Tech’s dreams come true. After spending millions on campaign contributions and lobbying, the country’s biggest tech companies and their industry allies submitted artificial intelligence policy wish lists they want the Trump administration to grant — and some of those provisions made it into the “big beautiful” budget House Republicans advanced this morning. While slashing Medicaid, SNAP, and other aid programs, the current budget would set $1.2 billion aside for lucrative private-public AI contracts with the Defense Department. It would also effectively bar states from regulating AI.

Kiss the ring. Firms that responded to the Trump administration’s request for guidance on its AI regulatory framework — including Amazon, Google, Microsoft, venture capitalist firm Andreessen Horowitz, OpenAI, defense-technology company Palantir, Meta, Amazon-backed Scale AI, and trade association Data Center Coalition — have spent more than $26.4 million lobbying the Trump White House, lawmakers, and regulators on AI and other issues since January, disclosures show. Meanwhile, Meta’s Mark Zuckerberg, Amazon’s Jeff Bezos, and OpenAI’s Sam Altman each donated $1 million to Trump’s inauguration despite prior rocky relationships."

levernews.com/big-techs-big-be

#USA #Trump #GOP #AI #GenerativeAI #AIWarfare #DoD #Pentagon #BigTech #SiliconValley #Lobbying

2025-05-05

Anduril Industries, the AI-driven defense startup founded by Palmer Luckey, is acquiring Irish tech firm Klas to boost its battlefield AI capabilities. Klas’s rugged communication tech will power Anduril’s Lattice platform used to control autonomous drones and gather real-time combat intelligence.

#Anduril #AIWarfare #DefenseTech #PalmerLuckey #AutonomousSystems #MilitaryAI #TechAcquisition #NationalSecurity #Innovation #EthicalAI

Read Full Article Here :- techi.com/anduril-klas-acquisi

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-04-19

"Today marks one year since workers with No Tech for Apartheid staged sit-ins at Google offices to protest the use of our labor to power the genocide in Gaza, to demand an end to the harassment and discrimination of our Palestinian, Muslim, and Arab coworkers, and to pressure executives to address the workplace health and safety crisis that Nimbus has caused. Google retaliated against workers and illegally fired 50 Googlers—including many who did not participate directly in the action.

In the year since, Google has only deepened its commitment to being a military contractor. Two months ago, in order to take advantage of the federal contracts the corporation can gain under Trump, Google abandoned its pledge not to build AI for weapons or surveillance. In rapid succession, Google then acquired Israeli cloud security start-up Wiz, pursued partnerships with US Customs and Border Patrol to update towers by Israeli war contractor Elbit Systems with AI at the US-Mexico border, and launched an AI partnership with the largest war profiteer in the world: Lockheed Martin.

Lockheed Martin, Northrop Grumman, and Raytheon are no longer the only war corporations in town; Google and big tech are increasingly eating their lunch. Big tech companies are being pushed by the market to continue to bank returns. But having saturated the consumer and enterprise markets, corporations like Google, in a contentious arms race to dominate the cloud market, have identified the ever-ballooning so-called “defense” budgets of the US and other governments as major pots for profit.

One thing is clear: We urgently need an AI arms embargo."

thenation.com/article/society/

#USA #Google #BigTech #Surveillance #AIWarfare #ProjectNimbus

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-04-06

"A Microsoft employee disrupted the company’s 50th anniversary event to protest its use of AI.

“Shame on you,” said Microsoft employee Ibtihal Aboussad, speaking directly to Microsoft AI CEO Mustafa Suleyman. “You are a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you all celebrate when Microsoft is killing children. Shame on you all.”"

theverge.com/news/643670/micro

#AI #Microsoft #Israel #AIWarfare #Palestine

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-04-01

"AI firms are interested in developing tools and marketing strategies that revolve around the allure of AGI—around a stillborn god that will transform large swaths of society into excessively profitable enterprises and incredibly efficient operations. Think of it as a desperate attempt to defend capitalism, to preserve the status quo (capitalism) while purging recent reforms that purportedly undermine it (democracy, liberalism, feminism, environmentalism, etc.). Sam Altman, OpenAI’s co-founder, has repeatedly called for “a new social contract,” though most recently has insisted the “AI revolution” will force the issue on account of “how powerful we expect [AGI] to be.” It doesn’t take much to imagine that the new social contract will be a nightmarish exterminist future where AI powers surveillance, discipline, control, and extraction, instead of “value creation” for the whole of humanity.

The subsuming of art springs out of the defense of capitalism—more and more will have to be scavenged and cannibalized to sustain the status quo and somehow, someday, realize this supposedly much more profitable horizon. The ascendance of fascism comes with the purge—the attempt to rollback institutions and victories seen as shackles on the ability of capitalism to deliver prosperity (and limiters on the inordinate power and privilege for an unimaginably pampered and cloistered elite).

Both are part and parcel to what’s going on, but one project is objectively more dangerous (and ambitious) than the other. In that way, then, all of this is a distraction."

thetechbubble.substack.com/p/d

#AI #GenerativeAI #OpenAI #Marketing #AISlop #AIArt #Surveillance #AGI #Capitalism #AIWarfare #PoliceState

Cornelia Es Said (@krautart@todon.eu)
2025-03-09

Apologies for being quiet for so long. I've been busy with my latest project, "Voices of the Unseen", which has become a collective video by and with 13 international artists that I'm still editing (more on that as soon as the video is finished).

Every now and then, though, there are topics that simply demand attention, as is the case right now: AI warfare. #KI_im_Krieg #aiwarfare
I sent ChatGPT o3.mini-high on a "Deep Research" run so that we could then write this article together:

krautart.de/ki-krieg-die-unsic

For English-speaking folks, there is an earlier article that covers a related topic: the code-driven battlefield:
krautart.de/ai-warfare-ethics/

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-02-18

"This troubling decision to potentially profit from high-tech warfare, which could have serious consequences for real lives and real people, comes after criticism from EFF, human rights activists, and other international groups. Despite its pledges and vocal commitment to human rights, Google has faced criticism for its involvement in Project Nimbus, which provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. EFF said in 2024, “When a company makes a promise, the public should be able to rely on it.” Rather than fully living up to its previous human rights commitments, it seems Google has shifted its priorities.

Google is a company valued at $2.343 trillion that has global infrastructure and a massive legal department and appears to be leaning into the current anti-humanitarian moment. The fifth largest company in the world seems to have chosen to make the few extra bucks (relative to the company’s earnings and net worth) that will come from mass surveillance tools and AI-enhanced weapons systems."

eff.org/deeplinks/2025/02/goog

#AI #AIWarfare #Google #BigTech #MassSurveillance

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-02-05

"Google on Tuesday updated its ethical guidelines around artificial intelligence, removing commitments not to apply the technology to weapons or surveillance.

The company’s AI principles previously included a section listing four “Applications we will not pursue.” As recently as Thursday, that included weapons, surveillance, technologies that “cause or are likely to cause overall harm,” and use cases contravening principles of international law and human rights, according to a copy hosted by the Internet Archive.

A spokesperson for Google declined to answer specific questions about its policies on weapons and surveillance, but referred to a blog post published Tuesday by the company’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika.

The executives wrote that Google was updating its AI principles because the technology had become much more widespread and there was a need for companies based in democratic countries to serve government and national security clients."

washingtonpost.com/technology/

#AI #AIWarfare #Surveillance #Google #BigTech #SiliconValley #Oligopolies

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-01-29

"As most people who have played with a large language model know, foundation models frequently “hallucinate,” asserting patterns that do not exist or producing nonsense. This means that they may recommend the wrong targets. Worse still, because we can’t reliably predict or explain their behavior, the military officers supervising these systems may be unable to distinguish correct recommendations from erroneous ones.
Foundation models are also often trained and informed by troves of personal data, which can include our faces, our names, even our behavioral patterns. Adversaries could trick these A.I. interfaces into giving up the sensitive data they are trained on.

Building on top of widely available foundation models, like Meta’s Llama or OpenAI’s GPT-4, also introduces cybersecurity vulnerabilities, creating vectors through which hostile nation-states and rogue actors can hack into and harm the systems our national security apparatus relies on. Adversaries could “poison” the data on which A.I. systems are trained, much like a poison pill that, when activated, allows the adversary to manipulate the A.I. system, making it behave in dangerous ways. You can’t fully remove the threat of these vulnerabilities without fundamentally changing how large language models are developed, especially in the context of military use.

Rather than grapple with these potential threats, the White House is encouraging full speed ahead."

nytimes.com/2025/01/27/opinion

#AI #GenerativeAI #AIWarfare #CyberSecurity

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-01-23

"Google employees have worked to provide Israel’s military with access to the company’s latest artificial intelligence technology from the early weeks of the Israel-Gaza war, according to documents obtained by The Washington Post.

The internal documents show Google directly assisting Israel’s Defense Ministry and the Israel Defense Forces, despite the company’s efforts to publicly distance itself from the country’s national security apparatus after employee protests against a cloud computing contract with Israel’s government.

Google fired more than 50 employees last year after they protested the contract, known as Nimbus, over fears it could see Google technology aid military and intelligence programs that have harmed Palestinians.

In the weeks after the Oct. 7, 2023, attack on Israel by Hamas militants, a Google employee in its cloud division escalated requests for increased access to the company’s AI technology from Israel’s Defense Ministry, the documents obtained by The Post show.

The documents, which detail projects inside Google’s cloud division, indicate that the Israeli ministry urgently wanted to expand its use of a Google service called Vertex, which clients can use to apply AI algorithms to their own data."

washingtonpost.com/technology/

#AI #Google #Israel #AIWarfare #BigTech #Palestine #Gaza

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2025-01-21

"Access Now welcomes the ceasefire in the Gaza Strip and urges for a permanent end of hostilities. While the cessation of physical violence is a critical first step, it is far from sufficient. All parties must also commit to a “digital ceasefire” — an immediate stop to online violence, cyberattacks, and deliberate targeting of communication infrastructure which worsen the suffering of civilians.

Israel’s war on Gaza has been defined not only by the devastating physical violence — claiming over 46,000 lives and destroying close to 70% of its infrastructure — but also by the weaponization of technology in unprecedented ways. Internet shutdowns, censorship, disinformation, genocidal rhetoric, and the deployment of artificial intelligence (AI) for indiscriminate attacks have transformed the region’s cyberspace into a battlefield. These digital assaults have intensified the humanitarian crisis, silenced critical voices, and created barriers to delivering life-saving aid in Palestine. "

accessnow.org/press-release/ce

#Palestine #Gaza #Israel #HumanRights #AI #AIWarfare #DigitalRights #InternetShutdowns

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-12-22

"On November 7th, we published an op-ed titled “Daniela Rus, The People Demand: No More Research for Genocide” in the MIT Tech. Our piece detailed how Prof. Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory, uses Israeli Ministry of Defense money to develop algorithms with applications in “multirobot security defense and surveillance.” Rather than engage with these publicly verifiable facts, the Tech’s editorial board (under the guidance of Prof. Rus) retracted our op-ed.

MIT sent several of us “no contact” and “no harassment” orders for Prof. Rus, disciplining one student for simply writing our Op-Ed’s title on a public chalkboard! As if this naked intimidation wasn’t enough, the Tech indefinitely halted all Op-Eds after retracting our piece. This comes directly after the suspension and effective expulsion of MIT PhD student Prahlad Iyengar, in part due to an email he sent Professor Rus’ students “offering support” and a “safe space” to discuss her research.

We refuse to be intimidated by MIT. Professor Rus takes money from a genocidal army to do research with military applications (stated in her own papers here, here and here). Retractions and suspensions cannot change these simple facts. Here, we republish our article in full:"

mondoweiss.net/2024/12/despite

#USA #MIT #AI #Surveillance #AIWarfare #Israel #Palestine

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-12-12

"In the context of unprecedented U.S. Department of Defense (DoD) budgets, this paper examines the recent history of DoD funding for academic research in algorithmically based warfighting. We draw from a corpus of DoD grant solicitations from 2007 to 2023, focusing on those addressed to researchers in the field of artificial intelligence (AI). Considering the implications of DoD funding for academic research, the paper proceeds through three analytic sections. In the first, we offer a critical examination of the distinction between basic and applied research, showing how funding calls framed as basic research nonetheless enlist researchers in a war fighting agenda. In the second, we offer a diachronic analysis of the corpus, showing how a 'one small problem' caveat, in which affirmation of progress in military technologies is qualified by acknowledgement of outstanding problems, becomes justification for additional investments in research. We close with an analysis of DoD aspirations based on a subset of Defense Advanced Research Projects Agency (DARPA) grant solicitations for the use of AI in battlefield applications. Taken together, we argue that grant solicitations work as a vehicle for the mutual enlistment of DoD funding agencies and the academic AI research community in setting research agendas. The trope of basic research in this context offers shelter from significant moral questions that military applications of one's research would raise, by obscuring the connections that implicate researchers in U.S. militarism."

arxiv.org/abs/2411.17840

#AI #DoD #USA #AIWarfare #MilitaryAI #DARPA

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-12-06

"At the start of 2024, OpenAI’s rules for how armed forces might use its technology were unambiguous.

The company prohibited anyone from using its models for “weapons development” or “military and warfare.” That changed on January 10, when The Intercept reported that OpenAI had softened those restrictions, forbidding anyone from using the technology to “harm yourself or others” by developing or using weapons, injuring others, or destroying property. OpenAI said soon after that it would work with the Pentagon on cybersecurity software, but not on weapons. Then, in a blog post published in October, the company shared that it is working in the national security space, arguing that in the right hands, AI could “help protect people, deter adversaries, and even prevent future conflict.”

Today, OpenAI is announcing that its technology will be deployed directly on the battlefield.

The company says it will partner with the defense-tech company Anduril, a maker of AI-powered drones, radar systems, and missiles, to help US and allied forces defend against drone attacks."

technologyreview.com/2024/12/0

#AI #OpenAI #AIWarfare #Cybersecurity #DroneWarfare

Miguel Afonso Caetano (@remixtures@tldr.nettime.org)
2024-11-18

"Meta’s open large language model family, Llama, isn’t “open-source” in a traditional sense, but it’s freely available to download and build on—and national defense agencies are among those putting it to use.

A recent Reuters report detailed how Chinese researchers fine-tuned Llama’s model on military records to create a tool for analyzing military intelligence. Meta’s director of public policy called the use “unauthorized.” But three days later, Nick Clegg, Meta’s president of public affairs, announced that Meta will allow use of Llama for U.S. national security.

“It shows that a lot of the guardrails that are put around these models are fluid,” says Ben Brooks, a fellow at Harvard’s Berkman Klein Center for Internet and Society. He adds that “safety and security depends on layers of mitigation.”"

spectrum.ieee.org/ai-used-by-m

#AI #GenerativeAI #AIWarfare #AISafety #DoD #Meta #Llama