Statewatch

We are activists, researchers, lawyers and journalists exposing state power across Europe and its borders. Our work has supported debates, movements and campaigns since 1991.

2025-05-02

Security AI is not neutral. It’s reinforcing the systems we already have — and who they exclude.

Facial recognition, risk scoring, behaviour prediction — EU governments are building AI tools into policing and migration systems already marked by racism, over-surveillance and discrimination.

Policymakers talk about “debiasing” these tools, but sidestep the real issue: AI is being deployed to reinforce structures of exclusion, not to dismantle them.

📓 Read our report: buff.ly/5QD3kXD

Stark image of a police barricade, with officers linking arms and wearing gas masks. Text over the image says "Security AI is not neutral. It’s reinforcing the systems we already have — and who they exclude."
2025-05-01

Earlier this week, we discussed whether or not we would work on May Day. Someone joked, "Well, it depends on how much we want to embody the labour movement that day."

We laughed—and all agreed we wouldn’t work.

Today, we honour the historic labour movements that won the rights so many of us rely on. But more than that, we act in solidarity with the movements that continue—led by workers pushing back against exploitation, inequality, policing, and more.

That's something we should all embody.

2025-04-30

1/ While our newest report focuses on the expanding use of AI in policing, border and immigration systems in Europe, its relevance stretches beyond EU borders.

The UK government has also vowed to “mainline AI,” and this is increasingly evident in its approach to asylum and immigration decision-making.

Some of its latest plans include using AI to:
• summarise asylum interview transcripts
• ‘speed up’ the retrieval of country guidance
• supposedly cut down on decision-making time and costs

The UK's latest plans for AI? Using it to summarise “lengthy” asylum interview transcripts and “streamline” asylum processing. All to ‘save an hour’... while at the same time ‘empowering’ law enforcement even more... Two strips of cut-out paper show pieces from a recent UK government news story with text highlighted in purple to document the claims. The full news story is at the link in the post.
2025-04-30

3/ Whether in the UK or the EU, the increasing use of AI in migration control — especially in tandem with expanded powers for law enforcement — raises fundamental questions:
• Accountability: Who oversees these systems, and how?
• Rights: How can individuals challenge automated decisions?
• Power: Who is shaping this infrastructure, and in whose interest?

🔗 Read our report, Automating Authority, for more: statewatch.org/automating-auth
🔗 See the recent UK announcement: gov.uk/government/news/sex-off

2025-04-30

2/ In Automating Authority, we document how comparable systems are being developed and deployed across the EU, raising similar concerns:
• Frontex, Europol and eu-LISA are using or testing AI to analyse data, assess asylum claims, and profile travellers
• The EU’s AI Act includes broad exemptions for security agencies which limit oversight and transparency
• These technologies are being embedded with little public debate or scrutiny

2025-04-29

1/ Artificial intelligence is no longer just a tech trend — it’s being embedded into Europe’s security systems, with far-reaching consequences for rights, democracy, and accountability.

Our new report, Automating Authority: Artificial intelligence in EU police and border regimes, maps out how AI is being introduced into immigration, asylum, policing, and criminal justice systems, often in secret and with minimal oversight.

2025-04-29

3/ Why it’s important:
This report is our attempt to shift the debate. To expose what's happening. And to ask the most basic democratic questions:

• Who has the power to build these systems?
• In whose interests?
• And who gets to hold them to account?

📓 Read the report: statewatch.org/automating-auth
✉️ Sign up for updates: statewatch.org/about/mailing-l

2025-04-29

2/ What we found:
• EU agencies are developing AI tools to profile travellers, assess asylum claims, and conduct biometric surveillance.

• These systems are disproportionately aimed at migrants, racialised communities, and people on the move.

• The EU’s AI Act provides sweeping exemptions for security use, sidestepping fundamental rights protections.

• A powerful new “security AI complex” is emerging, shaped by public-private partnerships, opaque decision-making, and massive public funding.

2025-04-28

Data protection is a powerful but often overlooked tool for challenging racist migration and asylum systems. We designed this workshop to help change that — and so far, participants have had wonderful things to say.

While we would love to continue offering these, this will sadly be our last live workshop.

If you haven't had the chance yet, register today! And please share with your network so we can reach everyone who would benefit.

🖋️ Register & find more info here: statewatch.org/publications/ev

2025-04-24

Last chance to join our Data Protection in Immigration & Asylum workshop! We’ve just added one last date for this popular session—don’t miss your chance to take part.

Whether you work in advocacy, law, or research, this workshop gives you the tools to navigate and challenge data practices in immigration and asylum systems.

When: Thursday, 8 May 2025, from 15:00–17:00 BST
Where to register: us02web.zoom.us/meeting/regist
Find more info here: statewatch.org/publications/ev

2025-04-22

Belgian police want to predict crime—but what they’re doing is reinforcing discrimination.

A two-year investigation by Statewatch, Ligue des droits humains, and Liga voor Mensenrechten found:
• A lack of transparency around the local & federal use of these systems
• Biased, outdated or unfounded data forming the basis of predictions
• Clear risks of discrimination against marginalised communities
• No meaningful oversight or safeguards

📖 Read the full report: buff.ly/2abnjon

2025-04-16

France’s new “war on drugs” law won’t stop drug trafficking. But it will expand invasive surveillance—especially targeting migrants, activists, and political dissent.

These powers include:
• remote activation of phones by police
• mass surveillance via black boxes sucking up internet and telecoms metadata

🔗 Read more: statewatch.org/analyses/2025/f

Hand holding a joint in front of a no entry sign. Text next to the image says "sensationalism.
surveillance.
repression.

France’s new “narcotraffic” law won’t succeed in combating drugs.
But it will increase the unjust surveillance and repression of migrants and activists.

Read analysis for more"
2025-04-10

1/ The UK Ministry of Justice’s "murder prediction" tool isn’t the only algorithmic system profiling people as future criminals.

The MoJ is also using an AI system to assess the ‘risk’ of re-offending—profiling over 1,300 people every day. This system uses sensitive data, including information about people’s mental health, addictions, and disabilities, to make flawed and potentially discriminatory predictions.

2025-04-10

3/ This growing reliance on algorithmic tools to make life-altering decisions is part of a wider trend: cutting welfare and targeting marginalised people, while pouring public money into surveillance technology.

Instead of investing in dodgy data and racist code, the government should be funding systems that support people’s safety and wellbeing—not criminalising them.

Read the full article on the MoJ’s re-offending risk tool:
statewatch.org/news/2025/april

2025-04-10

2/ Predictive tools like these are already being used across the criminal justice system. Police forces and state agencies are relying on automated profiles to justify suspicion, stop-and-search, arrests—and even potentially deadly force.

And now, the UK government wants to go even further by weakening legal safeguards around police use of automated decision-making. We joined over 30 organisations in urging them not to: statewatch.org/news/2025/april

2025-04-10

1/ No, this is not Minority Report. It’s worse.

After reading headline after headline comparing the findings from our investigation to the film #MinorityReport, we thought we'd set the record straight.

The UK Ministry of Justice is piloting a ‘Homicide Prediction’ tool—not in a sci-fi future, but right now. And no, it’s not powered by some psychic soup in a “Precrime” department. It’s a very real project, misusing people’s personal data to make biased assumptions and threatening our rights.

2025-04-10

3/ As our researcher Sofia Lyall puts it:
“Time and again, research shows that algorithmic systems for ‘predicting’ crime are inherently flawed. Yet the government is pushing ahead with AI systems that will profile people as criminals before they’ve done anything.”

Minority Report was fiction.
This is real surveillance dressed up as public safety.

Read more: statewatch.org/news/2025/april

2025-04-10

2/ Data from police & government bodies—including info on victims, suspects, witnesses, missing people, & those with safeguarding concerns—is being fed into an opaque algorithm to ‘predict’ who might commit a crime.

This kind of profiling is unlikely to prevent harm and far more likely to cause it.

The predictions made by this tool are not only flawed; they’re biased. Research shows that predictive policing tools reinforce systemic racism & structural injustice in the criminal justice system.

2025-04-09

The UK Ministry of Justice is running a secret data project which aims to ‘predict’ who will commit murder. The so-called predictive tool uses sensitive personal data, even considering "health data" to have "significant predictive power".

This means using data on people’s mental health, addiction, self-harm, suicide, vulnerability, and disability. All are supposedly considered relevant to 'predicting' who will commit murder.

A secret “murder prediction” system by the UK Ministry of Justice is using sensitive personal data on hundreds of thousands of people.
2025-04-09

The data comes from multiple police and judicial databases, known for institutional racism and bias. The data includes information on suspects, victims, witnesses, missing people, and people for whom there are safeguarding concerns.

A document we obtained says data on between 100,000 and 500,000 people was shared by Greater Manchester Police (GMP) to develop the tool.

Read more: statewatch.org/news/2025/april

See the coverage in the Guardian: theguardian.com/uk-news/2025/a
