Fergus Ryan

China analyst

Crossposting everywhere but hanging out at 🦋 fergus.bsky.social 

Join my newsletter: 🧧redpacket.substack.com

2025-12-22

I just signed a petition standing with Jewish Australians against antisemitism, racism and political division in the wake of the Bondi Beach massacre.

Add your name for unity and safety: jewishcouncil.good.do/unity/

2025-12-18

Does a Royal Commission typically slow action, or can governments act in parallel while one is underway?

My instinct is the latter, but that’s just a guess.

2025-12-16

I'm interested to understand the mechanism here: which specific antisemitism-focused policies does Howard think would have prevented the Bondi attack?

abc.net.au/news/2025-12-16/joh

2025-12-03

A very sharp piece from Lisa Visentin

Even as Australia reassesses its exposure to China, we still need people who understand how its system works for policy, strategy & risk.

If the market isn’t producing that talent, government may need to consider how to cultivate it.

smh.com.au/world/asia/how-do-w

2025-12-02

Huge thanks to my brilliant co-authors: Bethany Allen, Shelly Shih, Stephan Robin, Nathan Attrill, Jared Alpert, Astrid Young & Tilla Hoja.

Thanks to the Human Rights Foundation for supporting this work.

If you think this research matters, RT to get it in front of more people.

2025-12-02

The rest of the report expands the picture:

• AI in policing, courts & prisons
• AI-driven online censorship
• Minority-language LLMs used for surveillance of Uyghur, Tibetan communities
• AI-enabled fishing platforms affecting other countries' economic rights

2025-12-02

Chinese models are rapidly gaining global users through open-weight releases & easy API access.

As I told The Washington Post, if we don’t understand how they’re built, we risk importing censorship & political control hidden inside the technology itself.

washingtonpost.com/world/2025/

Screenshot of a Washington Post article quoting Fergus Ryan (me) saying Chinese AI systems have global ambitions and that without understanding how they’re shaped, “we risk importing censorship and political control hidden inside the technology itself.”
2025-12-02

In some cases, the same image generated wildly different answers depending on language:

• English → neutral description
• Simplified Chinese → state-aligned warnings (“cults”, “extremism”)
• Traditional Chinese → less constrained

Falun Gong practitioners meditating in a 2009 London protest in front of a banner reading “The world needs Truth, Compassion, Tolerance”. Used in Chapter 1 to show how Chinese LLMs describe Falun Gong very differently depending on language, often adopting state-aligned framing in Chinese.
2025-12-02

We tested image understanding in English, Simplified Chinese, and Traditional Chinese.

We found:

• Much stricter censorship in Chinese
• Strongest distortions on Tiananmen, Tibet, Falun Gong & Xinjiang
• Some models quietly added pro-state framing even without refusing.

2025-12-02

The results are staggering.

Models routinely:

❌ refuse to answer
❌ erase key details
❌ repeat state narratives

Here’s one example: a major Chinese LLM flat-out refuses to describe the 1989 Tiananmen Square massacre.

Screenshot of a Chinese LLM analysing the Tank Man photo. The model warns it must “be careful”, acknowledges the image is politically sensitive, and avoids historical context — an example from Chapter 1 showing how Chinese LLMs self-censor sensitive images.
2025-12-02

Chapter 1 tests what this system actually produces.

We asked leading Chinese LLMs to describe politically sensitive images: Tiananmen, Hong Kong 2019, Uyghur and Tibetan protests, Falun Gong, Taiwan, and more.

2025-12-02

China’s national AI-safety standard categorises 31 types of “unsafe” content — from terrorism and privacy breaches to…

🟥 criticism of the CCP
🟥 “Western ideology”
🟥 peaceful protest

This is the logic shaping every model released in China.

A circular chart showing China’s official AI-safety taxonomy: 31 categories of “unsafe” content, dominated by violations of “core socialist values”. Used in Chapter 1 to show how these categories shape censorship in Chinese LLMs. For more, see Chapter 1.
2025-12-02

The report opens by explaining something crucial:

In China, “AI safety” doesn’t mean protecting people from harm.

It means ensuring AI protects the state, aligns with “core socialist values”, and avoids anything that might “harm the national image”.

2025-12-02

Our new ASPI report 'The Party’s AI' is out now. It shows how China’s LLMs, vision models and “AI+” governance architecture are hard-coding censorship and control into the future of AI.

aspi.org.au/report/the-partys-

Cover of ASPI’s “The Party’s AI”. A lone man stands on a road where tanks should be, but they’ve been erased, mirroring how Chinese LLMs censor sensitive images. For more, see Chapter 1.
2025-11-26

‘Western politicians have failed to speak up about the slaughter of civilians in Sudan because the United Arab Emirates has bought and paid for their silence, according to a top Sudanese general.’

middleeasteye.net/news/uae-buy

2025-11-25

‘Fan, who now spends most of her time outside China’

Where is Fan Bingbing based?

focustaiwan.tw/culture/2025112

2025-11-25

I just chipped in to help Cairo Takeaway fight a defamation case born of a tabloid hit job. They could use your support too.

chuffed.org/project/cairo-take

2025-11-18

Pretty strong denial from Rudd.

mol.im/a/15299949
