Yehoshua Bar-Hillel, 1966
Interested in many things: PL from theory to implementation, plus logic, category theory, and distributed systems; more recently Rust.
Check out Mangle Datalog, a typed, datalog-based logic programming language and deductive database: https://codeberg.org/TauCeti/mangle-go http://codeberg.org/TauCeti/mangle-rs
#datalog #CategoryTheory #logic #types #systems #QueryLanguage #DistributedSystems
I'm writing this in English.
Not because English is my first language—it isn't. I'm writing this in English because if I wrote it in Korean, the people I'm addressing would run it through an outdated translator, misread it, and respond to something I never said. The responsibility for that mistranslation would fall on me. It always does.
This is the thing Eugen Rochko's post misses, despite its good intentions.
@Gargron argues that LLMs are no substitute for human translators, and that people who think otherwise don't actually rely on translation. He's right about some of this. A machine-translated novel is not the same as one rendered by a skilled human translator. But the argument rests on a premise that only makes sense from a certain position: that translation is primarily about quality, about the aesthetic experience of reading literature in another language.
For many of us, translation is first about access.
The professional translation market doesn't scale to cover everything. It never has. What gets translated—and into which languages—follows the logic of cultural hegemony. Works from dominant Western languages flow outward, translated into everything. Works from East Asian languages trickle in, selectively, slowly, on someone else's schedule. The asymmetry isn't incidental; it's structural.
@Gargron notes, fairly, that machine translation existed decades before LLMs. But this is only half the story, and which half matters depends entirely on which languages you're talking about. European language pairs were reasonably serviceable with older tools. Korean–English, Japanese–English, Chinese–English? Genuinely usable translation for these pairs arrived with the LLM era. Treating “machine translation” as a monolithic technology with a uniform history erases the experience of everyone whose language sits far from the Indo-European center.
There's also something uncomfortable in the framing of the button-press thought experiment: “I would erase LLMs even if it took machine translation with it.” For someone whose language has always been peripheral, that button looks very different. It's not an abstract philosophical position; it's a statement about whose access to information is expendable.
I want to be clear: none of this is an argument that LLMs are good, or that the harms @Gargron describes aren't real. They are. But a critique of AI doesn't become more universal by ignoring whose languages have always been on the margins. If anything, a serious critique of AI's political economy should be more attentive to those asymmetries, not less.
The fact that I'm writing this in English, carefully, so it won't be misread—that's not incidental to my argument. That is my argument.
RE: https://cosocial.ca/@timbray/116200896936074766
> When the person opening the PR gets credit for shipping and the reviewer bears the consequences of reviewing a bad merge, you have a structural problem no tool can solve.
In a pool of bad AI takes and cringy jokes, this is something you really want to read and reflect on. The issue predates AI, but AI is going to make it so much worse that conflict is nearly inevitable.
Good article on comptime and the comprehension cost of breaking parametricity https://noelwelsh.com/posts/comptime-is-bonkers/
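The comprehension cost can be sketched in a few lines. This is a toy illustration in Python, not an example from the article (which is about Zig's comptime): a parametric signature constrains behavior, while type inspection quietly removes that guarantee.

```python
# A parametric identity: a function of type "a -> a" can only return
# its argument, so callers can reason about it from the type alone.
def ident(x):
    return x

# A comptime-style identity that inspects the type of its argument.
# The signature no longer tells you what it does for your type; you
# have to read the body. That is the comprehension cost of breaking
# parametricity.
def ident_comptime(x):
    if isinstance(x, int):
        return x + 1  # special-cased for ints
    return x
```

Here `ident_comptime(3)` returns 4 while `ident_comptime("a")` returns `"a"` unchanged; nothing in the signature warned you.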
new very serious important post: on nominal typing in webassembly https://wingolog.org/archives/2026/03/10/nominal-types-in-webassembly
Being intentional seems to matter more than ever. We need to recalibrate how we view and deal with code in the various communities that formed around it. That is human-to-human work; the machines are not going to help with that.
Both views are necessary. Vibe coding goes after functionality without caring about properties, but you can also use LLMs as tools to establish properties, prove theorems, improve quality, write docs for human understanding...
The split I see is the one between a "functional" (the artifacts achieve what is needed) vs "structural" view (the artifacts are how we want them to be).
Sometimes, the code is a means to an end, sometimes the code itself is the objective, like a proof that we can overcome a set of constraints.
Programs are just sequences of tokens; we can and do ascribe meaning to those token sequences in more than one way.
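The functional/structural split above can be made concrete with a toy sketch (my own example in Python, not from the thread): two implementations that agree functionally while differing structurally, where the functional view is exactly the part a tool can check mechanically.

```python
import random

# Quick-and-dirty dedup: quadratic, but "achieves what is needed".
def dedup_vibe(xs):
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return out

# The same function written the way we might want the artifact to be:
# order-preserving dedup via a seen-set, linear time.
def dedup_structured(xs):
    seen = set()
    return [x for x in xs if not (x in seen or seen.add(x))]

# The "functional" view is mechanically checkable (e.g. with
# generated tests): both implementations agree on random inputs.
for _ in range(100):
    xs = [random.randrange(5) for _ in range(20)]
    assert dedup_vibe(xs) == dedup_structured(xs)
```

The structural judgment (which of the two artifacts we want to live with) is precisely what the equivalence check cannot settle.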
... and the results are in: "This is a no go" ¯\_(ツ)_/¯
Called out by an LLM that detected I had used another LLM when generating a kernel docs patch...
It pointed out, correctly, that there were some hallucinated identifiers. Fixed, checked harder, sent v2.
I enjoyed this article on open source in China by Kevin Xu ... especially the emphasis that people learn through open source and that open source is something that cannot be confined to the borders of one country https://interconnect.substack.com/p/chinese-open-source-a-definitive
@wingo you internalized 33 rpm / 45 rpm?
Here is a Deductive Database + LLM 3blue1brown guest video I find fascinating https://youtu.be/4NlrfOl0l8U?t=4m8s
When AI Writes the World’s Software, Who Verifies It?
https://leodemoura.github.io/blog/2026/02/28/when-ai-writes-the-worlds-software.html
"Writing a specification forces clear thinking about what a system must do, what invariants it must maintain, what can go wrong. This is where the real engineering work has always lived. Implementation just used to be louder."