Chill zone.
At the Melbourne state library.
Software architect and engineer, #Unix tragic, consulting/fractional CTO.
Enjoy open source, #cloud, distributed systems, #FreeBSD and #retro 8 bit computers. Movie buff, trivia nerd, team psychology curious.
Husband, father, Essendon football tragic, nature finder when I want to unwind. Originally from the Dandenong Ranges; I enjoy walks and long drives in the area.
Spent quite a bit of time in Montreal, Canada and loved the people and my time there #elbowsup
One of the hardest and most valuable things you can do as a company is the following:
1. Have a fully up to date org chart
2. Have a diagram that is not the org chart that accurately reflects how work flows through the company
3. Have an up to date and accurate diagram and explanation of what the company does and how it does it (architecture, revenue funnels, business value streams, code-bases)
Scaling decision making is *impossible* without a shared context to build alignment on.
Borland TurboVision (the PC text mode windowing UI used in Turbo Pascal/C++) has been open-sourced and updated to work seamlessly on Linux and with Unicode:
https://github.com/magiblot/tvision
It’s all in C++, though if someone hasn’t wrapped it in bindings for Python/Rust/&c. yet, surely they soon will.
Extra Instructions of the 65XX Series CPU
L: http://www.ffd2.com/fridge/docs/6502-NMOS.extra.opcodes
C: https://news.ycombinator.com/item?id=46169330
posted on 2025.12.05 at 19:38:50 (c=0, p=4)
Leaving Intel
L: https://www.brendangregg.com/blog//2025-12-05/leaving-intel.html
C: https://news.ycombinator.com/item?id=46167552
posted on 2025.12.05 at 16:27:04 (c=2, p=6)
Fraser Tweedale - champion open source guy in Australia - has logged a Freedom of Information request for the myGov code generator app:
https://www.pozible.com/project/mygov-app-source-code-foi-review-1
So we have a very simple app that should be based on standard, public TOTP algorithms, produced by the Australian government for use by the public. And yet they have fought at every turn to avoid releasing the source code to this app, to the point where Fraser has to stump up a fair bit of money to take the case to the Administrative Review Tribunal.
Personally I'm baffled that they felt the need to write their own, which is why I completely support Fraser in trying to see that this stuff is actually implemented securely and in the open, and can therefore be trusted to secure people's access to vital government services.
Go @hackuador!
FreeBSD 15.0 Now Available
The FreeBSD Project has announced the availability of FreeBSD 15.0-RELEASE, introducing updated toolchains, enhanced hardware support, improved security features, and key updates across the base system. This release continues the Project’s focus on stability, long-term maintainability, and consistent engineering.
We encourage you to review the release notes and upgrade guidance.
Read the full announcement: https://www.freebsd.org/releases/15.0R/announce/
The December 2nd, 2025 Jail/Zones Production User Call is up:
We discussed the #FreeBSD 15.0 release, the FreeBSD Foundation's outreach regarding projects prioritization, OCI container images, PkgBase and Poudriere, ZFS boot environments and "mountroot", DHCP in jails with devfs and epairs, a great demo of new Sylve jail features including Linux compatibility jails, a PkgBase jail creation tool, and more!
"Don't forget to slam those Like and Subscribe buttons."
You can support all Call For Testing efforts via BSD Fund: https://bsdfund.org
NEW VIDEO - FreeBSD 15.0-RELEASE is out!
https://youtu.be/xR0zjPtix50?si=PNkje5nqa0JgTO1Y via @YouTube
FreeBSD 15.0 is officially released! 🚀
Highlights include:
- Updated base system and OpenZFS 2.4
- Broad architecture support: amd64, aarch64, RISC-V
- Security and developer tool improvements
Read more & download: https://www.opensourcefeed.org/freebsd-15-0-released/
FreeBSD 15 released
https://www.freebsd.org/releases/15.0R/announce/
Release notes:
https://www.freebsd.org/releases/15.0R/relnotes/
Nothing can stop me from updating my devices to FreeBSD 15.0-RELEASE. The base system is so stable that I believe it will stay stable even after a major upgrade.
Forgive,
Be curious,
Ask for help,
Accept failure,
Try out new things,
Admire instead of envy,
Be patient with learning,
Find amusement in simplicity,
Not care about what others think,
Tell the truth to yourself and others.
Darius Ryan-Kadem
This is so true. I feel like this should be re-shared every single week day and twice on weekends! 🥰🥰
RE: https://infosec.exchange/@david_chisnall/115604184530371368
I've worked at SCI for a few months and it's a very nice place to be. https://mastodon.social/@david_chisnall@infosec.exchange/115604184606085511
For those who are skeptical that AI is a bubble, let's look at the possible paths from the current growth:
Scenario 1: Neither training nor inference costs go down significantly.
Current GenAI offerings are heavily subsidised by burning investor money; when that runs out, the prices will go up. Only 8% of adults in the US would pay anything for AI in products, and the percentage who would pay the unsubsidised cost is lower still. As the costs go up, the number of people willing to pay goes down, and the economies of scale start to erode.
End result: Complete crash.
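The erosion-then-crash dynamic in Scenario 1 can be sketched as a toy feedback loop (every number here is an invented assumption, not data): each round the vendor reprices to break even at its current user count, then demand responds to the new price. Below some fixed-cost level this settles at a stable paying user base; above it, there is no price that works and usage collapses.

```python
def demand(price: float, max_users: float = 1_000_000, choke_price: float = 100.0) -> float:
    """Toy linear demand curve: free -> everyone uses it; above choke_price, nobody pays."""
    return max(0.0, max_users * (1.0 - price / choke_price))

def settle(fixed_cost: float, var_cost: float = 5.0,
           users: float = 1_000_000.0, rounds: int = 50) -> float:
    """Repeatedly reprice to cover costs at the current scale, then re-read demand."""
    for _ in range(rounds):
        if users <= 0:
            return 0.0
        price = var_cost + fixed_cost / users   # break-even price at this user count
        users = demand(price)
    return users

# Modest fixed costs find a stable paying user base; large ones spiral to zero:
# price rises, users leave, per-user cost rises further, repeat.
print(settle(fixed_cost=10_000_000.0))   # settles around ~830k users
print(settle(fixed_cost=50_000_000.0))   # -> 0.0
```

The specific numbers are arbitrary; the point is structural: once subsidies stop, fixed costs divided over a shrinking user base can push the break-even price past what anyone will pay.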
Scenario 2: Inference costs remain high, training costs drop.
This one is largely dependent on AI companies successfully lobbying to make plagiarism legal as long as it's 'for AI'. They've been quite successful at that so far, so there's a reasonable chance of this.
In this scenario, none of the big AI companies has a moat. If training costs go down, the number of people who can afford to build foundation models goes up. This might be good for NVIDIA (you sell fewer chips per customer, but to more customers, and hopefully it balances out). OpenAI and Anthropic have nothing of value; they end up playing in a highly competitive market.
This scenario is why DeepSeek spooked the market. If you can train something like ChatGPT for $30M, there are hundreds of companies that can do it. If you can do it for $3M, there are hundreds of companies for which it would be a rounding error in their IT budgets.
Inference is still not at the break-even point, so costs go up, but for use cases where a 2X cost is worthwhile there's still profit.
End result: This is a moderately good case. There will be some economic turmoil, because a few hundred billion dollars has been invested in producing foundation models on the assumption that the models, and the ability to create them, constitute a moat. But companies like Amazon, Microsoft and Google will still be able to sell inference services at a profit. None will have lock-in to a model, so prices will drop to close to cost, though still higher than they are today. With everyone actually paying, there won't be such a rush to put AI in everything. The datacenter investment is not destroyed, because there's still a market for inference. Growth will likely stall, though, so I expect a lot of the speculative building to be wiped out. I'd expect this to push the USA into recession, but that is more the stock market catching up with economic realities.
Scenario 3: Inference costs drop a lot, training costs remain high.
This is the one that a lot of folks are hoping for, because it means on-device inference will replace cloud services. Unfortunately, most training is done by companies that expect to recoup that investment by selling inference. This is roughly the same problem as COTS software: you do the expensive thing (writing software / training) for free and then hope to make it up by charging for the thing that doesn't cost anything (copying software / inference).
We've seen that this is a precarious situation. It's easy for China to devote a load of state money to training a model and then give it away for the sole purpose of undermining the business model of a load of US companies (and this would be a good strategy for them).
Without a path to recouping their investment, the only people who can afford to train models have no incentive to do so.
End result: All of the equity sunk into building datacentres to sell inference is wasted. Probably close to a trillion dollars wiped off the stock market in the first instance. In the short term, a load of AI startups who are just wrapping OpenAI / Anthropic APIs suddenly become profitable, which may offset the losses.
But new model training becomes economically infeasible. Models become increasingly stale: in programming they insist on using deprecated or removed language features and APIs instead of their replacements; in translation they miss modern idioms and slang; in summarisation they don't work on documents written in newer structures; in search they don't know anything about recent events; and so on. After a few years, people start noticing that AI products are terrible, but none of the vendors can afford to make them good. RAG can slow this decline a bit, but at the expense of increasingly large contexts (which push up inference compute costs). This is probably a slow-deflate scenario.
Scenario 4: Inference and training costs both drop a lot.
This one is quite interesting because it destroys the moat of the existing players and also wipes out the datacenter investments, but makes it easy for new players to arise.
If it's cheap to train a new model and to do the inference, then a load of SaaS things will train bespoke models and do their own inference. Open-source / cooperative groups will train their own models and be able to embed them in things.
End result: Wipe out a couple of trillion from the stock market and most likely cause a depression, but end up with a proliferation of foundation models in scenarios where they're actually useful (and, if the costs are low enough, in a lot of places where they aren't). The most interesting thing about this scenario is that it's the worst for the economy, but the best outcome for the proliferation of the technology.
Variations:
Costs may come down a bit, but not much. This is quite similar to the no-change scenario.
Inference costs may come down but only on expensive hardware. For example, a $100,000 chip that can run inference for 10,000 users simultaneously, but which can't scale down to a $10 chip that can run the same workloads. This is interesting because it favours cloud vendors, but is otherwise somewhere between cheap and expensive inference costs.
Overall conclusion: There are some scenarios where the outcome for the technology is good, but the outcomes for the economy and the major players are almost always bad. And the cases that are best for widespread adoption of the technology are the ones that are worst for the economy. That's pretty much the definition of a bubble: a lot of money invested in ways that will result in losing it.
We (SCI Semiconductor) are about to hire some folks in the next couple of months (probably starting in January, since we're about to hit Christmas):
We're aiming to hire 1-3 FAEs, who can build out the open-source bits of the #CHERIoT software stack (including drivers / various communication stacks), build demos, and work with customers on use-case bringup.
We also want to hire someone else on the toolchain side. Primarily #LLDB + #OpenOCD, but also working with our #LLVM (and #RustC) folks.
Let me know if you're interested!
EDIT: We are a full-remote company. It's easiest for us to hire people in the UK (and one of our investors would really like us to hire more people in Sheffield), but elsewhere is possible (though might, for tax purposes, require you to be officially a contractor for a while).
We're also going to be hiring people for our hardware verification and RTL teams soon (more on the verification side than design at the moment, I think). I'm not responsible for them, but I can find out more details if anyone is interested. Our first CHERIoT chip is nearly finished; we're starting work on the second.
EDIT 2: Thanks to all of the people who have expressed interest (in public and private posts). I'll try to get back to you all next week!
EDIT 3: I hope I've replied to everyone now! If I missed you (there were more replies than I expected!) please let me know. I think we'll aim to do another hiring round over the summer next year, so if the current timeline doesn't work out for you, please still let me know and I'll keep you in mind next time!
How far along has tarfs gotten in #FreeBSD? I've been watching @dch 's talk on immutable FreeBSD from EuroBSDCon 2023 (1 hour wasn't enough to cover your talk!)
I've been looking into applying the principles, though sadly not for $day_job (they've no interest in my FreeBSD ideas) but for something else.
I'm curious which sockets were used in the jails. Was /tmp replaced with tmpfs in the jail and did you use a socket to talk to syslog-ng on the host?
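For anyone else wanting to try the arrangement the question describes, a jail.conf sketch along these lines should work (the jail name and paths are hypothetical; check the details against jail(8) and tmpfs(5)):

```
# /etc/jail.conf -- sketch only; "myjail" and the /jails paths are made up
myjail {
    path = "/jails/myjail";
    mount.devfs;                                    # devfs inside the jail
    # replace the jail's /tmp with a fresh tmpfs at jail start (fstab syntax)
    mount += "tmpfs $path/tmp tmpfs rw,mode=1777 0 0";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

For host logging, one common approach is to have the host's syslogd open an additional log socket under the jail root (syslogd's -l flag), so processes in the jail write to /var/run/log as usual while the daemon runs only on the host.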
Thousands of hacked Asus routers are under control of suspected China-state hackers
So far, the hackers are lying low, likely saving the access for later use.
https://arstechnica.com/security/2025/11/thousands-of-hacked-asus-routers-are-under-control-of-suspected-china-state-hackers/
“Architectural Theory Guardians: When junior developers or LLMs produce code, senior developers serve as the critical bridge between raw implementation and coherent system design. They can evaluate whether new code aligns with or violates the system’s theoretical foundation—not just technically, but conceptually. They understand the difference between code that works and code that belongs.”
https://cekrem.github.io/posts/programming-as-theory-building-naur/