@FandaSin @cleverboi @BrodieOnLinux @neal I think #i486 is a good baseline because that's the furthest some systems can go!
@cleverboi @FandaSin @BrodieOnLinux @neal as for #i386 I understood why, cuz it was getting more and more painful, but the problem with #i486 is that in several #embedded and #industrial setups there are still newly deployed systems based on it.
E.g. #Vortex86 #SoCs, cuz #MSDOS and shit still gets used in #industrial equipment.
Linux stopped supporting i386 with versions 3.4.99 (longterm) & 3.6.9 respectively. And unlike i386, which none of the toolchain (#musl) and utilities (#toybox) support, i486 is still supported there. And I really want to continue developing a minimalist "rescue" distro that can handle such legacy hardware, because it may be the only option to ddrescue stuff from certain systems or to properly & reproducibly back up & restore them!
To me @OS1337 is just an attempt at a minimalist #Linux distro, because I want some #reproducible & #auditable #firmware for various other projects, and both @yoctoproject and #RaspberryPiOS 'lite' seem rather excessive to me.
I just think it can have serious benefits, being less distracting and allowing me (and others) to use basically any hardware to get work done...
@lmemsm basically the idea behind it is to be a brutally simple #toybox + #musl / #linux distro that grew out of the necessity for me to actually think about #firmware for some projects.
Basically I want something that is so simple and auditable that it's practical to make it pass any #verification demands for #SecureTerminal|s in #CriticalInfrastructure and #Communications.
Not to mention a #GNUfree #Linux distro is the way to go if I want that thing to not get constantly bricked by minor #GlibC changes...
I hope that answers your question...
Things #musl libc will never do (broad but not comprehensive):
- Nag you to update.
- Phone home to check if it should nag you to update.
- Tell you a CVE can't be fixed without updating to the latest version.
- Try to force you to switch from glibc to musl.
- Get other software you depend on dependent on musl.
- Rant against "wokeness" or "DEI".
- Integrate "AI" into your libc.
- Give you up.
- Let you down.
Running Android Platform tools (adb, fastboot, etc) on musl. #android #musl #Linux
https://gist.github.com/lidgnulinux/eeb1d1b881edf8a10df7d0861e31eb66
Details on progress in this post to the #musl mailing list: https://www.openwall.com/lists/musl/2025/06/11/1
Actual draft code repo is now up too, at https://github.com/richfelker/musl-uca-draft
Update on #musl LC_COLLATE work: NFD code is functional and Unicode test vectors for normalization to NFD all pass.
.rodata 14161 bytes
.text 1089 bytes
🎉
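To make the NFD part concrete: normalization to NFD means recursively applying the canonical decompositions from UnicodeData.txt, then sorting each run of combining marks by canonical combining class. Here's a toy sketch in C with a few hardcoded sample mappings; it's not musl's code, which packs the full Unicode data into the compact tables whose sizes are quoted above.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Canonical decompositions for a few sample codepoints, taken from
 * UnicodeData.txt; the real table covers thousands of entries. */
static const struct { uint32_t c, d[2]; } decomp[] = {
	{ 0x00E9, { 0x0065, 0x0301 } }, /* é -> e + combining acute */
	{ 0x00EA, { 0x0065, 0x0302 } }, /* ê -> e + combining circumflex */
	{ 0x1EBF, { 0x00EA, 0x0301 } }, /* ế -> ê + acute (recursive) */
};

/* Canonical combining class for the sample marks; 0 means "starter". */
static int ccc(uint32_t c)
{
	switch (c) {
	case 0x0301: return 230; /* combining acute accent */
	case 0x0302: return 230; /* combining circumflex accent */
	case 0x0323: return 220; /* combining dot below */
	default:     return 0;
	}
}

/* Recursively decompose one codepoint into out[], returning the count. */
static size_t decompose(uint32_t c, uint32_t *out)
{
	for (size_t i = 0; i < sizeof decomp / sizeof *decomp; i++)
		if (decomp[i].c == c) {
			size_t n = decompose(decomp[i].d[0], out);
			return n + decompose(decomp[i].d[1], out + n);
		}
	out[0] = c;
	return 1;
}

/* Canonical ordering: stable insertion sort of combining marks by ccc,
 * never moving a mark across a starter (ccc 0). */
static void reorder(uint32_t *s, size_t n)
{
	for (size_t i = 1; i < n; i++)
		for (size_t j = i; j > 0 && ccc(s[j]) && ccc(s[j-1]) > ccc(s[j]); j--) {
			uint32_t t = s[j]; s[j] = s[j-1]; s[j-1] = t;
		}
}

int main(void)
{
	/* U+1EBF, then "q" + acute + dot below: NFD must fully decompose
	 * the former and reorder the marks of the latter (220 before 230). */
	uint32_t in[] = { 0x1EBF, 0x0071, 0x0301, 0x0323 }, out[16];
	size_t n = 0;
	for (size_t i = 0; i < sizeof in / sizeof *in; i++)
		n += decompose(in[i], out + n);
	reorder(out, n);
	for (size_t i = 0; i < n; i++)
		printf("U+%04X ", (unsigned)out[i]);
	putchar('\n'); /* U+0065 U+0302 U+0301 U+0071 U+0323 U+0301 */
	return 0;
}
```

The Unicode test vectors mentioned above exercise exactly these two steps, including the tricky interactions between recursive decomposition and mark reordering.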
This thread by @ariadne on Treehouse's intent to defederate from Fosstodon lays out more details, in more diplomatic terms:
https://social.treehouse.systems/@ariadne/114632131776730666
I (@dalias) have tried my best to engage with the new Fosstodon admin in good faith on what mistakes they're making, only to be met with assertions that their comfort is more important than marginalized folks' safety on their instance. I cannot in good conscience ask folks to maintain a relationship with Fosstodon in order to follow #musl libc updates on the fedi.
Well they *are* making their own versions of those two macros. I'd already read the ostensible rationale for it, and it seemed poor.
They have a problem with their own code's headers having lots of cross-dependencies. (Strong coupling and low cohesion: a long-standing #systemd problem.) That's not a reason for fiddling with the #StandardC library, let alone making one's own FILE and DIR macros.
I understood Daniel J. Bernstein's avoidance of the Standard C library, especially strings and standard I/O which had been rife with pointer mis-uses for decades.
But this is not that.
Why on Earth would one continue to use stdio but roll one's own version of the GNU C library's FILE and DIR macros?
https://github.com/systemd/systemd/issues/37779#issuecomment-2952813363
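For context on why DIY FILE is a dead end: ISO C leaves FILE implementation-defined (it may well be a typedef of an anonymous struct), so there is no portable forward declaration for it; the one conforming way for a header to name FILE is to include <stdio.h>. A minimal sketch of the conforming pattern, where dump_state is a hypothetical API, not anything from systemd:

```c
#include <stdio.h> /* the only portable source of the FILE type */

/* dump_state: hypothetical helper taking a caller-supplied stream */
static int dump_state(FILE *out)
{
	return fprintf(out, "state: ok\n") < 0 ? -1 : 0;
}

int main(void)
{
	return dump_state(stdout) ? 1 : 0;
}
```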
All I want is just a collection of #binutils, #GCC, #llvm+#clang, #glibc and #musl that are "free standing" / relocatable, which I can pack into a #squashfs image to carry around to my various development machines.
You'd think that for something as fundamental as compiler infrastructure with over 60 years of knowledge, the whole bootstrapping and bringup process would have been super streamlined, or at least mostly pain free by now.
Yeah, about that. IYKYK
@wyatt Also, glancing at the #PC98 architecture and the specific quirks that #Linux accounts for in menuconfig, it's most likely not gonna be enough to "just boot the #i486 version of @OS1337"...
https://github.com/OS-1337/OS1337/blob/main/OS1337-core-prerelease.img
As for the specifics of the #PC9821+ and their detailed hardware, I'm clueless beyond the Wikipedia article and can only assume anything before a (RAM-upgraded) #9821Ap & #9801BA is off the table and a #9821Af or better is desirable, as the 14.6MB RAM limit will really become a problem quickly. I'm pretty sure it's impossible to get linux-6.6.6 to run on anything below 8MB at all, and most recommendations hint that 16MB is more often than not a practical minimum (tho that may be because 10/12/14 and other "asymmetric" configurations were already avoided back then)...
On the flipside, I did manage to install #WindowsXP on machines with 32MB RAM and actually get them to display a desktop (albeit with seconds per frame instead of frames per second), so this is more likely a question of stripping things down.
Or just dd something onto a #BlueSCSI or similar...

#Alpine 3.22.0 has been released (#AlpineLinux / #musl / #BusyBox / #Linux) https://alpinelinux.org/
So far, I've gotten the tables needed for Unicode normalization to NFD (decomposed) from an initial estimate of about 30k, which was already *really small* compared to how this is usually done, down to an estimate of about 10k. This was done by analyzing the data to be represented and adapting the data structures, building on the concepts used for other Unicode data in #musl.
13/N
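For the curious, the classic trick behind compact Unicode tables is multi-stage lookup: codepoints are split into fixed-size blocks, a first-stage array maps block number to block id, and identical blocks are stored only once. A generic toy sketch in C, not musl's actual tables or data:

```c
#include <stdio.h>
#include <stdint.h>

#define BLOCK 256 /* codepoints per block */

/* stage1 maps block number -> block id; stage2 holds the deduplicated
 * blocks. Real implementations generate these as const arrays at build
 * time; the savings come from most blocks being identical (here: all
 * zero) and therefore shared. */
static uint8_t stage1[0x110000 / BLOCK];
static uint8_t stage2[2][BLOCK];

static void build_toy_tables(void)
{
	/* toy property: 1 for the combining marks 0x0300..0x036F only */
	for (int i = 0; i <= 0x6F; i++)
		stage2[1][i] = 1;
	stage1[0x0300 / BLOCK] = 1; /* block 3 -> block id 1 */
	/* all other stage1 entries stay 0 -> the shared all-zero block */
}

static int lookup(uint32_t c)
{
	if (c >= 0x110000) return 0;
	return stage2[stage1[c / BLOCK]][c % BLOCK];
}

int main(void)
{
	build_toy_tables();
	printf("%d %d %d\n", lookup(0x41), lookup(0x0301), lookup(0x1F389));
	/* prints: 0 1 0 */
	return 0;
}
```

With block-sized chunks like this, the vast majority of blocks are duplicates of a handful of patterns, which is where the size wins come from.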
In an age where folks don't bat an eye at embedding a friggin outdated, insecure fork of Chrome into every desktop application, hundreds of kB might not seem like a big deal. In some contexts it is, and in others it isn't.
But what is a big deal is the perception by folks who don't want that cost, that "support for all those other weird languages is a burden I don't want bloating my system". It feeds a fascist-adjacent, xenophobic narrative that blames cultural "others" for "my software is bloated".
And from the beginning, my goal with #musl has been that this never be a tradeoff. That multilingual stuff just work, basically just as efficiently as your old ASCII-only or 8-bit-codepage stuff did.
12/N
I'll get into the full scope a bit later, but the big technical part of the sponsored locale project is extending this kind of compact and efficient Unicode data representation that's been characteristic of #musl to some of the more challenging parts.
Up til now, we have not had any support for LC_COLLATE (collation/sort order) beyond just simple byte-by-byte comparison, which is not reasonable for the majority of the world's languages.
This is about to change, but it has some big prerequisites...
9/N
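A minimal sketch of what byte-by-byte comparison means in practice, with strcmp() standing in for a strcoll() that has no collation data:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *a = "\xC3\xA9tude"; /* "étude" in UTF-8 */
	const char *b = "zebra";

	/* byte-by-byte: the UTF-8 lead byte 0xC3 > 'z' (0x7A), so "étude"
	 * sorts after "zebra", although collations for Latin-script
	 * languages treat é like e and would sort it long before z */
	printf("%s\n", strcmp(a, b) > 0 ? "étude > zebra" : "étude < zebra");
	return 0;
}
```

Every non-ASCII-initial word sorts after all of ASCII purely because of its UTF-8 lead byte, which is nothing like what users of most of the world's languages expect.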