I [re-]started doing weekly livestreams of my OSS development. And to accompany it, I made a ko-fi for anyone who wants to encourage me!
I did several test runs last week. I’m currently mostly working on something called Tripoli, which is used by people in science and medicine. I’m also doing some work on my audio API, Grig. I explain what I’m working on at the beginning of each stream, even when it means having to repeat myself. I might do other fun things on Friday streams, like playing indie games. Sometimes I do my cute cartoon avatar character (Tamsynne the Dryad), sometimes I just use a webcam.
I try to do this at least once a week. This week will probably be an exception due to logistics. I’m streaming at https://www.twitch.tv/osakared, and old videos that I feel like saving go on the YouTube playlist. Feel free to tune in and heckle me about my tech decisions.
If you want to help a non-binary trans fem (me) pay for laser hair removal this #PrideMonth, I made a goal on ko-fi for it. Every little bit helps, especially if it’s recurring. I have ways of keeping costs down and can do just some areas for now.
I ended up doing one test run on Monday and another just now. I’ll keep playing with my streaming setup throughout the week until I like it. The plan is to do these at least once a week.
I have income now, but if you donate to my ko-fi it helps me justify spending some time on this and other OSS projects that I don’t have paying clients for (like my audio libs). It’s also currently my way of trying to raise money for laser hair removal. See the Goal on my ko-fi.
It also just helps for people to watch these and give me feedback. E.g., can you understand my effected voice, or would you prefer I not use the character? Or feedback on my coding (like my pair coding partner gives me!). I may also do other content, such as playing indie games or pair coding with friends on the stream.
#MutualAid
RE: https://mastodon.social/@thomasjwebb/114699389908227551
Let me say something brought about by controversies over training data that I would hope both pro-IP and anti-IP people can agree with - if you are a tech billionaire, you almost certainly are a beneficiary of intellectual property. If you are in that position and publicly take an anti-IP position now because you see how it stymies training, then from the perspective of your stated beliefs, you are sitting on top of ill-gotten gains that you should perhaps give away for the cause of free culture. You should publicize and open source any proprietary data you own.
If, on the other hand, you jealously guard your model and simply wish to remove restrictions on how you train, then you’re not against IP, you’re just a garden-variety rent-seeker, and no one should fall into your trap by pretending that this is an ideology or policy debate. There are consistent arguments against IP (see https://c4ss.org/content/14857 for an example). That is not what motivates people like Jack Dorsey. They wouldn’t be where they are without IP, and they’d lose it were the impossible to happen and our country to abolish one of the cornerstones of modern capitalism. They can safely call for the end of IP because they know it’s not going to happen.
I am a very versatile developer (and I do other things than development!), but some of my more niche skills are more in demand in Europe than in the US. I’m not even entirely sure why that is; it just happens to be that way. Moving to another country is difficult even for the most privileged of people, and there are always visas and such to worry about. We only have to get on a plane to visit one side of our family; moving to the EU would make that the case for both sides. So I’m speaking more abstractly here and have a strong preference for remaining where I am (I’m also strictly remote-only).
In particular, the UK and Germany seem to be hotbeds of innovation in the audio space. I would like to see more audio work happening all around the world, not concentrated in a few countries, because any country could develop issues. The UK and US have both decided to close themselves off economically from the rest of the world, with Brexit and the current ramping-up of trade wars, respectively. Any heuristic the business community was holding onto that conservative parties are pro-business no longer holds, and they need to adjust accordingly.
The legal system in the UK has also embraced a niche, elite cult that obsessively hates transgender people. To be clear, the general population of the UK doesn’t hold these views. They tend to be, like Americans, either pro-equal rights or opposed on fairly ordinary grounds, usually religious or traditionalist. And speaking only for myself, the dishonesty, sanctimony, and verbose word-salad pseudoscience that go with this ideology are a lot more repugnant than simply hearing a slur from an ignorant person. So anyway, there’s no way I could ever relocate to the UK. I’d rather deal with our brand of authoritarianism, if only because it’s the devil I know.
So that would just leave Germany. I like the German language, but my wife might not like having to learn a language with even more complicated grammar than English. Now, to be clear, I by no means have to work in the audio space. Indeed, I like doing a variety of different things. This is all a long-winded way of saying there’s a bitter irony in a country, the UK, being a hotbed of innovation in audio development while also having a legal system designed to alienate a population stereotypically prone to being obsessed with synthesizers. And the stereotype applies in my case.
One of the biggest things that can be an annoying blocker in development is not writing code - that’s often the easy part - but getting a setup working on your machine. And this is precisely the sort of thing you do not want AI doing. Imagine giving someone you don’t know who’s tripping balls your root password! So while LLMs might clear some blockers where Reddit or StackOverflow fails, they don’t clear one of the more annoying ones. Of course, it’s always important imo to make projects as easy to get hacking on as possible, with as little setup as possible, or the sysadmin problems rear their ugly head.
Just sent an e-mail to the organizers of ADC about this, as every ADC or ADCx has been located in one of these three countries. There’s no refuge from fascism in this world, but damn, places need to be less oblivious and at least rotate the locations. Maybe one year you inadvertently end up somewhere that trans people are afraid of visiting; no biggie if next year you’re at a different place.
@grig some overview, because I know the above list is probably confusing:

Initially, the only way to use this DSL will be to create a class that implements grig.audio.Processor. Initially, there won’t even be a way to share code between modules; I want to pick the brain of the Reflaxe author about the best approaches to that. You declare parameters with @:parameter next to a member var, which currently must be Atomic<Float>. Stuff like frequency, pulse width, etc. Like the automation parameters in VST SDKs (which I have every intention of supporting). You’ll be able to allocate memory in normal functions or the constructor, but not in a mandatory process method that is called by the audio callback.

There will also be a grig.audio.Node interface that works similarly but operates per-frame. I say frame instead of sample because it could be multiple channels, so, e.g., if you’re doing stereo, you process a fixed array of two Floats. You’ll be able to make plugins, standalone synths, etc. from either of these interfaces, but the goal is also to be able to write sharable code in generic classes that can then be used by Processors and Nodes, or transpiled and used from the target language.
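To make that concrete, here’s a rough sketch of what a minimal oscillator might look like. Nothing here is final; grig.audio.Processor, @:parameter, Atomic<Float>, and the mandatory process method are the real concepts, but the exact signatures (the buffer type, the load() call) are only my current guesses:

```haxe
// Rough sketch only: signatures and the Atomic API are not settled.
class SineOsc implements grig.audio.Processor
{
    // Controllable from outside the hxal code, so it must be lock-free atomic
    @:parameter var frequency:Atomic<Float>;

    var phase:Float = 0.0;
    var sampleRate:Float;

    // Allocating here (or in any normal function) is fine...
    public function new(sampleRate:Float)
    {
        this.sampleRate = sampleRate;
    }

    // ...but process is called from the audio callback, so no allocations,
    // no file access, nothing indeterminate
    public function process(buffer:Array<Float>):Void
    {
        var freq = frequency.load(); // hypothetical lock-free read
        for (i in 0...buffer.length) {
            buffer[i] = Math.sin(2.0 * Math.PI * phase);
            phase += freq / sampleRate;
            if (phase >= 1.0) phase -= 1.0;
        }
    }
}
```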
My goals with hxal (which may get renamed, since there are Haxe externs for OpenAL that are also referred to as hxal), the audio DSL I’m working on, based on Haxe, to facilitate writing high-performance real-time audio code in Haxe:
To have a simple way of writing non-GC’d Haxe code that also enforces best practices for hard real-time code, so no indeterminate operations like allocations or file access in the audio callbacks. Any parameter that is meant to be controllable from outside the hxal code (i.e., from normal Haxe code) must be lock-free atomic (see the first sketch after this list).
To work seamlessly in larger applications that compile through the normal Haxe pipeline and its targets. Initially, this will probably just be the hxcpp target, but I want to start working on HashLink as well, and also make bindings for other languages that can be targeted (so, e.g., you can target the web with wasm, or Node.js with JS bindings to C). Basically, the part written in hxal will use its own code generation and provide C bindings or a C++ class wrapper as appropriate. Initially, this will consist of getters/setters for atomic Float parameters.
To work well with Reflaxe, so that the entire application can be built using it. Indeed, I want to make all of @grig, or as much as possible, compatible with Reflaxe, and I think that’s quite doable. I will also increasingly use Reflaxe’s internals to avoid duplicating effort with hxal’s own targets. To this end, I’ll start making backends for Reflaxe (at least Rust and Swift, which I want hxal to be able to target).
To create code, in its native targets or when used via Reflaxe, that is clean and usable from the language in question, so you can decide to stop using hxal and revert to coding your audio in C++ or Rust if you want. Ideally, algorithms can work over generic types, or at least specify int or float via templates, to maximize usability.
This is a dream/stretch goal, but: to provide a way to create effects and instruments in a visual programming environment, similar to Pure Data. This could become a powerful no-code environment, especially if we integrate tools that can study raw audio input and try to evolve different architectures into the same sound, so that it can be parameterized and, like hxal code, generate clean target code.
To prefer letting the C++ or Rust compiler reject code, hopefully with clear error messages, over engaging in the daunting task of building all the assurances into hxal itself. This isn’t idiomatically haxian, I admit.
To do high-level optimizations in ways that enable compilers to do low-level optimizations. When nodes are connected together, hxal will effectively inline them into a single function, with each node’s operations done in sequence, which compilers can then find ways to optimize (see the second sketch after this list).
To find ways to use SIMD optimizations for the Node (per-frame/sample) interface after summing, but at least use them for Processor (block-at-a-time) code.
To err on the side of less, but reliable, functionality before adding things. It’s easier to maintain something that does less.
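To make the parameter story concrete, here’s roughly how I imagine normal Haxe code driving an hxal processor. SineOsc is the sketch from the overview post, and setFrequency is a stand-in for whatever generated getter/setter the codegen ends up producing; none of this is settled API:

```haxe
// Illustrative only: setFrequency stands in for a generated setter that
// does a lock-free atomic store into the @:parameter field.
class Main
{
    static function main()
    {
        var osc = new SineOsc(48000.0);

        // From a UI or MIDI thread: an atomic store, safe to do even
        // while the audio thread is inside process
        osc.setFrequency(440.0);

        // The audio callback just reads whichever value was stored last;
        // no locks, no blocking, no priority inversion.
    }
}
```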
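And for the optimization goal: given a graph like oscillator -> gain, the idea is that the generated code boils down to one flat loop rather than per-frame calls through interfaces. Hand-written, the shape I’m aiming for looks something like this (illustrative, not actual hxal output):

```haxe
// Hand-written illustration of fused codegen for an osc -> gain chain:
// one function, each node's work done in sequence, which gives the
// C++ or Rust compiler something easy to vectorize.
class FusedGraph
{
    var phase:Float = 0.0;
    var freq:Float = 440.0;
    var gain:Float = 0.5;
    var sampleRate:Float = 48000.0;

    public function new() {}

    public function processBlock(buffer:Array<Float>):Void
    {
        for (i in 0...buffer.length) {
            // oscillator node, inlined
            var s = Math.sin(2.0 * Math.PI * phase);
            phase += freq / sampleRate;
            if (phase >= 1.0) phase -= 1.0;
            // gain node, inlined immediately after
            buffer[i] = s * gain;
        }
    }
}
```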