biased opinions on subjects that scarcely matter

2025-05-28

@adgaps this is nice, font looks good! just wanted to let you know, you have a typo there in the second bullet: msot common type of church 😌

boosted:
2025-05-26

"Emacs is a Gnostic cult. And you know what? That’s fine. In fact, it’s great. It makes you happy, what else is needed? You are allowed to use weird, obscure, inconvenient, obsolescent, undead things if it makes you happy." -- You can choose tools that make you happy

borretti.me/article/you-can-ch

#emacs

2025-05-25

@tommorris maybe instead abolish copyright altogether? It largely benefits big corpos, not authors/artists

2025-05-24

This is cool, I didn't know about FSRS. Maybe that's why Anki kind of never grew on me 🤔 going to try that out

domenic.me/fsrs/

boosted:
2025-05-21

I got access to Gemini Diffusion, Google's first diffusion LLM, and the thing is absurdly fast - it ran at 857 tokens/second and built me a prototype chat interface in just a couple of seconds, video here: simonwillison.net/2025/May/21/

boosted:
mobidic
2025-05-21

@tealeg Same source - another view: technologyreview.com/2025/05/2

By the way - you can read the ALT-Text for more ( ;

I use a local LLM via the Msty.app, which runs entirely offline—without relying on any cloud infrastructure. For my purposes—developing native, low-energy, local JavaScript applications—this setup is sufficient.

My laptop, powered only by a quad-core CPU (no GPU), consumes roughly 300–900 joules per minute under load. I no longer need a search engine to debug code. I understand the output and don’t rely on pre-built libraries.
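For context, that figure works out to a modest average power draw; a quick sketch of the unit conversion (1 W = 1 J/s, so divide joules per minute by 60):

```python
# Convert the quoted 300-900 J/min figure to average watts.
# 1 watt = 1 joule per second, and there are 60 seconds in a minute.
for joules_per_minute in (300, 900):
    watts = joules_per_minute / 60
    print(f"{joules_per_minute} J/min = {watts:.0f} W")
# prints:
# 300 J/min = 5 W
# 900 J/min = 15 W
```

So "under load" here means roughly 5-15 W on average, in the range of a bright LED bulb.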

My code uses only native browser functions because I believe the client—not server farms—should do the computing. These farms are built near cities and drain precious water for rack cooling. Decentralization is one of my goals, and local LLMs are a modern way to access knowledge.

Had I used hundreds of web searches daily to reach the same knowledge via trial and error and docs, I would have consumed more infrastructure energy, even when training costs are included.
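A back-of-the-envelope version of that comparison, using purely illustrative numbers (the per-search and laptop-power figures below are my own assumptions, not measurements from the post):

```python
# Rough, assumption-laden comparison of two daily energy budgets.
# Assumed: ~0.3 Wh per remote web search (a commonly cited ballpark),
# 200 searches per day replaced, and a laptop drawing ~15 W
# for 2 hours of local LLM-assisted work.
searches_wh = 0.3 * 200   # remote infrastructure energy, in watt-hours
local_wh = 15 * 2         # energy drawn locally, in watt-hours
print(searches_wh, local_wh)  # prints: 60.0 30
```

Under these (debatable) assumptions, the local setup comes out ahead; the real numbers depend heavily on the search and inference workloads involved.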

This debate isn't about energy. It’s about how OpenAI, once nonprofit, scraped open web knowledge and now profits via Microsoft’s enterprise tools. Developers whose code filled GitHub get nothing. That’s unfair. That’s fact.

Fact: LLMs are now free and available, e.g., on huggingface.com. We still have a choice: use LLMs locally to build meaningful tools—or ignore a technology that’s already being weaponized. I choose to use it—locally, purposefully, on my terms.

Let’s share knowledge with good intent—for people, animals, and the climate—so we can live well together on this planet.
2025-05-21

@tealeg there are alternative, less bleak estimates. This also doesn't take into account that AI gets cheaper. You can literally run a small model that takes less memory than your browser, which was not possible a year ago. Besides, how much energy is saved by AI doing a task in minutes that previously required hours to complete?

boosted:
2025-05-21

DeGoogling is possible, and it doesn't need to be difficult. 👏

Take a look at our in-depth guide of Google alternatives to learn how you can take back your privacy in 2025. ❤️🔒

👉 tuta.com/blog/how-to-leave-goo

Have you already DeGoogled? If so, let us know your favorite Google-free apps.

Image titled: DeGoogle Your Life. Check out the best alternative recommendations, with lists of alternative apps and services to replace Google-owned products.
boosted:
2025-05-20

I was planning to write a short text here about how, when considering whether to use AI for a task, one should take into account not only the difficulty/complexity of the task, but also the acceptable failure rate. For instance, using an AI to suggest a recipe for dinner has an acceptable failure rate when just cooking for oneself, but would be inadvisable for a head chef preparing a state banquet for a high-profile diplomatic function, even if the two tasks are essentially of comparable difficulty (per person served, at least). But I realized that since this writing task was itself simple and had a high acceptable failure rate, it made sense to just let an AI summarize the point directly in table form, as enclosed below; it contains minor imperfections, but certainly suffices for the task at hand. [My prompt for this can be found at chatgpt.com/share/67e813bc-590 ]

An AI-generated matrix to summarize when AI tooling is recommended for use.
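The failure-rate heuristic in the toot above can be sketched as a tiny function; the toy model and thresholds here are my own illustrative assumptions, not anything from the post or its AI-generated table:

```python
def ai_recommended(task_difficulty: float, acceptable_failure_rate: float) -> bool:
    """Illustrative heuristic: AI use is reasonable when the stakes are
    low enough that its expected error rate is tolerable.
    Toy assumption: expected AI failure rate grows with task difficulty."""
    expected_failure_rate = min(1.0, 0.1 + 0.5 * task_difficulty)
    return expected_failure_rate <= acceptable_failure_rate

# Dinner for oneself: moderate difficulty, failures are cheap.
print(ai_recommended(0.5, 0.5))   # prints: True  (0.35 <= 0.5)
# State banquet: same per-person difficulty, but failure is unacceptable.
print(ai_recommended(0.5, 0.01))  # prints: False (0.35 > 0.01)
```

The point is that the two arguments vary independently: the same task flips from "fine for AI" to "don't" purely on how tolerable a mistake is.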
2025-05-18

Hmm, just realized I don't see toots from some of the people I follow in my feed. They are on other instances. I thought my instance was supposed to pull their toots for me after I follow them? Is this not how it works? If I open their profile, I do see the toots they made after I subscribed to them.

2025-04-15

> register on twitter
> follow some science people which are still predominantly there
> you have reached the limit of that action for today, add your phone #
> delete twitter

2024-06-16

My points:
- cost of running LLMs has been and will continue to plunge. We've not yet reached the limit there
- not everyone needs an LLM or "AI" or "chatbots"
- the use-cases boil down to: dealing with mundane, routine, center-of-the-distribution texts. It's your job then to go out of the center into the fringes

(2/2)

2024-06-16

It's interesting to see that my mastodon bubble takes a view of LLMs and AI radically opposite to my own, namely: "all AI is over-hyped bullshit, a waste of resources with no actual use-cases, and the bubble will soon burst".

(1/2)

2024-06-01

Can someone recommend blogs / feeds / whatever that are chaotic, broadly themed, and eschew the keywords "rationalist", "bayesian", "effective altruism" and the like? I know this is very unspecific, but there's no way I could make it more precise than: think anti-Scott Alexander.

2024-05-18

@galdor@emacs.ch @hajovonta Yes! These are the exact words told to newcomers to the land: if they want total control over the software, Emacs is the way to go. Once you already know how to operate it, it becomes evident how to utilize the power it gives you. But before that, all you see is a rather unfriendly window and a lengthy tutorial, and you ask yourself: is it worth it?

By the way, I *did* set up Gnus to read my feed after all, but I'm not sure if I want to continue dabbling in it

2024-05-18

Trying out Gnus is a humbling experience that also provides a perspective on why people might not want to deal with Emacs, preferring alternative editors: it is not immediately obvious that overcoming a steep learning curve would bring benefits compared to an easier solution (like using a different news client). I just want to read my RSS feed, presented in a concise, elegant fashion; I don't want to battle with a UI that might've made sense back in the modem era

boosted:
2023-09-26

Join us tonight at #Emacs London meetup (Sept 26th).

Register to attend at meetup.com/london-emacs-hackin

Add your topics to github.com/london-emacs-hackin or ping me and I can add.

Help get the word out and boost 🙏

boosted:

LinkedIn job search hit absolute rock bottom. If you search for "Elixir" jobs in NL, it returns you 686 results. Guess how many of them even have the word "Elixir" in them. Two!

LinkedIn literally ignores what you ask for and instead returns 17 pages of "promoted" irrelevant ads.

For Python, the ratio is about 34%, which is better, but I think that's more of a coincidence; lots of positions today at least mention Python.

#python #elixir #linkedin #jobsearch

$ cat elixir.jsonl | wc -l                       
686

$ cat elixir.jsonl | grep -E '(e|E)lixir' | wc -l
2

$ cat python.jsonl | wc -l                       
1000

$ cat python.jsonl | grep -E '(p|P)ython' | wc -l                                       
340
2023-09-22

@qed@emacs.ch Thanks! I actually learned this today when I was reading on the universal-argument
