#multithreading

Florian Engelhardt (flowcontrol@phpc.social)
2026-01-20

I’m heading to @ConFooCa 2026 in Montréal 🇨🇦 in a few weeks, where I’ll be speaking about two things I care a lot about in the PHP world: observability and multithreading.

If you’re going too, let’s connect! Always happy to chat observability, profilers, and real-world performance wins in PHP 🎉

#ConFoo #PHP #Observability #Performance #Multithreading

Conference slide with a blue background and a photo of Florian Engelhardt on the right. Large white headline reads ‘Observing PHP for Fun and Profit.’ Bottom-left shows the ConFoo.ca Developer Conference logo and the text ‘Montreal / Feb 25–27, 2026.’

Conference slide with a blue background and a photo of Florian Engelhardt on the right. Large white headline reads ‘Parallel Futures: Unlocking Multithreading in PHP.’ Bottom-left shows the ConFoo.ca Developer Conference logo and the text ‘Montreal / Feb 25–27, 2026.’
Jan :rust: :ferris: (janriemer@floss.social)
2026-01-11
2026-01-03

Under the hood of multithreaded synchronization in Java: how threads negotiate via the Mark Word

When you write synchronized(obj), a whole chain of events happens under the hood, and it can be traced down to the Mark Word, an eight-byte header field in every Java object. Modern JVM implementations (such as HotSpot, OpenJ9, and GraalVM) use a dynamic, adaptive system that picks the most efficient locking strategy based on how the threads actually behave.

habr.com/ru/articles/982600/

#java #multithreading #monitor #mutex

2011-02-12

Ruby-Elf and collision detection improvements

While the main use of Ruby-Elf for me lately has been quite different – for instance with the advent of elfgrep or helping verify LFS support – the original reason that brought me to write that parser was finding symbol collisions (that’s almost four years ago… wow!).

And symbol collisions are indeed still a problem, and as I wrote recently they aren’t easy on upstream developers’ eyes, as they are mostly an indication of possible aleatory problems in the future.

At any rate, the original script ran overnight, generated a huge database, and then required more time to produce a readable output, all of which happened using an unbearable amount of RAM. Between the ability to run it on a much more powerful box and the work done to refine it, it can currently scan Yamato’s host system in … 12 minutes.

The latest set of changes, which replaced the “one or two hours” execution time with the current “about ten minutes” (for the harvesting part; the analysis takes two more minutes), was part of my big rewrite of the script so that it uses the same common class interfaces as the commands that are installed to be used with the gem as well. With this setup, albeit still single-threaded (more on that in a moment), each file analysed results in three calls to the PostgreSQL backend, rather than something in the ballpark of five plus one per symbol, and that makes it quite a bit faster.

To achieve this I first of all limited the round-trips between Ruby and PostgreSQL when deciding whether a file (or a symbol) has already been added or not. In the previous iteration I was already optimising this a bit by using prepared statements (which seemed slightly faster than direct queries), but they didn’t allow me to embed the logic into them, so I had a number of select and insert statements depending on the results of those. That was bad not only because each selection required converting data types twice (from the PostgreSQL representation to C, then from that to Ruby), but also because it required calling into the database each time.

So I decided to bite the bullet and, even though I know it makes for a bunch of spaghetti code, I’ve moved part of the logic into PostgreSQL through stored procedures. Long live PL/pgSQL.

Also, to make it more robust with respect to parsing errors on single object files, rather than queuing all the queries and then committing them in one big transaction, I create a separate transaction to commit all the symbols of each object, as well as when creating the indexes. This allows me to skip over broken objects altogether, without stopping the whole harvesting process.

Even after introducing the per-object transaction on symbol harvesting, I found it much faster to run a single statement through PostgreSQL in a transaction, with all the symbols; since I cannot simply run a single INSERT INTO with multiple values (because I might hit a unique constraint when the symbols are part of a “multiple implementations” object), I at least call the same stored procedure multiple times within the same statement. This had a tremendous effect, even though the database is accessed through Unix sockets!

Since the harvest process now takes so little time compared to before, I also dropped the split between harvest and analysis: analyse.rb is gone, merged into the harvest.rb script, for which I have to write a man page sooner or later and get it installed properly as an available tool rather than an external one.

Now, as I said before, this script is still single-threaded; on the other hand, all the other tools are “properly multithreaded”, in the sense that their code fires up a new Ruby thread for each file to analyse, and the results are synchronised so they don’t step on each other’s feet. You might know already that, at least as far as Ruby 1.8 is concerned, threading is not really implemented and green threads are used instead, which means there is no real advantage in using them; that’s definitely true. On the other hand, on Ruby 1.9, even though the pure-Ruby nature of Ruby-Elf makes the GIL a major obstacle, threading would still improve the situation by allowing threads to analyse more files while the pg backend gem sends the data over to PostgreSQL (which is probably also helped by the “big” transactions sent right now). But what about the other tools that don’t use external extensions at all?

Well, threading elfgrep or cowstats doesn’t really provide any advantage on the “usual” Ruby versions (MRI 1.8 and 1.9), but it provides a huge advantage when running them with JRuby: since that implementation has real threads, it can scan multiple files at once (both when listing input files asynchronously through the standard input stream and when providing all of them in one single sweep), and then only synchronise to output the results. This of course makes it a bit trickier to be sure that everything is being executed properly, but in general it makes the tools all the sweeter. Too bad that I can’t use JRuby right now for harvest.rb, as the pg gem I’m using is not available for JRuby; I’d have to rewrite the code to use JDBC instead.
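To make the pattern concrete, here is a language-neutral sketch in C with POSIX threads of “one worker per input file, synchronise only on output”; the tools themselves are Ruby, and analyse_file below is just an invented stand-in for the per-file work.

#include <pthread.h>
#include <stdio.h>

/* Stand-in for the per-file analysis; in the real tools this work is done
   by Ruby code. */
static void analyse_file(const char *path, char *report, size_t len) {
  snprintf(report, len, "analysed %s", path);
}

/* The only shared resource is the output stream, so that is the only place
   where the workers synchronise. */
static pthread_mutex_t output_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
  const char *path = arg;
  char report[1024];

  analyse_file(path, report, sizeof report);  /* runs in parallel */

  pthread_mutex_lock(&output_lock);
  printf("%s: %s\n", path, report);
  pthread_mutex_unlock(&output_lock);
  return NULL;
}

int main(int argc, char **argv) {
  pthread_t threads[argc];  /* one thread per file given on the command line */

  for (int i = 1; i < argc; i++)
    pthread_create(&threads[i], NULL, worker, argv[i]);
  for (int i = 1; i < argc; i++)
    pthread_join(threads[i], NULL);
  return 0;
}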

Speaking of argument passing, I’ve been removing some features I originally implemented. In the original implementation, argument parsing was asynchronous and incremental, with no limit on recursion; this meant that you could provide a list of files preceded by the at-symbol as the standard input of the process, and each of those would be scanned for… the same content. That could already have been bad because of possible loops, but it had a few more problems, among which was the lack of a way to add a predefined list of targets if none was passed (which I needed for harvest.rb to behave more or less like before). I’ve since rewritten the targets’ parsing code to only work with a single-depth search, and to rely on asynchronous argument passing only through the standard input, which is only used when no arguments are given, either on the command line or by the script’s defaults. It’s also much faster this way.

For today I guess these notes about Ruby-Elf are enough; in the next few days I hope to provide some more details about the information the script is giving me… they aren’t exactly fun, and they aren’t exactly the kind of thing you wanted to know about your system. But I guess that’s a story for another day.

#Collisions #JRuby #Multithreading #PostgreSQL #Ruby #RubyELF
2008-08-03

Ruby-elf and documentation

After my checklist post I got asked for some documentation about ruby-elf tools like cowstats and missingstatic.

As it turns out I wrote little to no documentation at all, and I relied exclusively on the scripts being self-documenting, for the most part. Probably not a good idea if I want to have a broader audience.

For this reason, I think I’ll start by writing some man pages for the tools, hopefully today or tomorrow, before I get to the hospital again. I’ll also see to actually releasing a version of this so I can add it to Portage too, so that it’s actually available for developers who are interested (for now you can get it from my overlay as dev-ruby/ruby-elf-9999).

I also started working on improving the way cowstats decides whether a symbol is in a copy-on-write section or not. Before, I only used the name of the section and, as it turns out, I used to ignore the TLS sections (no, not the SSL successor, but thread-local storage).

The TLS problem is solved now, but I decided that using the name of the section to decide whether it’s CoW or not is not very reliable. I added code that checks the type and the flags of the sections, to an extent, so that it automatically ignores all the sections containing executable code, and all the read-only sections. It also considers .bss and equivalent sections just by type rather than by name (had I done this in the first place, .tbss would have been supported from the start too).
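As a rough illustration of that kind of type-and-flags check, here is a small C predicate written against the constants from <elf.h>; it is only a sketch of the approach, not the actual logic cowstats uses.

#include <elf.h>
#include <stdbool.h>

/* A section is a copy-on-write candidate if it is mapped at runtime,
   writable, and not executable code; .bss-like sections are recognised by
   their type (SHT_NOBITS) rather than by their name, which covers .tbss
   automatically as well. */
static bool section_is_cow(const Elf64_Shdr *shdr) {
  if (!(shdr->sh_flags & SHF_ALLOC))  /* not loaded at runtime */
    return false;
  if (shdr->sh_flags & SHF_EXECINSTR)  /* executable code */
    return false;
  if (!(shdr->sh_flags & SHF_WRITE))  /* read-only data */
    return false;

  return shdr->sh_type == SHT_PROGBITS || shdr->sh_type == SHT_NOBITS;
}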

On a different note, I forgot to write that while I was hospitalised, my Nokia decided to go crazy and corrupted the fring app I was using to chat from the E61 itself. I think (and in a way hope) that the MiniSD I was using was broken, because then the rest of the phone would be fine. The problem is that the internal memory is very tiny and the MiniSD that Nokia gave me with the phone, which I just put back in it, is half full of Nokia’s own software, like the MailForExchange launcher (which I don’t care about, or TravelMate). I think I’ll have to pick up a new MiniSD and hope that one works. Last time I bought a Corsair 1GB; this time I think I’ll go with a Transcend one, as they have never failed me up to now. Interestingly enough, at my supplier the MiniSD card would be pretty cheap (€5) while the shipping costs would be over that price. I should check if they have cheap SD cards too; in the stores around here they are still tremendously expensive (€10 for a 2GB card!).

#Documentation #E61 #ELF #Multithreading #Nokia #Phones #RubyELF
2008-01-26

Some notes about multi-threading

Ah, multithreading, such a wonderful concept, as it allows you to create programs in such a way that you don’t have to implement an event loop (for the most part). Anybody who ever programmed in the days of DOS has at least once implemented an event loop to write an interactive program. During my high school days I wrote a quite complex one for a game I designed for a class. Too bad I lost it 🙁

Multithreading is also an interesting concept now that all the major CPUs are multi-core; server CPUs have been for some time already, but we know that mainstream hardware is always behind on these things, right?

So, now that multithreading is even more interesting, it’s important to design programs, and even more importantly libraries, to be properly multithreaded.

I admit I’m not a big expert on multithreading problems, but I do know a few things that come in handy when developing. One of these is that static variables are evil.

The reason static variables are evil is that they are implicitly shared between different threads. For constants this is good, as they use less memory, but for variables this is bad, because you might overwrite the data another thread is using.

For this reason, one of the easiest things to check in a library to tell whether it’s multithread-safe or not is whether it relies on static variables. If it does, it’s likely not thread-safe, and almost surely not thread-optimised.

You could actually be quite thread-safe even when using static variables; the easy way to do that is to have a mutex protecting any and all accesses to the variable, so that only one thread at a time can access it, and no one can overwrite someone else’s data.

That causes a problem though, as it serialises the execution of a given function. Take for instance a function that requests data over the network with an arbitrary protocol (we don’t care which protocol it is), saves it to a static buffer, and then parses it, filling a structure with the received data. If such a function is used in a multithreaded program, it has to be protected by a mutex, as it uses a static buffer. If four threads need that function almost simultaneously (and that might happen, especially on multi-core systems!), the first one to arrive will get the mutex, and the other three will wait until the first one has completed. Although on a multicore system other processes would get scheduled at that point, you’re still wasting time waiting for one thread to complete its operation before the next one can resume.

This is extremely annoying, especially now that the future of CPUs seems to be an increase in the number of cores rather than in their speed (as we seem to be bumping against a physical limit around 3GHz, as far as I can see). The correct way to handle such a situation is not to use a static buffer, but rather a heap-allocated buffer, even if that is slightly slower for a single thread (as you have to allocate the memory and free it afterwards); this way the four threads are independent and can execute simultaneously. For this reason, libraries should try to never use static buffers, as they can’t know whether the software using them is multi-threaded or not.
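A minimal sketch of the two variants in C with POSIX threads; fetch_data, parse_data and struct result are invented placeholders for the “fetch over the network, then parse into a structure” function described above.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE 4096

struct result { size_t length; };

/* Stand-ins for "request data over the network" and "parse it into a
   structure"; both are invented for this illustration. */
static size_t fetch_data(char *buf, size_t len) {
  memset(buf, 'x', len);  /* pretend we received len bytes */
  return len;
}

static void parse_data(const char *buf, size_t len, struct result *out) {
  (void)buf;
  out->length = len;
}

/* Thread-safe but serialised: every caller queues up on one mutex, because
   they all share the same static buffer. */
static char shared_buf[BUF_SIZE];
static pthread_mutex_t shared_buf_lock = PTHREAD_MUTEX_INITIALIZER;

void fetch_and_parse_serialised(struct result *out) {
  pthread_mutex_lock(&shared_buf_lock);
  size_t len = fetch_data(shared_buf, sizeof shared_buf);
  parse_data(shared_buf, len, out);
  pthread_mutex_unlock(&shared_buf_lock);
}

/* Thread-safe and parallel: each call pays for a heap allocation, but four
   threads can run this at the same time on four cores. */
void fetch_and_parse_parallel(struct result *out) {
  char *buf = malloc(BUF_SIZE);
  if (buf == NULL)
    return;
  size_t len = fetch_data(buf, BUF_SIZE);
  parse_data(buf, len, out);
  free(buf);
}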

When a library is blatantly not thread-safe, there is an even bigger problem, which can be solved in two ways. The first is to limit access to that library to a single thread. This way there are no problems with threading, but then all the requests that need to be sent to that library have to be passed to that thread, and the thread has to answer them; while cheaper than IPC, inter-thread communication is still more expensive than using a properly thread-safe library.

The other option is to protect every use of the library with a mutex. This makes a library thread-safe if it’s at least re-entrant (that is, no function depends on the state of global variables set by other functions), but it acts in the same way as the “big kernel lock” does: it does not allow you to run the same function from two threads at once, or even any function of that library from two threads at once – if the functions use shared global variables.
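A sketch of that second option, with an invented stand-in for the non-thread-safe library; every wrapper takes the same mutex, so, much like the big kernel lock, only one thread is ever inside the library at a time.

#include <pthread.h>
#include <stdio.h>

/* Invented stand-in for the non-thread-safe library being wrapped: it keeps
   shared global state, so it must never run from two threads at once. */
static int unsafe_lib_counter;

static int unsafe_lib_query(const char *request, char *reply, size_t len) {
  return snprintf(reply, len, "reply %d to %s", ++unsafe_lib_counter, request);
}

/* One "big lock" for the whole library: every wrapper takes the same mutex,
   so no two threads are ever inside the library at the same time. */
static pthread_mutex_t unsafe_lib_lock = PTHREAD_MUTEX_INITIALIZER;

int safe_query(const char *request, char *reply, size_t len) {
  pthread_mutex_lock(&unsafe_lib_lock);
  int rc = unsafe_lib_query(request, reply, len);
  pthread_mutex_unlock(&unsafe_lib_lock);
  return rc;
}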

How should libraries behave, then, when they need to keep track of state? Well, the easiest way is obviously to have a “context” parameter: a pointer to a structure that keeps all the needed state data, allowing two threads to use different contexts and call into the library simultaneously.
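A sketch of what such a context-based API could look like; mylib_ctx_t and the function names are invented for this example rather than taken from any real library.

#include <stdlib.h>
#include <string.h>

/* All state lives in the context, so two threads holding two different
   contexts can call into the library at the same time. */
typedef struct {
  char host[256];
  int fd;
  int last_error;
} mylib_ctx_t;

mylib_ctx_t *mylib_new(const char *host) {
  mylib_ctx_t *ctx = calloc(1, sizeof(*ctx));
  if (ctx == NULL)
    return NULL;
  strncpy(ctx->host, host, sizeof(ctx->host) - 1);
  ctx->fd = -1;
  return ctx;
}

int mylib_last_error(const mylib_ctx_t *ctx) {
  return ctx->last_error;
}

void mylib_free(mylib_ctx_t *ctx) {
  free(ctx);
}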

Sometimes, though, you just need to keep something similar to an errno variable, one that is global and set by all your functions. There’s no way to handle that case gracefully through mutexes, but there’s an easy way to do it through thread-local storage. If you mark the variable as thread-local, then every thread will see its own copy of that variable, and no explicit mutex is needed to handle it (the implementation might use a mutex, I don’t really know the underlying details).
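For the errno-like case, a minimal sketch assuming a GCC-style toolchain that supports the __thread storage class (C11 offers _Thread_local for the same purpose):

/* One copy of this variable exists per thread, so no mutex is needed: each
   thread only ever sees the error code set by its own calls. */
static __thread int lib_errno;

static void lib_set_error(int err) {
  lib_errno = err;
}

int lib_last_error(void) {
  return lib_errno;
}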

This is also quite useful for multi-threaded programs that would like to use global variables rather than having to pass a thread structure to all the functions. Take this code for instance:

/* Instantiated a few times simultaneously */
void *mythread(void *address) {
  mythread_state_t *state = malloc(sizeof(mythread_state_t));

  set_address(state, address);
  do_connection(state);
  check_data(state);

  do_more(state);

  free(state);
  return NULL;
}

While having a context parameter is an absolutely good idea for library API calls, if the code has no reason to be re-entrant, passing it as a parameter might be a performance hit. At the same time, while using global variables in libraries is a very bad idea, for programs it’s not always that bad, and it can actually be useful to avoid passing parameters around or using up more memory. You could then have the same code written this way:

__thread mythread_state_t thread_state;

/* Instantiated a few times simultaneously */
void *mythread(void *address) {
  set_address(address);
  do_connection();
  check_data();

  do_more();

  return NULL;
}

The thread_state variable would exist once per thread, needing neither a mutex to protect it nor to be passed to every function.

There are a few notes about libraries and thread safety which I’d like to discuss, but I’ll leave those for another time. Two tech posts a day is quite a lot already, and I need to resume my paid job now.

#C #Multithreading #Programming
2025-12-19

I recently read an article about questions asked in job interviews for Java developers. It presented a few questions and put forward the claim that most applicants cannot answer them. For that reason, in this series I would like to go into the

magicmarcy.de/java-interview-f

#Java-Interview #Multithreading #Prozess #Thread #Kontextwechsel #Synchronisation #deadlock

🅴🆁🆄🅰 🇷🇺 (erua@hub.hubzilla.de)
2025-12-19
For application-level tasks you should use channels, within the CSP approach to multithreading (Communicating Sequential Processes), because this approach (concept) is easier to grasp and to keep under control. It is easily modelled and verified with things like TLA+ and its analogues.

Is there unusual Go code? Yes, in the cases where someone is writing an extension of the standard library in terms of containers. After all, those very channels contain mutexes under the hood. But such things are written extremely rarely; special library facilities of that kind are created by one person per million ordinary users of the language.
Intellect-wise, you would have to be extremely "alternatively gifted" to put your own task on a par with something like that.

Only very rarely does someone write special general-purpose containers and library components providing primitives and entities akin to channels or mailboxes. So a good rule of thumb applies: as soon as a need for any mutex arises, there is a mistake in the design. At first it shows up as a departure from CSP in favour of the Shared Memory approach, becoming explicit technical debt, and later it makes the code base impossible to maintain or extend. All you can do with such code is treat it as disposable: throw it away and forget it.

The thinking trap here is that disposable code isn't particularly scary for people used to sticking to a proper microservice architecture, where a "microservice" is considered to be exactly that: something that is easier to rewrite from scratch each time than to modify and debug the modifications :)
It is extremely hard to explain to such people why a code base should be free of technical debt and that kind of flaw at all, on top of investing a lot of time and effort into creating and maintaining a truly microservice architecture. You come across this in in-house development at companies living off online services (Avito, Yandex, Ozon, VK, and so on).

How do you queue someone up for dismissal? Issue a disciplinary sanction through the standard procedure, by an order within the organisation. If three such official sanctions accumulate within a year, the person can be dismissed "under the article" while fully complying with the Labour Code of the Russian Federation and the nuances of Russian labour law (which are not limited to the Labour Code alone).

#могопоточность #multithreading #CSP #software-development #golang #go #программирование #programming #lang_ru @Russia
Jan :rust: :ferris: (janriemer@floss.social)
2025-12-16

While #Mozilla wants to put even more #AI features into their #browser that nobody wants...

blog.mozilla.org/en/mozilla/le

...#Servo has just implemented parallel #CSS parsing 🚀

Main PR for this change:
github.com/servo/servo/pull/40

A lot of other stuff has happened in the Servo project during November - check it out:
servo.org/blog/2025/12/15/nove

#Contrast #Performance #MultiThreading

ENEP Linuxoid (enep)
2025-11-29

Added pretty output and multithreading

2025-11-28

Multithreading for absolute beginners. Virtual threads. Part 2

Hi everyone! Multithreading in Java is evolving very quickly, yet many people still limit themselves to plain threads and the synchronized keyword. Today I want to talk specifically about virtual threads: how to work with them, why they change the approach to multithreading, and which tasks they solve better than the traditional mechanisms. I will explain things simply and clearly, so the material is useful both to newcomers who are just getting acquainted with virtual threads and to experienced developers who want to understand modern practices and the capabilities of Project Loom.

habr.com/ru/articles/971350/

#java #multithreading #virtual_threads #многопоточность #виртуальные_потоки #обучение_программированию

2025-11-26

The bus deadlock in Oslo has illustrative value

#dev #programming #buses #multithreading

Two pictures of buses. 

The first picture shows a neat line of double-decker buses with the text "multithreading example in documentation".

The second picture shows four long, articulated buses in a round-about each blocking each other from proceeding or exiting from the roundabout. Nicely illustrating the concept of a "dead-lock". Superimposed text reads: "multithreading implementation in my program".
2025-11-24

Java. Multithreading for absolute beginners. Part 1

Hi everyone! Multithreading in Java doesn't stand still, yet many people still only use synchronized and create threads via new Thread(). Starting today I'm launching a series of lessons on modern multithreading: how to structure it correctly, what the advantages of the new approaches are compared to the old ones, and which parts of the classics are still worth using. I'll try to explain things as simply and visually as possible, so the lessons are useful both to interns who are just starting to get into the topic and to experienced developers curious about the modern style of working with threads. Let's go!

habr.com/ru/articles/969820/

#java #multithreading #virtual_threads #concurrency #многопоточность #виртуальные_потоки #обучение #обучение_программированию

Angelo Theodorou :amiga: (encelo@mastodon.gamedev.place)
2025-11-20

After three months of work, my presentation on the nCine multi-threaded job system is finally online.
Concurrency basics, atomics, acquire/release, false sharing, ABA, CPU topologies, ECS experiments, benchmarks & profiling.
encelo.github.io/nCine_JobSyst

#gamedev #cpp #concurrency #multithreading #ECS #nCine #GameEngine #opensource #indiegamedev

2025-11-16

Multithreading without pain: my cheat sheet for Java job interviews

Hi everyone! :) I work as a Senior Java Developer at a bank, and over the past few years I've had to go through more than one interview, hear dozens of tricky questions, and spend a ton of time preparing. And here is what I've realised: multithreading is one of the hardest and best-loved topics in Java interviews, regardless of the candidate's level. So in this article I want to help you prepare confidently for the concurrency section: we'll go over the key terms, see how it works in practice, and I'll share a few tips that really help in interviews. Let's go!

habr.com/ru/articles/966892/

#java #kotlin #multithreading #многопоточность #многопоточное_программирование #собеседование #собеседование_в_it #thread #concurrency #интервью

2025-11-12

"It's not bad old multi-tasking though. I'm not interrupting one action to take another. I'm putting reflection in the background as I always do, putting on another burst of action, & reflecting on that."

#ai #productivity #multitasking #multithreading

tidyfirst.substack.com/p/retur

Hessenhelden (Hexangon)
2025-11-01

@sadmin Good idea! Now I just need to copy something bigger to see anything useful.

Hessenhelden (Hexangon)
2025-11-01

Something that annoys and puzzles me at the same time: my PC runs an AMD Ryzen 9 5900X under Fedora 42. Right now I'm doing a backup to an external hard drive, using MC in the console for it. My PC should basically be bored, but when I do something else in parallel, it feels as if I had a 386 CPU under the hood. There's plenty of RAM. I suspect the multithreading capability isn't exactly stunning.

2025-10-30

I have expertise with multi-threading and concurrency issues in software. Designing for ways to do that stuff right, and identifying when legacy code is doing it wrong, and then fixing it.

I know why folks coined the term "race" to mean what it means, in software context. I get it. And it makes sense. But I've always been uncomfortable with it. Why? Because it makes it *hard* to talk about, safely, in public, when your words might be stumbled across later and taken "out of context" *especially* by a trigger-happy liberal bully, or an ethnic minority person who might be feeling sensitive.

"He is such a racist! Look at his own words!"

So I'm hoping to coin a different term and get folks to use it instead:

*malconcurrency*

Per its roots: *bad* concurrency... get it? Like malware.

#multithreading
#concurrency

2025-10-16

The interpreter pool in Python 3.14. What, what for, and why?

As everyone knows, the GIL (Global Interpreter Lock) prevents multiple CPython threads from executing CPU-bound tasks in parallel. The global interpreter lock gives each thread only a small slice of time to run, while the actual scheduling of threads (deciding which of the waiting threads gets permission to run) is done by the operating system's scheduler. The interpreter is not a fully fledged thread scheduler; it delegates that function to the operating system. The GIL uses OS mutexes to block threads so that only one of them can run at any given moment.

habr.com/ru/articles/957058/

#Python_314 #parallelism #multithreading
