#Clang

Felix Palmen :freebsd: :c64: zirias@bsd.cafe
2025-06-20

For two days straight, I just can't reproduce #swad #crashing with *anything* in place (#clang #sanitizer instrumentation, attached #debugger like #lldb) that could give me the slightest hint what's going wrong. 😡

But it *does* crash when "unobserved". And it looks like this is happening a lot sooner (or, more often?) when using #LibreSSL ... but I also suspect this could be a red herring in the end.

The situation reminds me of my physics teacher back at school, who used to say something in German I just can't ever forget:

"Wer misst, misst Mist."

A feeble attempt in English would be "he who measures measures crap"; it was his humorous way of bringing one consequence of #Heisenberg's indeterminacy principle to the point. And indeed, #debugging computer programs always suffers from similar problems...

Christos Argyropoulos MD, PhD ChristosArgyrop@mstdn.science
2025-06-20

Now that I've finished the #Clang backend of the topic (a CPU/GPU-accelerated library, via #openMP, for managing bitsets) for the summer conference community talk (papercall.io/perlcommunity) in 15 days, I find that I will probably have no time to complete the #Perl front-end before the deadline.
Lessons learned:

1. Reference counting kicks ass (it's the default way #openMP manages memory mappings for the GPU), especially if one can control when the memory is released back to the pool; see the sketch below.
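A minimal sketch (not the talk's actual library) of what that reference counting looks like with #openMP target offloading: each "enter data" map increments the reference count of the device copy, and the buffer is only released back to the device pool once a matching "exit data" map drops the count to zero.

#include <cstddef>

int main() {
    const std::size_t n = 1 << 20;
    unsigned long *bits = new unsigned long[n]();

    #pragma omp target enter data map(to: bits[0:n])     // device refcount: 1
    #pragma omp target enter data map(to: bits[0:n])     // device refcount: 2

    #pragma omp target teams distribute parallel for
    for (std::size_t i = 0; i < n; ++i)
        bits[i] |= 1UL;                                  // runs on the device copy

    #pragma omp target exit data map(release: bits[0:n]) // 2 -> 1, nothing released yet
    #pragma omp target exit data map(from: bits[0:n])    // 1 -> 0: copy back, release

    delete[] bits;
}

Holding an extra mapping reference (the second "enter data") is exactly the kind of control over the release point mentioned above.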

Felix Palmen :freebsd: :c64: zirias@bsd.cafe
2025-06-18

I need help. First the question: On #FreeBSD, with all ports built with #LibreSSL, can I somehow use the #clang #thread #sanitizer on a binary actually using LibreSSL and get sane output?

What I now observe debugging #swad:

- A version built with #OpenSSL (from base) doesn't crash. At least I tried very hard, really stressing it with #jmeter, to no avail. Built with LibreSSL, it does crash.
- Less relevant: the OpenSSL version also performs slightly better, but needs almost twice the RAM
- The thread sanitizer finds nothing to complain about when built with OpenSSL
- It complains a lot with LibreSSL, but the reports look "fishy", e.g. it seems to intercept some OpenSSL API functions (like SHA384_Final)
- It even complains when running with a single-thread event loop.
- I use a single SSL_CTX per listening socket, creating SSL objects from it per connection (see the sketch after this list) ... also with multithreading; according to a few sources, this should be supported and safe.
- I can't imagine that doing this on a *single* thread could break with LibreSSL; I mean, that would make SSL_CTX pretty much pointless
- I *could* imagine that sharing the SSL_CTX with multiple threads, each creating their SSL objects from it, *might* not be safe with LibreSSL, but I have no idea how to verify that as long as the thread sanitizer gives me "delusional" output 😳
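For reference, a minimal sketch of the pattern described in this list: one SSL_CTX per listening socket, one SSL object per accepted connection, created from whichever thread handles it. Function names are made up and error handling is omitted; this is not swad's actual code.

#include <openssl/ssl.h>

SSL_CTX *make_server_ctx(const char *cert, const char *key)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
    SSL_CTX_use_certificate_chain_file(ctx, cert);
    SSL_CTX_use_PrivateKey_file(ctx, key, SSL_FILETYPE_PEM);
    return ctx;
}

// per accepted connection, potentially called from any worker thread:
SSL *start_tls(SSL_CTX *ctx, int connfd)
{
    SSL *ssl = SSL_new(ctx);  // shares the (supposedly thread-safe) ctx
    SSL_set_fd(ssl, connfd);
    SSL_accept(ssl);
    return ssl;
}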

Felix Palmen :freebsd: :c64: zirias@bsd.cafe
2025-06-18

Yep, there's a second bug. The #clang #thread #sanitizer had nothing to complain about, and the output from #assert doesn't help much. So, first step: "pimp your assert" 😂 --- #FreeBSD, like some other systems, provides functions to collect and print rudimentary stacktraces; use these if available:
github.com/Zirias/poser/commit
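A minimal sketch of the idea, assuming the backtrace() family from <execinfo.h> is available (FreeBSD ships it in libexecinfo); the macro name is hypothetical, not the one from the commit:

#include <execinfo.h>
#include <cstdio>
#include <cstdlib>

#define TRACING_ASSERT(expr) do { \
    if (!(expr)) { \
        void *frames[32]; \
        int n = backtrace(frames, 32); \
        std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n", \
                #expr, __FILE__, __LINE__); \
        backtrace_symbols_fd(frames, n, 2); /* raw frames to stderr (fd 2) */ \
        std::abort(); \
    } \
} while (0)

The printed addresses can then be piped through addr2line to recover source locations, as in the screenshot below.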

Now I got closer, see screenshot. That's enough to understand the issue: the global event firing when a #child #process exits was used from multiple threads. Ok, it obviously doesn't work that way, so, back to the drawing board regarding my handling of child processes... 🤔

Next #swad release: Soon, so I hope 🙈

swad printing a stacktrace for a failed assert, filtering that with "addr2line" to obtain more useful information.
2025-06-17

@doctormo@floss.social Unfortunately, not #gcc, but it's not a proprietary tool or a Google thing: you can use clang-include-cleaner. It depends on a compilation database, which can be generated with cmake's -DCMAKE_EXPORT_COMPILE_COMMANDS=ON option. Running clang-include-cleaner --edit path/to/source.cpp will analyze that file's includes and remove the unused headers.

#clangd can do it in a #LSP manner, as shown in my included image. In my experience, this also relies on the compilation database.

It's included in the clang-tools-extra package in Fedora, and I think it should be in the standard repos of most distros.

You don't have to move to #clang to use these. I use GCC for my builds, but still use clangd and clang-include-cleaner. Your codebase might need to be in a state that clang could feasibly compile it, though.

A sample source code program that showcases the ability of clangd to remove unused headers (the cleaned-up result is shown after it). The source of the program is as follows:

#include <stdio.h>
#include <iostream>
#include <cmath>

int main() {
    std::cout << "hello world\n";
    return 0;
}
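For comparison, this is the same file after the cleanup, with the two headers nothing uses (stdio.h and cmath) removed:

#include <iostream>

int main() {
    std::cout << "hello world\n";
    return 0;
}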
Felix Palmen :freebsd: :c64: zirias@bsd.cafe
2025-06-16

Next #swad release will still be a while. 😞

I *thought* I had the version with multiple #reactor #eventloop threads and quite some #lockfree stuff using #atomics finally crash-free. I found that, while #valgrind doesn't help much, #clang's #thread #sanitizer is a very helpful debugging tool.
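Illustrative only (not swad's actual code): the flavor of lock-free bookkeeping, built on std::atomic, that the thread sanitizer can reason about when the whole program is built with -fsanitize=thread.

#include <atomic>

static std::atomic<unsigned> active_jobs{0};

// any thread may call these without taking a lock:
void job_start() { active_jobs.fetch_add(1, std::memory_order_relaxed); }
bool job_done()  { return active_jobs.fetch_sub(1, std::memory_order_acq_rel) == 1; } // true for the last one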

But I tested without #TLS (to be able to handle "massive load" which seemed necessary to trigger some of the more obscure data races). Also without the credential checkers that use child processes. Now I deployed the current state to my prod environment ... and saw a crash there (only after running a load test).

So, back to debugging. I hope the difference is not #TLS. This just doesn't work (for whatever reason) when enabling the address sanitizer, but I didn't check the thread sanitizer yet...

2025-06-16

c/c++ devs of fediverse, what does your debugging workflow look like? I've used gdb manually a bit but it's quite laborious to set up each session. I need to be able to do step-through debugging with variable inspection.

[VS]Code and studio are very good for step-through debugging once they're set up, but I'd rather avoid them altogether if possible, especially since you have to jump through a series of flaming hoops to get C debugging working in the non-telemetry open source version of Code.

Any/all suggestions appreciated, other than 'use rust' #programming #clang #cpp

Guillaume Racicot gracicot
2025-06-13

Did I manage to break clang again? Yes! I've seen crashes and miscompiles, but this is the first time I've seen a "sorry, unimplemented" when all I used was lambdas and requires:

sorry, unimplemented: mangling error_mark
sorry, unimplemented: mangling error_mark
sorry, unimplemented: mangling error_mark
sorry, unimplemented: mangling error_mark
sorry, unimplemented: mangling error_mark
sorry, unimplemented: mangling error_mark
Peter N. M. Hansteen pitrh
2025-06-12

Fabien Sanglard published a blog series on driving C compilers, i.e. running the compiler toolchain to build executable programs:

fabiensanglard.net/dc/index.ph

More recently Julia Evans @b0rk posted on the related topic of using Make to compile C programs, which nicely complements Fabien's series:

jvns.ca/blog/2025/06/10/how-to

#clang #gcc #compilers

2025-06-10

Part2: #dailyreport #cuda #nvidia #gentoo #llvm #clang

I learned about cmake config files and the differences between the compiler runtime library (GCC: libgcc and libatomic, LLVM/Clang: compiler-rt, MSVC: vcruntime.lib), the C standard library (glibc, musl), the C++ standard library (GCC: libstdc++, LLVM: libc++, MSVC: MSVC STL), the linker (GCC: binutils, LLVM: lld) and the ABI, and between "toolchain" and "build pipeline". (With clang these pieces can even be selected explicitly, e.g. -stdlib=libc++, -rtlib=compiler-rt and -fuse-ld=lld.)

Gentoo STL:
- libstdc++: sys-devel/gcc
- libc++: llvm-runtimes/libcxx

Gentoo libc: sys-libs/glibc and sys-libs/musl

I learned how NVIDIA CUDA and cuDNN are distributed and what tooling PyTorch has.

Also, I updated my daemon+script that reports the most resource-heavy of the currently running processes, which I share as a package in my Gentoo overlay.

2025-06-10

Part1: #dailyreport #cuda #nvidia #gentoo #llvm #clang
#programming #gcc #c++ #linux #toolchain #pytorch

I am compiling PyTorch with CUDA and CUDNN. PyTorch is mainly a Python library whose core is the Caffe2 C++ library.

The main dependency of Caffe2 with CUDA support is NVIDIA's "cutlass" library (a collection of CUDA C++ template abstractions). Its CUDA code may be compiled either with nvcc, NVIDIA's CUDA compiler distributed with nvidia-cuda-toolkit, or with LLVM's Clang++. But LLVM supports CUDA only up to version 12.1; it may still be used to compile CUDA for the sm_52 architecture. Looks like kneeling before NVIDIA. :)

Before installing dev-libs/cutlass you should do:
export CUDAARCHS=75

I successfully compiled cutlass; now I am trying to compile PyTorch's CUDA code with Clang++.

2025-06-07

Netbeans 26 C++ (clang/clangd) + build system

Hi everyone. I needed a minimal IDE and remembered that NetBeans exists. I downloaded it and liked it a lot: it's convenient, but something was missing. How do you make NetBeans 26 (C++, clangd) work when a module that used to work as a plugin no longer does? Let's look at this nuance.

habr.com/ru/articles/916534/

#C++ #Netbeans_26 #clang

2025-06-07

All I want is just a collection of #binutils, #GCC, #llvm+#clang, #glibc and #musl that are "free standing" / relocatable, which I can pack into a #squashfs image to carry around to my various development machines.

You'd think that for something as fundamental as compiler infrastructure with over 60 years of knowledge, the whole bootstrapping and bringup process would have been super streamlined, or at least mostly pain free by now.

Yeah, about that. IYKYK

Jari Komppa 🇫🇮 sol_hsa@mastodon.gamedev.place
2025-06-06

Laptop number three, installed without issue.

For fun, tried to make a simple c++ project (literally hello world) using CLion, and it doesn't compile. No idea why. Googling the error gives some random config advice that doesn't apply.

Like literally, I installed clang (and clang-tools) with apt. Installed CLion. Everything looks fine. Create a new c++ project. Hit build. Error.

I know I could get stuff working fine at command line, but I'd rather not.

#clion #ubuntu #clang

So... apparently the compile_commands.json generated by #clang with `-MJ compile_commands.json` has invalid syntax, so #clangd fails (silently) to load it. (As far as I can tell, -MJ emits one database *fragment* per input, which still needs to be wrapped in [ ] and merged before it's valid JSON.)
And even after fixing it manually I get a lot of "Unknown argument" errors.
I'm not even working on a big project. This is just a single .cpp file. Not even a header.
It's all tools from the #LLVM project. I'm not doing anything fancy!
This is so frustrating. 🤬

Cam Cook scrum_log
2025-05-28

Compiling gcc from source in your container is where you have to back up and re-evaluate everything.
