danzin

Brazilian (experienced) hobbyist Pythonista.

He/him/ele

danzin boosted:
2026-03-15

Three recent issues all describing hangs: anyone want to take a look? Even checking if you can reproduce one will be helpful!
- github.com/coveragepy/coverage
- github.com/coveragepy/coverage
- github.com/coveragepy/coverage

danzin boosted:
Alexandre B A Villares 🐍 @villares@ciberlandia.pt
2026-03-14
danzin @danzin
2026-03-14

@procrastiwalter Same for me. And I know several autistic people, my son included, who find math in general quite easy.

Whenever I see any generalization about autistic people, I get wary. Even the difficulties with socializing vary a lot, as does the way of dealing with empathy (exaggerated or suppressed) and the way of handling frustration (which can give the impression of ease or difficulty); masking interferes with everything a lot, etc.

danzin @danzin
2026-03-11

@simon What's your opinion on using AI to improve existing code? I'm taking a stab at it in github.com/devdanzin/code-revi. It defines agents and scripts to review code, and you can ask your agent to use it and improve/fix the issues it finds.

For my projects, it seems to catch important improvement opportunities and issues to fix. Works like a supercharged linter, linting behavior, architecture, macro and micro implementation details, etc. Not sure whether it's a real effect or just placebo though.

danzin boosted:
Alexandre B A Villares 🐍 @villares@ciberlandia.pt
2026-03-11

#Python now has a wonderful colorful REPL, and in 3.14 it also made #argparse look amazing! Check my silly tool for converting PNGs to #GIF animations! Thank you @hugovk, @ambv and everyone else involved... github.com/villares/sketch-a-d

A screenshot from the terminal showing the help for my pngs_to_gif.py script, all nicely colored.
danzin @danzin
2026-03-06

@v_raton If you're not against it on principle, enable next edit suggestions just to see a bit of it. I find it magical; even for writing random texts in Portuguese the system gets things right a lot.

danzin boosted:
Python Software Foundation @ThePSF@fosstodon.org
2026-03-06

How do you use Python and its related technologies? Let us know in the 2026 Python Developers Survey! 🐍 #python #pythondevsurvey
surveys.jetbrains.com/s3/pytho

danzin @danzin
2026-03-02

labeille Package Registry stats

Top 3.15 Blockers (364 packages):
* PyO3 / Rust / maturin: 111
* C extension build failures: 108
* pydantic-core (transitive PyO3): 69
* numpy / scipy / meson: 43

Once PyO3 adds 3.15 support, ~180 more packages will unlock (PyO3 direct + pydantic-core transitive)

Skip Reasons (418 packages):
* Monorepo subpackage (Azure, GCloud, etc.): 214
* No test suite found: 70
* No source repository: 52
* Type stub packages: 42

danzin @danzin
2026-03-02

labeille Package Registry stats

We've grown the registry: github.com/devdanzin/labeille/

* Total packages: 1,500
* Enriched (information collected and present): 1,500 (100%)
* Fully runnable on CPython 3.15: 654 (43.6%)
* Skipped (no tests, monorepo, etc.): 418 (27.9%)
* 3.15-specific blockers (skip_versions): 364 (24.3%)
* pytest: 95.1% (1,427 packages)
* unittest: 4.8% (72 packages)
* GitHub: 96.4% of repos
* Same JIT crash found in 7 packages

danzin @danzin
2026-02-28

@miguelgrinberg I see your point, makes sense. I agree disclosure would help.

One data point you might want to look at for your article is searching for "LLM" and specific tool names in issues and PRs. Mostly it's core developers asking people who are using these tools not to use them. But there are a couple of cases of contributors disclosing that analysis or coding was done or helped by such a tool.

danzin @danzin
2026-02-28

@miguelgrinberg I've had to e.g. ask Claude Code to credit Gemini as the LLM tool I used, so transparency is feasible and IMO helps. People who want nothing to do with LLM generated code, reviewers, all get a clear signal.

The DevGuide policy doesn't mention disclosing or crediting LLM tools. I expect more contributions have been made with them. If your position is that it should always be disclosed, I can agree with that to a point. Say, when a significant part of the code was generated by an LLM.

danzin @danzin
2026-02-28

@miguelgrinberg Not sure I agree that has to be said, but I agree it should be stated that LLMs are allowed for all kinds of contributors, as long as a human signs the CLA and takes full responsibility for the contributions.

danzin @danzin
2026-02-28

@miguelgrinberg @hugovk So raising contributor productivity would be one reason, maybe the biggest, that people use LLMs on CPython, especially for code changes that involve a lot of tedious, mechanical work.

Submitting higher-quality code is another: LLMs can do an (internal) pre-review of the PR, can point out details in the docs or the code you forgot about, can write comprehensive tests faster, etc.

About crediting, I think it's a transparent way to say "(some) of this was written by a tool".

danzin @danzin
2026-02-28

@miguelgrinberg @hugovk I kinda agree it's somewhat vague if you're trying to figure out whether LLM-assisted/generated code is allowed or not. Coming to the page already thinking it is allowed makes it a lot clearer: the page is about HOW it's allowed. I'd support a minor rewrite stating explicitly that LLMs are allowed, to handle the first case.

You say "I can't really imagine that CPython is having issues finding contributors". There are over 7k open issues; contributors can't handle everything on their plates.

danzin @danzin
2026-02-28

The most important and tedious part of labeille is the registry.

So far it has 350+ PyPI packages, each with a repo URL, install and test commands, metadata about whether it has C extensions, which Python versions to skip, and whether it needs xdist disabled.

"Just run pytest" doesn't work for all packages. Some need specific test markers or editable installs. Some have tests that might hang. Some need extra dependencies that aren't in their dev requirements.

danzin @danzin
2026-02-28

I built labeille to find CPython JIT crashes, but it's really a "run real-world test suites at scale" platform.

It also works for:
— Checking which packages pass their tests on a new CPython version
— Testing free-threaded (no-GIL) CPython compatibility
— Measuring coverage.py or memray overhead across hundreds of packages
— Comparing CPython vs PyPy performance on real code

The registry of 350+ packages with install/test commands is the core.

danzin @danzin
2026-02-28

labeille can compare 2 test runs and show what changed and why it changed.

When a package goes from PASS to CRASH, labeille looks at the package's repo. If the commit is the same, it's likely a CPython/JIT regression. Otherwise, the cause might be the package itself:

requests: PASS → CRASH
Repo: abc1234 → abc1234 (unchanged — likely a CPython/JIT regression)

flask: CRASH → PASS
Repo: 222bbbb → 333cccc (changed)

This allows figuring out "3 of these are JIT regressions".
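The comparison boils down to a small decision rule; a minimal sketch (illustrative names, not labeille's actual API):

```python
# Hypothetical sketch of labeille's run-comparison logic.
def classify_change(pkg, old, new):
    """Compare two recorded runs of one package and guess the cause of a change.

    Each run is a dict with the test 'result' and the package repo 'commit'
    that was tested.
    """
    if old["result"] == new["result"]:
        return "unchanged"
    if old["commit"] == new["commit"]:
        # Same package code, different outcome: the interpreter changed.
        return "likely CPython/JIT regression"
    return "package changed, might not be a CPython issue"

print(classify_change("requests",
                      {"result": "PASS", "commit": "abc1234"},
                      {"result": "CRASH", "commit": "abc1234"}))
# likely CPython/JIT regression
```

Applied across hundreds of packages, this rule is what lets a run summary say "3 of these are JIT regressions" without manually inspecting each diff.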

danzin @danzin
2026-02-28

labeille has a bisect command that binary-searches through a package's git history to find the commit that triggers a JIT crash:

labeille bisect requests --good=v2.30.0 --bad=HEAD --target-python /path/to/cpython-jit

github.com/devdanzin/labeille#

Commits that won't build get skipped automatically (like git bisect skip), revisions get a fresh venv so dependency versions don't leak, and you can filter by crash signature when a package has distinct crashes.
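The skip-aware binary search can be sketched like this (a simplified, hypothetical version; the real tool builds a fresh venv and runs a test suite where this sketch calls a plain `test` function):

```python
# Hypothetical sketch of bisect-with-skip over a linear commit history.
# test(commit) returns "good", "bad", or "skip" (e.g. the commit won't build).
def bisect(commits, test):
    """Return the first 'bad' commit, assuming a single good-to-bad transition."""
    lo, hi = 0, len(commits) - 1  # commits[lo] is good, commits[hi] is bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Walk outward from mid past unbuildable commits, like `git bisect skip`.
        for candidate in sorted(range(lo + 1, hi), key=lambda i: abs(i - mid)):
            verdict = test(commits[candidate])
            if verdict != "skip":
                break
        else:
            break  # everything in between is unbuildable; stop narrowing
        if verdict == "good":
            lo = candidate
        else:
            hi = candidate
    return commits[hi]
```

Giving each revision its own venv matters here: without it, a dependency installed while testing one commit could leak into the next and change the verdict.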

danzin @danzin
2026-02-28

labeille runs test suites from popular PyPI packages against a JIT-enabled CPython build and catches crashes: segfaults, assertion failures, etc.

If requests, flask, attrs, etc. all pass their tests under the JIT, that's evidence the JIT is working. If one crashes, there's a bug with a reproducer. We've found one crash so far: github.com/python/cpython/issu

This requires curating a local package registry with repo URLs, install and test commands, etc.
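One way to tell a hard crash from an ordinary test failure is the process exit status: on POSIX, `subprocess` reports a negative `returncode` when the child died from a signal. A minimal sketch of that detection (not labeille's actual code):

```python
# Sketch: distinguish a crash (segfault, abort) from a plain test failure
# by the sign of the child's returncode. POSIX-only behavior.
import signal
import subprocess
import sys

def run_suite(args):
    """Run a test command; report 'pass', 'fail', or 'crash:<SIGNAME>'."""
    proc = subprocess.run(args)
    if proc.returncode == 0:
        return "pass"
    if proc.returncode < 0:
        # Killed by a signal, e.g. -11 -> SIGSEGV.
        return f"crash:{signal.Signals(-proc.returncode).name}"
    return "fail"

# Simulate a segfaulting "test suite" by having Python signal itself.
print(run_suite([sys.executable, "-c",
                 "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"]))
# crash:SIGSEGV
```

Assertion failures in a debug CPython build also abort the process, so they surface the same way (as a negative returncode) rather than as a pytest failure.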

danzin @danzin
2026-02-28

I've been working on a new Python tool: labeille. Its main purpose is to look for CPython JIT crashes by running real world test suites.

github.com/devdanzin/labeille

But it's grown a feature that might interest more people: benchmarking using PyPI packages.

How does that work?

labeille allows you to run test suites in 2 different configurations. Say, with coverage on and off, or memray on and off. Here's an example:

gist.github.com/devdanzin/6352
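The core of that benchmarking idea can be sketched as timing the same command under two configurations (the commands below are stand-ins; real runs would use a package's install/test commands, e.g. pytest with coverage on vs. off):

```python
# Sketch of A/B benchmarking: run two configurations, compare wall-clock time.
import subprocess
import sys
import time

def timed_run(args):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(args, check=True)
    return time.perf_counter() - start

# Stand-ins for two configurations of the same test suite.
config_a = [sys.executable, "-c", "pass"]
config_b = [sys.executable, "-c", "sum(range(10**5))"]

t_a, t_b = timed_run(config_a), timed_run(config_b)
print(f"overhead: {t_b / t_a:.2f}x")
```

Doing this over hundreds of real test suites gives an overhead estimate grounded in actual workloads rather than microbenchmarks.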
