patalt :julia:

Researching Trustworthy AI at TU Delft & ING. Counterfactual Explanations and Probabilistic ML. Tools: Julia, Quarto

📜 Recent: arxiv.org/abs/2312.10648
🌐 Blog: paltmeyer.com/blog/
📦 Julia: paltmeyer.com/content/software

2024-09-30

To address this need, CounterfactualExplanations.jl now has support for Trees for Counterfactual Rule Explanations (T-CREx), the most novel and performant approach of its kind, proposed by Tom Bewley and colleagues in their recent #ICML2024 paper: proceedings.mlr.press/v235/bew

Check out our latest blog post to find out how you can use T-CREx to explain opaque machine learning models in #Julia: taija.org/blog/posts/counterfa

Example of a Counterfactual Rule Explanation for a synthetic dataset.
2024-09-30

When we are primarily interested in explaining the general behavior of opaque models, however, local explanations may not be ideal. Instead, we may be more interested in group-level or global explanations.

2024-09-30

Counterfactual Explanations are typically local in nature: they explain how the features of a single sample or individual need to change to produce a different model prediction. This type of explanation is useful, especially when opaque models are deployed to make decisions that affect individuals, who have a right to an explanation (in the EU).
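The idea can be illustrated with a hand-rolled toy (this is not the package's API; the logistic model, weights, and step size below are all made up for illustration): starting from a factual input, we nudge the features along the classifier's decision-boundary normal until the prediction flips.

```julia
# Toy local counterfactual for a hand-written logistic classifier.
σ(z) = 1 / (1 + exp(-z))
w, b = [2.0, -1.0], 0.5                 # assumed linear model parameters
predict(x) = σ(w'x + b) >= 0.5 ? 1 : 0

x = [-1.0, 0.5]                          # factual input, predicted 0
x′ = copy(x)
while predict(x′) == predict(x)          # nudge toward the decision boundary
    x′ .+= 0.05 .* w                     # gradient of w'x + b w.r.t. x is w
end
predict(x), predict(x′)                  # prediction has flipped: (0, 1)
```

The final `x′` is the counterfactual: a minimally perturbed version of `x` that receives a different prediction. Real generators add constraints such as plausibility and sparsity on top of this basic recipe.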

2024-09-20

Something's been cooking this week at [CounterfactualExplanations.jl](github.com/JuliaTrustworthyAI/) ...

One of my favorite papers @ICMLConf this year proposes a new model-agnostic approach for generating global and local counterfactual explanations through surrogate decision trees: arxiv.org/abs/2405.18875

Will be shipped with next release :julia:

Counterfactual Rule Explanation generated in CounterfactualExplanations.jl.
2024-09-18

Me: Well ... yes, of course, my PhD has practical value.

The practical value:

```julia
"""
    issubrule(rule, otherrule)

Checks if the `rule` hyperrectangle is a subset of the `otherrule` hyperrectangle. $DOC_TCREx
"""
function issubrule(rule, otherrule)
    return all(y[1] <= x[1] && x[2] <= y[2] for (x, y) in zip(rule, otherrule))
end
```
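For context: a rule here is a hyperrectangle given as one `(lower, upper)` bound per feature, and containment must hold in every dimension. A minimal standalone sketch (the tuple-of-bounds representation is assumed for illustration, not the library's exact types):

```julia
# A rule is modelled as a vector of (lower, upper) bounds, one per feature.
# `rule` is a subrule of `otherrule` iff its box fits inside the other box
# in every dimension.
issubrule(rule, otherrule) =
    all(y[1] <= x[1] && x[2] <= y[2] for (x, y) in zip(rule, otherrule))

inner = [(0.2, 0.8), (1.0, 2.0)]   # box [0.2, 0.8] × [1.0, 2.0]
outer = [(0.0, 1.0), (0.5, 2.5)]   # box [0.0, 1.0] × [0.5, 2.5]

issubrule(inner, outer)            # true: inner lies inside outer
issubrule(outer, inner)            # false
```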

2024-09-11

Nice, thanks a lot @adam_wysokinski

2024-09-11

Taija, the organization for Trustworthy AI in Julia :julia:, has its own website and blog now: taija.org/

Lots of interesting stuff upcoming, including blog posts from our two Google Summer of Code/Julia Season of Contributions students.

We'll use the blog to share any relevant updates, such as the recent release of a small new package for sampling from model distributions: taija.org/blog/posts/new-packa

Stay tuned!

patalt :julia: boosted:
Sue is Walking the Earth 🌱susankayequinn@wandering.shop
2024-08-22

The next time someone tells you some tech is "inevitable" please laugh directly in their face. And then tell them that's been used as an excuse for exploitation forever, it's a red flag, and if they were smart, they'd avoid it, well, like the plague. But we know how well that's going.

arxiv.org/abs/2408.08778

@davidthewid @histoftech

Watching the Generative AI

While the Generative AI hype bubble is slowly deflating, its harmful effects will last.

David Gray Widder

Digital Life Initiative, Cornell University, New York City

Mar Hicks

School of Data Science, University of Virginia, Charlottesville

screenshot of introduction of article with red boxes emphasizing these two passages:

"Only a few short months ago, Generative AI was sold to us as inevitable..."

"...even as the Generative AI hype bubble slowly deflates, its harmful effects will last: carbon can't be put back in the ground, workers continue to need to fend off AI's disciplining effects, and the poisonous effect on our information commons will be hard to undo."
patalt :julia: boosted:
2024-07-24

AI models collapse when trained on recursively generated data.

#machinelearning

nature.com/articles/s41586-024

2024-07-23

@spdrnl during a tutorial at ICML on data-efficient ML

2024-07-22

Shocking! Against all odds, it turns out that scale is not all you need. If you spot a confused tech bro, give them a hug!

Slide from the ICML 2024 tutorial on data-efficient ML. Illustrates the concept of diminishing returns to data set size.
2024-07-20

Good to know I landed at the right airport @ICMLConf 😂

Standing in front of a conveyor belt with an ICML welcome display at Vienna airport.
patalt :julia: boosted:
2024-07-20

Imagine, ten years from now, starting work in a codebase that was built mostly with AI tools. I can't see how that wouldn't be awful.

Like, we all know how much harder it is to work in a codebase where the original authors are long gone. Imagine a world in which not only does nobody still there know why certain decisions were made, but _nobody_ ever knew.

If ML is the high interest credit card of tech debt, GenAI may just be running the printing press.

2024-07-19

With respect to *emergent properties* of LLMs, "It would appear that we are training LLMs as a very expensive method to discover what data exists on the Web" 🤣 Google learned that lesson the hard way recently when Reddit shitposts entered their AI summaries. Ugggh you'd think scientists have come up with more efficient ways to retrieve information by now ... oh wait!

If you're curious about the AGI debate, do yourself a favor and read this ICML position paper: arxiv.org/abs/2308.07120

2024-07-18

And if you'd like to learn more about why Andrew M. Demetriou, Antony Bartlett, @cynthiacsliem and I think that researchers should "Stop Making Unscientific AGI Performance Claims", come find me during the poster session on Wednesday: icml.cc/virtual/2024/poster/34.

Pre-print: arxiv.org/abs/2402.03962
Related blog post: patalt.org/blog/posts/spurious

2024-07-18

Excited to be heading to Vienna next weekend to attend @ICMLConf for the first time.

If you're around and interested in anything related to #xai, counterfactual explanations, interpretability, finance or economics, do reach out - I'm always happy to chat! You can find out more about my research interests on my website: patalt.org

2024-07-18

I was today years old when I realised you can interpolate Julia objects in shell mode:

```julia
julia> x = "hello world"
"hello world"

shell> echo $x
hello world
```

2024-07-17

I've come across #tabbyml and have started using it as a replacement for GH copilot. It's open-source, comes with a VSCode extension and allows you to host your own LLM for chat and code complete (also locally). Currently running the StarCoder2-3B for code complete on my MacBook M2. First impression: model unsurprisingly less powerful than copilot but inference time is fast enough. Since it's running locally I don't even need WiFi. Very neat!

tabby.tabbyml.com

2024-07-12

Forget about Mojo! Python.jl 🔥 is the only superset of Python you’ll ever need (developing …)
