#causalai

2025-07-23

AI: Explainable Enough

“They look really juicy,” she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep into detail and simply asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.

Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.

Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you? That was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.

What the domain expert user doesn’t want:
– How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and to a doctor as well.

What the domain expert desires: 
– Help at the lowest level of detail that they care about. 
– An AI that identifies features A, B, and C, and conveys that when you see A, B, and C together, disease X is likely.

Most users don’t care how deep learning really works. So, if you start giving them details like the IoU score of the object detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, then the AI might be right, but the user does not get to participate in the process. Not to mention the regulatory risk goes way up.
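To make that concrete, here’s a minimal sketch of stopping at the right level: render the outline and the domain label, and keep the raw machinery (confidence, IoU, architecture) internal. The detection dict, the image file, and the label are all hypothetical stand-ins for a real pipeline’s output.

```python
# Hypothetical detection output: show the user the "where" and the "what",
# keep the internals (confidence, IoU, model choice) out of view.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

detection = {
    "box": (120, 80, 60, 40),    # x, y, width, height in pixels
    "label": "mitotic figure",   # a domain term the user actually cares about
    "confidence": 0.87,          # internal only; never rendered
}

fig, ax = plt.subplots()
ax.imshow(plt.imread("slide_patch.png"))  # hypothetical image file

x, y, w, h = detection["box"]
ax.add_patch(patches.Rectangle((x, y), w, h, fill=False,
                               edgecolor="red", linewidth=2))
ax.text(x, y - 5, detection["label"], color="red")  # the label, not the score
ax.axis("off")
plt.show()
```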

This applies beyond images; consider LLMs. No one with any expertise likes a black box. Why, today, do LLMs generate code instead of directly doing the thing the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake-mix kind of way, let the user add the egg.

Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.
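A toy sketch of that structure, with entirely made-up names: stage one is the black box that finds the mid-level features; stage two is simple and visible, so the user can audit the path from findings to conclusion.

```python
def detect_features(image_path):
    """Stage 1, the black box: a deep model returns mid-level findings.
    This stub stands in for a real detector."""
    return {"A": True, "B": True, "C": True}

def conclude(findings):
    """Stage 2, transparent: a rule the domain expert can check."""
    present = sorted(name for name, seen in findings.items() if seen)
    if {"A", "B", "C"} <= set(present):
        return present, "disease X likely"
    return present, "inconclusive -- please review the findings"

present, conclusion = conclude(detect_features("patch.png"))
print(f"Found features {present} -> {conclusion}")
```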

I’m excited by some new developments like REX, which, in a sense, retrofit causality onto standard deep learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like ‘juicy’.

#AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI

2025-07-10

My PR to the #EconML #PyWhy #opensource #causalai project was merged! 🎉 I made a small contribution by allowing a flexible choice of evaluation metric for scoring both the first stage and final stage models in Double Machine Learning (#DML). Before, only the mean square error (MSE) was implemented. But as an ML practitioner "in the trenches" I have found that MSE is hard to interpret and compare across models. My new functions allow that 🙂 #CausalInference #machinelearning #datascience
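Here is the idea, sketched with plain scikit-learn rather than EconML's actual API (the toy data and variable names are mine): score the first-stage nuisance models with whichever metric you find interpretable, then do the usual cross-fitted residual-on-residual final stage.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                 # confounders
T = X[:, 0] + rng.normal(size=n)            # treatment, depends on X
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)  # outcome; true effect = 2

model_y = RandomForestRegressor(n_estimators=100, random_state=0)
model_t = RandomForestRegressor(n_estimators=100, random_state=0)

# Flexible first-stage scoring: any sklearn scorer, not just (negated) MSE.
for scoring in ("neg_mean_squared_error", "r2", "neg_mean_absolute_error"):
    score = cross_val_score(model_y, X, Y, scoring=scoring, cv=3).mean()
    print(f"outcome model, {scoring}: {score:.3f}")

# Final stage: cross-fitted residuals, then residual-on-residual regression.
Y_res = Y - cross_val_predict(model_y, X, Y, cv=3)
T_res = T - cross_val_predict(model_t, X, T, cv=3)
theta = LinearRegression().fit(T_res.reshape(-1, 1), Y_res).coef_[0]
print(f"estimated treatment effect: {theta:.2f}")  # close to 2
```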

Gareth Emslie 🇿🇦 🇪🇦 🇨🇭 keyoke_za@hachyderm.io
2023-07-25

Causal AI, also known as deterministic AI, is revolutionizing the way organizations understand the causes and effects of events or behaviors. Unlike traditional correlation-based machine learning, which only predicts probabilities, causal AI uses fault-tree analysis to determine the precise root cause of issues. This systematic approach allows for automatic... dynatrace.com/news/blog/what-i #CausalAI #DataDrivenDecisions #ImproveDevOps #softcorpremium

2023-07-15

Causal AI is a new branch of artificial intelligence that can understand cause-and-effect relationships in complex systems. Unlike traditional AI, which can only find correlations in data, causal AI can infer causal mechanisms and predict the outcomes of interventions. Causal AI could revolutionize fields such as medicine, economics and social sciences, where causality is essential for decision making.
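As a minimal illustration of “predicting the outcomes of interventions” (the toy data and variable names here are made up), the PyWhy library DoWhy can recover a known interventional effect by adjusting for a confounder:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=n)                        # confounder
T = (W + rng.normal(size=n) > 0).astype(int)  # treatment depends on W
Y = 1.5 * T + 2.0 * W + rng.normal(size=n)    # true effect of do(T=1) is 1.5
df = pd.DataFrame({"W": W, "T": T, "Y": Y})

model = CausalModel(data=df, treatment="T", outcome="Y", common_causes=["W"])
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand,
                                 method_name="backdoor.linear_regression")
print(estimate.value)  # close to 1.5: the predicted shift under intervention
```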

#causalAI #causality #AIethics weforum.org/agenda/2023/06/wha

2023-04-28

ShowWhy is a suite of no-code interfaces for performing data analysis using #causalML techniques and libraries github.com/microsoft/showwhy

#AI #CausalAI #github #microsoft

2022-11-09

I want to focus my research on #ReinforcementLearning and #CausalAI for #robotics. I am currently working on RL for robotics and am starting to learn more about causality. I have a list of researchers from various labs whose work I find interesting.

2022-11-08

Day 2⃣ of the Causal Data Science Meeting 2⃣0⃣2⃣2⃣ with a great program and a fantastic keynote by Silvia Chiappa from DeepMind later this evening. #CDSM22 #Causality #CausalAI #DataScience

Link to the program: causalscience.org/meeting/prog

2022-11-07

Start of the Causal Data Science Meeting 2022. Two days with a fantastic program, including two great keynotes and an industry round table. #CDSM22 #Causality #CausalAI #DataScience

Piotr Gabryś 🇵🇱🇪🇺🇺🇦 PiotrGabrys@sigmoid.social
2022-11-06

#Introduction 👋
Hello everyone!

I'm Peter and I live in Warsaw, Poland. I'm working on Graph Representation Learning at a start-up, Gyfted. I'm mainly interested in learning more about Graph Neural Networks, Ethical AI, and Causality.

I will try to post some Toots about GNNs from my work experience and learning path. 📚 🤖

#gnn #ethicalai #causalai
