#ExplainableAI

AIXPERT project (@AIXPERT_project)
2026-02-03

NeurIPS 2025 brought together the global AI community, and the AIXPERT project was proud to take part.

Our partners Athena Research Center and the Vector Institute presented multiple papers and posters, sharing their insights.

📖 Read the full article and explore the publications in our library:
👉 aixpert-project.eu/2026/01/28/

Insights from NeurIPS
AtomLeap.ai (@AtomLeap_ai)
2026-02-02

We used to write code and understand every line.
Now we build AI systems we can’t fully explain.

Scientists are studying them the way we study brains.

blog.atomleap.ai/blog/not-just


Gašper Beguš (@begusgasper)

At a talk and fireside chat held at OpenAI, Kevin Weil presented how AI interpretability supports scientific discovery. He explained the insights and applications that interpreting AI can offer scientific research, reportedly laying out three main use cases.

x.com/begusgasper/status/20173

#interpretability #ai #research #explainableai

2026-01-30

🏆 𝗕𝗲𝘀𝘁 𝗣𝗮𝗽𝗲𝗿 𝗔𝘄𝗮𝗿𝗱 𝗳𝗼𝗿 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗔𝗜!

A paper co-authored by our AI Department Head Wojciech Samek was awarded the 2025 Information Fusion Best Paper Award, setting out a roadmap for the future of #ExplainableAI research.

🔗 hhi.fraunhofer.de/en/press/new

#XAI #ResponsibleAI

What if no one knows why your AI said what it said? (The black-box phenomenon and XAI)

An explanation of AI's black-box phenomenon. It covers the problem of developers not fully understanding an AI's internal workings, and presents how XAI (explainable AI) can help solve it.

news.hada.io/topic?id=26200

#xai #ai #blackbox #explainableai #machinelearning

IT Finanzmagazin (@IT_Finanzmagazin)
2026-01-28

HanseMerkur uses AI strictly as a supporting tool: all essential decisions remain with humans. Company-wide AI governance and strict oversight bodies ensure transparency, explainability, and freedom from discrimination.

it-finanzmagazin.de/ki-bei-han...

IT Finanzmagazin (@IT_Finanzmagazin)
2026-01-28

ING shows how explainable AI is put into practice in the financial sector: strict governance, a focus on data ethics, and innovative tools guarantee AI transparency. For computer scientists: insights into hands-on XAI implementation and current regulation.

it-finanzmagazin.de/x26ai-expl...

RC Trustworthy Data Sciencerctrust@ruhr.social
2026-01-27

Building ML Tools Scientists Will Actually Use

The Gap Between Models and Tools

I've seen a lot of impressive ML models in biopharma that never get used. Not because the science is wrong, but because the tool doesn't fit into anyone's workflow. The model might be published in Nature Methods with beautiful receiver operating characteristic curves, but if a discovery scientist can't access it without filing an IT ticket, or if it requires command-line expertise, it sits unused. This is the reality of building ML tools for scientific users: […]

kemal.yaylali.uk/building-ml-t

2026-01-24

Explainable AI helps doctors understand medical AI decisions—but transparency doesn’t equal truth. Here’s why trust, limits, and trade-offs matter. hackernoon.com/explainable-ai- #explainableai

2026-01-22

AI pressure is already hitting the SOC.
Boards want ROI. Teams inherit risk.

The issue isn’t AI—it’s tools that add noise, unchecked automation, and zero proof of impact.

7 bubble-proof moves to invest in AI you can defend.
Read more: graylog.org/post/how-to-ignore

#securityAI #SOC #ExplainableAI


2026-01-19

🔧 A CS student built a GitHub activity-analysis engine that can explain its scores and confidence levels. Feedback requested: how the scores are computed, weak assumptions, and improvements for internal use. Not looking for praise, just criticism. #GitHub #ExplainableAI #SideProject #Analysis #Programming #Feedback

reddit.com/r/SideProject/comme

2026-01-15

Would you trust a decision you can’t explain? 🤔

🎬 The 𝗻𝗲𝘄 𝗲𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗿 𝘃𝗶𝗱𝗲𝗼 𝘀𝗲𝗿𝗶𝗲𝘀 from our #XAI group kicks off with 𝗟𝗮𝘆𝗲𝗿-𝗪𝗶𝘀𝗲 𝗥𝗲𝗹𝗲𝘃𝗮𝗻𝗰𝗲 𝗣𝗿𝗼𝗽𝗮𝗴𝗮𝘁𝗶𝗼𝗻 (𝗟𝗥𝗣) — a foundational method for opening the “black box” 📦 and making AI decisions transparent and trustworthy.

▶️ youtube.com/watch?v=b26IZ2aYGjU #ExplainableAI
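For readers who want the gist of LRP before watching: the method redistributes a network's output score backwards, layer by layer, so each input feature receives a relevance value. Below is a minimal sketch of the epsilon rule on a toy two-layer ReLU network in NumPy. The weights, input, and the `lrp_epsilon` helper are invented for illustration and are not taken from the video or the XAI group's code.

```python
import numpy as np

def lrp_epsilon(weights, activations, relevance, eps=1e-6):
    """Propagate relevance one linear layer back with the epsilon rule:
    R_i = a_i * sum_j ( w_ij / (z_j + eps*sign(z_j)) ) * R_j
    """
    a = activations            # layer inputs, shape (n_in,)
    W = weights                # shape (n_in, n_out)
    z = a @ W                  # pre-activations, shape (n_out,)
    z = z + eps * np.sign(z)   # stabilizer avoids division by zero
    s = relevance / z          # per-output "relevance per unit activation"
    return a * (W @ s)         # shape (n_in,); approximately conserves sum

# Forward pass through a toy network with made-up weights
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))
h = np.maximum(0, x @ W1)      # hidden ReLU layer
out = h @ W2                   # output scores

# Seed relevance at the winning output neuron, then propagate back
R_out = np.zeros_like(out)
R_out[out.argmax()] = out.max()
R_h = lrp_epsilon(W2, h, R_out)
R_x = lrp_epsilon(W1, x, R_h)  # per-input-feature relevance
```

The key property to notice is conservation: with a small epsilon and no biases, the total relevance entering a layer roughly equals the total leaving it, so `R_x.sum()` stays close to the output score being explained.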

IT Finanzmagazin (@IT_Finanzmagazin)
2026-01-12

81% of banks deploy AI models without sufficient transparency and control, a massive risk. Only a strong governance framework makes AI in financial IT truly trustworthy.

it-finanzmagazin.de/x26ai-ambi

2026-01-07

Every January, the gym fills up.
By February, it empties.
Security AI implementation follows the same arc. Fast launches. Big promises. Then, analysts spend more time validating outputs than stopping threats. The misses come from skipping explainability, governance, and context inside SOC workflows.

Seth Goldhammer explains why most AI efforts stall after launch: graylog.org/post/why-ai-transf
#SecurityAI #SOC #ExplainableAI #governance 

IT Finanzmagazin (@IT_Finanzmagazin)
2026-01-07

Generative AI is revolutionizing banking processes, but only agent logic and explainable AI enable implementations that are regulatorily safe and traceable. Over 80% of banks already use generative AI, yet complex integration and control remain the central challenge.

it-finanzmagazin.de/x26ai-mehr

IT Finanzmagazin (@IT_Finanzmagazin)
2026-01-05

LLMs hit their limits in critical, traceable decisions, since their outputs often cannot be explained. Explainable agentic AI makes AI decisions transparent and automates processes reliably.

it-finanzmagazin.de/llm-mit-ei...

2026-01-02

Auditing AI decisions: a new technique uses context graphs to help us understand an AI's reasoning process, making it more transparent.

#AI #ExplainableAI #Auditing #ContextGraph #Technology #MachineLearning #AINews

reddit.com/r/LocalLLaMA/commen
