#explainableAI

ESWC Conferences @eswc_conf@sigmoid.social
2025-06-02

🔍 The 1st XAI+KG Workshop is now underway at #ESWC2025!
📍 Room 7 – Nautilus, Floor 0 (First Half of the Day)

XAI+KG 2025 explores how Knowledge Graphs can enhance the interpretability and transparency of AI models — especially deep learning systems — and how Explainable AI (XAI) techniques can, in turn, improve the construction and refinement of Knowledge Graphs. 🤝🧠

Join us for thought-provoking discussions at the intersection of explainability and semantics.

#XAIKG2025 #ExplainableAI #AI

Dr. Thompson @rogt_x1997
2025-06-01

🧠 What if the real future of AI isn’t just power… but transparency?
Explainable AI (XAI) is rapidly becoming the new backbone of responsible ML systems — and might just be the thing that saves machine learning from collapse.

🌍 From black-box fear to clarity-led trust, here’s how the next generation of models is being reimagined from the inside out.

🔍 Read Now:
👉 medium.com/write-a-catalyst/be


2025-05-31

Portoroz coastline by night. Tomorrow, ESWC 2025 will start with 2 days of workshops & tutorials, before the main conference with the "official" opening on June 3rd. Looking forward to a great conference! :)

2025.eswc-conferences.org/abou

#eswc2025 #semanticweb #semweb #knowledgegraphs #reliableAI #explainableAI #llms #slovenia @fiz_karlsruhe @fizise @tabea @enorouzi @sashabruns

Image: Portoroz coastline with the harbour and the lights by night.

Dr. Thompson @rogt_x1997
2025-05-27

🧠⚠️ What if your AI could sense failure before it even runs?
We built a system that predicts agent execution outcomes with 88.6% accuracy using interpretable ML, prompt entropy, and chain depth indicators.
This is not sci-fi—it's agentic foresight in action.

👇 Read now:
medium.com/@rogt.x1997/why-40-
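
For a feel of the idea, here is a minimal sketch of an interpretable success predictor. The feature names (prompt_entropy, chain_depth) and the toy data are assumptions for illustration, not the article's actual pipeline:

```python
# Hypothetical sketch: predict agent run success from interpretable features
# such as prompt entropy and chain depth (names and data invented here,
# not taken from the article).
import math
from collections import Counter

from sklearn.linear_model import LogisticRegression

def prompt_entropy(prompt: str) -> float:
    """Shannon entropy (bits) of the prompt's whitespace-token distribution."""
    tokens = prompt.split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy training data: [prompt_entropy, chain_depth] -> 1 = run succeeded
X = [
    [2.1, 1], [2.4, 2], [3.9, 6], [4.2, 8],
    [2.0, 1], [4.5, 7], [3.8, 5], [2.6, 2],
]
y = [1, 1, 0, 0, 1, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Coefficients stay human-readable: one weight per named feature.
for name, w in zip(["prompt_entropy", "chain_depth"], model.coef_[0]):
    print(f"{name}: {w:+.2f}")
print("P(success):", model.predict_proba([[prompt_entropy("plan then act"), 3]])[0, 1])
```

The point of a linear model here is exactly the transparency the post claims: each feature's contribution to the failure forecast can be read directly off its weight.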


Sanjay Mohindroo @smohindroo1@vivaldi.net
2025-05-27

Delve into the darker realms of artificial intelligence with this reflective exploration of AI bias, toxic data practices, and ethical dilemmas. Discover the challenges and opportunities facing IT leaders as they navigate the complexities of AI technology. #ArtificialIntelligence #AIethics #DataEthics #TechnologyEthics #ExplainableAI #ChatGPT #EthicalAI #Regulation #AGI #SanjayMohindroo
medium.com/@sanjay.mohindroo66

Tommaso Turchi @tommasoturchi
2025-05-23

Can you spare 10–15 min for a research project? My student is studying how explanations impact trust in AI decisions. No AI experience needed!

👉 English: survey.trx.li/index.php/193548
👉 Italian: survey.trx.li/index.php/193548

RTs greatly appreciated!

Dr. Thompson @rogt_x1997
2025-05-22

🚀 What if your AI could say “I might fail” before even trying?
🔍 10,254 logs analyzed
📊 88.9% accurate predictive model
🧠 SHAP-powered, self-aware agentic system
Discover how Manus AI is turning agents into intelligent evaluators.

👉 Read the full story: medium.com/@rogt.x1997/88-9-ac
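
A minimal sketch of the SHAP side of such a system, assuming a toy gradient-boosted success predictor (shap's TreeExplainer is a real API; the features and labels below are invented for illustration):

```python
# Assumed setup, not the Manus AI pipeline: explain a success predictor
# with per-feature SHAP attributions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # toy agent-log features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy success label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# One attribution per feature per prediction: the "why" behind each score.
print(shap_values.shape)   # (5, 3)
```

Per-prediction attributions like these are what let an agent report not just "I might fail" but which signal (e.g. an unusually deep chain) is driving that forecast.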

Dr. Thompson @rogt_x1997
2025-05-22

🔍 From Black Box to Glass House
AI is no longer behind the curtain—it's in courtrooms, hospitals, and hiring panels. But can we trust what we can't see?
Discover the real cost of algorithmic opacity and why transparency is non-negotiable in modern AI ethics. ⚖️

👉
medium.com/@rogt.x1997/from-bl

Valeriy M., PhD, MBA, CQF @predict_addict@sigmoid.social
2025-05-17

Transparent, human-readable models from raw data. At this point, KAN sceptics haven’t just eaten their hats — they’ve devoured the rest of their wardrobe too.

arxiv.org/pdf/2505.07956

#AI #KAN #LLM #SymbolicRegression #ExplainableAI #MachineLearning

Tim Green @rawveg@me.dm
2025-05-06

AI's growing power demands transparency—understanding how decisions are made is key to trust and accountability. Explainable AI (XAI) bridges the gap between black-box algorithms and human collaboration.
Discover more at rawveg.substack.com/p/unlockin
#HumanInTheLoop #AIethics #ExplainableAI #TechInnovation

CSBJ @csbj
2025-04-15

🧬 Could AI deliver skin cancer diagnoses with the clarity and reasoning of a dermatologist?

🔗 A two-step concept-based approach for enhanced interpretability and trust in skin lesion diagnosis. DOI: doi.org/10.1016/j.csbj.2025.02.013

📚 CSBJ Smart Hospital: csbj.org/smarthospital

caravane @caravane
2025-04-13
Miguel Afonso Caetano @remixtures@tldr.nettime.org
2025-04-11

"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.

Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.

OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”

“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.

“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.

To date, no other chatbot on the market provides the ability to trace a model’s response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."

thenewstack.io/llms-can-now-tr

#AI #GenerativeAI #LLMs #Chatbots #ExplainableAI #Traceability #AITraining
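
A toy illustration of the "exact-match search" idea the quote describes (my sketch, not Ai2's OLMoTrace implementation, which runs over the full pre-training corpus): scan a model response for long spans that appear verbatim in a document collection.

```python
# Naive exact-match tracing: find response spans that occur verbatim in a
# (tiny, invented) training corpus. Real systems index the corpus instead
# of scanning it, but the matching idea is the same.
corpus = {
    "doc-001": "the quick brown fox jumps over the lazy dog",
    "doc-002": "to be or not to be that is the question",
}

def trace_spans(response: str, min_words: int = 5):
    """Yield (span, doc_id) for every verbatim n-gram match in the corpus."""
    words = response.split()
    for n in range(len(words), min_words - 1, -1):   # longest spans first
        for i in range(len(words) - n + 1):
            span = " ".join(words[i : i + n])
            for doc_id, text in corpus.items():
                if span in text:
                    yield span, doc_id

response = "he said the quick brown fox jumps over the fence"
for span, doc_id in trace_spans(response):
    print(f"{doc_id!r} contains: {span!r}")
```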

CSBJ @csbj
2025-04-10

🧠 Is AI ready to be your doctor’s second opinion — or is it still a black box?

🔗 From explainable to interpretable deep learning for natural language processing in healthcare: How far from reality? Computational and Structural Biotechnology Journal, DOI: doi.org/10.1016/j.csbj.2024.05.004

📚 CSBJ Smart Hospital: csbj.org/smarthospital

CSBJ @csbj
2025-04-09

🫀 Could AI help us understand what's really happening in heart failure — down to the metabolites?

🔗 Insights into heart failure metabolite markers through explainable machine learning. Computational and Structural Biotechnology Journal, DOI: doi.org/10.1016/j.csbj.2025.02.041

📚 CSBJ: csbj.org/

CSBJ @csbj
2025-04-08

🤖 Can we decode how radiologists think — so machines can think with them?

🔗 Bridging human and machine intelligence: Reverse-engineering radiologist intentions for clinical trust and adoption. Computational and Structural Biotechnology Journal, DOI: doi.org/10.1016/j.csbj.2024.11.012

📚 CSBJ Smart Hospital: csbj.org/smarthospital

CSBJ @csbj
2025-04-07

🧬 Can we trust AI in bioinformatics if we don’t understand how it makes decisions?

As AI becomes central to bioinformatics, the opacity of its decision-making remains a major concern.

🔗 Demystifying the Black Box: A Survey on Explainable Artificial Intelligence (XAI) in Bioinformatics. Computational and Structural Biotechnology Journal, DOI: doi.org/10.1016/j.csbj.2024.12.027

📚 CSBJ: csbj.org/

2025-04-03

We did it! We selected the 61 best students from more than 100 applications overall. Since visa issues and other unforeseen circumstances can always lead to late cancellations, we've also created a waiting list of potential additional candidates. Congratulations to all of you!!
Looking forward to meeting you all in Bertinoro in June!

2025.semanticwebschool.org/

#semanticweb #knowledgegraphs #AI #responsibleAI #reliableAI #explainableAI #llms #summerschool #academiclife

Image: Group picture of ISWS 2018, from left to right (front): Harald Sack, Marieke van Erp, Tabea Tietz, Andrea Giovanni Nuzzolese, ...
