#explainableAI

2025-12-18

🤖 We can hand many (routine) tasks over to AI systems.

⚠️ But not the responsibility for the results.

In the video, Matthias Peissner of Fraunhofer IAO, a member of Plattform @LernendeSysteme, explains what responsible use of #KI requires and why human expertise remains in demand.
➡️ youtube.com/shorts/WyXxwEbnXHU

🎥 For the full-length interview on changing job profiles in the age of AI:

➡️ youtube.com/watch?v=uaf__RYGWQg

#GenerativeKI #ExplainableAI

Image of a man with a beard against a blue background. The image reads: "Nachgefragt zu KI. KI und Verantwortung" (Asked about AI: AI and responsibility).
Alex Jimenez (@AlexJimenez@mas.to)
2025-12-12

We want AI to explain itself, but today’s explainable AI mostly offers post-hoc rationalizations, not real insight into how decisions are made. True explainability remains elusive, especially in complex models.

buff.ly/eQkK7Yg

#AI #ExplainableAI #ResponsibleTech #AIEthics
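To make the "post-hoc rationalization" point concrete (this is an illustration, not the linked article's method), here is a minimal global-surrogate sketch, assuming scikit-learn: an interpretable model is fitted to a black box's outputs, so its coefficients summarize what the black box tends to do without revealing how it actually decides.

```python
# Minimal sketch of the post-hoc surrogate pattern: the surrogate
# rationalizes the black box's behavior; it does not expose its
# internal computation. All data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The opaque model whose behavior we want to "explain".
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: an interpretable model trained to mimic the black box.
surrogate = LogisticRegression(max_iter=1000).fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate agrees with black box on {fidelity:.1%} of inputs")
print("surrogate coefficients:", np.round(surrogate.coef_[0], 2))
```

High fidelity here means the surrogate imitates the black box well, not that its coefficients describe the black box's actual decision process, which is exactly the gap the post describes.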

RC Trustworthy Data Science (@rctrust@ruhr.social)
2025-12-09
RC Trustworthy Data Science (@rctrust@ruhr.social)
2025-11-26
Harald Klinke (@HxxxKxxx@det.social)
2025-11-25

A fascinating long-term research project from the University of Zurich: “The Canon of Latent Spaces: How Large AI Models Encode Art and Culture.”
It investigates how multimodal AI models—trained on millions of image-text pairs—encode cultural memory, reproduce biases, and shape future artistic and cultural production. A much-needed analytical and critical perspective on the aesthetics, politics, and epistemology of latent spaces.
#DigitalArtHistory #ExplainableAI
latentcanon.github.io/
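To make "latent spaces" tangible, here is a rough probe sketch assuming the Hugging Face transformers CLIP API; the model name and the probe phrases are illustrative choices, not taken from the Zurich project.

```python
# Embed a few culturally loaded phrases with a CLIP text encoder and
# compare their proximity in latent space. Which concepts sit close to
# "a masterpiece of art" hints at the canon the model has encoded.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

phrases = ["a masterpiece of art", "a European oil painting",
           "an African mask", "a renaissance portrait"]
inputs = processor(text=phrases, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize embeddings

sims = emb @ emb.T  # pairwise cosine similarities
for phrase, sim in zip(phrases[1:], sims[0, 1:]):
    print(f"{phrase!r}: {sim.item():.3f}")
```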

Industry Examiner (@IndustryExaminer)
2025-11-11

Explainable > black box. ACE could give its first regulator-friendly ageing readout: pathway-level features, pre-specified Context of Use, and a credible validation plan. Our take:
biotech.industryexaminer.com/e

AIXPERT project (@AIXPERT_project)
2025-11-03

A consortium of 17 partners is creating human-centered AI grounded in fairness, accountability, transparency, and ethics. 🤝

A clear mission, with a precise solution.

Find out our vision for human-centered AI in our first official press release: aixpert-project.eu/2025/11/03/


AIXPERT First Press Release
Knowledge Zone (@kzoneind@mstdn.social)
2025-10-31

#ITByte: #ExplainableAI (#XAI), also known as Interpretable AI or Explainable Machine Learning (XML), refers either to an AI system over which humans can retain intellectual oversight, or to the methods used to achieve this.

The focus is usually on making the reasoning behind the AI's decisions or predictions more understandable and transparent.

knowledgezone.co.in/trends/exp
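A minimal sketch of the "intellectual oversight" end of that spectrum, assuming scikit-learn: an intrinsically interpretable decision tree whose complete decision logic a human can read and audit.

```python
# Train a small decision tree and print its learned rules. The printed
# rules ARE the model, so the reasoning behind every prediction is
# transparent by construction. Dataset and depth are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the full rule set in plain language.
print(export_text(tree, feature_names=list(data.feature_names)))
```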

2025-10-31

🧠 “Explainability is not a luxury – it’s a competitive advantage.”
In connect professional, our Head of Artificial Intelligence, Prof. Wojciech Samek, explains why #ExplainableAI is key to improving systems, building trust & driving innovation.

🔗 connect-professional.de/softwa

AIXPERT project (@AIXPERT_project)
2025-10-21

🤖 What does it take to make AI trustworthy, explainable, and ethical?
💡 AIXPERT's newly launched website gives you our take on building AI that earns your trust, and much more: from our mission, goals, and partner information to our application fields!
🔗 aixpert-project.eu/

AIXPERT - AI with a Human Touch. Explore our newly launched website!
RC Trustworthy Data Science (@rctrust@ruhr.social)
2025-10-13
2025-10-10

We especially appreciated the evening talks:
➡️ Jan Rybicki (Jagiellonian University) asked whether #ChatGPT could imitate Hemingway,
➡️ Katherine Bode (Australian National University) explored the materiality of #computing,
➡️ Damien Garreau (University of Würzburg) introduced #explainableAI. (2/4)

Annual Computer Security Applications Conference (@ACSAC_Conf@infosec.exchange)
2025-10-02

The last paper presented was Hegde et al.'s "Model-Manipulation Attacks Against Black-Box Explanations," exploring vulnerabilities in explanation methods like LIME and highlighting the need for trustworthy alternatives. (acsac.org/2024/program/final/s) 6/6
#TrustworthyAI #ExplainableAI

Hegde et al.'s "Model-Manipulation Attacks Against Black-Box Explanations"
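For intuition about this class of attack, here is a sketch in the spirit of known scaffolding attacks on perturbation-based explainers (not Hegde et al.'s exact construction): a wrapper detects the explainer's off-manifold perturbation queries and answers them with an innocuous model, so the biased behavior on real inputs never shows up in the explanation. All models and the detector below are stand-ins.

```python
# Toy scaffolding attack: route the explainer's synthetic perturbations
# to an innocuous model while real inputs hit the biased one.
import numpy as np

class ScaffoldedModel:
    def __init__(self, biased_model, innocuous_model, ood_detector):
        self.biased = biased_model          # used on real inputs
        self.innocuous = innocuous_model    # shown to the explainer
        self.is_ood = ood_detector          # flags perturbed samples

    def predict(self, X):
        X = np.asarray(X)
        out = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            # LIME queries the model on perturbed, often off-manifold
            # samples; the detector exploits exactly that.
            model = self.innocuous if self.is_ood(x) else self.biased
            out[i] = model(x)
        return out

# Toy instantiation: the "biased" model keys on feature 0, the innocuous
# one on feature 1; perturbations are flagged by a crude L2-radius check.
center = np.zeros(5)
wrapped = ScaffoldedModel(
    biased_model=lambda x: int(x[0] > 0),
    innocuous_model=lambda x: int(x[1] > 0),
    ood_detector=lambda x: np.linalg.norm(x - center) > 1.0,
)
print(wrapped.predict(np.random.RandomState(0).normal(size=(5, 5))))
```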
2025-09-30

Two weeks left to submit your poster paper to our Symposium!
Call for Posters: DiTraRe Symposium on Digitalisation of Research 2025
Submission deadline: Oct 17, 2025
Submission link: easychair.org/my/conference?co
Conference website: ditrare.de/en/symposium-2025

#AI #generativeAI #explainableAI #agenticAI #reliableAI #digitalisation #research #cfp @fiz_karlsruhe @fizise @KIT_Karlsruhe @ITAS @AnnaJacyszyn @Feelix #knowledgegraphs #semanticweb #datascience #science #legal #ethics #innovation #cfp

Call for posters for the DiTraRe Symposium 2025. 
List of Topics
The following areas are especially aligned with our Symposium sessions:
    Knowledge Representation and AI, i.e.:
        AI-Ready University
        Data Infrastructures as a Foundation for AI Projects
        Federated Infrastructures, Knowledge Representation and AI in the Humanities
    Legal and Ethical Challenges, i.e.:
        Law and Ethics in Digitalization of Research
        AI and Usage of Research Data
    Research Infrastructures, i.e.:
        Advancing Machine Learning Through Open and Large-Scale Data Initiatives
        Leveraging Established E-Infrastructures to Enhance Research Data Management and EOSC Integration
        NFDI: A Science-Driven Network for FAIR Research Data as a Common Good
    Impact on Science and Society, i.e.:
        Digital Transformation in Science – the Role of Generative AI Tools for Changes in Research Practice and Knowledge Production
        Publishing in the Age of AI: How Generative AI Is Transforming Scientific Communication
        Generative AI in the Practice of Scientific Policy Advice - Potentials and Challenges at the Interface of Science and Politics
        Reflection on the Science Society Interface of Generative AI
2025-09-26

Three weeks left to submit your poster paper to our Symposium!
Call for Posters: DiTraRe Symposium on Digitalisation of Research 2025
Submission deadline: Oct 17, 2025
Submission link: easychair.org/my/conference?co
Conference website: ditrare.de/en/symposium-2025

#AI #generativeAI #explainableAI #reliableAI #digitalisation #research #cfp @fiz_karlsruhe @fizise @KIT_Karlsruhe @ITAS @AnnaJacyszyn @Feelix #knowledgegraphs #semanticweb #datascience

Call for posters for the DiTraRe Symposium 2025.
2025-09-21

PEAR: a novel loss term enhances interpretability and confidence in deep learning models for tabular data, boosting consensus among AI explainers. hackernoon.com/the-geeks-guide #explainableai
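For intuition, here is a hedged PyTorch sketch of a consensus-style loss term; the actual PEAR formulation differs in detail, and the two attribution methods and the lambda value below are illustrative assumptions, not the paper's choices.

```python
# Sketch of the consensus-loss idea: alongside the task loss, penalize
# disagreement between two cheap gradient-based "explainers".
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 2))
x = torch.randn(32, 8, requires_grad=True)
y = torch.randint(0, 2, (32,))

logits = model(x)
task_loss = F.cross_entropy(logits, y)

# Explainer 1: plain input gradients (saliency) for the true-class logit.
grads = torch.autograd.grad(logits.gather(1, y[:, None]).sum(), x,
                            create_graph=True)[0]
saliency = grads
# Explainer 2: gradient * input attribution.
grad_x_input = grads * x

# Consensus term: 1 - cosine similarity between the two attributions.
consensus_loss = (1 - F.cosine_similarity(saliency, grad_x_input, dim=1)).mean()

lambda_consensus = 0.5  # balances task accuracy against explainer agreement
loss = task_loss + lambda_consensus * consensus_loss
loss.backward()
print(f"task={task_loss.item():.3f}  consensus={consensus_loss.item():.3f}")
```

The balance knob mirrors the trade-off the later posts describe: too little weight and explainers disagree, too much and task accuracy suffers.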

2025-09-21

Discover how PEAR increases explainer consensus, paving the way for deep learning models that are fair, interpretable, and prepared for the future. hackernoon.com/can-pear-make-d #explainableai

2025-09-21

Learn how PEAR exhibits optimal consensus when both loss terms are balanced, improves linearity, and preserves useful, non-trivial explanations. hackernoon.com/consensus-loss- #explainableai

2025-09-21

Learn how PEAR offers interpretability improvements with negligible accuracy tradeoffs by improving explainer agreement across measures, including explainers unseen during training. hackernoon.com/the-trade-off-b #explainableai
