#ResponsibleAI

2025-06-18

'The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study'
Report just released: braiduk.org/the-responsible-ai

zenodo.org/records/15195686

#ResponsibleAI #RAI

Seven β€˜lessons learned’ from the first wave of responsible AI
The 'AI' in R-AI is an elusive and rapidly moving target
R-AI must expand stakeholder reach to include impacted communities
Narrowly technical approaches to R-AI do not work
Public trust is essential to a sustainable R-AI ecosystem
Good intentions are not enough for R-AI
R-AI must address questions wider than ethics and legality
R-AI is not a problem to be solved but an ecosystem to be built and sustained
2025-06-18

Fantastic comment in a question at the BRAID event: "there's too much focus on 'are you ready for AI?' and not enough on 'is AI ready for your business and society?'" πŸ’―πŸ’―πŸ’―

I keep saying we don't yet have the AI we deserve - yes, we need to challenge and question it, actively shape it!

#ResponsibleAI #RAI (edited to add hashtags)

Harold Sinnott πŸ“² HaroldSinnott
2025-06-18

Enterprises racing to deploy GenAI are facing rising ethical risks. Transparency, governance, and bias mitigation aren’t optional.

hbr.org/2025/03/ai-ethics-is-n

wurzelgrumpfjschulze
2025-06-17

β€žπ‚π‘πšπ­π†ππ“ - πƒπšπ¬ 𝐩𝐞𝐫𝐟𝐞𝐀𝐭𝐞 π•πžπ«π¬π©π«πžπœπ‘πžπ§ πŸβ€œ

🀩 Done! The joy of holding the advance copy of a book you wrote yourself never wears off. 🀩
The second, significantly expanded edition is available now as an e-book and paperback from Amazon and Apple.

π™Έπš‚π™±π™½ 𝟿𝟽𝟾-𝟹-𝟿𝟷𝟢𝟿𝟷𝟸-𝟢𝟸-𝟷 (𝙴-π™±πš˜πš˜πš”)
π™Έπš‚π™±π™½ 𝟿𝟽𝟾-𝟹-𝟿𝟷𝟢𝟿𝟷𝟸-𝟢𝟹-𝟾 (πšƒB)

jschulze.com/projects/ChatGPT2/

Cover of the book β€žChatGPT – Das perfekte VersprechenΒ²β€œ by JΓΌrgen Schulze. The background is a strong orange. At the centre glows a stylised yellow light bulb with a prominent fish hook inside it, symbolising a possible bait or trap. The title is centred on the cover in white type. Above the title, the author's name "JΓΌrgen Schulze" is highlighted on a black bar. Below is the subtitle: "Chatbots – a welcome relief for the overloaded human mind, or digital chatterboxes with a dubious upbringing and a poor social prognosis?" together with "New, substantially expanded 2nd edition". In the top right is a white numeral "2" marking the second edition.
2025-06-17

We’re live at OW2con25 today!

This conference has always been a space where open-source thinkers come together to build better futures.

This year’s focus? Open source and responsible AI. A conversation we care deeply about.

🎀 Our CEO @ldubost was on stage sharing insights from XWiki and the WAISE project, exploring what AI means for open-source companies, user autonomy, and ethical tech.

@ow2
#FOSS #DigitalSovereignty #ResponsibleAI #XWiki #OpenTech #OW2con25

Ludovic Dubost on stage at OW2Con
Softsasi softsasi
2025-06-16

New York passes a groundbreaking AI Disaster Prevention Bill! πŸ—½

It focuses on risk assessment, transparency, & accountability. Softsasi can help orgs navigate AI compliance, ensuring responsible AI development. πŸ€–

Dr Robert N. Winter robert@social.winter.ink
2025-06-14

In the final instalment of this edition of the Talent Aperture Series, I continue the case that hiring isn't procurementβ€”it's stewardshipβ€”and explore:

🧠 How we reclaim human judgement in hiring
πŸ“ˆ Why blind recruitment and contextual interviews are gaining ground
πŸ’Ž What good decision-making really demands in a world drunk on metrics.

robert.winter.ink/the-talent-a

#Discernment #EthicalHiring #AlgorithmicBias #HumanJudgement #ResponsibleAI #TalentEthics #StrategicRecruitment #HiringPractices

Nicola Fabiano :xmpp:nicfab@fosstodon.org
2025-06-14

πŸ“˜ The English edition of my latest book is now available!

β€œArtificial Intelligence, Neural Networks and Privacy: Striking a Balance between Innovation, Knowledge, and Ethics in the Digital Age”

With forewords by Danilo Mandic and Carlo Morabito, and an introduction by Guido Scorza, this edition offers a comprehensive, multidisciplinary perspective.

πŸ”— More details: nicfab.eu/en/pages/bookai/

#AI #Privacy #LLMs #NeuroRights #AIAct #Cybersecurity #ResponsibleAI #EthicsInAI #ArtificialIntelligence

2025-06-13

FIZ Karlsruhe responds to the EU consultation on the AI Act

Our message: GPAI models should not be regulated by compute power alone.

Better: risk-based approaches & room for open research.

The position statement of 22 May offers input for practical, innovation-friendly regulation.

tinyurl.com/272lfeos

#GPAI #KI #EUAIAct #ResponsibleAI #Forschung #FIZKarlsruhe #AIRegulation

Ruopeng An Ruopeng_An
2025-06-12

Startups now offer β€œcompliance as code” for AI auditsβ€”model cards, fairness reports, energy ledgers on demand. Can regulators trust industry-built tools, or should audit code be open-sourced? Share experiences with automated assurance platforms. πŸ—οΈ
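In spirit, "compliance as code" means the audit artifacts are emitted by the pipeline itself rather than written by hand after the fact. A minimal sketch of a machine-readable model card generator, assuming a hypothetical schema (the function name, fields, and values are illustrative, not any vendor's actual API):

```python
import json

def build_model_card(name, version, metrics, intended_use, limitations):
    """Assemble a minimal machine-readable model card (hypothetical schema)."""
    return {
        "model": {"name": name, "version": version},
        "intended_use": intended_use,
        "limitations": limitations,
        # Evaluation results, including a fairness metric, recorded on demand.
        "evaluation": metrics,
    }

card = build_model_card(
    name="loan-scoring",
    version="1.2.0",
    metrics={"accuracy": 0.91, "demographic_parity_gap": 0.04},
    intended_use="Pre-screening of loan applications; final decisions stay human.",
    limitations=["Trained on 2020-2024 data; drift not yet assessed."],
)
print(json.dumps(card, indent=2))
```

Because the card is plain JSON produced by code, it can be versioned, diffed, and regenerated on every training run, which is also why the open-sourcing question in the post matters: regulators could then inspect the generator, not just the output.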

2025-06-11

Addressing the fears of AI is crucial. My latest post explores the potential risks and benefits, inspired by 'The Sentient Machine'. ctnet.co.uk/the-sentient-machi #AIRisks #AIBenefits #ResponsibleAI

Thinker, The InfoSec Buddha_th1nk3r@infosec.exchange
2025-06-11

🚨 Hot take: Most AI companies build risk frameworks backwards

❌ Build AI first, figure out risks later

βœ… Design risk governance INTO development

Treat AI safety as competitive advantage, not compliance burden.

What’s the biggest AI risk blind spot you’re seeing?

#AIRisk #ResponsibleAI

Eric Maugendreeric@social.coop
2025-06-11

"#Amsterdam followed every piece of advice in the #ResponsibleAI playbook. It debiased its system when early tests showed ethnic #bias and brought on academics and consultants to shape its approach, ultimately choosing an explainable algorithm over more opaque alternatives. The city even consulted a participatory council of welfare recipients.
[Yet] the system continued to be plagued by biases.
[…] As political pressure mounted, officials killed the project."

lighthousereports.com/investig

#ethicalAI

2025-06-11

New from me, Gabriel Geiger,
+ Justin-Casimir Braun at Lighthouse Reports.

Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.

Our deep dive into why: technologyreview.com/2025/06/1

Karen Smiley (6 Ps in AI Pods) karensmiley
2025-06-11

Do we need a business case for ethical AI?

Why AI ethics is like DEI and sustainability - or, how to encourage more business people to do the right thing, even if it's not for what we may think are the 'right' reasons.

Curious to hear thoughts from my colleagues in AI, business, and ethics, or anyone who's currently using AI-based tools (essentially everyone).

aaab.karensmiley.com/p/do-we-n

Image of currencies (bank notes) from multiple countries worldwide
2025-06-10

Now, research task forces are working hard on their research problem for the week.

#isws2025 #summerschool #semanticweb #semweb #llms #AI #responsibleAI #neursymbolicAI #academiclife #bertinoro

Group of 5 students sitting together with tutor Sebastian Rudolph from TU Dresden around a table, stuffed with computing infrastructure, thinking....

The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each of those trade-offs changed how we think and which skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegationβ€”using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate with individual geniusβ€”they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
