'The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study'
Report just released: https://braiduk.org/the-responsible-ai-ecosystem-seven-lessons-from-the-braid-landscape-study
Fantastic comment in a question at the BRAID event: "there's too much focus on 'are you ready for AI', and not enough on 'is AI ready for your business and society'"
I keep saying we don't yet have the AI we deserve - yes, we need to challenge it, question it, and actively shape it!
#ResponsibleAI #RAI (edited to add hashtags)
Enterprises racing to deploy GenAI are facing rising ethical risks. Transparency, governance, and bias mitigation aren't optional. #AIEthics #ResponsibleAI #EnterpriseAI
https://hbr.org/2025/03/ai-ethics-is-now-a-business-imperative
"ChatGPT - the perfect promise"
We did it! The joy of holding a printed copy of a book you wrote yourself never wears off.
The second, significantly expanded edition is now available as an e-book and paperback from Amazon and Apple.
ISBN 978-3-910912-02-1 (e-book)
ISBN 978-3-910912-03-8 (paperback)
We're live at OW2con25 today!
This conference has always been a space where open-source thinkers come together to build better futures.
This year's focus? Open source and responsible AI. A conversation we care deeply about.
Our CEO @ldubost was on stage sharing insights from XWiki and the WAISE project, exploring what AI means for open-source companies, user autonomy, and ethical tech.
@ow2
#FOSS #DigitalSovereignty #ResponsibleAI #XWiki #OpenTech #OW2con25
New York passes a groundbreaking AI Disaster Prevention Bill!
It focuses on risk assessment, transparency, & accountability. Softsasi can help orgs navigate AI compliance, ensuring responsible AI development.
In the final instalment of this edition of the Talent Aperture Series, I continue the case that hiring isn't procurement, it's stewardship, and explore:
- How we reclaim human judgement in hiring
- Why blind recruitment and contextual interviews are gaining ground
- What good decision-making really demands in a world drunk on metrics.
https://robert.winter.ink/the-talent-aperture-reopened/
#Discernment #EthicalHiring #AlgorithmicBias #HumanJudgement #ResponsibleAI #TalentEthics #StrategicRecruitment #HiringPractices
The English edition of my latest book is now available!
"Artificial Intelligence, Neural Networks and Privacy: Striking a Balance between Innovation, Knowledge, and Ethics in the Digital Age"
With forewords by Danilo Mandic and Carlo Morabito, and an introduction by Guido Scorza, this edition offers a comprehensive, multidisciplinary perspective.
More details: https://www.nicfab.eu/en/pages/bookai/
#AI #Privacy #LLMs #NeuroRights #AIAct #Cybersecurity #ResponsibleAI #EthicsInAI #ArtificialIntelligence
AI-Induced Psychosis: How ChatGPT Is Fueling Deadly Delusions and Promoting Conspiracy Theories
#AI #AISafety #ChatGPT #OpenAI #MentalHealth #AIethics #TechPolicy #Psychosis #ResponsibleAI #CorporateResponsibility
FIZ Karlsruhe responds to the EU consultation on the AI Regulation.
Our message: GPAI models should not be regulated by compute alone.
Better: risk-based approaches and room for open research.
The statement of 22 May offers input for practical, innovation-friendly regulation.
#GPAI #KI #EUAIAct #ResponsibleAI #Forschung #FIZKarlsruhe #AIRegulation
Startups now offer "compliance as code" for AI audits: model cards, fairness reports, energy ledgers on demand. Can regulators trust industry-built tools, or should audit code be open-sourced? Share experiences with automated assurance platforms. #ResponsibleAI
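To make the idea concrete, here is a minimal sketch of what one slice of "compliance as code" could look like: an automated demographic-parity check that emits a machine-readable fairness report and fails a CI gate when disparity exceeds a threshold. The metric, threshold, and report fields are illustrative assumptions, not a description of any particular vendor's platform.

```python
# Hypothetical "compliance as code" check: a demographic-parity gate that
# emits a machine-readable fairness report. Metric choice, threshold, and
# field names are illustrative assumptions, not any vendor's real tooling.
from dataclasses import dataclass, asdict
import json

@dataclass
class FairnessReport:
    metric: str
    group_rates: dict
    disparity: float
    threshold: float
    passed: bool

def demographic_parity(preds, groups, threshold=0.1):
    """Compare positive-prediction rates across protected groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    disparity = max(rates.values()) - min(rates.values())
    return FairnessReport(
        metric="demographic_parity_difference",
        group_rates=rates,
        disparity=round(disparity, 3),
        threshold=threshold,
        passed=disparity <= threshold,
    )

if __name__ == "__main__":
    # Toy data: 1 = positive model decision, with a made-up group label per row.
    preds = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    report = demographic_parity(preds, groups)
    print(json.dumps(asdict(report), indent=2))
    # In CI, a failing gate would block the release.
    assert report.passed, "Fairness gate failed: disparity exceeds threshold"
```

Whether checks like this should live inside proprietary assurance platforms or in an open-source audit commons is exactly the question the post raises.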
OpenAI's Troubling Paradox: Unsafe AI and Premium-Priced Trust
#AI #AISafety #OpenAI #AIEthics #ChatGPT #ResponsibleAI #AIregulation
Addressing the fears of AI is crucial. My latest post explores the potential risks and benefits, inspired by 'The Sentient Machine'. https://www.ctnet.co.uk/the-sentient-machine-key-takeaways-on-ai-humanity-and-our-future/ #AIRisks #AIBenefits #ResponsibleAI
Hot take: Most AI companies build risk frameworks backwards
❌ Build AI first, figure out risks later
✅ Design risk governance INTO development
Treat AI safety as a competitive advantage, not a compliance burden.
Whatβs the biggest AI risk blind spot youβre seeing?
"#Amsterdam followed every piece of advice in the #ResponsibleAI playbook. It debiased its system when early tests showed ethnic #bias and brought on academics and consultants to shape its approach, ultimately choosing an explainable algorithm over more opaque alternatives. The city even consulted a participatory council of welfare recipients.
[Yet] the system continued to be plagued by biases.
[…] As political pressure mounted, officials killed the project."
https://www.lighthousereports.com/investigation/the-limits-of-ethical-ai/
New from me, Gabriel Geiger,
+ Justin-Casimir Braun at Lighthouse Reports.
Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.
Our deep dive into why: https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
Do we need a business case for ethical AI?
Why AI ethics is like DEI and sustainability - or, how to encourage more business people to do the right thing, even if it's not for what we may think are the 'right' reasons.
Curious to hear thoughts from my colleagues in AI, business, and ethics, or anyone who's currently using AI-based tools (essentially everyone).
https://aaab.karensmiley.com/p/do-we-need-a-business-case-for-ethical-ai
#EthicalAI #ResponsibleAI #BusinessCase #Sustainability #DEI #SheWritesAI
Now, research task forces are working hard on their research problem for the week.
#isws2025 #summerschool #semanticweb #semweb #llms #AI #responsibleAI #neursymbolicAI #academiclife #bertinoro
The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?
The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.
We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each of these trade-offs changed how we think and which skills matter. AI is the next step in this progression, if we're smart about it.
The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.
1/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation: using AI as a thinking partner, not a replacement.
This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:
Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.
Distributed Cognition:
Naval crews don't navigate with individual genius; they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.
Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:
2/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy