#responsibleai

Neuronus Computing (@neuronus_computing)
2025-06-22

AI is rising fast, but can we keep it in check?

As AI reshapes our world, global regulation is key to ensuring it serves humanity rather than harming it.
🧭 Let's guide AI with smart, united regulation.

🔍 Learn more about AI laws in the full blog post. 👇

neuronus.net/en/blog/ai-legisl

Ruopeng An (@Ruopeng_An)
2025-06-22

Regulatory sandboxes test algorithms under supervision, but few cross borders. Could interoperable sandboxes boost innovation while preserving oversight? Policy architects: lessons from fintech or biotech? 🔄

Brian Greenberg :verified: (@brian_greenberg@infosec.exchange)
2025-06-20

🚨 AI is hallucinating more, just as we're trusting it with more critical work. New "reasoning" models, such as OpenAI's o3 and o4-mini, were designed to solve complex problems step by step. But the results?
🧠 o3: 51% hallucination rate on general questions
📉 o4-mini: 79% hallucination on benchmark tests
🔍 Google's and DeepSeek's models also show rising errors
⚠️ Trial-and-error learning compounds risk at each step

Why is this happening? Because these models don't understand truth; they just predict what sounds right. And the more they "think," the more they misstep.

We're using these tools in legal, medical, and enterprise settings, yet even their creators admit:
🧩 We don't know exactly how they work.

✅ It's a wake-up call: accuracy, explainability, and source traceability must be the new AI benchmarks.
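
Figures like these come from graded question-answering benchmarks. Here is a rough, hypothetical sketch of how such a hallucination rate is computed; the model call, grader, and QA pairs below are invented, and real suites such as SimpleQA use thousands of curated questions with stronger grading:

```python
# Minimal sketch of a hallucination-rate benchmark. ask_model, the
# grader, and the QA pairs are invented stand-ins, not any vendor's
# actual evaluation harness.

def ask_model(question: str) -> str:
    """Stand-in for a real model API call."""
    canned = {
        "Who wrote 'Middlemarch'?": "George Eliot",
        "In what year did the Berlin Wall fall?": "1990",  # confidently wrong
    }
    return canned.get(question, "I don't know")

def grade(answer: str, reference: str) -> str:
    """Toy grader: exact match is correct, 'don't know' is an abstention,
    anything else counts as a hallucination."""
    if answer.strip().lower() == reference.strip().lower():
        return "correct"
    if "don't know" in answer.lower():
        return "abstained"
    return "hallucinated"

qa_pairs = [
    ("Who wrote 'Middlemarch'?", "George Eliot"),
    ("In what year did the Berlin Wall fall?", "1989"),
]

results = [grade(ask_model(q), ref) for q, ref in qa_pairs]
attempted = [r for r in results if r != "abstained"]
rate = sum(r == "hallucinated" for r in attempted) / len(attempted)
print(f"Hallucination rate on attempted answers: {rate:.0%}")  # 50%
```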

#AI #LLM #ResponsibleAI #AIEthics #Hallucination
nytimes.com/2025/05/05/technol

2025-06-18

'The Responsible AI Ecosystem: Seven Lessons from the BRAID Landscape Study'
Report just released: braiduk.org/the-responsible-ai

zenodo.org/records/15195686

#ResponsibleAI #RAI

Seven 'lessons learned' from the first wave of responsible AI:
1. The 'AI' in R-AI is an elusive and rapidly moving target.
2. R-AI must expand stakeholder reach to include impacted communities.
3. Narrowly technical approaches to R-AI do not work.
4. Public trust is essential to a sustainable R-AI ecosystem.
5. Good intentions are not enough for R-AI.
6. R-AI must address questions wider than ethics and legality.
7. R-AI is not a problem to be solved but an ecosystem to be built and sustained.
2025-06-18

Fantastic comment in a question at the BRAID event: "there's too much focus on 'are you ready for AI?' and not enough on 'is AI ready for your business and society?'" 💯💯💯

I keep saying we don't yet have the AI we deserve - yes, we need to challenge it, question it, and actively shape it!

#ResponsibleAI #RAI (edited to add hashtags)

Harold Sinnott 📲 (@HaroldSinnott)
2025-06-18

Enterprises racing to deploy GenAI are facing rising ethical risks. Transparency, governance, and bias mitigation aren't optional.

hbr.org/2025/03/ai-ethics-is-n

wurzelgrumpf (@jschulze)
2025-06-17

β€žπ‚π‘πšπ­π†ππ“ - πƒπšπ¬ 𝐩𝐞𝐫𝐟𝐞𝐀𝐭𝐞 π•πžπ«π¬π©π«πžπœπ‘πžπ§ πŸβ€œ

🀩 𝔾𝕖𝕀𝕔𝕙𝕒𝕗𝕗π•₯! 𝔻𝕒𝕀 π”Ύπ•ΓΌπ•”π•œπ•€π•˜π•–π•—ΓΌπ•™π•, π••π•–π•Ÿ π”Έπ•Ÿπ••π•£π•¦π•”π•œ π•–π•šπ•Ÿπ•–π•€ 𝕀𝕖𝕝𝕓𝕀π•₯ 𝕧𝕖𝕣𝕗𝕒𝕀𝕀π•₯π•–π•Ÿ 𝔹𝕦𝕔𝕙𝕖𝕀 π•šπ•Ÿ β„π•’π•–π•Ÿπ••π•–π•Ÿ 𝕫𝕦 𝕙𝕒𝕝π•₯π•–π•Ÿ, π•Ÿπ•¦π•₯𝕫π•₯ π•€π•šπ•”π•™ π•Ÿπ•šπ•”π•™π•₯ 𝕒𝕓. 🀩
π”»π•šπ•– π•«π•¨π•–π•šπ•₯𝕖, π•€π•šπ•˜π•Ÿπ•šπ•—π•šπ•œπ•’π•Ÿπ•₯ π•–π•£π•¨π•–π•šπ•₯𝕖𝕣π•₯𝕖 π”Έπ•¦π•€π•˜π•’π•“π•–, π•šπ•€π•₯ 𝕒𝕓 𝕀𝕠𝕗𝕠𝕣π•₯ 𝕒𝕝𝕀 𝔼-π”Ήπ• π• π•œ π•¦π•Ÿπ•• π•‹π•’π•€π•”π•™π•–π•Ÿπ•“π•¦π•”π•™ π•“π•–π•š π”Έπ•žπ•’π•«π• π•Ÿ π•¦π•Ÿπ•• 𝔸𝕑𝕑𝕝𝕖 𝕖𝕣𝕙À𝕝π•₯π•π•šπ•”π•™.

π™Έπš‚π™±π™½ 𝟿𝟽𝟾-𝟹-𝟿𝟷𝟢𝟿𝟷𝟸-𝟢𝟸-𝟷 (𝙴-π™±πš˜πš˜πš”)
π™Έπš‚π™±π™½ 𝟿𝟽𝟾-𝟹-𝟿𝟷𝟢𝟿𝟷𝟸-𝟢𝟹-𝟾 (πšƒB)

jschulze.com/projects/ChatGPT2/

Cover des Buches β€žChatGPT – Das perfekte VersprechenΒ²β€œ von JΓΌrgen Schulze. Der Hintergrund ist in krΓ€ftigem Orange gehalten. Im Zentrum leuchtet eine stilisierte gelbe GlΓΌhbirne, in deren Innerem sich ein auffΓ€lliger Angelhaken befindet – als Symbol fΓΌr mΓΆglichen KΓΆder oder Falle. Der Titel steht in weißer Schrift mittig auf dem Cover. Über dem Titel ist der Autorenname β€žJΓΌrgen Schulzeβ€œ auf einem schwarzen Balken hervorgehoben. Unten der Untertitel: β€žChatbots – Segensreiche Entlastung fΓΌr den ΓΌberforderten menschlichen Geist oder digitale Quasselstrippen mit zweifelhafter Erziehung und schlechter Sozialprognose?β€œ sowie: β€žNeue, erheblich erweiterte 2. Ausgabeβ€œ. Oben rechts befindet sich eine weiße Ziffer β€ž2β€œ als Hinweis auf die zweite Ausgabe.
2025-06-17

We're live at OW2con25 today!

This conference has always been a space where open-source thinkers come together to build better futures.

This year's focus? Open source and responsible AI. A conversation we care deeply about.

🎤 Our CEO @ldubost was on stage sharing insights from XWiki and the WAISE project, exploring what AI means for open-source companies, user autonomy, and ethical tech.

@ow2
#FOSS #DigitalSovereignty #ResponsibleAI #XWiki #OpenTech #OW2con25

Ludovic Dubost on stage at OW2Con
Softsasi (@softsasi)
2025-06-16

New York passes a groundbreaking AI Disaster Prevention Bill! 🗽

It focuses on risk assessment, transparency, & accountability. Softsasi can help orgs navigate AI compliance, ensuring responsible AI development. 🤖

Dr Robert N. Winter (@robert@social.winter.ink)
2025-06-14

In the final instalment of this edition of the Talent Aperture Series, I continue the case that hiring isn't procurement, it's stewardship, and explore:

🧠 How we reclaim human judgement in hiring
📈 Why blind recruitment and contextual interviews are gaining ground
💎 What good decision-making really demands in a world drunk on metrics.

robert.winter.ink/the-talent-a

#Discernment #EthicalHiring #AlgorithmicBias #HumanJudgement #ResponsibleAI #TalentEthics #StrategicRecruitment #HiringPractices

Nicola Fabiano :xmpp: (@nicfab@fosstodon.org)
2025-06-14

📘 The English edition of my latest book is now available!

"Artificial Intelligence, Neural Networks and Privacy: Striking a Balance between Innovation, Knowledge, and Ethics in the Digital Age"

With forewords by Danilo Mandic and Carlo Morabito, and an introduction by Guido Scorza, this edition offers a comprehensive, multidisciplinary perspective.

🔗 More details: nicfab.eu/en/pages/bookai/

#AI #Privacy #LLMs #NeuroRights #AIAct #Cybersecurity #ResponsibleAI #EthicsInAI #ArtificialIntelligence

2025-06-13

FIZ Karlsruhe responds to the EU consultation on the AI Regulation

Our message: GPAI models should not be regulated by compute power alone.

Better: risk-based approaches & room for open research.

Our statement of 22 May offers input for practical, innovation-friendly regulation.
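
For reference, the compute rule at issue is the AI Act's presumption of systemic risk for GPAI models trained with more than 10^25 FLOPs. Below is a toy sketch contrasting that rule with a risk-based alternative; the risk factors and weights are invented for illustration:

```python
# Toy contrast between a pure compute-threshold rule and a risk-based
# rule. The 1e25 FLOP threshold is the AI Act's systemic-risk
# presumption; the factors and weights below are invented.

FLOP_THRESHOLD = 1e25

def compute_only(training_flops: float) -> bool:
    """Current presumption: systemic risk iff compute exceeds the threshold."""
    return training_flops > FLOP_THRESHOLD

def risk_based(training_flops: float, deployment_scale: float,
               domain_criticality: float) -> bool:
    """Hypothetical alternative: weigh compute together with deployment
    context (extra scores in [0, 1]; weights are illustrative)."""
    compute_score = min(training_flops / FLOP_THRESHOLD, 1.0)
    score = 0.3 * compute_score + 0.4 * deployment_scale + 0.3 * domain_criticality
    return score > 0.5

# A small open research model: out of scope under both rules.
print(compute_only(5e23), risk_based(5e23, 0.1, 0.2))  # False False

# A mid-compute model deployed at scale in a critical domain: missed by
# the compute rule, caught by the risk-based one.
print(compute_only(8e24), risk_based(8e24, 0.9, 0.9))  # False True
```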

tinyurl.com/272lfeos

#GPAI #KI #EUAIAct #ResponsibleAI #Forschung #FIZKarlsruhe #AIRegulation

Ruopeng An (@Ruopeng_An)
2025-06-12

Startups now offer "compliance as code" for AI audits: model cards, fairness reports, energy ledgers on demand. Can regulators trust industry-built tools, or should audit code be open-sourced? Share experiences with automated assurance platforms. 🗝️
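
In practice, "compliance as code" means the pipeline that builds the model also emits the audit artefacts. A minimal, hypothetical sketch of one such artefact, a machine-generated model card, follows; the schema and values are invented, and real model-card standards carry many more fields:

```python
# Minimal sketch of "compliance as code": the release pipeline emits a
# model card as a build artefact. Schema and values are invented.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data: str
    fairness_metrics: dict
    energy_kwh: float
    generated_on: str

card = ModelCard(
    model_name="credit-scorer",          # hypothetical model
    version="1.4.2",
    intended_use="pre-screening only; human review required",
    training_data="2019-2024 internal applications, PII removed",
    fairness_metrics={"demographic_parity_diff": 0.03},
    energy_kwh=412.0,                    # from a training-run energy ledger
    generated_on=date.today().isoformat(),
)

# Published next to the model binary, so auditors get a fresh,
# regenerated card on every release rather than a stale document.
print(json.dumps(asdict(card), indent=2))
```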

2025-06-11

Addressing the fears of AI is crucial. My latest post explores the potential risks and benefits, inspired by 'The Sentient Machine'. ctnet.co.uk/the-sentient-machi #AIRisks #AIBenefits #ResponsibleAI

Thinker, The InfoSec Buddha (@_th1nk3r@infosec.exchange)
2025-06-11

🚨 Hot take: most AI companies build risk frameworks backwards.

❌ Build AI first, figure out risks later.

✅ Design risk governance INTO development.

Treat AI safety as a competitive advantage, not a compliance burden.
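
One concrete way to design governance into development is a release gate: risk checks that run in CI and fail the build, the same way unit tests do. A minimal sketch, with invented checks and thresholds:

```python
# Sketch of a pre-deployment risk gate: checks that run in CI and block
# the release when they fail. The checks and thresholds are invented
# examples, not any particular framework's requirements.
import sys

def run_risk_gate(metrics: dict) -> list[str]:
    """Return a list of failure reasons; empty means the gate passes."""
    failures = []
    if metrics["hallucination_rate"] > 0.05:
        failures.append("hallucination rate above 5%")
    if metrics["demographic_parity_diff"] > 0.02:
        failures.append("fairness gap above 2 points")
    if not metrics["red_team_signoff"]:
        failures.append("missing red-team sign-off")
    return failures

metrics = {
    "hallucination_rate": 0.03,
    "demographic_parity_diff": 0.04,  # this one fails the gate
    "red_team_signoff": True,
}

if failures := run_risk_gate(metrics):
    print("release blocked:", "; ".join(failures))
    sys.exit(1)
print("risk gate passed")
```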

What's the biggest AI risk blind spot you're seeing?

#AIRisk #ResponsibleAI

Eric Maugendre (@eric@social.coop)
2025-06-11

"#Amsterdam followed every piece of advice in the #ResponsibleAI playbook. It debiased its system when early tests showed ethnic #bias and brought on academics and consultants to shape its approach, ultimately choosing an explainable algorithm over more opaque alternatives. The city even consulted a participatory council of welfare recipients.
[Yet] the system continued to be plagued by biases.
[…] As political pressure mounted, officials killed the project."
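
The "early tests" mentioned here are typically group-rate comparisons. A minimal sketch of one standard check, the demographic parity difference, follows; the groups and numbers are invented, not Amsterdam's data:

```python
# Minimal sketch of a demographic parity check, the kind of bias test
# reported in the Amsterdam case. Groups and flag decisions are invented.
def flag_rate(flags: list[bool]) -> float:
    """Share of applicants in a group flagged for fraud investigation."""
    return sum(flags) / len(flags)

# Model flag decisions per applicant group (invented)
group_a = [True, False, False, True, False, False, False, False]  # 25%
group_b = [True, True, False, True, False, True, False, False]    # 50%

gap = abs(flag_rate(group_a) - flag_rate(group_b))
print(f"demographic parity difference: {gap:.2f}")  # 0.25

if gap > 0.05:
    print("bias check failed: investigate before deployment")
# Reported remediations (reweighing training data, per-group thresholds)
# can shrink this gap while shifting bias onto other groups -- which is
# roughly what the Amsterdam investigation found.
```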

lighthousereports.com/investig

#ethicalAI

2025-06-11

New from me, Gabriel Geiger, and Justin-Casimir Braun at Lighthouse Reports.

Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.

Our deep dive into why: technologyreview.com/2025/06/1

Karen Smiley (6 Ps in AI Pods) (@karensmiley)
2025-06-11

Do we need a business case for ethical AI?

Why AI ethics is like DEI and sustainability - or, how to encourage more business people to do the right thing, even if it's not for what we may think are the 'right' reasons.

Curious to hear thoughts from my colleagues in AI, business, and ethics, or anyone who's currently using AI-based tools (essentially everyone).

aaab.karensmiley.com/p/do-we-n

Image of currencies (bank notes) from multiple countries worldwide
