AI-Induced Psychosis: How ChatGPT Is Fueling Deadly Delusions and Promotes Conspiracy Theories
#AI #AISafety #ChatGPT #OpenAI #MentalHealth #AIethics #TechPolicy #Psychosis #ResponsibleAI #CorporateResponsibility
FIZ Karlsruhe responds to the EU consultation on the AI Act
Our message: GPAI models should not be regulated on the basis of compute alone.
Better: risk-based approaches & room for open research.
The statement of 22 May offers input for practical, innovation-friendly regulation.
#GPAI #KI #EUAIAct #ResponsibleAI #Forschung #FIZKarlsruhe #AIRegulation
Startups now offer “compliance as code” for AI audits—model cards, fairness reports, energy ledgers on demand. Can regulators trust industry-built tools, or should audit code be open-sourced? Share experiences with automated assurance platforms. 🏗️ #ResponsibleAI
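To make "compliance as code" concrete: the idea is that audit artifacts like model cards become machine-readable outputs of the build pipeline rather than hand-written documents. A minimal sketch, assuming nothing about any vendor's actual schema; every field and name below is illustrative.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Illustrative model card. Field names are assumptions, not a standard schema."""
    model_name: str
    version: str
    intended_use: str
    fairness_metrics: dict = field(default_factory=dict)
    energy_kwh: float = 0.0  # hypothetical entry for an "energy ledger"

    def to_json(self) -> str:
        # Serialize the card so CI can publish it "on demand" alongside each release.
        return json.dumps(asdict(self), indent=2, sort_keys=True)

card = ModelCard(
    model_name="fraud-screening-v2",
    version="2.1.0",
    intended_use="Internal triage only; not for automated decisions.",
    fairness_metrics={"demographic_parity_gap": 0.03},
    energy_kwh=142.5,
)
print(card.to_json())
```

If audit code like this were open-sourced, regulators could at least verify that the reported metrics are computed the way the schema claims.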
OpenAI's Troubling Paradox: Unsafe AI and Premium-Priced Trust
#AI #AISafety #OpenAI #AIEthics #ChatGPT #ResponsibleAI #AIregulation
Addressing the fears of AI is crucial. My latest post explores the potential risks and benefits, inspired by 'The Sentient Machine'. https://www.ctnet.co.uk/the-sentient-machine-key-takeaways-on-ai-humanity-and-our-future/ #AIRisks #AIBenefits #ResponsibleAI
🚨 Hot take: Most AI companies build risk frameworks backwards
❌ Build AI first, figure out risks later
✅ Design risk governance INTO development
Treat AI safety as competitive advantage, not compliance burden.
What’s the biggest AI risk blind spot you’re seeing?
"#Amsterdam followed every piece of advice in the #ResponsibleAI playbook. It debiased its system when early tests showed ethnic #bias and brought on academics and consultants to shape its approach, ultimately choosing an explainable algorithm over more opaque alternatives. The city even consulted a participatory council of welfare recipients.
[Yet] the system continued to be plagued by biases.
[…] As political pressure mounted, officials killed the project."
https://www.lighthousereports.com/investigation/the-limits-of-ethical-ai/
New from me, Gabriel Geiger, and Justin-Casimir Braun at Lighthouse Reports.
Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.
Our deep dive why: https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
Do we need a business case for ethical AI?
Why AI ethics is like DEI and sustainability - or, how to encourage more business people to do the right thing, even if it's not for what we may think are the 'right' reasons.
Curious to hear thoughts from my colleagues in AI, business, and ethics, or anyone who's currently using AI-based tools (essentially everyone).
https://aaab.karensmiley.com/p/do-we-need-a-business-case-for-ethical-ai
#EthicalAI #ResponsibleAI #BusinessCase #Sustainability #DEI #SheWritesAI
Now the research task forces are hard at work on their research problems for the week.
#isws2025 #summerschool #semanticweb #semweb #llms #AI #responsibleAI #neursymbolicAI #academiclife #bertinoro
The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?
The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.
We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each tool trade-off changed how we think and what skills matter. AI is the next step in this progression, if we're smart about it.
The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.
1/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation—using AI as a thinking partner, not a replacement.
This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:
Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.
Distributed Cognition:
Naval crews don't navigate with individual genius—they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.
Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:
2/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
Critical reasoning vs Cognitive Delegation
Old School Focus:
Building internal cognitive capabilities and managing cognitive load independently.
Cognitive Delegation Focus:
Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.
We can still go for a jog or hunt our own deer, but to reach the stars we apes do what apes do best: use tools to extend our cognitive abilities. AI is a tool.
3/3
#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EthicalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy
Microsoft will add a “safety” category to the Azure Foundry AI leaderboard, using ToxiGen and CAIS’s WMD Proxy benchmarks to score safety alongside quality, performance, and cost. This move highlights responsible AI development. #AzureAI #AIsafety #ResponsibleAI #Microsoft
Are you ready? Bertinoro castle is waiting for you! Tomorrow the 2025 edition of the International Semantic Web Research Summer School begins. We are expecting 60 graduate students, all eager to become researchers in the field.
Please post a hi if you are attending this year (or have attended in the past...)
https://2025.semanticwebschool.org/
Hashtag for this year's edition is #isws2025
#semanticweb #semweb #knowledgegraphs #llms #responsibleAI #neurosymbolicAI #summerschool #PhD
India will launch four Responsible Artificial Intelligence (RAI) tools on AIKosh from September under the "Safe and Trusted AI" initiative of the IndiaAI Mission. The tools will focus on machine learning, bias mitigation, risk evaluation, and fairness assessment, and will be made available through the AIKosh portal, the Indian government's dedicated AI platform.
#IndiaAI #ResponsibleAI #AIKosh #SafeAndTrustedAI #DigitalIndia #ArtificialIntelligence #AIethics #AInews #TechForGood
Or just use your AI locally 🦾 💻 🧠
I completely understand the concerns about relying too heavily on AI, especially cloud-based, centralized models like ChatGPT. The issues of privacy, energy consumption, and the potential for misuse are very real and valid. However, I believe there's a middle ground that allows us to benefit from the advantages of AI without compromising our values or autonomy.
Instead of rejecting AI outright, we can opt for open-source models that run on local hardware. I've been running open-source large language models (LLMs) on my own hardware. This approach offers several benefits:
- Privacy - By running models locally, we can ensure that our data stays within our control and isn't sent to third-party servers.
- Transparency - Open-source models allow us to understand how the AI works, making it easier to identify and correct biases or errors.
- Customization - Local models can be tailored to our specific needs, whether it's for accessibility, learning, or creative projects.
- Energy Efficiency - Local processing can be more energy-efficient than relying on large, centralized data centers.
- Empowerment - Using AI as a tool to augment our own abilities, rather than replacing them, can help us learn and grow. It's about leveraging technology to enhance our human potential, not diminish it.
For example, I use local LLMs for tasks like proofreading, transcribing audio, and even generating image descriptions. Instead of ChatGPT and Grok, I use Jan.ai with Mistral, Llama, OpenCoder, Qwen3, R1, WhisperAI, and Piper. These tools help me be more productive and creative, but they don't replace my own thinking or decision-making.
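For anyone curious how small the switch can be: many local runtimes (Jan among them) expose an OpenAI-compatible HTTP API on localhost, so your data never leaves your machine. A minimal stdlib sketch; the port, endpoint path, and model id below are assumptions for illustration, so check your runtime's server settings.

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "mistral-7b-instruct",  # assumed local model id
                       base_url: str = "http://localhost:1337/v1"):
    """Build an OpenAI-compatible chat request aimed at a local server.

    The base_url, port, and model id are illustrative assumptions;
    your local runtime may use different values.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Sending the request keeps everything on your own hardware:
# with urllib.request.urlopen(build_chat_request("Proofread this sentence.")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

Because the API shape matches the cloud providers', most existing client code can be pointed at localhost with a one-line change.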
It's also crucial to advocate for policies and practices that ensure AI is used ethically and responsibly. This includes pushing back against government overreach and corporate misuse, as well as supporting initiatives that promote open-source and accessible technologies.
In conclusion, while it's important to be critical of AI and its potential downsides, I believe that a balanced, thoughtful approach can allow us to harness its benefits without sacrificing our values. Let's choose to be informed, engaged, and proactive in shaping the future of AI.
CC: @Catvalente @audubonballroon
@calsnoboarder @craigduncan
#ArtificialIntelligence #OpenSource #LocalModels #PrivacyLLM #Customization #LocalAI #Empowerment #DigitalLiteracy #CriticalThinking #EthicalAI #ResponsibleAI #Accessibility #Inclusion #Education
🔐 Agentic AI brings power—and risk
@mitsmr breaks down a 3-phase strategy to secure AI agents across platforms.
If your AI can act, it also needs protection.
🔗 https://sloanreview.mit.edu/article/agentic-ai-security-essentials/
FIZ Karlsruhe @ #ESWC2025
Our ISE team contributed with strong work across workshops, posters & a keynote:
ConExion – Concept Extraction with LLMs
by Ebrahim Norouzi, Sven Hertling & Harald Sack
arxiv.org/abs/2504.12915
github.com/ISE-FIZKarlsruhe/concept_extraction
Poster: Nandana Mihindukulasooriya & Sven Hertling
Triple-to-Text Alignments from Wikidata
Keynote by Sven Hertling:
Responsible AI & Knowledge Graphs
#FIZKarlsruhe #SemanticWeb #LLMs #ResponsibleAI #NSLP2025 #KGSTAR #teamFIZ
The last keynote talk, by Sonja Zillner, is underway. Join us for the session and the upcoming announcements of the best papers, next year's venue, and the organizing team...