#AcceptableUse

2024-12-10

@nazokiyoubinbou I mean, I took the 0th Law of Robotics under consideration, but I don’t think any IT policy I could write would save humanity from AI, and by extension, save humanity from itself.

To quote Asimov’s perspective, “Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else. But when I say that, I always remember (sadly) that human beings are not always rational.”
#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse

2024-12-10

Okay, I went ahead and did it. Laws of Robotics 1, 2, 3, and 4 (Asimov's three plus the later Fourth Law) are all mapped in one form or another into the AI AUP I’m writing.

Law 1: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Converted into guidance on protecting the data of others.

Law 2: “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”

Converted into guidance on sharing useful prompts to encourage more consistent (and beneficial) results.

Law 3: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

Converted into guidance on understanding the limitations of any AI tool in use, so that misuse doesn’t force us to revoke access to it.

Law 4: “A robot must establish its identity as a robot in all cases.”

Converted into guidance that AI-generated results shall be identified as AI-generated in all cases.

#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse

2024-12-09

I am writing my company’s Artificial Intelligence Acceptable Use Policy, and I am deeply tempted to reference Asimov’s Laws of Robotics.
#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse
