#systemsSafety

2025-04-07

Compendium of Nancy Leveson: STAMP, STPA, CAST and Systems Thinking

Although I don’t often mention or post about Leveson’s work, she’s probably been the most influential thinker on my approach after Barry Turner.

So here is a mini-compendium covering some of Leveson’s work.

Feel free to shout a coffee if you’d like to support the growth of my site:

https://buymeacoffee.com/benhutchinson

https://direct.mit.edu/books/oa-monograph/2908/Engineering-a-Safer-WorldSystems-Thinking-Applied

https://dspace.mit.edu/bitstream/handle/1721.1/102747/esd-wp-2003-01.19.pdf?sequence=1&isAllowed=y

https://dspace.mit.edu/bitstream/handle/1721.1/108102/Leveson_Applying%20systems.pdf?sequence=2&isAllowed=y

https://escholarship.org/content/qt5dr206s3/qt5dr206s3_noSplash_4453efa62859a16d187fa5e66d414ac2.pdf

https://escholarship.org/content/qt8dg859ns/qt8dg859ns_noSplash_e67040b78c1ff72e51b682bb23d8628a.pdf

https://doi.org/10.1177/0170840608101478

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=b2107d4823fa8b3eb83ecc8db006e8aecfe2994a

https://doi.org/10.1145/7474.7528

http://therm.ward.bay.wiki.org/assets/pages/documents-archived/safety-3.pdf

https://books.google.com/books?hl=en&lr=&id=2qwmAQAAIAAJ&oi=fnd&pg=PA177&dq=nancy+leveson&ots=uwtXVFUky7&sig=6P-5cOxcra9-3pcFBLYgYPeq5KQ

https://dspace.mit.edu/bitstream/handle/1721.1/108601/Leveson_A%20systems%20approach.pdf

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=57bd4861d6819bdd6543e3a8ca841aa0b98bbe5a

http://sunnyday.mit.edu/papers/Rasmussen-Legacy.pdf

https://www.tandfonline.com/doi/pdf/10.1080/00140139.2015.1015623

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=08434b0b1eba947fb7251be7daba9c50eab2e8d2

http://sunnyday.mit.edu/papers/issc03-stpa.doc

https://dspace.mit.edu/bitstream/handle/1721.1/92371/Leveson-Stephanopoulos%20final%20copy.pdf?sequence=1

https://doi.org/10.1016/j.ssci.2018.07.028

http://sunnyday.mit.edu/shell-moerdijk-cast.pdf

http://sunnyday.mit.edu/CAST-Handbook.pdf

https://psas.scripts.mit.edu/home/get_file.php?name=STPA_Handbook.pdf

https://psas.scripts.mit.edu/home/wp-content/uploads/2020/07/JThomas-STPA-Introduction.pdf

https://cris.vtt.fi/ws/portalfiles/portal/98296189/Complete_with_DocuSign_2024-1-2_STPA_guide_F.pdf

https://dspace.mit.edu/bitstream/handle/1721.1/79639/Leveson_Modeling%20and%20hazard.pdf?sequence=2&isAllowed=y

https://dspace.mit.edu/bitstream/handle/1721.1/116713/INCOSE2017_Yisug%20Kwon_no%20UTC%20info.pdf?sequence=1

http://sunnyday.mit.edu/UPS-CAST-Final.pdf

https://doi.org/10.1016/j.trip.2023.100912

https://dspace.mit.edu/bitstream/handle/1721.1/107502/974705860-MIT.pdf?sequence=1

https://www.researchgate.net/profile/Nektarios-Karanikas/publication/356085051_The_past_and_present_of_System-Theoretic_Accident_Model_And_Processes_STAMP_and_its_associated_techniques_A_scoping_review/links/6191925ad7d1af224bef6b04/The-past-and-present-of-System-Theoretic-Accident-Model-And-Processes-STAMP-and-its-associated-techniques-A-scoping-review.pdf

https://proceedings.systemdynamics.org/2007/proceed/papers/DULAC552.pdf

http://sunnyday.mit.edu/nasa-class/jsr-final.pdf

https://dl.acm.org/doi/pdf/10.1145/2556938

https://dspace.mit.edu/bitstream/handle/1721.1/102833/esd-wp-2011-13.pdf?sequence=1&isAllowed=y

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=3a04c89efd23efda86f134e0e2f0683394a181c6

https://www.sciencedirect.com/science/article/pii/S1877705815038588/pdf?md5=78fccb436abe513b814fb520d01e209e&pid=1-s2.0-S1877705815038588-main.pdf

https://academic.oup.com/jamia/article-abstract/15/3/272/727503?redirectedFrom=PDF

https://dspace.mit.edu/bitstream/handle/1721.1/115366/16-1-18%20J%20Pt%20Safety%20Leveson%20%26%20Raman%20CAST_Checklist_JPtSafety2016%20%281%29.pdf?sequence=1&isAllowed=y

https://dspace.mit.edu/bitstream/handle/1721.1/106665/Leveson_Application%20of%20systems.pdf?sequence=1&isAllowed=y

https://www.academia.edu/29657886/The_systems_approach_to_medicine_controversy_and_misconceptions

https://dl.acm.org/doi/pdf/10.1145/3376127

https://www.sciencedirect.com/science/article/pii/S0022522316000702

http://sunnyday.mit.edu/caib/issc-bl-2.pdf

http://sunnyday.mit.edu/papers/ARP4761-Comparison-Report-final-1.pdf

https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8102762

https://www.tandfonline.com/doi/pdf/10.1080/00140139.2015.1011241

https://onlinelibrary.wiley.com/doi/pdf/10.1260/2040-2295.3.3.391

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=d39a0850269262753d27f659243de73eb8bc8e13

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=7e822452213a80be9bc7a5a7f5c13032c6fdd60f

https://library.oapen.org/bitstream/handle/20.500.12657/41716/978-3-030-47229-0.pdf?sequence=1#page=25

https://maritimesafetyinnovationlab.org/wp-content/uploads/2024/10/White-Paper-on-Approaches-to-Safety-Engineering-Leveson-2003.pdf

https://www.researchgate.net/publication/221526167_Using_System_Dynamics_for_Safety_and_Risk_Management_in_Complex_Engineering_Systems

http://sunnyday.mit.edu/papers/incose-04.pdf

https://core.ac.uk/download/pdf/78070242.pdf

https://dspace.mit.edu/bitstream/handle/1721.1/102767/esd-wp-2004-08.pdf?sequence=1&isAllowed=y

https://dspace.mit.edu/bitstream/handle/1721.1/59813/leveson_The%20Need%20for%20New.pdf?sequence=2&isAllowed=y

https://www.tandfonline.com/doi/pdf/10.1080/00140139.2014.1001445

https://ntrs.nasa.gov/api/citations/20230017753/downloads/Kopeikin_AIAA_UnsafeCollabControl_v5.pdf

http://sunnyday.mit.edu/accidents/space2001-version2.pdf

https://dspace.mit.edu/bitstream/handle/1721.1/90801/891583966-MIT.pdf?sequence=2&isAllowed=y

http://sunnyday.mit.edu/Bow-tie-final.pdf

https://cs.emis.de/LNI/Proceedings/Proceedings232/597.pdf

https://a3e.com/wp-content/uploads/2021/03/Risk-Matrix.pdf

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a6b1e3482543a0116a5666e22956e773e953d682

https://journals.sagepub.com/doi/pdf/10.1177/21695067231192457

https://jsystemsafety.com/index.php/jss/article/download/44/41

http://sunnyday.mit.edu/compliance-with-882.pdf

https://www.researchgate.net/profile/Edward-Bachelder-3/publication/245875378_Describing_and_Probing_Complex_System_Behavior_A_Graphical_Approach/links/61f349978d338833e39cedfc/Describing-and-Probing-Complex-System-Behavior-A-Graphical-Approach.pdf

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a17b2fa804e0f3e281dc88e959be9216328ae6cc#page=290

https://www.researchgate.net/profile/Earl-Hunt/publication/23920138_Demonstration_of_a_Safety_Analysis_on_a_Complex_System/links/561ea59908aecade1acce7ca/Demonstration-of-a-Safety-Analysis-on-a-Complex-System.pdf

https://meridian.allenpress.com/bit/article-pdf/47/2/115/1488089/0899-8205-47_2_115.pdf

LinkedIn post:

#CAST #disaster #nancyLeveson #resilienceEngineering #risk #safetyScience #safetyII #safety2 #stamp #stpa #systemSafety #systemsEngineering #systemsSafety #systemsThinking

2025-03-22

I wouldn't want to be among the people who now need to explain why there is a single point of failure in critical infrastructure. It might have looked like an acceptable risk at the time. After the fact, it looks like a foolish decision.

I'd expect other subsystems to be scrutinized as well. This looks like a higher-level problem.

#heathrow #SystemsSafety

theguardian.com/uk-news/live/2

2024-12-26

As a reminder: don't let LLMs handle anything in the political sphere unless you have RLHF (Reinforcement Learning from Human Feedback) active before you show the result to anyone*. Also think of automation risks and human factors (HF). That's "Good Old Systems Safety".

*) ... or unless your goal is to damage a 3rd party's reputation (fake news style).

#llm #ai #rlhf #automationrisks #SystemsSafety

theregister.com/2024/12/20/app

Adam Cook @adamjcook
2023-07-24

Now, let’s look at the underlying part.

and -equipped vehicles are Level 2-capable vehicles - with the exact same *limitations* as many other vehicles on the market today.

Namely, the key limitation is that the human driver must remain the fallback for any dynamic driving task or vehicle failures at all times and under all conditions.

Effectively, that means that the human has the exact same control responsibilities between the two vehicles shown below.

A photograph of a black Ford Model T vehicle. A photograph of a white Tesla Model 3 vehicle.

Adam Cook @adamjcook
2023-07-18

has an odd power.

People today cannot really remember a time when everyday products would readily kill or maim. So, having lost those experiences over the decades, Silicon Valley increasingly saw a business opportunity.

But modern society is grounded on the public’s trust and, given enough critical mass, trust can be lost virtually overnight.

Quite literally.

That is Great Depression stuff right there, folks.

Adam Cook @adamjcook
2023-07-07

@CrackedWindscreen Tragic.

Really.

Indeed. These incidents require an exhaustive root cause analysis... which is, as you know, very rarely performed.

But the knee-jerk "just sprinkle a little automation on it" is an absolute Cancer of Simplicity that has zero foundation.

There are immense downsides to automation that are subtle and poorly understood.

Adam Cook @adamjcook
2023-07-05

Let's talk a bit about vehicles equipped with a Level 3-capable system that has recently been "approved" in a handful of US states.

This article almost entirely focuses on the legal dynamics of consumer liability should this vehicle create a direct (or, presumably, an indirect) incident.

But, as always, I want to talk about what I feel are the realities at work here and the many foot-guns that are associated with that.

autonews.com/mobility-report/m

🧵👇

Adam Cook @adamjcook
2023-07-04

@lolgop Constantly. Constantly this is done.

It is beyond exhausting.

And it has actually proved to be extraordinarily dangerous when lies about the capabilities and availability of 's product, in particular.

The press often allows Wall Street analysts, who are not competent in safety-critical systems, to advance Musk's dangerous lies.

The community has been battling this for years.

Adam Cook @adamjcook
2023-06-27

I suppose that I should also note that no system can ever be "perfectly safe".

That is not possible.

And the concept of "perfection" is not relevant to systems safety.

Ross submits that "at no time was anybody at any risk of crashing".

No.

There is **always** risk!

Systems safety is about maintaining processes such that always-present, finite risk is continuously and exhaustively identified and managed.

It is about appreciating that risk exists - the opposite of what Ross submits.

A Tweet published on the TD Ameritrade Network Twitter account which features a quote from Ross Gerber describing the FSD Beta-active drive sequence mentioned in this Toot Thread.

The Tweet states:

"At no time was anybody at risk of crashing," says Ross Gerber on his viral FSD Beta test video: "Our video will show an hour of incredible full self driving."
Adam Cook @adamjcook
2023-06-27

I was at Pride all weekend while visiting with my wife (videos and photos soon!), so I missed this Drama concerning that erupted.

Ok.

Let us, again, all put on our systems safety hats and take a look at the situation here as I understand it.

Below is the video that kicked the beehive between Tesla defenders and detractors on "what really happened?".

This clearly chaotic video was taken from a larger drive sequence in which FSD Beta was active.

🧵👇

Adam Cook @adamjcook
2023-06-19

@CrackedWindscreen Bingo. You nailed it!

**Anytime** someone or something uses the term "I think", "could be" or similar and then makes some sort of assessment... you can toss it right in the trash.

Systems safety is about constantly and exhaustively asking pointed questions and seeking quantifiable answers.

Academic and industry research can be used to develop a safety case, but it is not at all complete by itself.

Adam Cook @adamjcook
2023-06-18

@BruceMirken This is the same Bad Faith shit that those of us in the community have been dealing with on the and wrongdoings for years.

I am not going to lecture Dr. Hotez on how to deal with this... but I can say this... the strategy on the -side and of his sycophants is to **passionately pretend** like they want to have a Good Faith debate... but they are just looking for "gotcha soundbites".

Adam Cook @adamjcook
2023-06-15

@kentindell and @CrackedWindscreen,

Let me just say this for the record and for what it is worth, Ford's Jim Farley is worrying the hell out of me lately in terms of safety.

And I think something really objectionable could be brewing there.

The way he is talking... I recognize this. This talk is familiar to me. I remember it at when I was there many moons ago.

I am watching the layoff reporting at Ford very carefully, to the extent that I can.

Adam Cook @adamjcook
2023-06-15

Efficiently and reliably extracting "safety data" from the roadway is a non-starter.

It simply cannot be done.

Why?

Because the only "safety data" that actually matters for this conversation is "data" that is **forensically** extracted from all interactions with a -active vehicle - **both** direct and indirect interactions.

No one has that.

Not even close.

Not even .

Zero chance.

Numbers on a page are not important to systems safety.

The root causes are!

Adam Cook @adamjcook
2023-06-13

I am almost starting to get the feeling that we, the public and the community, are being trolled by these YouTube titles.

Perhaps gaslighted is a better term?

Anyways, yup, "safety" on display here all right...

youtube.com/watch?v=6QKklP91ITU

Adam Cook @adamjcook
2023-06-10

@colburn Zero actual independent oversight of the validation processes associated with these automated driving system programs.

Zero.

Not that it matters much, but the community has been pounding this table for years.

When an Uber ATG automated vehicle killed a pedestrian in Arizona way back in 2018… absolutely nothing was done by federal regulators.

Adam Cook @adamjcook
2023-06-10

Systems safety is not predicated upon statistics, data, videos on YouTube/Twitter and personal experiences.

It is **continuously** defined by a **process** of exhaustively asking questions about the robustness of the system in response to identified failure modes.

No engineered system can ever be "perfectly safe".

That is not the issue.

The issue is what the initial and continuous (that is, never-ending) validation process looks like for any given system.

Adam Cook @adamjcook
2023-06-10

Really thinking of just dropping Reddit, for what it is worth.

Dropped my profile off my Mastodon bio.

Reddit management is giving me serious vibes lately and I have been getting follow spammed 4-5 times daily for the last two weeks.

Met a lot of great technical experts on there though - mostly through pushing back against 's and wrongdoings.

I suppose that work continues here now...

Adam Cook @adamjcook
2023-06-08

First off, cars are **not** smartphones.

I cannot say that enough.

And if you hear anyone describing them as such, it almost certainly means that they are (knowingly or not) hand-waving away the incomparable differences between a consumer electronic device and a safety-critical system.

That makes reports like this on 's hiring preferences **very** concerning: washingtonpost.com/technology/

A screenshot of part of the linked Washington Post article which states:

It’s no accident the companies have a lot in common, according to a half-dozen former employees who worked for both Tesla and Apple, who spoke on the condition of anonymity because of the sensitive nature of the workplace dynamics and for fear of retaliation. Tesla hired managers who brought members of their teams from Apple, importing its design language and culture. Meanwhile, those employees could be dismissive of the automotive expertise within its ranks, the former employees said.
Adam Cook @adamjcook
2023-06-07

Oh memories.

Taking a break from 's Hate Train on the Hellsite to recall this series of Tweets from a few years ago.

While under-appreciated then and now, the Tweet thread by Musk posted below contains an extremely damning admission and it displays the considerable blind spot associated with remotely updating systems without oversight.

Musk has no clue what he admitted to here, but systems safety experts do.

A screenshot of a Twitter thread with three, consecutive Tweets by Elon Musk.

The first Tweet in the thread, published October 23, 2021 states:

Regression in some left turns at traffic lights found by internal QA in 10.3. Fix in work, probably releasing tomorrow.

The next Tweet in the thread that is in reply to that (published October 24, 2021) states:

Seeing some issues with 10.3, so rolling back to 10.2 temporarily. 

Please note, this is to be expected with beta software. It is impossible to test all hardware configs in all conditions with internal QA, hence public beta.

The third Tweet in that thread, published on October 25, 2021 states:

10.3.1 rolling out now
