@cigitalgem
"Whatever that means"
Today's frontier models use reinforcement learning from human feedback (RLHF), a technique for aligning an intelligent agent with human preferences.
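Mechanically, RLHF means fitting a reward model to human preference pairs and then optimising the policy against that reward (typically with PPO or a variant). A minimal sketch of the pairwise preference loss at the core of the reward-model step, with toy scores and hypothetical names:

import torch

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: push the reward of the human-preferred
    # response above the rejected one.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scores a reward model might assign to two candidate replies.
chosen = torch.tensor([1.2, 0.7])    # human-preferred responses
rejected = torch.tensor([0.3, 0.9])  # dispreferred responses
print(preference_loss(chosen, rejected))  # shrinks as preferences are respected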
#alignment is one of the harder problems in #AI, in part because humans are arseholes and there is wide divergence in what's considered "good" (see the trump "administration", russia, etc.)
There is a human in the train's caboose able to press the brake.
What the #anthropic chief scientist means by "letting it go" is removing the human from the control loop. A supremely bad idea.
As to the second link: an article which begins with the phrase "AI bullshit" is not likely to be professional. It does, however, identify a real problem in #infosec, where many practitioners abrogate their professional duty and choose not to engage with #AI tech.