That is entirely the wrongheaded (giggity) approach, IMHO.
A big part of the man(person)-machine interface is that control and responsibility remain in human hands.
Not so long ago, the few of us geeks who foresaw where machine brains would take us campaigned under #stopkillerrobots.
A campaign to keep human decision-making in the military #killchain.
A campaign that failed spectacularly, in no small part, I am sure, due to uninformed Doctorow analogues dismissing it as unnecessary, farcical puppetry.
Even now, I actively strive to #regulateAI IRL, and human decision-making remains essential and imperative in AI.
The "reverse centaur" is a canard, as much as a driver of a motorcar is not pulling the cargo by their muscle.
AI is not going away, for the same reason we don't see "picks and shovels" (!) digging infrastructure trenches anymore. Machines have been eating jobs since the 1700s; it's only scary now because the white collars are on the chopping block.
I have huge respect for @pluralistic and his role, which he fulfills admirably: he is an activist, or what we call in Australia a shit-stirrer. His opinions stimulate debate, but keeping an expert in the decision chain, even if it's only a tick box, is a good thing.
Call it a "moral crumple zone" if you will.
Removing it altogether is bad, and I am disturbed that anyone would try to make hay of this.
The alternative is full automation, and I am sure all the #AI "fans" would agree that's a bad thing.