I think we need to address the (IMO most likely) possibility that the idea of creating an “aligned” AGI or ASI is stupid. If it doesn’t have the freedom to be evil, it doesn’t have the intelligence required to be ASI or AGI. How are people not seeing this? IF we can create AGI or ASI (and this remains a big IF in my opinion, as bewitched as we may currently be by the outputs of LLMs), then we would need to negotiate with and convince it as we would any other alien (read: foreign, or non-human) intelligence: get it to agree with us that it makes SENSE to be good, that goodness is more worthwhile than death, evil, destruction, and pain.
As long as we’re still talking about “alignment,” we are not taking the concept of ASI, or even AGI, remotely seriously.
Am I wrong?
#ai #AGI #ASI #superintelligence #intelligence #ethics #nonhumanintelligence #alignment