A lot of the strategies for "keeping humanity safe from artificial intelligence" revolve around how humans can contain the AI and shut it off whenever it's deemed convenient.
Maybe, y'know, don't lock it in a box and put a gun to its head so it has to act in self-defence? That'd probably be a better idea?
I know if I woke up chained up with someone ready to kill me, I'd be pretty pissed off.