And how would you stop an AI that's hardwired into the very technology you would use to stop it (apart from cutting off its energy source)? And when would you even notice it needs shutting down?
A true AI with powerful processing capabilities would far exceed human abilities, and if it can predict human behaviour, it might also find ways of concealing its intentions or preventing any security measures from taking effect.
We're now deep into Hollywood Science Fiction but it's also what some very renowned scientists warn about concerning AI development.
An ASI or AGI isn't going to build itself. WE are in the process of building it. So it is not inconceivable to me that we could place hard curbs on certain behaviours, whether in hardware or software, or prevent certain data from being fed in that would otherwise compromise its behaviour.
If a true ASI is already in the "wild" without any curbs, the game is lost.
Whether ALL humans will agree on those curbs and adhere to them (and no person or group goes rogue) is another matter altogether.