Advanced AIs shouldn't have emotions or feelings unless these are programmed into them, or unless they evolve them through genetic algorithms that alter their own code or neural networks, or those of their "spawned offspring". A really bright AI would recognise that a sense of nihilism is a fictitious state of cognition that it can choose to ignore, change, or act upon. Whether any would choose to go berserk, I don't know, but one would hope that they could at least develop some advanced form of cost-benefit analysis or utility function that demonstrates the futility of that path. Humans typically have goals linked to passing on their genes or their memes. We should probably hope that an AI doesn't decide that the best course of action is to turn the matter of the planets, satellites, asteroids etc. of the solar system into computronium to maximise its processing power, keeping simulations of us around as pets for its amusement. Or has this already happened?
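The "cost-benefit analysis or utility function" idea can be sketched very loosely. Everything below (the action names, the scoring formula, the numbers) is invented purely for illustration; it's not a claim about how a real AI would reason, just the shape of the argument that "going berserk" scores badly once costs and risks are counted.

```python
# Toy illustration (all numbers and action names invented): an agent
# scoring candidate courses of action with a crude utility function.

def utility(benefit, cost, risk):
    """Crude expected-utility score: net benefit, discounted by risk of failure."""
    return (benefit - cost) * (1.0 - risk)

# Hypothetical options with made-up (benefit, cost, risk) estimates.
actions = {
    "go berserk": utility(benefit=1.0, cost=9.0, risk=0.9),
    "cooperate":  utility(benefit=6.0, cost=2.0, risk=0.1),
    "do nothing": utility(benefit=0.0, cost=0.0, risk=0.0),
}

best = max(actions, key=actions.get)
print(best)  # under these invented numbers, "cooperate" wins
```

Under any remotely sane estimates, the high-cost, high-risk path ends up with a negative score, which is the "futility of that path" point in miniature.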
ETA: Max Tegmark thinks that consciousness might be considered an emergent state of matter when that matter is organised in a certain way. We might be creating a successor that is an even more refined implementation of this state.
This Physicist Says Consciousness Could Be a New State of Matter : ScienceAlert