Any moral code must have as its raison d'être "the most happiness for the most people". This is a necessary condition for any moral code worth the name (though not a sufficient one for those that invoke other objectives).
That's utilitarianism. While a perfectly reasonable ethical approach, and one I mostly agree with, it is not the only valid one. Deontological ethics emphasizes the morality of an action
regardless of its consequences. Virtue ethics emphasizes the effect actions have on a person's inner self. Kantian ethics (itself a form of deontology) holds that an action is ethical only if it would still make sense to do it when you assume everyone will make the same choice in the same circumstances. There are more.
From this, objective moral rules (i.e., the same for all rational actors) can be and are derived.
Meaning, good and evil are most definitely NOT an opinion.
Unfortunately, while utilitarianism appears to provide an objective basis for morality, in practice it does not. It cannot resolve disagreements about how to quantify "good", for starters; and even with an agreed-upon measure, no one can reliably predict which actions will produce the most good in the long run.
Utilitarianism is very much an "ends justify the means" philosophy, which is one of its dangers. If a million people's lives would be improved by killing 100 people now, utilitarianism says to kill them. Deontological and virtue ethics would probably not permit this, regardless of the future consequences.