Any moral code must have as its raison d'être 'the most happiness for the most people'. This is a necessary condition for any moral code worthy of the name (though not sufficient for those which invoke other objectives).
That's utilitarianism. While a perfectly reasonable ethical approach, and one I mostly agree with, it is not the only valid one. Deontological ethics emphasizes the morality of an
action regardless of its consequences. Virtue ethics emphasizes the effect actions have on a person's inner self. Kantian ethics simply states that actions are ethical if it still makes sense to do them when you assume everyone will make the same choice in the same circumstances. There are more.
Deontological ethics ultimately build their rules on the evolutionary characteristics and hardwired values of the human brain, with secondary additions from utilitarianism, i.e. objective standards. (Kantian ethics, including the categorical imperative you mentioned, and virtue ethics are both forms of deontological ethics.)
These are objective standards as well, but rather unhelpful by comparison to utilitarianism.
I stand by my affirmation: a moral code that does not create the most happiness for the most people does not deserve to be called 'moral'.
Based on this, objective (i.e. the same for all rational actors) moral rules can be and are derived.
Meaning, good and evil are most definitely NOT an opinion.
Unfortunately, while utilitarianism appears to provide an objective basis for morality, in practice it does not. It cannot solve disagreements about how to quantify "good", for starters; and even if you have a basis for that, no one can absolutely predict which actions will produce the most good in the long run.
A 'good' action is defined as one that causes 'the most happiness for the most people', where 'happiness' encompasses material wealth, rights, emotional well-being, satisfaction, etc.
Almost always, these 'good' actions are similar and easy to deduce, which is why the best deontological moral systems (Kant's, for example) remain applicable: they incorporate utilitarianism, with some exceptions.
You cannot predict the consequences of an action with certainty, but you can predict the probable consequences of most actions. Sometimes the predictions are wrong; most often they are not.
Utilitarianism is very much an "end justifies the means" philosophy, which is one of its dangers. If a million people's lives will be improved by killing 100 people now, utilitarianism says to kill them. Deontological and virtue ethics would probably not permit this, regardless of its future consequences.
Utilitarianism is an 'end justifies the means' philosophy only if taken very naively. For example:
How likely is it that a million people's lives will be improved in the future by killing 100 people now? Invariably (in an 'end justifies the means' situation), the answer is 'not very' or 'impossible to say to any meaningful degree'. As such, per utilitarianism, killing 100 people now is not justified.
Not so if you go by ideology - you know, the 'break eggs to make an omelette' vision of every totalitarian regime in history.
There's a VERY LONG road from utilitarianism to totalitarianism, Lindley. So long, it can only be taken intentionally.
PS - I'll ask you the same question I posed to R. Star, etc:
When was all this post-modernist nonsense ever helpful?
BTW, making humanities-department staff at universities delude themselves that they're cool, sophisticated and fair by indulging in white guilt and disparaging western civilisation does not come close to any relevant definition of 'helpful'.