Morality is an inherent classification of right and wrong behaviors, often the result of tradition, upbringing, and/or society.
Ethics is a moral system at which one may arrive through philosophy and rational thought.
Ethics tends to define right and wrong in terms of impact on human well-being, rather than just an inherent sense of right and wrong. As such, it may arrive at conclusions that feel "morally wrong" but actually promote greater well-being (utilitarianism being one example). This is also its danger, as one may argue oneself into a behavior that is rationally ethical but inherently harmful (e.g. eugenics).
The power of ethics is that it can be used to derive moral guidelines for new circumstances, such as AI or global ecological considerations. Such guidelines can also be derived from morality alone, but those derivations tend not to truly appreciate new variables, instead attempting to reduce new systems to familiar circumstances and thus often missing nuance.
I'd argue that ultimately, a sound ethical system must be derived from rational ethical thought, gently guided by sound morality as a safeguard against dangerous fallacies.
Wouldn't ethics then define right and wrong in terms of impact on the well-being of sentient beings, rather than just human well-being?
And I suppose the difference with morality might be that certain actions that don't necessarily harm other sentient beings, such as recreational drug use, might still be considered immoral by some due to cultural norms rather than practical considerations about their rightness or wrongness?
I think generalizing the good of human beings to all sentient beings is a great example of how a rigorous ethical discourse can expand traditional morality. The idea of giving rights to great apes is a wonderful example and I hope we can get there soon.
And likewise, a lot of traditionally "wrong" behaviors can be argued to be morally neutral if they don't actually diminish the well-being of human beings. Sex work is another example.
I completely agree. Would you, in theory, be in support of giving rights to all sentient beings where possible, ensuring the best possible treatment and experiences of all individuals that have a conscious/subjective experience of life?
I would ideally like to see humanity extend moral/ethical consideration beyond humans to all animals, hypothetical alien animals, sentient AI, or any other sentients that may emerge in the future. I believe sentientism is the core underlying philosophy behind this idea of ethics.