I.
Ethics is a lot like driving. Everyone insists they know what they’re doing, and yet somehow we’re constantly crashing into each other.1
Moreover, like driving, ethics is one of those domains where those who are putatively at the pinnacle of the profession appear to engage in the activity in a manner almost entirely alien to that of the everyday man.
For example: 4-time F1 champion Max Verstappen drives a custom-built, carbon-fiber race car with a 1.6 L turbo-hybrid V6 engine pumping >1,000 horsepower around a 3+ km FIA Grade 1 asphalt circuit with hairpin turns at speeds of up to 360 km/h. His main concerns include:
His pole position
Current wear and temperature on his tires
Remaining fuel reserves
Brake temp and balance
The speed and angle he needs to hit for the next turn-in point to nail his racing line
I drive my sedan to Target. My main concern is whether I can make the next green light.
Academic philosophers ask if the theoretical possibility of a Utility Monster would justify enslaving the rest of humanity to make said monster the happiest it could possibly be.
I ask if I really have to spend the time sorting my food-court lunch waste into the blue, green, and brown bins.
We Are Not The Same.

Yet while I don’t think it would even occur to a professional F1 driver to criticize the average commuter for not optimizing their turn radius in the Denny’s parking lot, academic philosophers have a much more direct, and often fraught, relationship with the intuitions of what we might call “common sense morality”. Verstappen is not trying to give an exemplary model for how you should cruise the 105, but Kant does at least purport to give a rigorous account of how you and I ought to act in order to behave ethically.
But if these ivory tower know-it-alls claim to be giving us actual moral guidance, then why do the didactic ramblings of the academy and the intuitive judgments of our common sense morality seem, at times, so violently dissimilar?
II.
Part of the tension here comes down to an issue of what we might call the source of authority. Philosophers may construct grand theories to articulate the moral truth of the universe, but for all their talk of “a priori” truths, they do not arrive at the feet of Plato or Parfit as moral virgins. Rather, the same common-sense intuitions which many academics come to regard as shallow and deceptive are, in fact, the very starting point from which they first assemble their ethical theories and principles.
Which raises the question: when our intuitions say one thing, yet our theories say another, who exactly deserves the boot?
On one hand, to the extent moral philosophy is science-like—itself a highly dubious claim—our common sense moral intuitions would be akin to discrete observations in an experiment. They are the evidence around which we weave our theoretical threads. Our theories must cohere in some way with these intuitions lest they fail to properly explain the “evidence”.
But on the other hand, a degree of coherence is not perfect obedience. If, during the course of some scientific experiment, we discover, after repeated testing, that a certain measurement appears to have been erroneous, then we are justified in throwing out said measurement. Similarly, if we find that one of our moral intuitions flies completely in the face of our best theories, and we can identify some source of flaw in how this intuition may have arisen, then we may likewise choose to discard it.
What might be the source of said flaw? Well, as much of evolutionary and moral psychology suggests, our moral instincts are largely, well, instincts. They are inherited dispositions and inclinations for pro-social and pro-survival behavior. We might then expect these instincts to be wrong just as our instincts about physics or statistics are wrong. Ask any child if a bottle full of rocks will fall faster than a bottle full of feathers. Ask a teenager who hasn’t learned special relativity if velocities are purely additive. Ask a college grad about the Monty Hall or Linda problem.
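The Monty Hall case is a fun one precisely because you don’t have to take anyone’s word for it; a few lines of simulation settle the argument. A minimal sketch, assuming the standard rules (one car behind three doors, and a host who always opens a non-chosen, non-car door):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the standard Monty Hall game; return True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # staying wins ~1/3, switching ~2/3
```

Stubborn intuition insists the two remaining doors are 50/50; the simulation says switching wins about twice as often.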
That our intuitions are strong is no guarantee that they are right.
But if such instincts are untrustworthy and yet moral realism remains true2—in the sense that there exist some “objective” moral facts—then you would expect the set of true moral propositions to have some significant areas of divergence from our moral intuitions. Given how strong our instincts are, you would expect at least some of these to be severe.
Yet while in scientific discovery we have experiments and data that, in theory, allow us to get closer to ground truth and provide an alternative source of knowledge against which we can test and validate our physical intuitions, in ethics we have no such thing. There would seem to be no empirical fact of the matter that we can test, no sensor that we can construct, no alternative source of knowledge at all which we can place in opposition to our intuitions as a way of validating their content. All we have are intuitions to test against other intuitions. It’s turtles all the way down.
Here’s where we hit the real issue: given the modern evolutionary psychology evidence regarding the origins of our moral intuitions, the ethicist faces a conundrum. Either:
Our moral theory perfectly agrees with our moral intuitions. Such an outcome would be surprising. Our intuitions are almost certainly the result of evolutionary survival pressures. Why on Earth would they correspond perfectly with objective moral truth? Shouldn’t we question whether such an agreeable theory is any more than ex-post rationalization of what our genes tell us to believe?3
Our moral theory strongly disagrees with our moral intuitions. This would, almost by definition, seem prima facie false. Moreover, if our moral intuitions aren’t guiding our ultimate judgment of which moral propositions are true or false, what is? On what grounds would we possibly justify moral truth?
A secret third thing.
So disagreement is baseless, yet agreement is suspect.
What is one to do?
Well, one answer is that ethics is, at least in part, an exercise in reflective equilibrium—an iterative process through which we test our concrete moral intuitions against our abstract moral principles and vice versa until we get a coherent output. It is a push and pull, a dance, a spring.
Our intuitions form the starting point, as well as any socially ingrained moral principles—which may as well be intuitions to the extent they are merely recitative mantras rather than rationally cognized maxims. We then attempt to formalize these intuitions into something more abstract, coherent, and principled. In the process, we notice some intuitions that do not play nice with each other and resist mutual coherence and co-existence. We may for a time attempt to square the circle, to solve the dissonance, but eventually we may be forced to make a meta-judgment: to decide that one of these initial judgments was given to us in error. We clear the clutter and begin anew. Through this process we shift from the common sense morality that sits disorganized within our hearts to a sharper, more rigorous theory that can stand up to the likes of Kant and Hume.
I expect most moral philosophers would agree, at least in part, with this characterization of moral reasoning.
Personally, I kind of hate it.
For one thing, it gives moral reasoning a formalistic, ethereal quality that I find personally dissatisfying—as if we’re all just sorting puzzle pieces with no guarantee they’re even from the same set.
Moreover, I find that this framing gives moral philosophers a bit too much leeway to discard intuitions at the first sign of trouble.4 Many of the standard first- and second-order conclusions of utilitarianism and deontology are, at face value, fucking absurd; and the typical philosopher response to these absurdities often seems to be a push to double down or bite the bullet—to prioritize the formal and the elegant since, in a sense, that’s all they really have.
Perhaps I have been deluded by vain notions of the internal consistency of my own beliefs, but I’m not willing to give up that easily. Maybe it’s worth seeing if we can rescue more of these intuitions than standard philosophical inquiry would have us believe. It certainly can’t hurt to try.
III.
I have thought about many ways to kick off this blog.
In some ways the first post is the least important, in that it will almost certainly be read by the fewest people. On the other hand, it is actually quite hard to start writing about philosophy when you don’t have an established reference point to jump off from. I do not want to re-derive deontology and consequentialism from first principles.5 I want to get at the good stuff.
So I figure that one way to make that shōnen time-skip is simply to list out as many of my own intuitions as I can think of, focusing in particular on those that I expect to clash with each other. The challenge then over the course of this blog’s lifespan is whether I can find ways to resolve these tensions, or whether some of these intuitions simply must be discarded. I like to imagine that all the internally consistent intuitions will gather around and drag the ugly ones out back to be executed, Mafioso-style. Snitches may get stitches, but bad ethical intuitions get treated as mere means rather than ends.
This exercise was partially inspired by Theo Jaffee’s Enlightened Centrist Manifesto on Trans Issues, but I also just think this is good epistemic hygiene for philosophers or thinkers in all fields to exercise before seriously engaging with a difficult topic. It helps avoid “philosophical p-hacking” by pre-committing to a set of “predictions” about what you think moral philosophy ought to conclude. It can help separate motivated thinking from genuine discovery. And perhaps it will serve as an anchor to ground any future readers should they need to figure out how I got into this whole mess. If I were an Intro to Ethics professor, this is the first exercise I would assign to my students.
Two caveats before I start listing:
First, I have tried to bucket these by topic as best I can, but I have prioritized keeping together any intuitions which exhibit some natural tension with one another. These are, after all, the places where my commitments are at their most vulnerable—where I may be forced to sacrifice My Truth™ at the altar of reason. So if something appears to be in the wrong bucket, don’t worry, I know. It’s just that often the juxtapositions which create the most tension are those where a sensible meta-ethical view conflicts with a strongly held applied ethics view. And so category errors must abound.
Second, I realize that not all of these are, strictly speaking, intuitions. Sure, some of them are base intuitions, but some are much closer to considered judgments, while others still are merely political beliefs that masquerade as ethical intuitions. I think to not include these latter types of “intuitions” would be an act of willful self-deception—these beliefs are just as hard for me to sacrifice as any “brute” intuition, perhaps harder. Nevertheless, there are several intuitions here which I already suspect will need to be thrown out. See if you can guess which!
Now, without further ado, let us begin:
Metaethics & Moral Realism:
Moral obligations are real: You have an obligation to be a good person even if you don’t want to, even if it accrues you no personal benefit, even if there is no God to judge you, no Hell to punish you, no witnesses to praise you, no ego to satisfy you. Be a good person. Don’t be a bad person; mom would be sad.
Morality exists even if God doesn’t: This is mostly an expansion of one element in the broader claim above, but it’s worth hammering: the existence of God is not necessary for the existence of ethics.6 In fact, the existence of heaven and hell would, if anything, dissolve all ethical considerations into an egoistic (though justified) desire to avoid eternal torment, which doesn’t seem very moral at all.
Morality isn’t typically adhered to because it exists: I mean this in the direct sense of the word “because”—i.e., that even if moral realism is true, most people’s moral behavior is incidental and emotion-driven based on evopsych factors in our lizard brain and/or cultural factors from our community or family. Considered judgments are rare in everyday situations; they are exceedingly rare in moral ones.
Ethical egoism7 is clearly wrong: And don’t get me started on outright sadism. Only doing what benefits you is Not Good™, and that’s true even if “what benefits you” includes your altruistic or other-oriented desires (e.g., that others being happy makes you happy). If you go around making others miserable just for your benefit, you’re a bad person.
A little egocentric bias is fine: You do not have an infinite duty towards others. That morality exists does not imply that all of your actions must be maximally aligned with the pursuit of The Good™. In many (most?) situations, you are allowed to prioritize self-interested, non-moral reasons over moral ones, even if that means producing outcomes that are less virtuous, righteous, or beneficent.
Moral relativism is obviously false: It is not the case that human sacrifice by the Aztecs was morally acceptable because it was accepted in their culture. It is not the case that female genital mutilation in Somalia is acceptable because it is accepted in their culture. It is not the case that slavery was acceptable in the Antebellum South because it was accepted in their culture. These are not merely alternative moral perspectives that are true in the cultures within which they originate. They are objectively wrong.
Agent relativism is obviously true: This one is more technical, and it’s hard to succinctly define agent-relativity without creating undue confusion8, so I shall simply state the underlying intuition and leave the abstract principle as an exercise to the reader. Bear witness: it is possible for two agents to be wholly morally justified in taking actions which are diametrically opposed (i.e., actions such that for one agent to succeed in their action would entail the other agent failing theirs). This is the root of many tragedies—tragedies which are all the more tragic precisely because nobody has acted wrongly. It is perhaps best summarized in this phrase: “You do what’s right for you, I’ll do what’s right for me.”
Intentions matter: There is a difference between an act of malice, an act of recklessness, and an act of negligence. An act of commission is different from (and generally higher valence than) one of omission, even if they net the same result. Manslaughter is bad, but it’s not murder. This is especially relevant for punishment.
Consequences matter too: Having good intentions while producing horrific outcomes is only an excuse up to a point. ‘I didn’t mean to’ is not a catch-all defense. Strict liability exists for a reason.
Intention and consequences are, at some level, fungible: If Alice intentionally kills 1 person, that is worse than if she recklessly causes the death of 1 person, yet it is clearly not as bad as if she recklessly kills 1,000 people.
Moral impermissibility and obligation are real: When we say “X is wrong” we do in fact mean that you should not do X, full stop. If we say “X is the right thing to do” then we do mean that you are obligated to do X, full stop. These are binary characteristics and they evaluate to true for some X.9
There are gradations of moral wrongness: It is not merely binary permissibility or obligation that we are concerned with. Two actions which are both impermissible can still have a binary relation such that one is worse than the other in some sense.
Similarly, impermissibility and obligation are not enough; supererogatory actions are real: If Alice donates 20% of her income and Bob donates only 10% of his, then Alice is, in some way, more virtuous than Bob (ceteris paribus). But that does not necessarily mean that Bob “ought to have” donated 20% instead, or that he acted wrongly in only donating 10%. We may well say that neither of their actions were obligatory, and they were purely operating in the “permissible yet better” space.
Evaluation of Competing Moral Theories:
Utilitarianism/consequentialism makes some good points: Promoting human flourishing is good. Creating human suffering is bad. We should strive to effect more of the former and less of the latter. Value is scalar-like, and pursuing objective value means that we must consider not only our own pleasure, desires, and preferences, but also those of all other like beings and give them similar weight.
Deontology/Kantian ethics makes some good points: Respecting human dignity is good. Violating human rights is bad. We should feel obligated to ensure the former and avoid the latter. Actions are rule-like, and being a rational actor means that we must consider not only our own actions in isolation, but how those actions generalize to all other rational beings.
Utilitarianism, when taken to its logical extreme, is bad: The ends don’t always justify the means. There are times when the wickedness of the means through which we achieve an outcome outweighs whatever benefit that outcome entails. There are times when it is wrong to lie or to kill even if you think it’ll produce a better result.
Deontology, when taken to its logical extreme, is bad: There is an exception for every rule: obviously lying is okay sometimes, same with killing another human being. In fact, for almost any given duty there probably exists some outcome, even if it must be comically extreme, that justifies violating said duty.10
Virtue ethics is mostly just a cop-out: Consider the classic virtue ethicist’s mediating cry: “Actually moral value is plural and there are all sorts of incommensurate virtues which are good and should each be strived for independently!” Okay, thanks, but you’ve added almost zero value to the conversation. Sure, virtues are probably a good guide for navigating every-day life. Cool. But you have offered no coherent decision theory for evaluating any of the interesting cases we might actually want to address. No soup for you!
Specific Moral Judgments:
Do unto others as you would have them do unto you: Ah yes, the Golden Rule. We all learned this when we were five and it is only through the cruel ego-construction of adolescence that so many learn to abandon it. I will posit that this playground principle covers 90% of morality in a practical sense. It’s not everything, but if you follow this one principle, in genuine good faith, and assuming you aren’t a weird bug person who likes being maltreated, you’ll handle most moral situations pretty well.
The answers to the Trolley Problem are obvious: You should flip the switch to kill one to save five. But you shouldn’t kill one patient in a hospital to harvest their organs to save five others. The fat man example is weird and deserves no consideration. How is a fat man going to stop a trolley? How do I know that he’s going to stop the trolley? What even is this? Next question.
You don’t have to sacrifice everything to save starving kids in Africa: Ordo amoris is probably somewhat true; you have a greater moral imperative to save your family over the drowning child that you spot on your morning walk, just as you have a greater moral imperative to save said drowning child than you do to save a starving kid in Africa. Moreover, there is no sweeping obligation to sacrifice even mere material benefits for those closest to you just because those resources would be better used elsewhere. A simple splurge is not an act of violence. You can have a sweet treat.
You should probably be doing more to save starving kids in Africa: That’s right, JD Vance, ordo amoris doesn’t mean you get to ONLY care about those close to you. All human beings matter. Most of us probably can and should do more even if this obligation is not infinite per the above.
Discrimination is wrong: Wait, no! Hold up! I realize that for some readers, this may have just triggered the little political weasels in your brains, and perhaps one or two of you are already frothing at the mouth about DEI. But let me clarify: I simply mean that I think it is wrong to treat some people worse on the basis of arbitrary characteristics that they did not choose and cannot change. Why, you ask? Refer to point #1 of this section.
Meritocracy is good: By this I mean that it is good, proper, and perhaps even just to reward those who are most capable in society. Wait Corsaren, didn’t you just say it’s wrong to treat people worse on the basis of characteristics that they did not choose and cannot change? Isn’t some genetic portion of intelligence a trait which is unchosen and cannot be changed? Isn’t rewarding the intelligent essentially equivalent to punishing the stupid?11 Well, maybe, but that’s the point of this here intuition-listing exercise now, isn’t it?
Doctors, lawyers, and other professions have unique duties owing to the roles they play in society: A doctor must do no harm even if doing harm could effect a greater good. A lawyer must defend their clients even if they are guilty. This is a subset of agent-relative norms, but it is an important one that is worth calling out because it is apparently not obvious to everyone.
Direct harm is not necessary for personal wrong-doing: It is possible to “harm” someone’s interests even without them experiencing harm (e.g., cheating, broken promises). Shielding them from the truth, and thereby preventing the consequences of their learning it, does not eliminate the wrongness of the act. Similar logic applies for invasion of privacy.
Gay marriage doesn’t directly harm you, so it doesn’t matter if you don’t like it: If an activity doesn’t affect you in any real way, then your preference about it is not a legitimate moral concern. To object that gay marriage is gross or that it offends your sensibilities is a preposterous reason to label a practice as wrong. Gay marriage is between two consenting adults, and in general, if there is informed consent then it’s probably fine. This applies to sex, HRT, the food you eat, and low-externality economic transactions.12
Bestiality and incest are wrong because I think they’re gross: These are vile acts that others should not engage in, and even if there is consent amongst parties it is still wrong. Yes, I know this is hard to square with everything else I’ve said. It would have been a lot easier to stay silent on this topic, but…well…mamma didn’t raise no bitch.
We should care about future persons: We owe something to the future, and this obligation extends to those who do not have and do not plan to have children. You owe something to the future because you would strongly prefer that those before you acted like they owed something to you. That they may have failed to meet those obligations is no excuse for you to shirk yours. The duty to promote the flourishing of life extends to life which does not yet exist.
Abortion is morally permissible: Women ought to have a choice in the matter about whether they carry a fetus to term. Moreover, an embryo doesn’t really seem to be a person the moment after conception, and in many cases it seems like the birth of one child today merely offsets that of a different child in the future.
An abortion during the last week of a pregnancy is not permissible: The mere act of being born doesn’t really seem to significantly alter the moral worth of an unborn child, and so unless there is some other medical reason (e.g., danger to life of the mother/child, etc.), it seems like an abortion that late into a pregnancy would be prima facie wrong. Unfortunately, this means that we’ve now ruled out the only two clear-cut Schelling points for this issue.
The death penalty is fine in principle: Some people deserve to die. I reserve judgment over whether, as a practical matter, the death penalty is good law. But it seems clear to me that some humans are so vile, so evil, so—dare I say—inhuman that they really do deserve to be killed in a retributive, moral sense. That the state is the one vested with the power to kill also seems perfectly appropriate to me, though perhaps that’s a matter for political philosophy.
Bugs don’t matter: They just don’t. You are welcome to be nice to bugs if you like them but it is of literally zero moral consequence if you kill one just because it’s bothering you. I’d probably extend this to shrimp too but then I’ll really start to piss people off.
It is wrong to torture puppies: Need I say more?
And yeah, I’m going to say that “not torturing puppies” is as good of a place as any to end this. There are surely more moral intuitions I could tease out, but we don’t have all day.
IV.
Let’s instead take a moment to review: what does all of the above say about me?
For starters, it says that I am very much an e n l i g h t e n e d c e n t r i s t on moral philosophy, which I suppose lines up with my obvious neoliberal politics. I like both deontology and consequentialism. I would like to maximize human flourishing and respect human dignity. I wholly reject the notion that I must pick one or the other.
Moreover, a keen observer may notice that while I have been intentionally imprecise with my normative primitives, there are some clear patterns. If utilitarians are primarily concerned with value, and deontologists are primarily concerned with permissibility, then I am disproportionately concerned with blameworthiness. I care more about whether “you acted wrongly” than whether “your act was wrong”, even though the two are surely related. I am interested in ethics not merely as an abstract and objective account of The Good, but as a practical process and decision theory for conducting ourselves in a manner befitting of such good. Accordingly, I am less concerned with ideal ethics (i.e., what is “truly best” at any given moment from the view of an omniscient being) than I am with non-ideal ethics (i.e., what should I do given my limited knowledge and propensity for error).
Finally, I think this list shows that if I am committed to trying to hold onto a majority of these beliefs, then well…it seems I certainly have my work cut out for me. There is a lot of contradiction in here.
But that’s the fun part, right? I certainly think so.
I certainly hope so.
…
Then again, maybe it is all for naught. Maybe the last 5,000 words are nothing more than the manic cries of the delusional, desperately clawing for truth within the empty threads of his own epigenetics and cultural background radiation.
Perhaps.
But in a world stripped of God and meaning and what seems to be the very last remnants of sanity, I do know this for sure:
I want to believe.
***
If you’re curious to see whether my Sisyphean task pans out, then I invite you to stick around. Contrary to what the chaos above might imply, I do, in fact, have a plan to defend, interrogate, and reconcile much of what I’ve asserted above—I even have a particular ethical theory in mind that I wish to advance as a way of meeting this challenge. Getting there will take some time, however, and there will be many fun detours along the way in the form of miscellaneous musings on culture, politics, AI, and more. I hope that this blog can become a dialogue rather than a one-man rant, so do join me to help make that happen!
Frankly, if you persevered through what Substack informs me is a 25(!) minute article, then please consider subscribing below. I could use the motivation to write more.
And finally, if you think I missed any big moral questions, or you agree or disagree with any of my positions, then please leave a comment down below! You have my full permission to be mean and unfiltered.
C.
Also similar in that the question of whether AI is capable of doing it well seems to be one of the defining questions of the decade.
Don’t worry, we’ll get to error theory.
This is essentially what moral relativists and skeptics are getting at when they say that a certain ethical system is just “an offshoot of Judeo-Christian norms”. The accusation is that this ethical system *just so happens* to line up with the moral traditions of the West, which in turn, they claim, can be traced back to the Bible. Isn’t it convenient, they say, that your supposedly atheistic system for right and wrong which you claim to have discovered as an external, objective truth is a perfect match for the cultural norms that you grew up in? If your system is “right” in some way independent of your biases and culturally-inherited beliefs, then surely someone born into a non-Western tradition should be able to come to the same conclusions? Surely they should be able to divine the same truths that you claim to have discovered? And yet, such a claim seems prima facie preposterous, as evidenced by the fact that non-Western cultures have not adopted Western moral values en masse.
I find this argument to ultimately be wrong, but I do think it mounts a serious challenge—one that must be met in time.
I realize that this is a bit contradictory. On one hand, I’m saying that developing ethics via reflective equilibrium is weak grounding, but on the other hand I’m saying that we should be *more* committed to our base intuitions? Don’t those seem opposed? It’s tough to explain succinctly, but one way to think of it is that when you look at physics, we do know that relative velocities are not additive, they instead follow: v₃ = (v₁ + v₂) / (1 + (v₁v₂/c²))
But we also know that this approximates to v₁ + v₂ at low velocities. So we haven’t entirely dropped the simpler account for the more complex one; one reduces to the other under the proper circumstances. Some philosophers manage to do something like this (R.M. Hare’s two-level utilitarianism is a decent example), but most are pretty bad about it. I’ll have more to say on this another time.
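For the curious, the reduction is easy to check numerically. A quick sketch (Python, nothing load-bearing):

```python
C = 299_792_458.0  # speed of light in m/s

def rel_add(v1: float, v2: float) -> float:
    """Relativistic velocity addition: (v1 + v2) / (1 + v1*v2/c^2)."""
    return (v1 + v2) / (1 + v1 * v2 / C**2)

# At everyday speeds the relativistic correction is invisible...
print(rel_add(30.0, 30.0))            # ~60 m/s, just as Galilean intuition says
# ...but near light speed it dominates: 0.5c "plus" 0.5c is 0.8c, not c.
print(rel_add(0.5 * C, 0.5 * C) / C)  # ~0.8
```

At everyday speeds the Galilean rule isn’t so much wrong as it is the correct theory’s low-velocity shadow, which is roughly the relationship I’d want between common sense morality and whatever theory refines it.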
Okay, well, maybe I do a little.
If I were to describe my philosophical “crusade” in a single sentence, it would be this: proving that morality can exist even if God doesn’t. As a born and raised atheist with strong moral convictions, in some sense I hold this belief more strongly than any other on this list. But like any core foundational belief, it is also the one that I find myself doubting most often. Hopefully this blog will help me finally figure out if this intuition can survive.
I consider the issue of ethical egoism to be a question of moral realism because I think that, for almost all intents and purposes, moral skepticism = non-realism = error theory = ethical egoism. It’s all the same theory being talked about in the context of different questions (Does morality exist? Do the questions that moral realists ask evaluate to true or false? What should a person do if there are no higher moral considerations? Etc.). Throw emotivism in there too for good measure (What is the function of making statements about morality if they are all apparently false?). Burn ‘em all to the ground.
It is also worth noting that morality being “agent-relative” really has different meanings depending on 1) which philosopher you are talking to, 2) whether you are talking about the nature of some subset of moral verdicts (e.g., “be good to your children”) or whether you are talking about the nature of moral obligation itself. I will discuss these nuances at a later date in a 10,000 word essay titled “For Members Of A Profession Obsessed With Meaning, Philosophers Are So Inconsistent With Language That They All Deserve To Be Shot”.
This might sound like I am committing myself to full-on deontology where something like “murder is wrong” is always true with no exception. I assure you, I am not.
With the possible exception of something like “the duty to not condemn all possible life in the universe to eternal torture and suffering” since the violation of that duty would somewhat entail that no good countervailing outcome could possibly exist.
Moreover, isn’t this possibly the most self-serving “moral intuition” you could possibly have? Also yes.
Looking at you, Marxists.