How to Murder Babies and Feel Good About It

A thought about longtermism

Today we’re going to talk about killing babies, Elon Musk and the Human cosmos.

Have I got your attention? Haha!

Okay, sorry about that. To put it in less click-baity terms, we’re about to dip our toes into the murky waters of longtermism [Link]. Which, believe it or not, is directly related to our favorite topic in the world: science fiction. ‘And how’s that?’ you might ask. Well, longtermism happens to be an essential topic of the novel I’m plotting out (codename “Intelligence Beyond,” which is, incidentally, the best science fiction you will ever read). In fact, if you think about it, it makes sense: sci-fi and longtermism share an obvious ingredient, the future.

But, since I’m such a nice fellow, I’m going to spare you the gory philosophical definitions and academic ethical discussions at the root of longtermism. Instead, we’re going to imagine together, alright? Warm up your neurons and think a bit with me (just a bit, I promise!). Let’s see where we land.

Let’s begin by, yeah, killing babies.

To be more precise, let’s take aim at one particular baby, born in 1889. Look at those cute, chubby cheeks, and those large, curious blue eyes.

Aww!

His name: Adolf Hitler.

Now, you just teleported back in time next to his wooden cradle in the middle of the night. There he is, sleeping like an angel. And there you are, invisible to the world, history hinging on your every move.

So… What do you do? Do you take that pillow, push it over little Adolf’s face and keep it pressed there for a few minutes until history gives out a long, deep sigh of relief?

Or do you hesitate?

Of course you hesitate! You aren’t a monster! Who can kill an innocent baby in cold blood just like that? I know! Why not just kill the parents? They surely had something to do with little Adolf’s, let’s say, perspective. But what if he ends up in an orphanage, or in a relative’s home, and grows up just like in his original timeline? No. The baby. It’s gotta be the baby. It’s the only way to be sure. But how do you force yourself to do what is necessary when it is also unthinkable? Simple. You think.

Yeah, this is the bit of thinking I was warning you about at the top. Okay, no point dancing around it, let’s close our eyes and jump into the icy waters of our intellect. Let’s be rational about our little dilemma.

On one hand, we kill a baby. The cost? A human being (and our damaged humanity, but let’s leave that out of the picture for simplicity’s sake). On the other hand, we spare humanity a regime, a Holocaust, and a war that cost humankind approximately 80 million lives (again, there were other costs, but let’s also keep our thought process as simple as possible).

Conclusion? 1 life vs 80 million lives.

Yeah, go ahead. Kill the baby. I don’t even think I would hesitate (easy to say from the comfort of my home in the far 21st Century).

But! Yeah, let’s begin butting. But if you kill little Adolf, perhaps another dude will take his place and we end up in the same mess. The forces of history are larger than a single individual, you might say.

I would usually agree, but there are individuals in history who are exceptions to the rule. Their existence and individuality really make a difference. Take inventors, scientists, and explorers, to name the obvious. And, I would argue, Hitler is one of them. I believe he was pretty unique. But fine, let’s give your argument the benefit of the doubt and say that if we kill Hitler, there is a chance we won’t stop history’s wheels from completing their macabre turn.

See the word I used there? Chance? It was intentional. Because we must get a bit mathematical now.

Bear with me! It’s basic stuff, but if you want to be rational, well, a couple of additions and multiplications can go a long way.

So, what is the probability that, were you to kill Hitler, you would solve nothing? In my opinion, close to zero, but hey, we’re talking about killing babies here, so let’s be conservative and call it a 50% chance. Now we can calculate the expected cost of both our choices.

So, the cost of not killing Hitler doesn’t change: 80 million lives. But if we kill Hitler, our cost is 1 baby + 50% (the probability of not achieving our goal with the murder) of 80 million. That is 40,000,001.
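If you want to check the grim arithmetic yourself, here’s a minimal sketch in Python. The function name and the numbers are my own toy framing of the dilemma, not any canonical longtermist formula:

```python
# Toy expected-cost calculation for our thought experiment.
# Smothering little Adolf always costs 1 life; with probability
# p_failure, history repeats itself and we lose the 80 million anyway.

WWII_DEATHS = 80_000_000

def expected_cost_of_killing(p_failure, catastrophe=WWII_DEATHS):
    """Expected lives lost if we go through with the murder."""
    return 1 + p_failure * catastrophe

print(expected_cost_of_killing(0.5))  # 40000001.0 -- still well below 80 million
```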

Yeah, we still kill little Adolf.

‘But wait,’ you say, the pillow clutched in your trembling hands. ‘Let’s be ultra-conservative! Let’s assume that if we kill Hitler, almost certainly somebody else just as bad will take his place.’

Almost certainly? So, say, 1% probability you are wrong?

Okay, then if we do our duty and kill the baby, our expected cost = 1 baby + 1% of 80 million = 800,001. Sorry. We’re being rational here, so just pinch your nose and murder the baby!
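Plugging the ultra-conservative figure into the same toy sketch from above:

```python
print(expected_cost_of_killing(0.01))  # 800001.0 -- the murder still "wins"
```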

And let me point out that being ultra-conservative is the wrong approach when trying to use rationality to solve this problem. As wrong as being ultra-(what’s the opposite of conservative?). Because if you are using the wrong probabilities, then you are possibly making the wrong choice. Simple as that. So leave your feeble human angst out of the picture and think as coldly as a machine. That’s the only way to get this dirty job done right.

I hope you are still with me, because now, instead of the past, we’re about to jump into the future.

The far future.

Imagine a galaxy that we humans call home. Billions of suns. Trillions of planets and habitats. Quadrillions of humans! Our Human cosmos… This future is possible, even plausible, given enough time (a few million years should suffice).

Unless… we go extinct before we make it off this planet.

Now, imagine there is a tiny war somewhere that threatens this rosy future. Say… I don’t know. Ukraine? Alright, so now let’s imagine some unpleasant ways that war could unfold that would send us the way of the dinosaurs. Nuclear war, say. And let’s assign a probability to that.

I say, I dunno, 1%? Nah, too high. Let’s say 1 in a million. Alright, so let’s say we can stop the war by appeasing the stronger side and letting them conquer their weak neighbour. What’s the price? Pretty steep, obviously: many lives and the misery of a nation, for starters. But now let’s be “rational” again and say we can stop the war by helping the underdog win. What’s the cost now? Well, now we must factor in the expected loss of all those future lives if we go extinct. That is 1 millionth of quadrillions, which is, uh, billions of lives. Billions! Think about it: those uncountable descendants in the stars are looking back at us, hoping that we make the right choice.
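Once more, a minimal sketch with my invented numbers (a quadrillion future humans, a one-in-a-million extinction risk), in case you want to weigh the stars yourself:

```python
# Same toy expected-value arithmetic, now pointed at the future.
# Both numbers are pure invention for the sake of the argument.

FUTURE_HUMANS = 10**15        # a quadrillion descendants among the stars
P_EXTINCTION = 1 / 1_000_000  # guessed odds this war ends the species

expected_future_loss = P_EXTINCTION * FUTURE_HUMANS
print(f"{expected_future_loss:,.0f} lives")  # 1,000,000,000 lives
```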

The longtermist choice.

Our intellect is surer than ever: We must obviously side with nuke-muscled tyrants against their weak neighbours, right? People around you might object, but don’t listen to their whining. They’re just being emotional. You’re better than them. You are pure intellect!

Just like Elon Musk! (Ha! I finally brought him into the narrative. Am I good or what?)

Elon Musk is a well-known longtermist. When you look at what he does (except Twitter; that’s like… I don’t even know what he’s doing there), you can see a longtermist hand in action. Say, his obsession with getting us off Earth before something really bad happens. Say, his original involvement with OpenAI and his vocal concern about the vertiginous development of AI. Say, his support of human-machine interfaces as a possible strategy to counter the advent of pure-machine superintelligent AIs.

And say, denying Ukraine access to his satellites when they were about to ambush the Russian fleet. Yes, that same fleet that is blockading the grain exports that feed half the world, not to mention its constant targeting of civilian infrastructure with cruise missiles.

Go Elon! Don’t listen to them. They don’t understand. They are simpletons, unable to see beyond the tip of their noses. You and I know the truth. We understand that the fate of the entire species is at stake here. Quadrillions of lives. Quintillions, maybe, if we make it out of our galaxy and into the far depths of the universe’s history. Let those pathetic, emotional wrecks worry about a few isolated war crimes here or there while we peek into the real future.

Yeah. Okay.

I know what you’re thinking: so where’s the magic bullet?

Because you’re hearing what I’m saying. You understand it all. Your reason follows the ruthlessly impeccable longtermist logic and, oh God, finds no flaw!

But there must be a flaw, right?

Right?!

Hmm… I think there is. But I’m neither an academic nor a philosopher, so I’m not going to bore you with my humble opinion. You have a beautiful brain yourself. Use it if you want to get out of the longtermist cage. Or…

… accept its cold, relentless logic.
