The Case For The Very Long Term
Consider how many people are alive today (~7.5 billion) compared to all the humans who have ever lived (~100 billion). Of the roughly 200,000 years Homo sapiens have been around, much of that time has come and gone with no significant progress in welfare (health, human rights, life expectancy, wealth, etc.). There is no doubt that our current generation is immensely privileged relative to prior generations. We are also uniquely positioned to make a great impact on the future. As we grow richer in technology, energy, and knowledge, we must also accumulate the wisdom and prudence necessary to stay in control of our future. Considering the trajectory of welfare up to this point, and the fact that our current population represents such a small fraction of the total human lives and years spent on earth, it seems reasonable to conclude that vastly more lives (good lives!) have yet to be lived. We are morally responsible for the future, the far future included.
Unless you missed the last 4 years of U.S. politics you won't need to ask this question, but I'll ask it anyway: How could we possibly screw this up?
- Existential threats: nuclear war, pandemics, biowarfare, etc.
- Irreversible trajectories: climate change (an uninhabitable earth), the QWERTY keyboard, North Korea, any 1984-esque scenario
A common reaction to the idea that we can make an impact on the far future is that it sounds nigh impossible because the future is so far away. At first glance, it looks like all we might be able to do is more research into understanding the existential risks and threats to humanity. But take common-sense morality, the things you would likely be advocating for already, and you already have a good playbook for the future. Examples might include:
- teach your kids good values
- make governmental and educational institutions better
- expand human rights
- improve collective decision making
- build general resilience and sustainability
In these ways we shape the present, but also the far future, as we continue to realize these benefits for many generations to come.
I've made the case so far that, unless you place a social discount rate on the future, most of the 'expected value' of humanity lies in the future. That is, most good lives (in quantity, not necessarily quality, though it could be both) are yet to be lived if humanity survives for a very long time. If you buy this premise, then there is quite a bit of moral significance to affecting the long-term future. And this doesn't necessitate sacrificing the present for the sake of the future; many of the same actions benefit both. How can we affect a future that is so far away? First and foremost, we could explore and mitigate the existential risks that threaten to end humanity. Although it is hard to predict what the very far future will look like, there is no doubt that the choices we make now have a direct effect on it.
When I think about the case for the very long term, I have an immediate gut reaction telling me to be distrustful. If we don't socially discount the future, we risk neglecting lives that can be improved now in favor of improving lives that don't exist yet, which leaves me with a bitter taste in my mouth. Yet consider this scenario:
Suppose we are deciding what to do with nuclear waste. You have the ability to destroy it with the press of a button, but doing so may have detrimental effects on the button presser. Or you could bury the nuclear waste, which is safer now but passes significant harm on to future populations for hundreds of years.
If we socially discount the future, then burying the nuclear waste looks better than destroying it. Reframing the problem this way leaves the same bitter taste, only now the problem is having a social discount rate at all.
It becomes very much a question of how much we should value future lives. Maybe we can find a discount rate that feels like a fair tradeoff in this scenario. But committing to any social discount rate above zero means accepting edge cases where one life now is worth some huge number of lives (hundreds, thousands, millions) in the very far future. Toby Ord points out the odd result of accepting a social discount rate of 1% per year (i.e., your welfare next year is only 99% as important as your welfare this year): it would imply that the welfare of Tutankhamun was more important than that of all 7 billion humans alive today.
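Ord's point is easy to check with a rough back-of-the-envelope calculation. The sketch below assumes Tutankhamun lived about 3,300 years ago (a figure I'm supplying for illustration, not taken from Ord) and compares the implied weight of his welfare against today's population:

```python
# Rough sanity check of the 1%-per-year discounting example.
# Assumption (not stated above): Tutankhamun lived roughly 3,300 years ago.
discount_rate = 0.01      # welfare next year counts 99% as much as welfare this year
years_ago = 3300
population_today = 7.5e9

# Under exponential discounting, welfare t years in the past is weighted
# by (1 - discount_rate) ** (-t) relative to welfare today.
weight_of_one_ancient_life = (1 - discount_rate) ** (-years_ago)

print(f"weight of one life 3,300 years ago: {weight_of_one_ancient_life:.2e}")  # ~2.5e14
print(f"people alive today:                 {population_today:.2e}")            # 7.5e9
print(weight_of_one_ancient_life > population_today)                            # True
```

On these assumptions, one ancient life outweighs everyone alive today by a factor of tens of thousands, which is exactly the odd result Ord is pointing at.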
Intuitively, the welfare of a baby born in 2030 should not be intrinsically less valuable than that of a baby born in 2020, yet this is what social discounting implies. One might hold a "person-affecting" view and argue that this person isn't born yet, so there is no one being harmed. Or that since their birth hasn't happened yet, they are harmed less. But discounting merely possible people just because they don't exist yet is like refusing to sign a bill or consider a new healthcare system because it is only a merely possible healthcare system and doesn't exist yet. This sounds ridiculous because it is! Unless, that is, these possible people won't actually exist due to some existential catastrophe. In that case you should discount future welfare by the chance that we (humanity) are not around to realize it.
One of the dissatisfying implications of long-termism is the delayed benefit. Not only is most of its potential unrealized until long after you are dead, but in the case of preventing early human extinction, success doesn't make the world look very different at all (besides some decaying thankfulness for a couple of generations after). I have no concept of what it was like to live at the height of the Cold War, and while I am thankful the world was not irreparably set back, that cultural memory fades as generations pass. For these reasons, long-termism is not a particularly socially appealing stance (among the living, as opposed to the not yet living) and does not rate highly in emotional valence. And that is exactly why it should be investigated further. Not only that, but future generations should be represented: they don't get a ballot to vote with, and yet they have a dog in the fight.
Long-termism still might not sit right with you. Maybe you believe it sounds good on paper but is less palatable in practice. On this question, though, I believe rational thought beats out moral intuition in shaping our ethics. In general, moral intuition per the historical record seems quite suspect: Aristotle, who spent a great deal of time thinking about these things and who was progressive and ahead of his time in many areas, endorsed slavery. Nick Beckstead's thesis contends that if there was pervasive and biased moral error in the past, it is likely that the same holds true in the present.
Here are some of the specific moral biases that could be affecting our judgment. Even though we are framing these biases in terms of long-termism, they are certainly interesting enough to stand on their own.
Scope insensitivity - People fail to differentiate between harms/benefits that are 1x, 10x or even 100x greater than each other. For example, a study found that people’s willingness to pay to save 2,000 vs. 20,000 vs. 200,000 birds from drowning in uncovered oil ponds was essentially the same ($80, $78, $88). For this reason we are unlikely to fully appreciate the value of many future generations.
Proportionality reasoning - Similar to the above: people strongly prefer helping a fixed number of people when that number is a larger proportion of the group in need. In one study, people much preferred saving the lives of 4,500 people in a refugee camp of 11,000 over saving 4,500 in a camp of 250,000. An example from everyday life might be waiting in line to save $50 on a new pair of kicks, but not waiting in line to save $50 on your plane ticket to Tokyo. This thought process seems understandable, but it is ultimately misguided and irrational, especially when considering something as non-trivial as human life.
Identifiable victim bias - This one seems related to the previous two and is best described by a quote commonly attributed to Joseph Stalin, “A single death is a tragedy; a million deaths is a statistic.” There is probably evolutionary groundwork at play here where it seems reasonable to confirm your beneficiary before you help them. But how much should we let this affect our judgment? Again we risk underestimating future lives because they don’t have faces yet.
Probability neglect - Just as we can be insensitive to changes in scope, when dealing with very low risks we are not sufficiently sensitive to changes in the probability of the outcome. People are willing to pay about the same amount for insurance against risks of 1/100,000, 1/1,000,000, and 1/10,000,000. Similarly, the value of reducing an already small probability of existential risk is probably underappreciated (see the short sketch after this list).
Bias toward the near - People are naturally inclined to prefer benefits sooner rather than later. But this preference isn't consistent over time. For example, people prefer $10 this month to $15 next month, yet prefer $15 in 13 months to $10 in 12 months. This bias has a clear effect on our attitude toward the prospect of benefitting and creating value in the far future.
Bias toward overconfidence - Optimism has been an incredible tool for humanity, but we (especially Americans) have managed to turn productive optimism into brash overconfidence. What they say is true: in one well-known study, 93% of American drivers believed they were more skillful than the median driver. Given that we are already dialed toward overconfidence, it is slightly worrying how we might interpret the value of reducing global or existential risk.
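To see why paying the same premium at all three risk levels (the probability-neglect example above) is odd, here is a minimal sketch of what willingness to pay would look like if it simply tracked expected loss. The $1,000,000 loss figure is a hypothetical chosen for illustration, not a number from the studies:

```python
# Illustrative only: if willingness to pay tracked expected loss, premiums
# for risks that differ by 100x should also differ by 100x.
loss = 1_000_000  # hypothetical size of the harm being insured against
for probability in (1e-5, 1e-6, 1e-7):
    expected_loss = probability * loss
    print(f"risk {probability:.0e}  ->  fair premium ~${expected_loss:.2f}")

# risk 1e-05  ->  fair premium ~$10.00
# risk 1e-06  ->  fair premium ~$1.00
# risk 1e-07  ->  fair premium ~$0.10
# People in the studies were willing to pay roughly the same at all three levels.
```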
With our moral biases, we may end up concluding that saving 500 lives is as good as (or maybe better than) saving 5,000 lives. We may feel like we're getting an emotional deal because the proportion is just right. Or because those lives are saved now rather than in 12 months. Or because our emotional reaction would be the same for 500 vs. 5,000 vs. 50,000 lives. But this isn't about getting the best emotional bang for your buck. We have an obligation to consider human lives each in their own respect. The option of doing more good despite a lesser emotional reward is currently underrated, which is strange, because quantitatively you are doing more good. So do what you can to make those cold, unfeeling numbers real!
At the end of the day we have to make tough allocation decisions that have all too real effects. But accepting the argument for the long term doesn't mean we throw the baby out with the bathwater. We can carefully dispose of the bathwater, save the baby, and protect future babies from being thrown out! As mentioned above, many of the practical solutions we come up with today to solve contemporary issues will reap benefits far into the future. In fact, it seems difficult to come up with short-term, single-generation problems that do not have long-term potential. In this way long-termism does not demand tradeoffs; it should be used as a tool to compare solutions, reframe problems, and help coordinate and direct our future.
Single-sentence refutation: Despite a passionate case seemingly written by a person from the future, the lives of future people are “extra” and should be given diminished weight (despite the potentially bad fringe cases) because our primary concern should be making lives better rather than making better lives.