Present or future need – which should be served?

September 9, 2022

Looks like I always come back to dragonflies at this time of the year, an unending fascination with their beauty and evolutionary prowess – look at the size of those eyes alone. Dragonflies don’t sting, bite, or pollinate – they simply eat bugs that might otherwise do harm, an all-important evolutionary role!

I had written about the biological facts of the species here some years ago and provided somewhat sarcastic musings when looking at them last year.

This time around they reminded me of aliens, no surprise given that my mind was preoccupied with thoughts about potential scenarios for humanity’s future, the colonization of space included. Looks like I always come back to politics as well. At this time of year, or any other time, come to think of it. Missed it? “No,” mumble the honest among you, “but we did miss the photography.” Oh well.

As is often the case when I learn something new, it all of a sudden pops up everywhere, after decades of (my) ignorance. So it was when I encountered the concepts of Effective Altruism (EA) – a morally inspired way of doing good in the most rational, effective and ambitious way – and its adjoined movement of Longtermism – affecting our species’ survival by economic, scientific and political action that reduces existential risk to humanity, protecting future generations. The New York Times, The New Yorker, Vox, and Salon are suddenly all reporting on it (and are referenced below).

Let us assume we all agree that doing good is desirable, the morally right thing to do. Why not do it in a fashion that is most effective, literally yielding the most bang for the buck? How would we know how to do that? Rational analysis of the evidence: what amount does it take to save a life or relieve suffering, and by what means is that reliably accomplished? It became clear very quickly that helping people in developing nations, particularly in Africa, saved more lives per dollar, and that engaging in projects there that use donations cost-effectively saved more lives overall. Sounds good, right? Particularly when we know that empathy is often reserved for those who most resemble us (I STILL can’t get over how Ukrainian refugees are treated by European nations this year, compared to their Black or Middle-Eastern counterparts, for example, with my full solidarity to all) instead of redistributing our wealth to those most in need, far away and unfamiliar.

The Effective Altruism movement was started by Toby Ord and philosopher Will MacAskill in 2009, with a group called Giving What We Can promoting a pledge whose takers commit to donating 10 percent of their income to effective charities every year. Not only that – people were encouraged to choose high-paying professions or jobs instead of hands-on occupations, so that they could donate more. (Be a bitcoin speculator, not a country doctor!) Multiple smaller organizations worked towards the same goals, soon to be joined by multi-billionaires donating to the causes: estimates are that the movement has roughly $46 billion at its disposal, an amount that has grown by 37 percent a year since 2015. (A detailed, sympathetic overview of the evolution of the movement can be found here.)

Fighting global poverty and evaluating the charities that commit to that fight have been to some extent superseded by a recent focus on protecting lives that do not yet exist, concentrating on the long term. The alleviation of present suffering is eclipsed by worries that we, as a species, might not have a future at all. At least that is the perspective held by the many extremely wealthy donors, tech bros included, and MacAskill himself, all of whom have led Longtermism from obscurity to relative power. (Elon Musk linked to MacAskill’s new book, “What We Owe the Future,” with the comment, “Worth reading. This is a close match for my philosophy.”) Longtermists are eager to invest in projects that reduce the risk for humanity to become extinct and increase the possibility for trillions of future humans to be born and colonize other stars. Indeed, they are also committed to transhumanism, believing with its prominent proponent, Nick Bostrom, that we can create digital people living in vast computer simulations millions or billions of years in the future. Yes, no kidding.

The main threats to our future are assumed to be global pandemics, potentially created by our very own bad-actor scientists, nuclear extinction (note: not climate change) and, first and foremost, Artificial Intelligence (AI). These threats cannot be faced with simple evaluations of where to best spend limited resources. They require political solutions across the board, and they entail unknown or unknowable risks. We don’t really know if our interventions will make things better or worse. (I don’t know enough about how dangerous AI might indeed be – I do acknowledge that scores of people unfamiliar with nuclear power ended up with radiation poisoning, just one example that lack of technological knowledge can have horrid consequences.) Here is a warning from a thoughtful perspective just last week.

We might quibble, then, whether it’s better to save millions of people now or devote our resources to saving unimaginably large numbers later – or we might take a deeper look at what EA and Longtermism actually entail.

Private compassion – even when it provides organized distribution of billions of dollars – is a band-aid for wounds caused by a system that lacks societal and political solidarity. If we do not change the modes in which resources generally are distributed, we are forever looking for remedies that simply patch up the most grievous harm. If wealth is generated socially but appropriated privately, no amount of empathy will suffice to protect most of humanity. And the more conspicuously we demonstrate our compassion the more we will feel we have done our part, rather than tackling the more complicated efforts to change a structurally unjust system. Compassion IS important, but it is no replacement for political advocacy.

Longtermism is a whole different kettle of fish, something we need to be aware of given its increasing influence on businesses and even governments. (Ref.) Proponents, as mentioned above, often adopt transhumanist ideals, the hope to reengineer humanity with brain implants and life-extension technologies, making post-humans that are “far superior.” And speaking of superiority: one of the existential risks that longtermists fear is “dysgenic pressures,” whereby less “intellectually talented” people (those with “lower IQs”) outbreed people with superior intellects. (Ref.) Straight out of classical Eugenics teachings. The next logical step, then, is to save not the poor in developing countries (as EA proposed) but to transfer wealth to already rich nations, since they are more likely to provide innovations that could help with technological advances and space travel. And these advantaged nations should also fight underpopulation by focusing on increasing birthrates (of the “right people,” mind you) because more minds imply more potential innovations.

It gets worse. Robin Hanson, for example, an economics professor affiliated with the Future of Humanity Institute, where many of these ideas are hatched, believes, like many longtermists, that in the event of a civilizational collapse humanity will have to re-enact the stages of our historical development. In order to facilitate that evolution, he suggests we should create refuges — e.g., underground bunkers — that are continually stocked with humans. But not just any humans will do:

“if we end up in a pre-industrial phase again, it might make sense to stock a refuge [or bunker] with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right. Perhaps such people could be rotated periodically from a well-protected region where they practiced simple lifestyles, so they could keep their skills fresh.”

Possessive colonial mind-set, anyone?

I guess what I am trying to say today can be summarized thus: whenever you think, hey, smart altruistic giving is a good thing, or protecting humanity from risks of extinction is desirable, think further. Are the ways these things are advertised based on something much darker? Are they effective agents of change or actually tools to leave the status quo of distributions of power and wealth mostly untouched? Are they making us feel good, and thus complacent? Are they expressions of grandiosity to curate future lives? Food for thought. Provided by time to read on vacation!

And here is The Dragonfly by Josef Strauss.

See, you got your photos!


