A Discussion on Transhumanism Through the Lens of Sapiens
The book Sapiens, by Yuval Noah Harari, describes the ways in which resources, geography, evolution, and economics have shaped societies throughout history. It’s a tragic tale that often shows how we have inadvertently conned ourselves into raw deals, allowing countless members of the human race to experience significant hardship not through some grand conspiracy, but through short-sightedness, naivety, and greed.
One excellent example given is the way in which wheat can be seen as having ‘domesticated’ humans. We think of humans as domesticating animals, but what was it that persuaded us to give up our freedom in exchange for comfort? We settled in one spot to be near our wheat farms. We gave up our hunter-gatherer natures, often to the significant detriment of our own health and happiness. Meanwhile, wheat flourished as a result. A symbiotic relationship or a parasitic one? Wheat tricked us into working for it, luring us onto the rocks like a siren with its convenient carbohydrates.
In light of all this, the question I want to ask is whether transhuman technologies will make us freer… or just create entirely new prisons.
Behavioral Economics and Transhumanism
We have made this mistake time and time again – exchanging long-term health for short-term convenience. Often technology is responsible. The example that Yuval gives is that of email. Once upon a time, writing and delivering a handwritten letter took a long time and represented a fair amount of labour and effort. When email came along, it reduced this effort such that we could accomplish the same thing in a fraction of the time – and you might understandably expect that this would significantly decrease workloads.
But of course, demand increased to meet the supply. Now that we can send a message in minutes rather than days, we are expected to send orders of magnitude more correspondence. We typically send ten or twenty emails in a day, and often find ourselves still answering them come evening.
This is a new lens through which we can view transhumanism. By looking at the history of technology’s influence on our workloads and our happiness, we can speculate as to the impact of becoming faster, harder, stronger, and smarter.
Force Multipliers
Email is a force multiplier. Computers are force multipliers. These are tools that allow us to significantly increase our output without any corresponding increase in effort or time investment. We can send 100 emails with the time and effort it takes to send one piece of snail mail.
But what happens when the force multiplier is built in? What happens when we upgrade the wetware, such that a person can type at three times the speed, for twice as long? What happens when we can travel on foot at three times the speed?
The obvious and somewhat pessimistic view might be to assume that we will be expected to do so much more. With great power comes great responsibility. And lots of responsibilities at that.
If you could run at double speed, you’d be expected to get places much faster and people would be less sympathetic when you were running late. If you could think faster, then you’d be expected to complete twice the amount of work.
If we could all live longer, we’d need significantly greater resources and we’d put much greater strain on the economy. What unforeseen consequences could arise from this?
The fantasy of the superhero only really holds appeal because the hero alone has the powers. If everyone has the powers, they become less special and the appeal is lost.
Your future transhuman abilities seem exciting now only because you don’t have them yet. In reality, they would quickly become mundane unless you were the only person who possessed them.
Unexpected Psychological Consequences
Even if you were the sole owner of incredible superpowers, though, you might still find that they had unexpected consequences, particularly with regard to your psychology. Yuval suggests that someone ‘amortal’ (immortal in the sense that they won’t die of natural causes but could still be killed in an accident) might become far more cautious and averse to risk. They have the potential to live forever, so an early death would be all the more devastating.
I’ve always thought something similar about Hiro’s character in the old TV series Heroes. He could teleport anywhere at will and even travel through time. I’ve often fantasized about this power – about how I would wash in the morning by appearing under a waterfall and then dry off via a trip to the Sahara Desert.
But I’ve also often wished I could teleport when I’ve been a long way from home. And when I’ve been in awkward social situations. Were I to do this regularly, I might find myself becoming incredibly lazy and never walking anywhere – perhaps even agoraphobic, or extremely antisocial. Imagine if you could appear next to your cake cupboard instantly at any time. How much harder would it be to avoid snacking? With the best will in the world, all your biochemistry would be pushing you to abuse this power. It simply didn’t evolve under these superhuman conditions.
I’m not suggesting that teleportation is likely to be on the cards even for our transhuman inheritors. But the point is, you can’t just ‘add’ abilities without them having far-reaching implications for our psychology, our economy, and our society.
Does Technology Make Us Happier?
Another grim conclusion that Yuval draws is that greater resources or abilities very rarely increase our happiness. A caveman was just as happy as a modern-day billionaire because they didn’t know any different and their bodies and psychology had adapted to their circumstances. Unhappiness, according to many psychologists, comes from not having what you feel you could have.
In fact, even the most fortunate members of today’s society might actually be less content than the least fortunate hunter-gatherers. Why? Because – thanks to wheat – we have been taken out of our natural habitats. Our urban lifestyles are unable to fulfil the deep emotional needs that we still hold. Thus many of us feel alienated, bored, lonely, overworked, and ultimately without purpose.
The question: for all our progress and technological discoveries, are we any happier than we were before? And if not, why would transhuman technology be any different?
I feel that this question is at the heart of countless political, religious, and economic debates: do we embrace what we have now, learn to be comfortable, and stay as we are? Or do we push forward and seek out new opportunities and adventures, even at the expense of individuals?
I see the story of Adam and Eve as the perfect metaphor for this. To me, a transhumanist can paint Adam and Eve as heroes. They have everything they could possibly wish for, in a world of complete bliss. God tells them there is one experience that they can’t sample. What kind of inquisitive human wouldn’t want to go straight for that? Isn’t that sense of pushing boundaries, of pioneering, of exploring, precisely what also drove us to chart the globe, and to travel to the moon? If that apple represents knowledge, discovery, and risk… then it serves as a perfect metaphor for the transhuman technologies we are dabbling with now. Heck, it’s even called the tree of knowledge (of good and evil… but still).
Then again, you could argue that God in this story was right. Has the world gotten better since we pursued new technologies? Has knowing more made us happier? Or have we destroyed the planet, alienated ourselves from nature, and ultimately embraced empty lives in pursuit of material goals?
If so, is the possibility of where we might end up – and the freedom to make that choice – worth the damage we have caused along the way?
Or would we have been happier if we were still hunter-gatherers? I’m not making the case for either viewpoint here or making a comment on the story itself; I’m simply drawing the parallel.
Where you stand on this debate will likely predict how you feel about transhuman endeavors. Not to mention your political leanings (left- vs. right-wing politics can essentially be boiled down to safety vs. freedom), or of course your religious views.
If you don’t have an opinion, then it bears some introspection.
Closing Comments
Where do I stand on all this? Well, to slightly dodge the question, I don’t necessarily see this as a black-and-white debate.
I will say that on the whole… I just want to be able to run really fast. And exploring and trying out new things just excites me more. But ultimately I want those things because they promise more freedom. And this book has made me wonder if perhaps that’s short-sighted.
That said, looking at most technologies, it seems that they first cause trouble, before we eventually find a happy balance and they start setting things right.
Look at cars, for instance. They let us get around, but they cause a lot of pollution. Eventually, the hope is that we’ll have much safer self-driving cars that run on electricity and produce no emissions. At that point, we’ll have found a way to enjoy the freedom that cars offer without some of the negative consequences. If all technology follows this trajectory, then we might expect transhuman technologies to result in social upheaval… followed by universal benefits.
Maybe a techno-utopia, where technology and nature live side by side, is waiting for us in the distant future.
Moreover – and as I discussed in my dissertation on transhumanism – it isn’t fair to paint all transhuman technologies with the same brush. A brain implant is very different from an exosuit, which is very different from genetic engineering.
We need to assess each of these technologies on its own merits. But the lesson we can take from Yuval’s research and ideas is simply to expect the unexpected. Recognize the behavioral economics at play, and try to think about the long, long, long term.
Yuval ends the book by saying that we need to ask ourselves: “Who do we want to become?” In short: if we’re going to take this whole transhumanism lark seriously, then we owe ourselves some deep introspection. History is likely to be our best guide when it comes to guesstimating how changing our biology might alter our future.
This is the question that I plan to address in an upcoming post. But spoiler: I am staunchly against the idea of altering our biochemistry as a way to achieve lasting happiness.
(Note: I will be reading ‘Homo Deus’ next; I suspect it has some points to make on this subject!)
I have a couple of chapters of Sapiens left (coincidentally, I am reading it for a discussion group).
I do not think he goes over much that I did not see, in a more fun format, years ago in “The Ascent of Man” and “The Science of Discworld II: The Globe”. Even his description of “inter-subjective” ideas existing in many minds at once is not much better than “emergent properties” back when chaos theory was super cool in the ’90s.
https://www.goodreads.com/book/show/9839718-the-ascent-of-man
https://en.wikipedia.org/wiki/The_Science_of_Discworld_II%3A_The_Globe
The worst thing is that, so far, he basically does what C. S. Lewis describes in “The Abolition of Man”: he debunks everything as made up (and thus meaningless) and THEN inserts value judgments about things like war, cruelty, and suffering, which he really can’t base on anything but preference, since he rejects meaning and Objective Truth, and thus the Morality that would forbid those things… because “There is nothing a man can not do when he accepts that there is no God”, since nothing is then forbidden.
If I actually followed the book to its logical end, I’d be a nihilist or something.