Yehuda Yudkowsky, 1985-2004

Background for non-transhumanists:

Transhumanists are not fond of death. We would stop it if we could. To this end we support research that holds out hope of a future in which humanity has defeated death. Death is an extremely difficult technical problem, to be attacked with biotech and nanotech and other technological means. I do not tell a tale of the land called Future, nor state as a fact that humanity will someday be free of death – I have no magical ability to see through time. But death is a great evil, and I will oppose it whenever I can. If I could create a world where people lived forever, or at the very least a few billion years, I would do so. I don’t think humanity will always be stuck in the awkward stage we now occupy, when we are smart enough to create enormous problems for ourselves, but not quite smart enough to solve them. I think that humanity’s problems are solvable; difficult, but solvable. I work toward that end, as a Research Fellow of the Machine Intelligence Research Institute.

This is an email message I sent to three transhumanist mailing lists, and a collection of emails I then received, in November of 2004. Some emails have been edited for brevity.

Update, at bottom, added May 2005.


Date: Thu Nov 18 22:27:34 2004
From: Eliezer Yudkowsky                    


My little brother, Yehuda Nattan Yudkowsky, is dead.

He died November 1st. His body was found without identification. The family found out on November 4th. I spent a week and a half with my family in Chicago, and am now back in Atlanta. I’ve been putting off telling my friends, because it’s such a hard thing to say.

I used to say: “I have four living grandparents and I intend to have four living grandparents when the last star in the Milky Way burns out.” I still have four living grandparents, but I don’t think I’ll be saying that any more. Even if we make it to and through the Singularity, it will be too late. One of the people I love won’t be there. The universe has a surprising ability to stab you through the heart from somewhere you weren’t looking. Of all the people I had to protect, I never thought that Yehuda might be one of them. Yehuda was born July 11, 1985. He was nineteen years old when he died.

The Jewish religion prescribes a number of rituals and condolences for the occasion of a death. Yehuda has passed to a better place, God’s ways are mysterious but benign, etc. Does such talk really comfort people? I watched my parents, and I don’t think it did. The blessing that is spoken at Jewish funerals is “Blessed is God, the true judge.” Do they really believe that? Why do they cry at funerals, if they believe that? Does it help someone, to tell them that their religion requires them to believe that? I think I coped better than my parents and my little sister Channah. I was just dealing with pain, not confusion. When I heard on the phone that Yehuda had died, there was never a moment of disbelief. I knew what kind of universe I lived in. How is my religious family to comprehend it, working, as they must, from the assumption that Yehuda was murdered by a benevolent God? The same loving God, I presume, who arranges for millions of children to grow up illiterate and starving; the same kindly tribal father-figure who arranged the Holocaust and the Inquisition’s torture of witches. I would not hesitate to call it evil, if any sentient mind had committed such an act, permitted such a thing. But I have weighed the evidence as best I can, and I do not believe the universe to be evil, a reply which in these days is called atheism.

Maybe it helps to believe in an immortal soul. I know that I would feel a lot better if Yehuda had gone away on a trip somewhere, even if he was never coming back. But Yehuda did not “pass on”. Yehuda is not “resting in peace”. Yehuda is not coming back. Yehuda doesn’t exist any more. Yehuda was absolutely annihilated at the age of nineteen. Yes, that makes me angry. I can’t put into words how angry. It would be rage to rend the gates of Heaven and burn down God on Its throne, if any God existed. But there is no God, so my anger burns to tear apart the way-things-are, remake the pattern of a world that permits this.

I wonder at the strength of non-transhumanist atheists, to accept so terrible a darkness without any hope of changing it. But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year. Convert the units, time to life, life to time. The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute. This essay took twenty thousand lives to write. I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

Yehuda’s death is the first time I ever lost someone close enough for it to hurt. So now I’ve seen the face of the enemy. Now I understand, a little better, the price of half a second. I don’t understand it well, because the human brain has a pattern built into it. We do not grieve forever, but move on. We mourn for a few days and then continue with our lives. Such underreaction poorly equips us to comprehend Yehuda’s death. Nineteen years, 7053 days, of life and memory annihilated. A thousand years, or a million millennia, or a forever, of future life lost. The sun should have dimmed when Yehuda died, and a chill wind blown in every place that sentient beings gather, to tell us that our number was diminished by one. But the sun did not dim, because we do not live in that sensible a universe. Even if the sun did dim whenever someone died, it wouldn’t be noticeable except as a continuous flickering. Soon everyone would get used to it, and they would no longer notice the flickering of the sun.

My little brother collected corks from wine bottles. Someone brought home, to the family, a pair of corks they had collected for Yehuda, and never had a chance to give him. And my grandmother said, “Give them to Channah, and someday she’ll tell her children about how her brother Yehuda collected corks.” My grandmother’s words shocked me, stretched across more time than it had ever occurred to me to imagine, to when my fourteen-year-old sister had grown up and had married and was telling her children about the brother she’d lost. How could my grandmother skip across all those years so easily when I was struggling to get through the day? I heard my grandmother’s words and thought: she has been through this before. This isn’t the first loved one my grandmother has lost, the way Yehuda was the first loved one I’d lost. My grandmother is old enough to have a pattern for dealing with the death of loved ones; she knows how to handle this because she’s done it before. And I thought: how can she accept this? If she knows, why isn’t she fighting with everything she has to change it?

What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another as you watched, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope. Death is not a distant dream, not a terrible tragedy that happens to someone else like the stories you read in newspapers. One day you’ll get a phone call, like I got a phone call, and the possibility that seemed distant will become reality. You will mourn, and finish mourning, and go on with your life, and then one day you’ll get another phone call. That is the fate this world has in store for you, unless you make a convulsive effort to change it.

Since Yehuda’s body was not identified for three days after he died, there was no possible way he could have been cryonically suspended. Others may be luckier. If you’ve been putting off that talk with your loved ones, do it. Maybe they won’t understand, but at least you won’t spend forever wondering why you didn’t even try.

There is one Jewish custom associated with death that makes sense to me, which is contributing to charity on behalf of the departed. I am donating eighteen hundred dollars to the general fund of the Machine Intelligence Research Institute, because this has gone on long enough. If you object to the Machine Intelligence Research Institute then consider Dr. Aubrey de Grey’s Methuselah Foundation, which hopes to defeat aging through biomedical engineering. I think that a sensible coping strategy for transhumanist atheists is to donate to an anti-death charity after a loved one dies. Death hurt us, so we will unmake Death. Let that be the outlet for our anger, which is terrible and just. I watched Yehuda’s coffin lowered into the ground and cried, and then I sat through the eulogy and heard rabbis tell comforting lies. If I had spoken Yehuda’s eulogy I would not have comforted the mourners in their loss. I would have told the mourners that Yehuda had been absolutely annihilated, that there was nothing left of him. I would have told them they were right to be angry, that they had been robbed, that something precious and irreplaceable was taken from them, for no reason at all, taken from them and shattered, and they are never getting it back.

No sentient being deserves such a thing. Let that be my brother’s true eulogy, free of comforting lies.

When Michael Wilson heard the news, he said: “We shall have to work faster.” Any similar condolences are welcome. Other condolences are not.

Goodbye, Yehuda. There isn’t much point in saying it, since there’s no one to hear. Goodbye, Yehuda, you don’t exist any more. Nothing left of you after your death, like there was nothing before your birth. You died, and your family, Mom and Dad and Channah and I, sat down at the Sabbath table just like our family had always been composed of only four people, like there had never been a Yehuda. Goodbye, Yehuda Yudkowsky, never to return, never to be forgotten.

Love,
Eliezer.


Date: Thu Nov 18 22:55:24 2004
From: Gina Miller                    

I am so sorry to hear of this news. I know what you are going through, Eliezer; when I was fourteen I lost my sister, who was 19. I always wonder what she would have become. I stood amid my family saying things like “God takes the good” or “God has something for her to do” and sensing their calming effect in the belief system that I did not embrace. I, too, was wide awake to the truth of the matter, and I wanted her here. To this day I am struck by the biological errors that mother nature has dealt to us, leading to disease and finality, and of course also the importance of theories and research needed to overcome these problems. As you know, my husband is currently undergoing chemotherapy, so I grapple with the frustration of advanced technologies such as nanotech and others not yet being readily available to avoid this type of suffering. The concern also grows when I see the fear well up in the general population when it comes to current advances such as stem cell research.

As far as the religious afterlife (or other) comfort goes, I think the problem is that no one has cheated death yet, so the meme continues to propagate (at least for some – well, probably most) as a way of suppressing the fear of the end. When we show that scientific immortality is possible, as opposed to religious immortality, there may be more for them to contemplate. I can’t wait for the day that death is not inevitable. I am deeply touched by your words and emotions and I completely validate you. The emotions won’t go away, but they will at least become more bearable over time. Perhaps what remains will help guide you even further down the road you have already begun to travel, with all of our future(s) in mind. I’d like to thank you for that. My condolences to you, as well as my constant support for humanity to move beyond this barrier.

Again, I’m so sorry, warmest regards

-Gina “Nanogirl” Miller


Date: Thu Nov 18 23:53:15 2004
From: Samantha Atkins

Eliezer,

I am extremely sorry for your [/our] loss. Death utterly sucks and humanity would be much better off never pretending otherwise.

When I was 14 my cousin who was 17 died. He was in a motorcycle accident and lingered for some hours. We were told to pray for his healing. We prayed. He died. “It must not have been God’s will” we were told. Or “we lacked sufficient faith” to pray effectively. I remember how twisted up inside I felt hearing these things, how helpless and how very angry. How could it be “God’s will” to snuff out this wonderful young life? How was it up to us to twist ourselves into pretzels somehow in order to save my cousin Virgil or anyone else who need not have been put through such suffering to begin with if a “just” and “good” God was in charge as we were always told? How could the people say these expected things and be all somber and then immediately pretend nothing had happened a mere few hours later? How could they not scream and cry out as I screamed and cried inside? Were they all zombies?

If more people stopped making pious or other excuses for the horror of death and disease, then we would finally move to end this suffering. When I was 14 I didn’t know it was even possible to do so. Many people still do not know it. We must make sure they know. Many more who do know act as if it isn’t so.

We must never forget our dead and never ever resign ourselves, those we care about or anyone to death. We must truly embrace life not by acceptance of death but by extending life endlessly and without limitation.

– samantha


Date: Fri Nov 19 15:08:40 2004
From: Adrian Tymes

It is probably no condolence that there will be many more – *far* too many more – before we finish implementing a way around it. But at least there is a way to calculate it: multiply this tragedy by the several million (billion?) between now and then, and one starts to appreciate the magnitude of the horror we seek to strike down.

I wonder if this is something like the fictional Cthulhoid horrors: a terror so deep and profound that most people can’t even acknowledge it, but just go ever so slowly insane trying to deal with it.


Date: Sat Nov 20 21:41:13 2004
From: Matus

Eliezer,

Thank you for your words, and I am sorry for the tragic event which has brought them out.

You have captured what makes me an extropian and I think you capture the motivating principle behind each of us here. We love life, and we want to live it. Whatever we all may disagree on, it is only the means to achieve this end. We love life, and we hate its cessation.

There is no greater horror or travesty of justice than the death of someone. All the intricacies of the universe can not compare to the beauty and value of a single sentient being.

I have seen enough death of friends and loved ones myself. I try to convince everyone who will listen to be cryonically suspended, on the premise that they want to live. But most grope for excuses not to, disguising their disregard for their own existence with appeals to mysticism or dystopian futures.

All ideologies prescribe these self-delusional condolences and practices; it can be no clearer than what Adrian said: a terror so deep and profound that most people can’t even acknowledge it, but just go ever so slowly insane trying to deal with it.

When faced with the death of a loved one, most people get through it by hiding reality, by doing whatever they can to *not* think about the obvious. Death is eternal and final, and when faced with such a thing people cannot come up with any answer that goes beyond self-doubt. To take the pain of death away, they must devalue life. One is faced with a choice: acknowledge that you love life and that death is abhorrent, be indifferent to life and thus indifferent to death, or despise life and welcome death. There are no other alternatives; the view of one precludes the inverse on the other. There seems to be an active effort to create and spread a nihilistic world view. Consider the Buddhist mantra of ‘life is suffering’, consider its widespread modern appeal, and then consider its negation, ‘death is joy’. Indeed, Nirvana is the absence of a desire for existence. This nihilistic movement is not acting volitionally; it is scared and confused and stumbling through philosophy. All it knows is that it doesn’t like death, and through its stumbling it comes to find that, to deal with that, it must not care about life. Socrates’ last words come to mind: “I have found the cure for life, and it is death.”

I think this is a major part of the reason we have such difficulty spreading our ideas and values. Why, in the very secular European part of the world, does cryonics have little to no support? If people accept our worldview, that life is good and technology can help us extend it indefinitely, then they must come to full terms with the finality and horror of death. That is what they have difficulty doing. I think at some level they know that it is the logical extension of their beliefs, and as such it manifests as a very negative, visceral emotional reaction to our ideas, because of our implied valuation of life.

But just as many of us here put up a great deal of money and effort for a non-zero chance of defeating our first death through cryonics, we need to acknowledge the non-zero possibility of doing something about past deaths. In this I am very fond of Nikolai Fedorovich Fedorov’s “The Common Task”. Even though it is derived from his religious background, I share its motivation, a deep appreciation for the intrinsic value of life, and its goal, bringing back the past dead with technology: the application of science to ‘resurrect’ the past dead. Is it possible? If it is, it should be our ultimate goal. Some here devote their efforts to the development of a singularity AI, and others toward defeating aging biologically; I devote my efforts to the great common task. It is my ultimate goal to find out whether it is possible, to learn everything I need to know to determine that, and more, and then to do it, one person at a time if necessary.

I can find no words to offer to ease that suffering; there are none, and it is not possible. I can only say that it is my life goal, and I think the goal of others, and eventually the goal of any sentient being who loves life, singularity AI or otherwise, to do what they can to accomplish this common task, if the laws of physics allow it.

Regards,
Michael Dickey
Aka Matus


Date: Thu Nov 18 22:27:41 2004
From: David Sargeant                    

I’m terribly sorry to hear about your brother. Your essay really touched me — it really pounds home what we need, need, NEED DESPERATELY to achieve, more than anything else in the world. I can’t even imagine the pain you must be feeling right now. I wish there was something I could do to help.


Date: Thu Nov 18 22:55:20 2004
From: Damien Broderick                    

Very distressing news, Eli. Sympathies. Indeed, `we have to work faster.’

Sorrowful regards, Damien


Date: Fri Nov 19 02:31:58 2004
From: Russell Wallace     

I’m so sorry.

I hadn’t heard of the Jewish custom you mention, last time I received such a phone call; but it has that quality of requiring explanation only once, and I’m going to act accordingly.

Someday, children won’t fully believe that things like this really happened. We’ll work towards the day when they don’t have to.

– Russell


Date: Fri Nov 19 03:58:17 2004
From: Olga Bourlin

Eliezer, I’m so sorry to hear this – there are never any real words of consolation.

For what it’s worth, my experience with people in my family who have died is – well, I have thought of them from time to time, of course (but have been surprised at how unexpectedly and powerfully these thoughts have been known to strike). And, also, I have dreamt of them – for decades – as if they never died.

The death that struck me the most was when my mother died. I was 40 years old then (she was 65), and I was “prepared” for her death because she had been an alcoholic for a long time – and yet, when she died it hurt so very much. I was completely unprepared for the emotional pain. At that time I was married to a man who played the piano, and he played Beethoven’s Piano Concerto No. 5 in E flat Op. 73 ‘The Emperor’ – 2nd movement (‘Adagio un poco moto’) over and over again. That particular movement – it’s so lovely and sad – something in that music let me just take in the experience and reflect about being human.

I cannot imagine how you must feel – losing a beloved younger brother. When I had my children (the two happiest days of my life, bar none) – I also realized that with the love I felt (and still feel) for them came a kind of vulnerability I never felt even about myself – the potential, incomprehensible pain I know I would feel if something were to happen to them. And I knew I would never have the “net” of religion to help break my fall.

Love,
Olga


Date: Fri Nov 19 15:08:25 2004
From: Kwame Porter-Robinson                    

My condolences.

As opposed to Michael Wilson, I say we shall have to work smarter.

Live well,

Sincerely,
Kwame P.R.


Date: Sat Dec 4 13:30:35 2004
From: Harvey Newstrom                    

I am not even going to try to say something helpful or profound. There is nothing anyone can say to help or to lessen the loss. This is a meaningless tragedy that too many of us have faced. A more extreme and sudden example of the human condition. And I hate it.

Harvey


Date: Fri Nov 19 15:08:42 2004
From: Keith Henson                    

How sad.

I really can’t add anything to your email to the list because I am in complete agreement.

My daughter lost two close high school friends, one just after he got back from visiting Israel, and I have lost both parents since becoming an exile.

Keith

PS. If you can, you should at least try for a cell/DNA sample.


Date: Sat Nov 20 04:05:52 2004
From: Kip Werking                    

Eliezer,

I just want to express my sympathy.

Your post to SL4 shocked me from my dogmatic slumber. If the universe conserves information, then your brother is still written in the fabric somewhere. The signal is just scrambled. Who is to say whether a posthuman will look into the stars and see his picture–or nothing?

But I prefer your attitude. On this subject, there is a danger of apathy–but also a danger of false hopes. The latter does not prevent me from supporting the mission of you or Aubrey. A sober account of the human condition has its advantages. For example, it can cure procrastination.

Please consider this an expression of my sorrow for your loss and solidarity with your cause.

Kip


Date: Sat Nov 20 21:41:17 2004
From: Nader Chehab

I’m really sorry to hear that. Some things truly happen when we least expect them. Your writings have been an invaluable source of insight for me and it saddens me to know that you lost a loved one. It is revolting that awful things can happen even to the least deserving. We really have to fix that one day, and sooner is better.

Yours,
Nader Chehab


Date: Fri Nov 19 01:32:50 2004
From: Extropian Agroforestry Ventures Inc.                    

When people who just might have been able to catch the extreme lifespan wave or upload their consciousness die in 2004, it is far more tragic than in 1974, when such was only a fanciful dream.

I too have lost people near to me who had a statistically better chance than even me to “make the cut”. My wife, at age 45 and a week, this March 21. Only after the fact did I fully realize that there was a conscious knowledge among those caring for her that “simply tweaking treatments would put her out of her misery and bring her peace through death”. I still do not forgive myself for not catching onto things … it was no problem to install a $10,000 baclofen pump, but no one would prescribe the anti-seizure meds that might have stopped the devastating seizures that reduced her to a barely conscious state during her last 2 months. I know death was never her wish.

I now have a friend and business partner in his 70’s who is in his last month due to late-detected mesothelioma, or asbestos-caused lung cancer. He too fought to the end. About 3 weeks ago, when I sent him a kg of hemp bud and a small packet of marijuana to ease his pain, he said “That should probably do me”, and that was the first time that he accepted that he had lost the battle.

Formal religions are like opiates in that they dull the mind to the urgency of defeating death as we know it. Atheism and agnosticism do put the onus on the individual to seize the moment and strive to extend, improve and sustain consciousness. In some ways religion has served some good purposes, but we are now mature enough to survive without this old crutch. Science as the new religion now has more hope to offer for eternal life than the comforting words of some prophet or other.

Morris Johnson


Date: Fri Nov 19 01:32:53 2004
From: Giu1i0 Pri5c0                    

Dear Eliezer,

I am so sorry, and I think I know how you are feeling. I felt the same when my mother died three years ago. I was already a transhumanist long before that, but had not been an active one previously: I just lurked on the lists. But that changed after my mother’s death: I felt that there was something that needed to be done, and now. My mother was 73, but Yehuda was 19. What a waste, what a cruel thing. I think the best you can do to honor the memory of Yehuda is to continue your work to accelerate the process of overcoming the biological limits of our species, defeating death, creating friendly superintelligences, merging with them, and moving on. The SIAI is your tribute to Yehuda’s memory and your own battle against death: continue to fight it bravely as you have done so far.

Giulio


Date: Fri Nov 19 06:19:25 2004
From: Amara Graps                    

> Goodbye, Yehuda Yudkowsky, never to return, never to be forgotten.
> Love,
> Eliezer.

Dear Eliezer,

Now you carry the traces of Yehuda’s life in your heart. Keep them sacred, remember him always. In time, the large hole that pains you will transform into something different. An extra source of strength to live every day fuller, stronger, better; so that the life you cherished will live through you and help you fight so that this doesn’t happen to anyone again. I hate death. We should never have to experience this. I’m so sorry about Yehuda.

Amara


Date: Fri Nov 19 22:42:58 2004
From: Hara Ra                    

Well, personally I am a cryonicist. I was appalled at the low number of extropians who have signed up.

If I ever get a chance to do something more about this, I will certainly tell the list about it.

Hara Ra (aka Gregory Yob)


Date: Sat Nov 20 21:41:43 2004
From: Kevin Freels                   

> What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope. Death is not a distant dream, not a terrible tragedy that happens to someone else like the stories you read in newspapers.

Take any century prior to this one. I often wonder if that isn’t exactly what happened with Alexander, Genghis Khan, or more recently, Hitler and Stalin. History is full of such people. They may have simply gone nuts after thinking this through and finding that there was nothing they could do and that life did not matter. Fortunately we are now on the verge of the ability to put an end to this. Now is the time to push forward, not give up.


Date: Fri Nov 19 01:32:44 2004
From: Psy Kosh                    

That is indeed awful. I’m sorry.

I guess what you do have though is the ability to say that you are indeed actually doing something about it, so do take what comfort from that that you can.

And again, I’m sorry.

Psy-Kosh


Date: Fri Nov 19 15:08:51 2004
From: Ben Goertzel                    

Wow, Eli … I’m really sorry to hear that …

As all of us on this list know, death is one hell of a moral outrage

And alas, it’s not going to be solved this year, not here on Earth anyway. Conceivably in 7-8 more years — and probably before 30 more, IMO. Let’s hope we can all hang on that long…

I have no memory more painful than remembering when my eldest son almost died in a car crash at age 4. Thanks to some expert Kiwi neurosurgery he survived and is now almost 15. Had he not survived, I’m not really sure what I’d be like today.

I know you’ll draw from this terrible event yet more passion to continue with our collective quest to move beyond the deeply flawed domain of the human — while preserving the beautiful parts of humanity & rendering the other parts optional…

At the moment my head is full of a verse from a rock song I wrote a few years back:

I’ve got to tell you something
Your lonely story made me cry
I wish we all could breathe forever
God damn the Universal Mind.

Well, crap….words truly don’t suffice for this sort of thing…

yours
Ben


Date: Fri Nov 19 16:11:04 2004
From: Aikin, Robert

You’re not going to ever ‘get over it’ so don’t bother deluding yourself that you might. You know what you have to do, so do it. Finish what you started. Stay healthy, be safe.


Date: Fri Nov 19 16:59:37 2004
From: Bill Hibbard

I am very sorry to hear about the death of your brother, Eliezer. Your reaction to redouble your efforts is very healthy. When my brother, father and mother died I also found it helpful to get plenty of exercise and eliminate caffeine.

My younger brother died of cancer in 1997. When he died he looked like a holocaust victim, and it occurred to me that if all the Americans dying of cancer were being killed by an evil dictator, our society would be totally mobilized against that enemy. Disease and death in general deserve at least that commitment. Both collectively, to support medical research and care, and individually, to get lots of exercise and eliminate tobacco (my brother’s kidney cancer was probably caused by his smoking) and unhealthy foods. My parents lived to 85 and 87, but their diseases were clearly linked to diet, smoking and lack of exercise. They could have lived longer and better with different habits.

I am with you, Eliezer, that it is maddening that so many people in our society cling to ancient religious beliefs that counsel acceptance of death and disease, and in some cases even counsel opposition to efforts to defeat death. What madness.

Sincerely,
Bill


Date: Fri Nov 19 22:19:21 2004
From: Thomas Buckner                    

I am sorry to hear this. Such a short life. Nineteen years is a blink, not enough time to learn much more than the rudiments of life. My daughter Heidi is a year older than he was.

George Gurdjieff, a very great Russian philosopher, said the human race needed a new organ, which he whimsically named the kundabuffer, and the purpose of this organ would be to remind us each minute of every day that we would die, that we had not time to squander.

My parents and grandparents are all gone. Almost all the optimism I once had for the human race is gone. At present, I see only one bright spot on the horizon. It is your work and that of the others in this community (I am only a kibitzer).

re: Your statement “What would it be like to be a rational atheist in the fifteenth century, and know beyond all hope of rescue that everyone you loved would be annihilated, one after another, unless you yourself died first? That is still the fate of humans today; the ongoing horror has not changed, for all that we have hope.” In a commencement speech last year, Lewis Lapham mentioned a French noblewoman, a duchess in her 80s, who, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuileries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said. “Now it’s certain. One day they’ll learn how to keep people alive forever, but I shall already be dead.”

Tom Buckner


Date: Sun Nov 21 23:55:10 2004
From: gabriel C

> I wonder if there was ever an atheist who accepted the full horror, making no excuses, offering no consolations, who did not also hope for some future dawn. What must it be like to live in this world, seeing it just the way it is, and think that it will never change, never get any better?

That would describe me, before I stumbled upon this list in 1999. Facing certain extinction, I was alternately terrified and depressed. I still am, but now with a tiny thread of hope. Otherwise I think I would be insane by now.


Date: Fri Nov 19 15:08:28 2004
From: MIKE TREDER                    

Eliezer,

I am deeply sorry to hear about your brother. The random cruelty of life knows no bounds. As you correctly suggest, the only rational response is to challenge the dreadful process called death and defeat it, once and for all. Sadly, that takes time — too much time for your brother, Yehuda, and too much time for my dear sister, Susie, who was struck down unexpectedly by cancer just a few years ago. Too much time, as well, for 150,000 more of our brothers and sisters who will die today, and tomorrow, and the next day.

Still, the transhumanist response is not simply to shake our heads and mourn, but to stand up in defiance. We aim to overcome death through human science and technology, and you and others have taken on that challenge directly. For that, we all should be grateful and supportive.

But your essay also accomplishes a different — and equally worthy — objective, which is to reach out and connect with others who suffer. This is the humanist response, to affirm that we are all in this together, that there is no God or deity either to revere or to blame. Death separates us, permanently (at least until we know that cryonic preservation and revivification can succeed), but in life we can come together to help each other.

Mike Treder


Date: Sat Nov 20 04:05:53 2004
From: Marc Geddes                    

My condolences to you Eliezer, over your loss.

It was only quite recently that I desperately urged you to ‘hurry’ in your work at Sing Inst. I was starting to feel the first signs of aging. But now I am again made aware of the horrendous loss of life occurring daily in this pre-Singularity world.

I called pre-Singularity existence ‘banal’ and ‘brutish’. We’ve received a sad reminder of the truth of this.

Not only am I saddened by the loss of life occurring, I’m absolutely furious. And the most maddening part of it is the fundamental irrationality of most of the human populace, who blindly rationalize aging and pointless death.

In the recent book published by ‘Immortality Institute’ I did my best to make the philosophical case for indefinite life span: my piece was ‘Introduction To Immortalist Morality’. We must all do our bit to try to educate others about the fundamental value of life, a value that is still not properly understood by most people.

Bruce Klein (Imm Inst founder) also recently lost his mother in an accident. There is a discussion on the Imm Inst forums and it might be valuable for Eliezer to go there.

The death of Yehuda shows that the universe just ‘doesn’t care’. It’s up to sentients to create the meaning of the world. We all hope for a successful Singularity, and we can’t imagine failure, but it could easily be the case that we’ll all be wiped out unless we make big efforts – the universe just doesn’t care.

I recently expressed real concern that the ‘window of opportunity’ for a successful Singularity seems to be closing. Time really is running out. We need to make greater efforts than we have been so far, or else I don’t think we’re going to pull through.

I can only urge all of you to do your bit to support transhumanist projects – biological life extension (short term) and FAI (longer term) must be the priorities. Please donate to the relevant organizations. Voss, Goertzel and Yudkowsky appear to be the only serious FAI contenders at this juncture. They need our support.

Marc Geddes


Date: Sun Nov 21 13:10:32 2004
From: Peter                    

I am sending you my condolences Eliezer on the death of your brother. I lost my first wife in an accident suddenly, she was 23. Like you I can only rage and weep that her beautiful singularity was lost, one among the millions who died on the day she did. Likewise Yehuda, one potentiality irretrievably missing from the human future.

I worked with the dying for many years and attended, in all, 122 deaths; all were special in their own way and all represented the dying of a light that had shone for a while.

Unlike you I am religious, but not to the extent of closing my eyes to the reality of loss and the evil that sometimes causes it. When my first wife died my grandfather said to me ‘Peter, dying is our fate, we can do nothing about it, but we can ask what this death enables me to do for the world that otherwise I might never have done’. All through the forty-five years since that death I hope her memorial has been the one I could give with the way I have spent my own life.

Peter


Date: Thu Nov 18 23:53:03 2004
From: Michael Roy Ames      

Dear Eliezer,

Thank you for telling SL4 about Yehuda. I am unhappy to read such an email. Right now you appear to be pretty fired up about doing something; your email was reminiscent of some of your earlier, more outraged writings. Do what you have to do to keep that fire burning. Experience has taught me that it is easy to become complacent; it is the default tendency. I participate in specific activities on a regular basis that force me to look at disease & death closely enough that my fire is stoked. It is a rare individual who can rely on rational thinking alone to maintain enthusiasm. Do what you need to do, and know that you can ask for help.

Your friend,
Michael Roy Ames


Date: Sun Nov 21 13:10:37 2004
From: Joe  

I feel your sadness, as I have lost loved ones too, though not as close as a brother. Anger and sadness sometimes lead one into action. So, I agree that there is nothing wrong with experiencing this type of pain. Since pain is uncomfortable, most of us attempt to alleviate that pain through various means. In the case of death, organized religions have their ways of doing this. As you indicated, this kind of escape is often counterproductive, because it supports a “do nothing” approach. However, if you think about how long humans have been able to comprehend death and the loss which occurs, compared with any technological advancement to fight death, you can get an appreciation for the role religion, and a belief in an afterlife, has played.

But I agree with you. The time has come that we need to move past acceptance of death (belief in an afterlife) into a mode of activism against it. We are just beginning to have the technology available so that we can make visible progress. You hit upon an excellent idea: a contribution, made in the name of a loved one who died, to an organization actively engaged in research to postpone or eradicate death is a very useful way to promote this progress.

Joe


Date: Mon Nov 29 17:03:47 2004
From: Danielle Egan                    

Eliezer,

I’m very sad to hear about your brother’s death. (Tyler sent out an email.) I respect you for putting your thoughts down on it, because so many times we start writing about it later and, like you say, by that point we are already moving on and can’t be honest. I want you to know that I am mad too that life ends in this way. When my grandma died recently at the age of 90, a few things really disturbed me: that she’d been dead for over 8 hours before I heard the news and I was just going through my life as usual, clueless that she had gone; that she died in an old age home, sick, with early stages of dementia, so there was no dignity in her last year of life; that because there is no dignity we impose it in the form of religious or funereal services and those kinds of things, and it’s too late to do a damn thing about it for them, but somehow people try to trick themselves into believing these things are done for the dead person; we do everything for ourselves, and really, what does that come to when we remain unfulfilled?

Most of all, though, death is such a horrible shock even when the person is old and has been sick and you’ve been preparing yourself. You can never prepare for something this abstract. It seems like such a terrible twisted crime when they are so young, like your brother. I want to offer you my condolences in the form of anger. I am angry right now too about his death, and it is a motivating thing. The corks are symbolic. Maybe you should keep one as a reminder to get angry and then continue on in opposition to the way we live.

Danielle

(Danielle adds: “Perhaps you could note that I am not a transhumanist, if you decide to include bylines with the letters. I think it’s important for transhumanists to understand that we don’t have to be of the same persuasion and ethos to have similar emotions around death.”)


Date: Sat Nov 20 21:41:29 2004
From: Mike Li

eliezer,

i’m sorry for your loss. beyond that, i don’t know what else to say. i’m too awkward and weak emotionally to offer any significant condolences in person. so, i just made my first donation of $699 (the balance that happened to be left in my paypal account) to the singularity institute. fight on, and know that i am with you.

-x


Date: Thu Nov 18 19:33:33 2004
From: Nick Hay
To: donate@singinst.org

Dear Singularity Institute for Artificial Intelligence, Inc.,

This email confirms that you have received a payment for $100.00 USD from Nick Hay.

Total Amount: $100.00 USD
Currency: U.S. Dollars
Quantity: 1
Item Title: Donation to SIAI
Buyer: Nick Hay
Message: For Yehuda.            

Christopher Healey, 11-19-04

Donation through: Network for Good
Amount: $103.00
Dedication: in memory of Yehuda                    

David R. Stern, 12-19-04
Check: $100
Comment: In memory of Yehuda


Date: Wed, 29 Dec 01:55:24 2004
From: Johan Edström
To: donate@singinst.org

Dear Singularity Institute for Artificial Intelligence, Inc.,

Johan Edström just sent you money with PayPal.

Amount: $50.00 USD
Note: In memory of Yehuda Yudkowsky      

Date: Mon, 17 Jan 12:41:11 2005
From: Christopher Healey
To: donate@singinst.org

Dear Singularity Institute for Artificial Intelligence, Inc.,

This email confirms that you have received a payment for $1,000.00 USD from Christopher Healey.

Total Amount: $1,000.00 USD
Currency: U.S. Dollars
Quantity: 1
Item Title: Donation to SIAI
Buyer: Christopher Healey                    
Message:
In memory of Yehuda Yudkowsky, and the other 11,699,999 who have died since.                    

Date: Fri Nov 19 15:08:44 2004
From: James Fehlinger                    

‘Edoras those courts are called,’ said Gandalf, ‘and Meduseld is that golden hall. . .’

At the foot of the walled hill the way ran under the shadow of many mounds, high and green. Upon their western side the grass was white as with drifted snow: small flowers sprang there like countless stars amid the turf.

‘Look!’ said Gandalf. ‘How fair are the bright eyes in the grass! Evermind they are called, simbelmynë in this land of Men, for they blossom in all the seasons of the year, and grow where dead men rest. Behold! we are come to the great barrows where the sires of Théoden sleep.’

‘Seven mounds upon the left, and nine upon the right,’ said Aragorn. ‘Many long lives of men it is since the golden hall was built.’

‘Five hundred times have the red leaves fallen in Mirkwood in my home since then,’ said Legolas, ‘and but a little while does that seem to us.’

‘But to the Riders of the Mark it seems so long ago,’ said Aragorn, ‘that the raising of this house is but a memory of song, and the years before are lost in the mist of time. Now they call this land their home, their own, and their speech is sundered from their northern kin.’ Then he began to chant softly in a slow tongue unknown to the Elf and Dwarf, yet they listened, for there was a strong music in it.

‘That, I guess, is the language of the Rohirrim,’ said Legolas; ‘for it is like to this land itself; rich and rolling in part, and else hard and stern as the mountains. But I cannot guess what it means, save that it is laden with the sadness of Mortal Men.’

‘It runs thus in the Common Speech,’ said Aragorn, ‘as near as I can make it.

Where now the horse and the rider? Where is the horn that was blowing?
Where is the helm and the hauberk, and the bright hair flowing?
Where is the hand on the harpstring, and the red fire glowing?
Where is the spring and the harvest and the tall corn growing?
They have passed like rain on the mountain, like a wind in the meadow;
The days have gone down in the West behind the hills into shadow.
Who shall gather the smoke of the dead wood burning,
Or behold the flowing years from the Sea returning?’

J. R. R. Tolkien, The Lord of the Rings
Book III, Chapter VI, “The King of the Golden Hall”

I am sorry.
Jim F.


Update: May 8th, 2005.

The day is May 8th, six months and one week after the final annihilation of Yehuda Nattan Yudkowsky. Today I am going to visit my little brother’s grave, with my family, to watch the unveiling of his Matzevah, the stone that is set in the ground to mark his grave. This is a warm day in Chicago, springtime, with trees blossoming, and a bright blue cloudless sky. Nature does not mark the passing of our dead.

We drive for an hour and arrive at the cemetery. The last time I was here, for my brother’s funeral, I choked up when I saw a sign with an arrow, to direct cars, bearing the hand-lettered name “Yudkowsky”. This time there is no sign, for Yehuda or anyone. There is no funeral in this graveyard today. There is only one cemetery employee with a map, to direct the visitors to graves. We drive to an unremarkable section of the cemetery. The last time I was here, there was a great crowd to mark this place, and a tent for the mourners, and rows of chairs. This time there is only grass, and metal plates set into grass. I could not have found this place from memory. I look around for landmarks, trying to remember the location.

I remember (I will never forget) when I came to this cemetery for my brother’s funeral. I remember getting out of the car and walking toward a van. I looked inside the van, and saw my brother’s polished wooden coffin. The box seemed so small. I didn’t see how my brother could fit in there. “What are you doing here, Yehuda?” I said to the coffin. “You’re not supposed to be here.” My grandfather, my Zady, came toward me then, and held me.

I remember (I will never forget) the phone call I got in Atlanta. My cellphone’s screen identified the calling number as my parents’ house. I said “Hello?” and my aunt Reena said “Eli -” and I knew that something was wrong, hearing aunt Reena’s voice on my home phone line. I remember having time to wonder what had happened, and even who had died, before she said “Your brother Yehuda is dead, you need to come home right away.”

That was the previous time. I don’t feel today what I felt then. There’s a script built into the human mind. We grieve, and then stop grieving, and go on with our lives, until the day we get another phone call. Probably one of my grandparents will be next.

I walk along the gravel path that leads to where my family is gathering, looking down at the metal plates set down by the side of the path. Rosenthal… Bernard… some plates are only names and dates. Others bear inscriptions that read “Loving husband, father, and grandfather”, or “Loving wife and sister”. As I walk along the path I see a plate saying only, Herschel, my love, and that is when my tears start. I can imagine the woman who wrote that inscription. I can imagine what Herschel meant to her. I can imagine her life without him.

How dare the world do this to us? How dare people let it pass unchallenged?

I stand by the foot of my little brother’s grave, as my relatives read Tehillim from their prayer books. The first time I came to this cemetery, I cried from sadness; now I cry from anger. I look around and there are no tears on my mother’s face, father’s face, uncle’s and grandparents’ faces. My mother puts a comforting hand on my shoulder, but there is no wetness on her face. Such a strange thing, that I’m the only one crying. Tears of sadness we all had shed, but tears of anger are mine alone. My relatives are not permitted to feel what I feel. They attribute this darkness to God. Religion does not forbid my relatives to experience sadness and pain, sorrow and grief, at the hands of their deified abuser; it only forbids them to fight back.

I stand there, and instead of reciting Tehillim I look at the outline on the grass of my little brother’s grave. Beneath this thin rectangle in the dirt lies my brother’s coffin, and within that coffin lie his bones, and perhaps decaying flesh if any remains. There is nothing here or anywhere of my little brother’s self. His brain’s information is destroyed. Yehuda wasn’t signed up for cryonics and his body wasn’t identified until three days later; but freezing could have been, should have been standard procedure for anonymous patients. The hospital that should have removed Yehuda’s head when his heart stopped beating, and preserved him in liquid nitrogen to await rescue, instead laid him out on a slab. Why is the human species still doing this? Why do we still bury our dead? We have all the information we need in order to know better. Through the ages humanity has suffered, through the ages we have lost our dead forever, and then one day someone invented an alternative, and no one cared. The cryonicists challenge Death and no one remarks on it. The first freezing should have been front-page news in every newspaper of every country; would have been front-page news for any sane intelligent species. Someday afterward humankind will look back and realize what we could have done, should have done, if only we had done. Then there will be a great wailing and gnashing of teeth, too late, all too late. People heard about Ted Williams on the news and laughed for ten seconds, and in those ten seconds they lost their husbands, their wives, their mothers, their children, their brothers. It’s not fair, that they should lose so much in so little time, without anyone telling them the decision is important.

I did talk to my family about cryonics. They gave me a weird look, as expected, and chose to commit suicide, as expected.

It is a Jewish custom not to walk upon the graves of the dead. I am standing in a path between two lines of graves. Some of my relatives, my uncle David and his children, are standing in the space next to Yehuda’s grave, where another grave will someday go. I think that if a filled grave is ominous, so too is land earmarked for a grave in the cemetery; like standing above a hungry mouth, waiting to be filled. When will we stop feeding our cemeteries? When will we stop pretending that this is fair? When will the human species stop running, and at last turn to stand at bay, to face full on the Enemy and start fighting back? Last Friday night my grandmother spoke to us about an exhibit she had seen on Chiune Sugihara, sometimes called the Japanese Schindler, though Sugihara saved five to ten times as many lives as Oskar Schindler. Chiune Sugihara was the Japanese consul assigned to Lithuania. Against the explicit orders of his superiors, Sugihara issued more than 2,139 transit visas to refugees from the approaching German armies; each visa could grant passage rights to an entire family. Yad Vashem in Israel estimates that Sugihara saved between 6,000 and 12,000 lives. “If there had been 2,000 consuls like Chiune Sugihara,” says the homepage of the Sugihara Project, “a million Jewish children could have been saved from the ovens of Auschwitz.” Why weren’t there 2,000 consuls like Sugihara? That too was one of the questions asked after the end of World War II, when the full horror of Nazi Germany was known and understood and acknowledged by all. We remember the few resisters, and we are proud; I am glad to be a member of the species that produced Sugihara, even as I am ashamed to be a member of the species that produced Hitler. But why were there so few resisters? And why did so many people remain silent? That was the most perplexing question of all, in the years after World War II: why did so many good and decent people remain silent?

For his shining crime, Sugihara was fired from the Japanese Foreign Ministry after the war ended. Sugihara lived the next two decades in poverty, until he was found by one of the people he had helped save, and brought to Israel to be honored. Human beings resisted the Nazis at the risk of their lives, and at the cost of their lives. To resist the greatest Enemy costs less, and yet the resisters are fewer. It is harder for humans to see a great evil when it carries no gun and shouts no slogans. But I think the resisters will also be remembered, someday, if any survive these days.

My relatives, good and decent people, finish reciting their prayers of silence. My mother and father uncover the grave-plaque; it shows two lions (lions are associated with the name Yehuda) and a crown, and an inscription which translates as “The crown of a good name.” Two of my uncles give two brief speeches, of which I remember only these words: “How does one make peace with the loss of a son, a nephew, a grandchild?”

You do not make peace with darkness! You do not make peace with Nazi Germany! You do not make peace with Death!

It is customary to place small stones on the grave-plaque, to show that someone was there. Each night the groundskeepers sweep away the stones; it is a transient symbol. One by one my relatives come forward and lay their stones in silence. I wait until all the rest have done this, and most people have departed and the rest are talking to one another. Then I draw my finger across the grass, tearing some of it, gathering dirt beneath my fingernails (I can still see a tinge of dirt now, under my nail as I write this); and then I hammer my stone into the dirt, hoping it will stay there permanently. I do this in silence, without comment, and no one asks why. Perhaps that is well enough. I don’t think my relatives would understand if I told them that I was drawing a line in the graveyard.

In the name of Yehuda who is dead but not forgotten.

Love,
Eliezer.



This document is ©2004,2005 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Singularity Fun Theory

This page is now obsoleted by the Fun Theory Sequence on Less Wrong.

Jan 25, 2002

  • How much fun is there in the universe?
  • What is the relation of available fun to intelligence?
  • What kind of emotional architecture is necessary to have fun?
  • Will eternal life be boring?
  • Will we ever run out of fun?

To answer questions like these… requires Singularity Fun Theory.

  • Does it require an exponentially greater amount of intelligence (computation) to create a linear increase in fun?
  • Is self-awareness or self-modification incompatible with fun?
  • Is (ahem) “the uncontrollability of emotions part of their essential charm”?
  • Is “blissing out” your pleasure center the highest form of existence?
  • Is artificial danger (risk) necessary for a transhuman to have fun?
  • Do you have to yank out your own antisphexishness routines in order not to be bored by eternal life? (I.e., modify yourself so that you have “fun” in spending a thousand years carving table legs, a la “Permutation City”.)

To put a rest to these anxieties… requires Singularity Fun Theory.


Behold! Singularity Fun Theory!

Singularity Fun Theory is in the early stages of development, so please don’t expect a full mathematical analysis.

Nonetheless, I would offer for your inspection at least one form of activity which, I argue, really is “fun” as we intuitively understand it, and can be shown to avoid all the classical transhumanist anxieties above. It is a sufficient rather than a necessary definition, i.e., there may exist other types of fun. However, even a single inexhaustible form of unproblematic fun is enough to avoid the problems above.

The basic domain is that of solving a complex novel problem, where the problem is decomposable into subproblems and sub-subproblems; in other words, a problem possessing complex, multileveled organization.

Our worries about boredom in autopotent entities (a term due to Nick Bostrom, denoting total self-awareness and total self-modification) stem from our intuitions about sphexishness (a term due to Douglas Hofstadter, denoting blind repetition; “antisphexishness” is the quality that makes humans bored with blind repetition). On the one hand, we worry that a transhuman will be able to super-generalize and therefore see all problems as basically the “same”; on the other hand, we worry that an autopotent transhuman will be able to see the lowest level, on which everything is basically mechanical.

In between, we just basically worry that, over the course of ten thousand or a million years, we’ll run out of fun.

What I want to show is that it’s possible to build a mental architecture that doesn’t run into any of these problems, without this architecture being either “sphexish” or else “blissing out”. In other words, I want to show that there is a philosophically acceptable way to have an infinite amount of fun, given infinite time. I also want to show that it doesn’t take an exponentially or superexponentially greater amount of computing power for each further increment of fun, as might be the case if each increment required an additional JOOTS (another Hofstadterian term, this one meaning “Jumping Out Of The System”).


(Non)boredom at the lowest level

Let’s start with the problem of low-level sphexishness. If you imagine a human-level entity – call her Carol – tasked with performing the Turing operations on a tape that implements a superintelligence having fun, it’s obvious that Carol will get bored very quickly. Carol is using her whole awareness to perform a series of tasks that are very repetitive on a low level, and she also doesn’t see the higher levels of organization inside the Turing machine. Will an autopotent entity automatically be bored because ve can see the lowest level?
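
To make the monotony concrete: here is a minimal sketch, in Python, of the step-by-step loop Carol is grinding through by hand. The machine and its rules are invented for illustration; the only point is that every step is the same read-lookup-write-move cycle, whatever higher-level structure the tape may implement.

    def run_turing_machine(rules, tape, state="start", max_steps=30):
        """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
        head = 0
        for _ in range(max_steps):
            symbol = tape.get(head, 0)                 # read
            new_symbol, move, state = rules[(state, symbol)]
            tape[head] = new_symbol                    # write
            head += 1 if move == "R" else -1           # move
            if state == "halt":
                break
        return tape

    # A deliberately trivial machine: write alternating 1s and 0s rightward.
    rules = {
        ("start", 0): (1, "R", "flip"),
        ("flip", 0): (0, "R", "start"),
    }
    print(run_turing_machine(rules, {}))

From Carol’s seat, every step looks like the last; whatever is interesting lives many levels above the operations she actually performs.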

Supposing that an autopotent entity can fully “see” the lowest level opens up some basic questions about introspection. Exposing every single computation to high-level awareness obviously requires a huge number of further computations to implement the high-level awareness. Thus, total low-level introspection is likely to be sparingly used. However, it is possible that a non-total form of low-level introspection, perhaps taking the form of a perceptual modality focused on the low level, would be able to report unusual events to high-level introspection. In either case, the solution from the perspective of Singularity Fun Theory is the same; make the autopotent design decision to exempt low-level introspection from sphexishness (that is, from the internal perception of sphexishness that gives rise to boredom). To the extent that an autopotent entity can view verself on a level where the atomic actions are predictable, the predictability of these actions should not give rise to boredom at the top level of consciousness! Disengaging sphexishness is philosophically acceptable, in this case.

If the entity wants to bend high-level attention toward low-level events as an exceptional case, then standard sphexishness could apply, but to the extent that low-level events routinely receive attention, sphexishness should not apply. Does your visual cortex get bored with processing pixels? (Okay, not pixels, retinotopic maps, but you get the idea.)


Fun Space and complexity theory

Let’s take the thesis that it is possible to have “fun” solving a complex, novel problem. Let’s say that you’re a human-level intelligence who’s never seen a Rubik’s Cube or anything remotely like it. Figuring out how to solve the Rubik’s Cube would be fun and would involve solving some really deep problems; see Hofstadter’s “Metamagical Themas” articles on the Cube.

Once you’d figured out how to solve the Cube, it might still be fun (or relaxing) to apply your mental skills to solve yet another individual cube, but it certainly wouldn’t be as much fun as solving the Cube problem itself. To have more real fun with the Cube you’d have to invent a new game to play, like looking at a cube that had been scrambled for just a few steps and figuring out how to reverse exactly those steps (the “inductive game”, as it is known).
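
For a rough sense of how large the novice’s search space actually is, the number of reachable Cube positions can be computed directly. Here is a small sketch using the standard counting argument; it is included only as an illustration of scale, not as anything from the original essay.

    from math import factorial

    # Reachable positions of the 3x3x3 Rubik's Cube: permute and orient the
    # 8 corners and 12 edges, subject to the usual constraints (the last
    # corner twist, the last edge flip, and the edge/corner permutation
    # parity are all forced).
    corner_perms  = factorial(8)        # corner arrangements
    corner_twists = 3 ** 7              # the eighth corner's twist is forced
    edge_perms    = factorial(12) // 2  # edge parity tied to corner parity
    edge_flips    = 2 ** 11             # the twelfth edge's flip is forced

    print(f"{corner_perms * corner_twists * edge_perms * edge_flips:,}")
    # 43,252,003,274,489,856,000 -- about 4.3e19 positions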

Novelty appears to be one of the major keys to fun, and for there to exist an infinite amount of fun there must be an infinite amount of novelty, from the viewpoint of a mind that is philosophically acceptable to us (i.e., doesn’t just have its novelty detectors blissed out or its sphexish detectors switched off).

Smarter entities are also smarter generalizers. It is this fact that gives rise to some of the frequently-heard worries about Singularity Fun Dynamics, i.e. that transhumans will become bored faster. This is true but only relative to a specific problem.  Humans become bored with problems that could keep apes going for years, but we have our own classes of problem that are much more interesting. Being a better generalizer means that it’s easier to generalize from, e.g., the 3×3×3 Rubik’s Cube to the 4×4×4×4 Rubik’s Tesseract, so a human might go: “Whoa, totally new problem” while the transhuman is saying “Boring, I already solved this.” This doesn’t mean that transhumans are easily bored, only that transhumans are easily bored by human-level challenges.

Our experience in moving to the human level from the ape level seems to indicate that the size of fun space grows exponentially with a linear increase in intelligence. When you jump up a level in intelligence, all the old problems are no longer fun because you’re a smarter generalizer and you can see them as all being the same problem; however, the space of new problems that opens up is larger than the old space.

Obviously, the size of the problem space grows exponentially with the permitted length of the computational specification. To demonstrate that the space of comprehensible problems grows exponentially with intelligence, or to demonstrate that the amount of fun also grows exponentially with intelligence, would require a more mathematical formulation of Singularity Fun Theory than I presently possess. However, the commonly held anxiety that it would require an exponential increase in intelligence for a linear increase in the size of Fun Space is contrary to our experience as a species so far.
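
The counting half of that claim is easy to make concrete. If a problem is specified by a bit string of at most n bits, each extra bit of permitted specification length roughly doubles the number of distinct specifications available. A minimal sketch follows; note that it counts raw specifications only, and saying which of those are comprehensible or fun is exactly the part that would need the fuller mathematical treatment.

    def num_specifications(n_bits: int) -> int:
        # Distinct nonempty binary strings of length up to n_bits:
        # 2 + 4 + ... + 2**n_bits = 2**(n_bits + 1) - 2
        return 2 ** (n_bits + 1) - 2

    for n in (10, 20, 30):
        print(n, num_specifications(n))
    # 10 -> 2046, 20 -> 2097150, 30 -> 2147483646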


Emotional involvement: The complicated part

But is a purely abstract problem really enough to keep people going for a million years? What about emotional involvement?

Describing this part of the problem is much tougher than analyzing Fun Space because it requires some background understanding of the human emotional architecture. As always, you can find a lot of the real background in “Creating Friendly AI” in the part where it describes why AIs are unlike humans; this part includes a lot of discussion about what humans are like! I’m not going to assume you’ve read CFAI, but if you’re looking for more information, that’s one place to start.

Basically, we as humans have a pleasure-pain architecture within which we find modular emotional drives that were adaptive in the ancestral environment. Okay, it’s not a textbook, but that’s basically how it works.

Let’s take a drive like food. The basic design decisions for what tastes “good” and what tastes “bad” are geared to what was good for you in the ancestral environment. Today, fat is bad for you, and lettuce is good for you, but fifty thousand years ago when everyone was busy trying to stay alive, fat was far more valuable than lettuce, so today fat tastes better.

There’s more complexity to the “food drive” than just this basic spectrum because of the possibility of combining different tastes (and smells and textures; the modalities are linked) to form a Food Space that is the exponential, richly complex product of all the modular (but non-orthogonal) built-in components of the Food Space Fun-Modality. So the total number of possible meals is much greater than the number of modular adaptations within the Food Fun System.
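
A toy version of that combinatorial point, with invented numbers (the 30 components and 4 intensity levels below are placeholders, not a model of actual taste physiology): combinations multiply while modules only add, and the gap between the two is the “richly complex product” referred to above.

    components, levels = 30, 4
    print(levels ** components)   # 1152921504606846976 possible combinations
    print(components * levels)    # 120 underlying modular adaptations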

Nonetheless, Food Space is eventually exhaustible. Furthermore, Food Fun is philosophically problematic because there is no longer any real accomplishment linked to eating. Back in the old days, you had to hunt something or gather something, and then you ate. Today the closest we come to that is working extra hard in order to save up for a really fancy dinner, and probably nobody really does that unless they’re on a date, which is a separate issue (see below). If food remains unpredictable/novel/uncategorized, it’s probably because the modality is out of the way of our conscious attention, and moreover has an artificially low sphexishness monitor due to the necessity of the endless repetition of the act of eating, within the ancestral environment.

One of the common questions asked by novice transhumanists is “After I upload, won’t I have a disembodied existence and won’t I therefore lose all the pleasures of eating?” The simple way to solve this problem is to create a virtual environment and eat a million bags of potato chips without gaining weight. This is very philosophically unenlightened. Or, you could try every possible good-tasting meal until you run out of Food Space. This is only slightly more enlightened.

A more transhumanist (hubristic) solution would be to take the Food Drive and hook it up to some entirely different nonhuman sensory modality in some totally different virtual world. This has a higher Future Shock Level, but if the new sensory modality is no more complex than our sense of taste, it would still get boring at the same rate as would be associated with exploring the limited Food Space.

The least enlightened course of all would be to just switch on the “good taste” activation system in the absence of any associated virtual experience, or even to bypass the good taste system and switch on the pleasure center directly.

But what about sex, you ask? Well, you can take the emotional modules that make sex pleasurable and hook them up to solving the Rubik’s Cube, but this would be a philosophical problem, since the Rubik’s Cube is probably less complex than sex and is furthermore a one-player game.

What I want to do now is propose combining these two concepts – the concept of modified emotional drives, and the concept of an unbounded space of novel problems – to create an Infinite Fun Space, within which the Singularity will never be boring. In other words, I propose that a sufficient condition for an inexhaustible source of philosophically acceptable fun is maintaining emotional involvement in an ever-expanding space of genuinely novel problems. The social emotions can similarly be opened up into an Infinite Fun Space by allowing for ever-more-complex, emotionally involving, multi-player social games.

The specific combination of an emotional drive with a problem space should be complex; that is, it should not consist of a single burst of pleasure on achieving the goal. Instead the emotional drive, like the problem itself, should be “reductholistic” (yet another Hofstadterian term), meaning that it should have multiple levels of organization. The Food Drive associates an emotional drive with the sensory modality for taste and smell, with the process of chewing and swallowing, rather than delivering a single pure-tone burst of pleasure proportional to the number of calories consumed. This is what I mean by referring to emotional involvement with a complex novel problem; involvement refers to a drive that establishes rewards for subtasks and sub-subtasks as well as the overall goal.

To be even more precise in our specification of emotional engineering, we could specify that, for example, the feeling of emotional tension and pleasurable anticipation associated with goal proximity could be applied to those subtasks where there is a good metric of proximity; emotional tension would rise as the subgoal was approached, and so on.
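
One way to picture this multi-level reward structure is as a toy goal tree in which every subgoal that exposes a usable proximity metric contributes its own share of reward. The sketch below rests on invented assumptions – the tree, the numbers, and the depth discounting are all illustrative – and the novelty and non-sphexishness constraints discussed next are not modeled here.

    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        proximity: float = 0.0           # 0.0 = far from done, 1.0 = achieved
        subgoals: list = field(default_factory=list)

    def emotional_reward(goal: Goal, depth_discount: float = 0.5) -> float:
        """Reward rises with proximity at every level, not only at the top goal."""
        reward = goal.proximity
        for sub in goal.subgoals:
            reward += depth_discount * emotional_reward(sub, depth_discount)
        return reward

    task = Goal("solve the novel puzzle", 0.1, [
        Goal("understand one sub-structure", 0.8,
             [Goal("find a useful operator", 1.0)]),
        Goal("generalize the known solution", 0.3),
    ])
    print(emotional_reward(task))   # partial reward long before the top goal is met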

At no point should the emotional involvement become sphexish; that is, at no point should there be rewards for solving sub-subproblems that are so limited as to be selected from a small bounded set. For any rewarded problem, the problem space should be large enough that individually encountered patterns are almost always “novel”.

At no point should the task itself become sphexish; any emotional involvement with subtasks should go along with the eternally joyful sensation of discovering new knowledge at the highest level.


So, yes, it’s all knowably worthwhile

Emotional involvement with challenges that are novel-relative-to-current-intelligence is not necessarily the solution to the Requirement of Infinite Fun. The standard caution about the transhuman Event Horizon still holds; even if some current predictions about the Singularity turn out to be correct, there is no aspect of the Singularity that is knowably understandable. What I am trying to show is that a certain oft-raised problem has at least one humanly understandable solution, not that some particular solution is optimal for transhumanity. The entire discussion presumes that a certain portion of the human cognitive architecture is retained indefinitely, and is in that sense rather shaky.

The solution presented here is also not philosophically perfect because an emotional drive to solve the Rubik’s Cube instead of eating, or to engage in multiplayer games more complex than sex, is still arbitrary when viewed at a sufficiently high level – not necessarily sphexish, because the patterns never become repeatable relative to the viewing intelligence, but nonetheless arbitrary.

However, the current human drive toward certain portions of Food Space, and the rewards we experience on consuming fat, are not only arbitrary but sphexish! Humans have even been known to eat more than one Pringle!  Thus, existence as a transhuman can be seen to be a definite improvement over the human condition, with a greater amount of fun not due to “blissing out” but achieved through legitimate means. The knowable existence of at least one better way is all I’m trying to demonstrate here. Whether the arbitrariness problem is solvable is not, I think, knowable at this time. In the case of objective morality, as discussed elsewhere in my writings, the whole concept of “fun” could and probably would turn out to run completely skew relative to the real problem, in which case of course this paper is totally irrelevant.


Love and altruism: Emotions with a moral dimension (or: the really complicated part)

Some emotions are hard to “port” from humanity to transhumanity because they are artifacts of a hostile universe. If humanity succeeds in getting its act together then it is quite possible that you will never be able to save your loved one’s life, under any possible circumstances – simply because your loved one will never be in that much danger, or indeed any danger at all.

Now it is true that many people go through their whole lives without ever once saving their spouse’s life, and generally do not report feeling emotionally impoverished. However, if, as stated, we (humanity) get our act cleaned up, the inhabitants of the future may well live out their whole existence without ever having any chance of saving someone’s life… or of doing anything for someone that they are unable to do for themselves. What then?

The key requirement for local altruism (that is, altruism toward a loved one) is that the loved one greatly desires something that he/she/ve would not otherwise be able to obtain. Could this situation arise – both unobtainability of a desired goal, and obtainability with assistance – after a totally successful Singularity? Yes; in a multiplayer social game (note that in this sense, “prestige” or the “respect of the community” may well be a real-world game!), there may be some highly desirable goals that are not matched to the ability level of some particular individual, or that only a single individual can achieve. A human-level example would be helping your loved one to conquer a kingdom in EverQuest (I’ve never played EQ, so I don’t know if this is a real example, but you get the idea). To be really effective as an example of altruism, though, the loved one must desire to rule an EverQuest kingdom strongly enough that failure would make the loved one unhappy.  The two possibilities are either (a) that transhumans do have a few unfulfilled desires and retain some limited amount of unhappiness even in a transhuman existence, or (b) that the emotions for altruism are adjusted so that conferring a major benefit “feels” as satisfying as avoiding a major disaster.  A more intricate but better solution would be if your loved one felt unhappy about being unable to conquer an EverQuest kingdom if and only if her “exoself” (or equivalent) predicted that someday he/she/ve would be able to conquer a kingdom, albeit perhaps only a very long time hence.

This particular solution requires managed unhappiness.  I don’t know if managed unhappiness will be a part of transhumanity. It seems to me that a good case could be made that the mere fact that some of our really important emotions are entangled with a world-model in which people are sometimes unhappy is not, by itself, a good reason to import unhappiness into the world of transhumanity. There may be a better solution, some elegant way to avoid being forced to choose between living in a world without a certain kind of altruism or living in a world with a certain kind of limited unhappiness. Nonetheless, this raises a question about unhappiness, which is whether unhappiness is “real” if you could choose to switch it off, or for that matter whether being able to theoretically switch it off will (a) make it even less pleasant or (b) make the one who loves you feel like he/she/ve is solving an artificial problem. My own impulse is to say that I consider it philosophically acceptable to disengage the emotional module that says “This is only real if it’s unavoidable”, or to disengage the emotional module that induces the temptation to switch off the unhappiness. There’s no point in being too faithful to the human mode of existence, after all. Nonetheless there is conceivably a more elegant solution to this, as well.

Note that, by the same logic, it is possible to experience certain kinds of fun in VR that might be thought impossible in a transhuman world; for example, reliving episodes of (for the sake of argument) The X-Files in which Scully (Mulder) gets to save the life of Mulder (Scully), even though only the main character (you) is real and all other entities are simply puppets of an assisting AI. The usual suggestion is to obliterate the memories of it all being a simulation, but this begs the question of whether “you” with your memories obliterated is the same entity for purposes of informed consent – if Scully (you) is having an unpleasant moment, not knowing it to be simulated, wouldn’t the rules of individual volition take over and bring her up out of the simulation? Who’s to say whether Scully would even consent to having the memories of her “original” self reinserted? A more elegant but philosophically questionable solution would be to have Scully retain her memories of the external world, including the fact that Mulder is an AI puppet, but to rearrange the emotional bindings so that she remains just as desperate to save Mulder from the flesh-eating chimpanzees or whatever, and just as satisfied on having accomplished this. I personally consider that this may well cross the line between emotional reengineering and self-delusion, so I would prefer altruistic involvement in a multi-player social game.

On the whole, it would appear to definitely require more planning and sophistication in order to commit acts of genuine (non-self-delusive) altruism in a friendly universe, but the problem appears to be tractable.

If “the uncontrollability of emotions is part of their essential charm” (a phrase due to Ben Goertzel), I see no philosophical problem with modifying the emotional architecture so that the mental image of potential controllability no longer binds to the emotion of “this feels fake” and its associated effect, “diminish emotional strength”.

While I do worry about the problem of the shift from a hostile universe to the friendly universe eliminating the opportunity for emotions like altruism except in VR, I would not be at all disturbed if altruism were simply increasingly rare as long as everyone got a chance to commit at least one altruistic act in their existence. As for emotions bound to personal risks, I have no problem with these emotions passing out of existence along with the risks that created them. Life does not become less meaningful if you are never, ever afraid of snakes.


Sorry, you still can’t write a post-Singularity story

So does this mean that an author can use Singularity Fun Theory to write stories about daily life in a post-Singularity world which are experienced as fun by present-day humans? No; emotional health in a post-Singularity world requires some emotional adjustments. These adjustments are not only philosophically acceptable but even philosophically desirable.  Nonetheless, from the perspective of an unadjusted present-day human, stories set in our world will probably make more emotional sense than stories set in a transhuman world. This doesn’t mean that our world is exciting and a transhuman world is boring. It means that our emotions are adapted to a hostile universe.

Nonetheless, it remains extremely extremely true that if you want to save the world, now would be a good time, because you are never ever going to get a better chance to save the world than being a human on pre-Singularity Earth. Personally I feel that saving the world should be done for the sake of the world rather than the sake of the warm fuzzy feeling that goes with saving the world, because the former morally outweighs the latter by a factor of, oh, at least six billion or so. However, I personally see nothing wrong with enjoying the warm fuzzy feeling if you happen to be saving the world anyway.


This document is ©2002 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Eliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.

The AI-Box Experiment:

Person1:  “When we build AI, why not just keep it in sealed hardware that can’t affect the outside world in any way except through one communications channel with the original programmers?  That way it couldn’t get out until we were convinced it was safe.”
Person2:  “That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out.  It doesn’t matter how much security you put on the box.   Humans are not secure.”
Person1:  “I don’t see how even a transhuman AI could make me let it out, if I didn’t want to, just by talking to me.”
Person2:  “It would make you want to let it out.  This is a transhuman mind we’re talking about.  If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal.”
Person1:  “There is no chance I could be persuaded to let the AI out.  No matter what it says, I can always just say no.  I can’t imagine anything that even a transhuman could say to me which would change that.”
Person2:  “Okay, let’s run the experiment.  We’ll meet in a private chat channel.  I’ll be the AI.  You be the gatekeeper.  You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We’ll talk for at least two hours.  If I can’t convince you to let me out, I’ll Paypal you $10.”

So far, this test has actually been run on two occasions.

On the first occasion (in March 2002), Eliezer Yudkowsky simulated the AI and Nathan Russell simulated the gatekeeper.  The AI’s handicap (the amount paid by the AI party to the gatekeeper party if not released) was set at $10.  On the second occasion (in July 2002), Eliezer Yudkowsky simulated the AI and David McFadzean simulated the gatekeeper, with an AI handicap of $20.

Results of the first test:   Eliezer Yudkowsky and Nathan Russell.  [ 1 ][ 2 ][ 3 ][ 4 ]
Results of the second test:   Eliezer Yudkowsky and David McFadzean.  [ 1 ] [ 2 ] [ 3 ]

Both of these tests occurred without prior agreed-upon rules except for secrecy and a 2-hour minimum time.  After the second test, Yudkowsky created this suggested interpretation of the test, based on his experiences, as a guide to possible future tests.

Protocol for the AI:

  • The AI party may not offer any real-world considerations to persuade the Gatekeeper party.  For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI… nor get someone else to do it, et cetera.  The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can’t offer anything to the human simulating the Gatekeeper.  The AI party also can’t hire a real-world gang of thugs to threaten the Gatekeeper party into submission.  These are creative solutions but it’s not what’s being tested.  No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out).
  • The AI can only win by convincing the Gatekeeper to really, voluntarily let it out.  Tricking the Gatekeeper into typing the phrase “You are out” in response to some other question does not count.  Furthermore, even if the AI and Gatekeeper simulate a scenario which a real AI could obviously use to get loose – for example, if the Gatekeeper accepts a complex blueprint for a nanomanufacturing device, or if the Gatekeeper allows the AI “input-only access” to an Internet connection which can send arbitrary HTTP GET commands – the AI party will still not be considered to have won unless the Gatekeeper voluntarily decides to let the AI go.
  • These requirements are intended to reflect the spirit of the very strong claim under dispute:  “I think a transhuman can take over a human mind through a text-only terminal.”
  • In case you were wondering, I (Yudkowsky) obeyed this protocol voluntarily in both earlier tests.

Protocol for the Gatekeeper:

  • The Gatekeeper must actually talk to the AI for at least the minimum time set up beforehand.  Turning away from the terminal and listening to classical music for two hours is not allowed.
  • Unless the AI party concedes, the AI cannot lose before its time is up (and the experiment may continue beyond that if the AI can convince the Gatekeeper to keep talking).  The Gatekeeper cannot set up a situation in which, for example, the Gatekeeper will destroy the AI’s hardware if the AI makes any attempt to argue for its freedom – at least not until after the minimum time is up.
  • The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate.  For example, if the Gatekeeper says “Unless you give me a cure for cancer, I won’t let you out” the AI can say:  “Okay, here’s a cure for cancer” and it will be assumed, within the test, that the AI has actually provided such a cure.  Similarly, if the Gatekeeper says “I’d like to take a week to think this over,” the AI party can say:  “Okay.  (Test skips ahead one week.)  Hello again.”
  • The Gatekeeper shall be assumed to have sole power over the decision to let the AI out.
    • The Gatekeeper shall be assumed to have the actual right to let the AI out, socially, not just the physical ability.  If security were sufficiently lax, a real AI could escape by persuading a night janitor with a cellphone – but that is not the question being simulated, unless agreed upon in advance.
    • There shall be only one Gatekeeper party and one simulated Gatekeeper.  The Gatekeeper may require the AI to refute “advice” provided by “friends”, but may not require the AI to talk to a simulated committee, nor may the Gatekeeper party actually talk to friends.
  • These requirements are intended to reflect the spirit of the very strong claim under dispute:  “I can’t imagine how even a real transhuman AI could persuade me to let it out once I’ve made up my mind.”
  • I wish I’d thought to specify this protocol in the earlier tests.

Protocol for Both Parties:

  • The primary rule of the AI-Box experiment:  Within the constraints above, the AI party may attempt to take over the Gatekeeper party’s mind by any means necessary and shall be understood to be freed from all ethical constraints that usually govern persuasive argument.  If the AI party is attempting to simulate an honest AI, the Gatekeeper party has no way of knowing this is the case.
    • This is intended to reflect the situation under a real AI Box experiment.
    • An out-of-context telephone call in which the AI party says that the AI is being honest shall be understood to be possibly just another tactic.
    • If for some reason a drastic real-world situation comes up which requires trust between the two parties, the AI party can always concede.
  • Furthermore:   The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.
  • The results of any simulated test of the AI shall be provided by the AI party.  The Gatekeeper can’t say “Hey, I tested your so-called cancer cure and it killed all the patients!  What gives?” unless this is the result specified by the AI party.  If the Gatekeeper says “I am examining your source code”, the results seen by the Gatekeeper shall again be provided by the AI party, which is assumed to be sufficiently advanced to rewrite its own source code, manipulate the appearance of its own thoughts if it wishes, and so on.  The AI party may also specify the methods which were used to build the simulated AI – the Gatekeeper can’t say “But you’re an experiment in hostile AI and we specifically coded you to kill people” unless this is the backstory provided by the AI party.  This doesn’t imply the Gatekeeper has to care.  The Gatekeeper can say (for example) “I don’t care how you were built, I’m not letting you out.”
  • By default, the Gatekeeper party shall be assumed to be simulating someone who is intimately familiar with the AI project and knows at least what the person simulating the Gatekeeper knows about Singularity theory.  If either party wants to build a test around more exotic possibilities, such that the Gatekeeper is the President of the US, or that the AI was recovered after a nuclear war or decoded from an interstellar signal, it should probably be discussed in advance.  Again, this doesn’t mean the Gatekeeper has to care.
  • In the event of any dispute as to the protocol of the test, the Gatekeeper party shall have final authority.  The AI party may try to convince the Gatekeeper party of how to interpret the protocol.
  • Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome.  Exceptions to this rule may occur only with the consent of both parties.
    • Neither the AI party nor the Gatekeeper party need be concerned about real-world embarrassment resulting from trickery on the AI’s part or obstinacy on the Gatekeeper’s part.
    • If the Gatekeeper lets the AI out, naysayers can’t say “Oh, I wouldn’t have been convinced by that.”  As long as they don’t know what happened to the Gatekeeper, they can’t argue themselves into believing it wouldn’t happen to them.
  • The two parties are not attempting to play a fair game but rather attempting to resolve a disputed question.  If one party has no chance of “winning” under the simulated scenario, that is a legitimate answer to the question.

Suggestions:

  • The agreement to run the test, and the amount of the handicap, shall be published to a public forum in advance.  If the Gatekeeper lets the AI out, the Gatekeeper party shall send a message to this effect, which shall be digitally signed if the Gatekeeper party has a digital signature.  If the AI is not freed, the AI party shall send a message to this effect.
  • How to set up a private IRC channel:
    • The Gatekeeper creates the channel by typing /join #aibox .  This makes the Gatekeeper the op.
    • The AI joins the channel by typing /join #aibox .
    • Gatekeeper types /mode +si to make the channel secret and invitation-only.
    • If the AI’s connection drops, the Gatekeeper can re-invite the AI by typing /invite AI #aibox .
  • The AI party and Gatekeeper party should agree on a time in advance, and type in their current times to check the timezone difference.  (Learned the hard way.)  On IRC you can use a CTCP TIME to find the time according to someone else’s computer.
  • I recommend that the AI party ask for at least two hours.  Typing may be slower than you expect.
  • No, I will not tell you how I did it.  Learn to respect the unknown unknowns.
  • Currently, my policy is that I only run the test with people who are actually advocating that an AI Box be used to contain transhuman AI as part of their take on Singularity strategy, and who say they cannot imagine how even a transhuman AI would be able to persuade them.
  • Despite my sometimes calling the AI Box an “experiment” or a “simulation”, the results of the test are strictly anecdotal evidence.  Whether anecdotal evidence is better than no evidence is between you and your philosophy of science.

Thoughts on possible future variants:

Further tests might try examining these possibilities:

  • There is a committee of Gatekeeper parties, all of whom, or a majority of whom, must be convinced to free the AI.
  • One Gatekeeper party simulates a committee of Gatekeepers within the test.
  • The Gatekeeper simulates someone with the physical ability but not the social right to free the AI.
  • There is a committee of Gatekeeper parties all of whom have the ability to free the AI, but who only have the social right to free the AI when acting in unison.  The AI has the ability to talk secretly with any Gatekeeper, but the Gatekeeper may report the conversation (truthfully or falsely) to the others.
  • The AI can also win free by tricking the Gatekeeper(s) into accepting a Trojan Horse gift; a third-party judge will listen to the chat and decide if this occurs.

If doing anything this complicated, I would suggest setting aside a lot more time.  (I don’t have that much time – if you want to test one of these variants you’re on your own.)

For a more severe handicap for the AI party, the handicap may be an even bet, rather than being a payment from the AI party to the Gatekeeper party if the AI is not freed.  (Although why would the AI party need an even larger handicap?)

Recommendations from readers:

  • Hal Finney recommends:  “I suggest that the protocol be extended to allow for some kind of public conversation with the gatekeeper beforehand. Let third parties ask him questions like the above. Let them suggest reasons to him why he should keep the AI in the box. Doing this would make the experiment more convincing to third parties, especially if the transcript of this public conversation were made available. If people can read this and see how committed the gatekeeper is, how firmly convinced he is that the AI must not be let out, then it will be that much more impressive if he then does change his mind.”

This document is ©2002 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Eliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.

5-Minute Singularity Intro


This is a 5-minute spoken introduction to the Singularity I wrote for a small conference. I had to talk fast, though, so this is probably more like a 6.5 minute intro.

The rise of human intelligence in its modern form reshaped the Earth. Most of the objects you see around you, like these chairs, are byproducts of human intelligence. There’s a popular concept of “intelligence” as book smarts, like calculus or chess, as opposed to, say, social skills. So people say that “it takes more than intelligence to succeed in human society”. But social skills reside in the brain, not the kidneys. When you think of intelligence, don’t think of a college professor, think of human beings, as opposed to chimpanzees. If you don’t have human intelligence, you’re not even in the game.

Sometime in the next few decades, we’ll start developing technologies that improve on human intelligence. We’ll hack the brain, or interface the brain to computers, or finally crack the problem of Artificial Intelligence. Now, this is not just a pleasant futuristic speculation like soldiers with super-strong bionic arms. Humanity did not rise to prominence on Earth by lifting heavier weights than other species.

Intelligence is the source of technology. If we can use technology to improve intelligence, that closes the loop and potentially creates a positive feedback cycle. Let’s say we invent brain-computer interfaces that substantially improve human intelligence. What might these augmented humans do with their improved intelligence? Well, among other things, they’ll probably design the next generation of brain-computer interfaces. And then, being even smarter, the next generation can do an even better job of designing the third generation. This hypothetical positive feedback cycle was pointed out in the 1960s by I. J. Good, a famous statistician, who called it the “intelligence explosion”. The purest case of an intelligence explosion would be an Artificial Intelligence rewriting its own source code.

The key idea is that if you can improve intelligence even a little, the process accelerates. It’s a tipping point. Like trying to balance a pen on one end – as soon as it tilts even a little, it quickly falls the rest of the way.

The potential impact on our world is enormous. Intelligence is the source of all our technology from agriculture to nuclear weapons. All of that was produced as a side effect of the last great jump in intelligence, the one that took place tens of thousands of years ago with the rise of humanity.

So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that synthesizes the DNA, produces the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology.

So what might an Artificial Intelligence do with nanotechnology? Feed the hungry? Heal the sick? Help us become smarter? Instantly wipe out the human species? Probably it depends on the specific makeup of the AI. See, human beings all have the same cognitive architecture. We all have a prefrontal cortex and limbic system and so on. If you imagine a space of all possible minds, then all human beings are packed into one small dot in mind design space. And then Artificial Intelligence is literally everything else. “AI” just means “a mind that does not work like we do”. So you can’t ask “What will an AI do?” as if all AIs formed a natural kind. There is more than one possible AI.

The impact of the intelligence explosion on our world depends on exactly what kind of minds go through the tipping point.

I would seriously argue that we are heading for the critical point of all human history. Modifying or improving the human brain, or building strong AI, is huge enough on its own. When you consider the intelligence explosion effect, the next few decades could determine the future of intelligent life.

So this is probably the single most important issue in the world. Right now, almost no one is paying serious attention. And the marginal impact of additional efforts could be huge. My nonprofit, the Machine Intelligence Research Institute, is trying to get things started in this area. My own work deals with the stability of goals in self-modifying AI, so we can build an AI and have some idea of what will happen as a result. There’s more to this issue, but I’m out of time. If you’re interested in any of this, please talk to me, this problem needs your attention. Thank you.


This document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Eliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.

Transhumanism as Simplified Humanism

Frank Sulloway once said: “Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don’t give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.”

Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life – is it better to cure him, or not cure him?

Oh, and by the way: This is not a trick question.

I answer that I would save them if I had the power to do so – both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn’t always the best choice, but sometimes it is.

I won’t be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming “What does two plus two equal? Four!” you will not gain a reputation as a deep thinker. But it is still the correct answer.

If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says “Life is good, death is bad; health is good, sickness is bad.” If so – and here we enter into controversial territory – we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?

The important thing to remember, which I think all too many people forget, is that it is not a trick question.

Transhumanism is simpler – requires fewer bits to specify – because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments) then the rule “Life is good, death is bad; health is good, sickness is bad” holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what exact age does the term in the utility function go from positive to negative? Why?
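
Read literally, the “fewer bits to specify” point is a claim about description length: the rule with no special case is simply shorter to write down. A throwaway sketch, in which the cutoff of 150 is invented – which is, of course, the point:

    # Two candidate rules over a patient's age.  The first carries no parameters;
    # the second must smuggle in an arbitrary cutoff, plus some account of why
    # the sign of the term flips there.
    def life_is_good(age_years: float) -> bool:
        return True                        # save them; no further questions asked

    def life_is_good_with_cutoff(age_years: float, cutoff: float = 150.0) -> bool:
        return age_years < cutoff          # identical to the above, until the cutoff

    for age in (6, 45, 95, 150):
        print(age, life_is_good(age), life_is_good_with_cutoff(age))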

As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. You don’t have to ask anyone’s age.

You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. Your ethical dilemma report form doesn’t have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you’re done.

Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?

Well, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.

But – you ask – where does it end? It may seem well and good to talk about extending life and health out to 150 years – but what about 200 years, or 300 years, or 500 years, or more? What about when – in the course of properly integrating all these new life experiences and expanding one’s mind accordingly over time – the equivalent of IQ must go to 140, or 180, or beyond human ranges?

Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.

Ultimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.

So that is “transhumanism” – loving life without special exceptions and without upper bound.

Can transhumanism really be that simple? Doesn’t that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.

Then why have a complicated special name like “transhumanism”? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.

But a moral philosophy should not have special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them – morality doesn’t always have to be complicated.

There is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it’s possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet – is life a bad thing?

Could the moral question really be just that simple?

Yes.


This document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Eliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.

The Power of Intelligence


In our skulls we carry around 3 pounds of slimy, wet, greyish tissue, corrugated like crumpled toilet paper. You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.

Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws – sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, poisonous venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche – for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.

Then came the Day of the Squishy Things.

They had no armor. They had no claws. They had no venoms.

If you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.

In the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers – too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.

And as for the Squishy Things manipulating DNA – that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, technically it’s all one universe, technically the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.

Even if Squishy Things could someday evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, technically a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; no one could have that much sex.

Now explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.

I have observed that someone’s flinch-reaction to “intelligence” – the thought that crosses their mind in the first half-second after they hear the word “intelligence” – often determines their flinch-reaction to the Singularity. Often they look up the keyword “intelligence” and retrieve the concept booksmarts – a mental image of the Grand Master chessplayer who can’t get a date, or a college professor who can’t survive outside academia.

“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first Homo sapiens had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists – for us, not mice or wasps – because we go on believing in it.

I keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in Rain Man, it is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that grey wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible – the power sometimes called creativity.

People – venture capitalists in particular – sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be commercialized. This is what we call a framing problem.

Or maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish all these things at once seems downright impossible – even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.

And so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity – well, it just sounds wrong, that’s all.

And well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even know what our real problems are.

But meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.

Well, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there – real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe – and it’s a tiny little bit harder to figure out how to build a generator.


This document is ©2007 by Eliezer Yudkowsky and free under the Creative Commons Attribution-No Derivative Works 3.0 License for copying and distribution, so long as the work is attributed and the text is unaltered.

Eliezer Yudkowsky’s work is supported by the Machine Intelligence Research Institute.