Welcome to the website of a man who wears more than one hat.
One Eliezer Yudkowsky writes about the fine art of human rationality. Over the last few decades, science has found an increasing amount to say about sanity. Probability theory and decision theory give us the formal math; and experimental psychology, particularly the subfield of cognitive biases, has shown us how human beings think in practice. Now the challenge is to apply this knowledge to life – to see the world through that lens. That Eliezer Yudkowsky’s work can be found in the Rationality section.
The other Eliezer Yudkowsky concerns himself with Artificial Intelligence. Since the rise of Homo sapiens, human beings have been the smartest minds around. But very shortly – on a historical scale, that is – we can expect technology to break the upper bound on intelligence that has held for the last few tens of thousands of years. Artificial Intelligence is one of the technologies that potentially breaks this upper bound. The famous statistician I. J. Good coined the term “intelligence explosion” to refer to the idea that a sufficiently smart AI would be able to rewrite itself, improve itself, and so increase its own intelligence even further. This is the kind of Artificial Intelligence I work on. For more on this see the Singularity tab. (Yes, I know, the term is obsolete and no longer means what it did, I need to get around to reorganizing this website.)
My short fiction, miscellaneous essays, and various other things can be found under Other.
I am a full-time Research Fellow at the Machine Intelligence Research Institute, a small 501(c)(3) public charity supported primarily by individual donations.
Most of my written material is currently hosted on the community blog Less Wrong; much of it was originally written on the blog Overcoming Bias. (I found that I could write much faster in a blog format.) Those wishing to read through should use the indices found here.
My parents were early adopters, and I’ve been online since a rather young age. You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.
Though I have friends aplenty in academia, I don’t operate within the academic system myself. (For some reason, I write extremely slowly in formal mode.) I am not a Ph.D. and should not be addressed as “Dr. Yudkowsky”.
If you’re having trouble finding something, I suggest consulting the Sitemap.
- Cognitive biases potentially affecting judgment of global risks
- Artificial Intelligence as a positive and negative factor in global risk
- An Intuitive Explanation of Bayesian Reasoning
- The Cartoon Guide to Löb's Theorem
The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance.