Michael Anissimov ‘Debunks’ Ray Kurzweil – Pot Calls Kettle Black

A while ago, Michael Anissimov wrote the following post:


In it, he pits Kurzweil against the colloidal silver crowd. In doing so, he makes a few screw-ups of his very own.

I posted a comment to his post calling him out on it, but he never published it. So here it is:


A few observations:

1. Nano assemblers might still be some time off, but the first generation of nanobots already exists. That’s far earlier than most people would have expected (http://www.youtube.com/watch?v=-5KLTonB3Pg). Jim von Ehr has stated he expects rudimentary digital matter between 2015 and 2020 (http://nextbigfuture.com/2010/05/diamond-mechanosynthesis-paper-from.html). That’s straight from the horse’s mouth. You can’t ignore a near-term prediction like that, especially not from a person with the necessary authority in the field.

2. Kurzweil’s rebuttal to your rebuttal was actually pretty good. The reason people tend to get disappointed with future predictions is that their expectations were too high. Take 3D circuitry, for example. Kurzweil correctly pointed out that 3D circuitry is already being done today, as he predicted. But 3D circuitry is typically one of those things that isn’t going to make us happy directly. We humans only care about the benefit of more computational power. So was the 3D circuitry prediction incorrect, just because the arrival of 3D circuitry didn’t end up making us feel like we’re living in the future? Of course not. Defensive pessimism might be a good ego protector, but the problem with it is that it is not rational and as such leads us to incorrect conclusions.

3. Making predictions is pretty hard, especially about the future. I thought it was an unwritten rule for anybody not named Kurzweil not to try to predict the future beyond 2030. And yet you state confidently that we will not be immortal cyborgs by 2045. How can you possibly know this? Just because Kurzweil is, in your opinion, overly optimistic about things in the present, does that mean all of his predictions are always going to be wrong? What about the ones where he was simply too pessimistic? What about the extremely early arrival of the self-driving car? Kurzweil did not predict that to happen until much later. Do Kurzweil’s overly pessimistic predictions not count? Do they not average out the ones where he was overly optimistic? If Kurzweil’s timetable is ‘one decade too early’, does that mean we can safely conclude we’ll be immortal cyborgs by 2055? Because that would mean predicting the future even further beyond 2030.

4. If you believe that the future is not accelerating exponentially, as you stated some time ago, then I can imagine you feel you can confidently predict what will (or won’t) happen. However, I think your fall from the exponential bandwagon is driven by emotional reasons and is therefore not rational. It still looks to me like the future is accelerating plenty exponentially. The early arrival of quantum computers, first-generation nanobots and the self-driving car has me convinced technology is moving even faster than I had anticipated only three years ago. If the future is indeed accelerating exponentially, as I suspect it is, then it’s a typical case of ‘bend it like Beckham’: it’s extremely hard to predict the path of an exponential curve. So (some of) Kurzweil’s predictions might turn out to be wrong, but the exact same goes for your predictions about his predictions. If Kurzweil and the colloidal silver crowd can be wrong, then every crowd can be wrong, including you and me. Predicting what won’t happen is just as silly as predicting what will happen.
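To see just how hard it is to pin a date on an exponential curve, here is a minimal sketch with purely illustrative numbers (nobody's actual figures): even a modest error in the assumed annual growth rate shifts the predicted arrival year of a milestone by several years, and the gap widens the further out the milestone is.

```python
import math

def years_to_target(start, target, annual_factor):
    """Years until `start`, compounded by `annual_factor` each year,
    reaches `target`. Hypothetical helper for illustration only."""
    return math.log(target / start) / math.log(annual_factor)

# Suppose some capability must improve a millionfold to hit a milestone.
optimist = years_to_target(1.0, 1e6, 2.0)   # assumes doubling every year
pessimist = years_to_target(1.0, 1e6, 1.8)  # assumes slightly slower growth

print(round(optimist, 1))   # ~19.9 years
print(round(pessimist, 1))  # ~23.5 years
```

A roughly 10% disagreement about the growth rate moves the milestone by more than three years here, which is why near-misses on timing say little about whether the underlying trend is real.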

5. Don’t forget that the Maes-Garreau law turned out to be incorrect (http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/). The article’s conclusion is as follows: “There is no evidence that predictors are predicting AI happening towards the end of their own life expectancy.” That means Kurzweil’s predictions aren’t necessarily wrong just because he conveniently predicts them to occur within his lifetime (because he isn’t).
