A friend of mine was angry at people who fear AI or robots taking over the world. He said this is nearly impossible. I immediately made a gut estimate of how likely such an "AI apocalypse" is. My first guess… a 99% chance. Then I thought of some arguments for and against such an event. Here are a few:
1. Murphy's Law: As time progresses (over centuries, perhaps), technological developments will accumulate, and (if we don't destroy ourselves in some war) nearly any technological achievement that can happen will happen.
Remark: The subordinate clause "if we don't destroy ourselves in some war" may be the best argument against a 99% chance.
2. Egoism: One could argue that people may anticipate the danger of a smarter AI, BUT: if someone (a state, a person, a company) can gain an advantage by creating an AI smarter than humans, they will do so, without concern for the later effects on the world. Their expected advantage may well be much larger than their personal risk.
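The asymmetry behind the egoism argument can be sketched as a toy expected-value comparison. All numbers below are made-up illustrations, not estimates; the point is only that a small perceived probability of disaster, combined with a capped personal loss, can leave the private expected payoff positive.

```python
# Toy expected-value sketch of the "egoism" argument.
# Every number here is an illustrative assumption, not an estimate.

def expected_payoff(gain, p_catastrophe, personal_loss):
    """Private expected payoff for an actor deploying a risky AI system."""
    return (1 - p_catastrophe) * gain + p_catastrophe * (-personal_loss)

gain = 100.0          # assumed private upside (arbitrary units)
p_catastrophe = 0.01  # assumed perceived chance of disaster
personal_loss = 50.0  # assumed personal loss, capped at what the actor owns

# The global loss from a catastrophe is not capped, but it does not
# appear in the actor's private calculation at all.
print(expected_payoff(gain, p_catastrophe, personal_loss))
```

Because the actor's loss is bounded while the world's is not, the private calculation can recommend deployment even when the social calculation would not.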
3. Increasing computer speed: In the unlikely event that we are simply unable to design an AI as general as our brain, we could, given enough computing power, just simulate most of our brain, which is equivalent to building an AI as general as our brain.
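How long "enough computing power" might take can be sketched with a back-of-envelope compounding calculation. Both the target figure for brain simulation and the doubling period below are loose assumptions chosen only to show the shape of the argument: under steady exponential growth, even a thousandfold gap closes within a few decades.

```python
# Back-of-envelope: years until hardware reaches an assumed
# "brain-simulation" compute level, given a steady doubling period.
# The target figure and the doubling period are both assumptions.
import math

current_flops = 1e18   # assumed order of magnitude for today's largest machines
target_flops = 1e21    # assumed compute needed to simulate most of a brain
doubling_years = 2.0   # assumed Moore's-law-like doubling period

doublings_needed = math.log2(target_flops / current_flops)
years = doublings_needed * doubling_years
print(round(years, 1))  # roughly two decades under these assumptions
```

Changing the assumed target by a factor of 1000 shifts the answer by only about 20 years, which is why exponential growth makes the "not enough compute" objection a question of when rather than whether.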
…to be continued