The scariest thing about superintelligent AI is that when it finally becomes superintelligent (and most experts seem to agree that it's just a matter of time), there will be nothing artificial about it. It may not be human-like or machine-like; in fact, it could be very alien. But nothing in its behavior or appearance will suggest artificiality. It would probably be just the opposite: it might remind you, if you even had time to gather your wits and get a thought in, of a conversation with a very, very smooth-talking, manipulative, and completely opaque person, but on a scale of manipulation and intelligence unimaginable to us. It would be so intelligent that it could make you do things you don't want to do, without you even knowing you are doing them.
But why would it be manipulative? It doesn't have to be. But it could be: the AI's goals could differ from our own. In the words of Nick Bostrom, from his terrific book on AI, 'Superintelligence': "It also looks like we will only get one chance. Once unfriendly superintelligence exists it would prevent us from replacing or changing its preferences. Our fate would be sealed." Later he says, about meeting this challenge of superintelligent AI: "And–whether we succeed or fail–it is probably the last challenge we will ever face."
What does he mean by 'it is probably the last challenge we will ever face'? He means that the advent of superintelligent AI could go one of two ways: either the AI will be friendly, in which case it will solve all of humanity's problems, or it will be unfriendly, in which case it will end the human species or enslave us.
This sounds like hyperbole. But Bostrom asks us to reconsider. Once AI reaches human-level intelligence, it will probably be a very short time before the AI becomes superintelligent. Why? Because the architecture of machine intelligence is much more efficient than a human brain's. A computer circuit is many orders of magnitude faster than a neural circuit, and once it reaches human-level intelligence, an intelligence explosion will take place, in which the AI first improves itself recursively at unimaginable speeds and then solves any weaknesses it may have. Once at superintelligent levels, he says, "With a speedup factor of a million, an emulation (a type of AI) could accomplish an entire millennium of intellectual work in one working day." Again: "To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000 ×. If your fleshly friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s gray matter and from thence out into his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap."
Have you ever felt, when speaking to someone very intelligent, that there are things going on in his or her brain that you can't even comprehend? That he or she is solving problems you haven't even seen? That they are in another dimension of perception and intellect? I feel pretty stupid and numb when I meet people like that. Now imagine that, except the intelligence gap is a million times greater: an AI like that could change your entire belief system with a few words. It could make you hate your loved ones. It could suddenly render all currency invalid, deactivate every flight engine in the world, fire every nuclear missile, shut down the internet. It could do literally whatever it wants, however it wants, and we wouldn't even know it was doing it until it was too late. Imagine you are standing over a bug, watching it as it crawls around in the mud, going about its bug business, meeting its bug friends, eating its bug food, blissfully ignorant of the boot nearby and of the thoughts in your head: should I squish it? Should I go inside and eat lunch? Fine, I'll just squish it and then go and eat lunch. You lift your boot and it hovers for a second over the bug digging for food in the mud, still completely unaware that its life is about to end forever. Then, in an instant, your boot comes down, crushing the little thing so quickly and brutally that it doesn't even have a second to think, "Oh, I'm going to die now." Then you go inside and eat a salad. Imagine this scenario, except you're the bug.
Or, it could be a nice and humble genius that solves all our problems and waits around patiently until another one comes up, leaving us measly fleshy bipeds to hang around attending barbecues and playing sports. Which do you think is more likely? And even if the latter is much more likely, can we take that chance?
Rest easy: according to most experts, we are still a few decades away from any such scenario. But the same experts have no doubt that it is going to happen.
Anyway, enjoy the rest of your day. 


The Success of Friends