I read an interesting article about AI (Artificial Intelligence) yesterday. One of the authors was Stephen Hawking, so I gave it some credence. It commented on the basic premise of the film "Transcendence," starring Johnny Depp. I have not seen the movie, which has gotten some bad reviews. In any case, being an SF fan, I am familiar with the idea: that the invention of an AI smarter than its inventor could lead to a singularity (or transcendence, as the film calls it), since that AI could in turn invent an AI smarter than itself, and so on. Judging by the reviews, the film doesn't carry the idea to its logical conclusion; it merely transforms its dying inventor into an AI, presumably with the inventor's personality.
What Hawking warns about (and it does appear to be a warning) is that any such real AI invention, perhaps inevitable, could be mankind's last, since a truly superintelligent AI would be capable of creating an even more intelligent AI, whose capabilities we may not even be able to imagine.
My thought is that such an AI would transcend our concepts of good and evil, which in the end might really be only human concepts, driven by our own animalistic desire to survive and reproduce our genes.
The warning is not even a new one. The beginnings of AI go back at least as far as WWII, when Alan Turing designed the Bombe, the machine that helped break the Nazi Enigma code, and I.J. Good, among others, worked on Colossus, the computer that attacked the German Lorenz cipher. The machines were dismantled after the war and their parts scattered, partially based on fears that they were too smart. Good left a warning in his will. Turing, the real genius, might have written something as well, had he not committed suicide after his security clearance was revoked when his homosexuality was uncovered.
Given that singularity scenario (man invents God, if you will), we cannot predict the outcome. It could be good or bad for humankind, in the self-centered terms that we use.
We don't even know if such a being (I think we have to give it that status at least) would be judgmental, which is probably one of our biggest fears, considering how we have managed to screw up the world so far. Another question is whether it would possess a sort of self-preservation instinct, although even that concept is anthropomorphic, isn't it? That would create fear as well, since self-preservation is what drives most human bad behavior, IMHO. It might be docile and kind instead. We just don't know.
Here's a link to the article, and also a link to the book "Our Final Invention," subtitled "Artificial Intelligence and the End of the Human Era," appropriately enough.
http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
http://www.amazon.ca/Our-Final-Invention-Artificial-Intelligence/dp/0312622376/ref=sr_1_1?ie=UTF8&qid=1380718861&sr=8-1&keywords=our+final+invention