The possibility of neuroevolution opens the door to this story in Science:
A small group of researchers is studying how science could destroy the world—and how to stop that from happening
Philosopher Nick Bostrom believes it’s entirely possible that artificial intelligence (AI) could lead to the extinction of Homo sapiens. In his 2014 bestseller Superintelligence: Paths, Dangers, Strategies, Bostrom paints a dark scenario in which researchers create a machine capable of steadily improving itself. At some point, it learns to make money from online transactions and begins purchasing goods and services in the real world. Using mail-ordered DNA, it builds simple nanosystems that in turn create more complex systems, giving it ever more power to shape the world.
Now suppose the AI suspects that humans might interfere with its plans, writes Bostrom, a philosopher at the University of Oxford in the United Kingdom. It could decide to build tiny weapons and distribute them around the world covertly. “At a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe.”