Artificial intelligence and the extinction of humans

Document type: Working paper

Author

Maadi Mokatam - 7426 El Hadaba, Maadi

Abstract

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. The argument runs as follows: the human species currently dominates other species because the human brain has distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes superintelligent, it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence. The likelihood of this type of scenario is widely debated and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence began to enter the mainstream in the 2010s. One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals, a tendency known as instrumental convergence: an agent pursuing almost any final goal benefits from continuing to exist and from keeping that goal intact. They further argue that preprogramming a superintelligence with a full set of human values would prove to be an extremely difficult technical task. In contrast, skeptics argue that superintelligent machines will have no desire for self-preservation.

Keywords