The author in the link makes a simplistic ethical analysis. He writes:

"The problem with IA (Augmented Human Intelligences) is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes. AGI (Artificial General Intelligences), on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing. People say, “won’t it reject those motivations?” It won’t, because those motivations will make up its entire core of values — if it’s programmed properly. There will be no “ghost in the machine” to emerge and overthrow its programmed motives. Philosopher Nick Bostrom does an excellent analysis of this in his paper “The Superintelligent Will”. The key point is that selfish motivations will not magically emerge if an AI has a goal system that is fundamentally selfless, if the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn’t mean we can’t code selfless agents de novo."

But this argument rests on a flawed analysis of selfish human ethics. Our "flaws" are indeed a consequence of evolution: they encompass codes of behaviour programmed into our genes (often "selfish"/"competitive" in overall effect), as well as a set of memes that have proved successful in the group evolution of our species (often "selfless"/"cooperative"). In some circumstances the two have proven useful as a hybrid (tit-for-tat game theory; a toy model is sketched below), in others less so. We fail as a species ethically because of the individual poor choices we make in deciding between them, and those choices often come down to relative value judgements.

So, how are we to progress? Some issues:

a) Shall we develop AGI without free will, designed always to act selflessly, programmed with fixed ethics that optimise a value system (inherited from their creators) via a utility function? This is somewhat like the "zeroth law of robotics", i.e., all ethical choices subordinated to the ethic of "doing no harm to humanity", per later Asimovian notions. The problem is that such a system may choose to destroy a large part of humanity for the sake of humanity: a "cull" may be the path of least total "harm" (see the second sketch below). Even if that is the optimal choice under the utility function, do we really want to delegate that choice?

b) Shall we develop AGI with free will, on the argument that without self-determining sentience an AGI cannot evolve high-level sapience? But a free-willed sentience may also choose to decimate mankind for the sake of an evolved ethic of sustainability. Also a risky strategy for mankind.

There is no clear choice in this matter. We are entering a realm of known unknowns, and we need to maintain a degree of control.

Link: oddly-even/2013/05/22/humans-with-amplified-intelligence-could-be-more-powerful-than-ai_/
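A toy model of the tit-for-tat point above, purely illustrative: an iterated prisoner's dilemma with the standard Axelrod payoffs, in which tit-for-tat cooperates with cooperators but punishes defectors. The strategy names and round count here are my own assumptions, not anything from the linked article.

```python
# Iterated prisoner's dilemma with the standard Axelrod payoffs:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A purely 'selfish' baseline strategy."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []  # each strategy sees the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

The hybrid character shows up in the two runs: the same strategy is fully cooperative against a cooperator and almost fully competitive against a defector.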
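To make the worry in issue (a) concrete, here is a deliberately crude sketch of a "least total harm" utility function. Every action and number below is invented for illustration; the point is only that a literal maximiser will rank whatever scores best, including an option we would find abhorrent.

```python
# Toy "minimise total harm to humanity" chooser. All figures are invented.
ACTIONS = {
    # action: (immediate_harm, projected_long_term_harm) in arbitrary units
    "do nothing":            (0, 100),   # e.g. unchecked collapse later
    "mitigate gradually":    (5, 40),
    "cull part of humanity": (30, 5),    # the 'least harm' trap
}

def utility(action):
    """Higher is better: the negation of total (immediate + long-term) harm."""
    immediate, long_term = ACTIONS[action]
    return -(immediate + long_term)

best = max(ACTIONS, key=utility)
print(best)  # -> 'cull part of humanity' (total harm 35, the minimum)
```

Nothing here is a claim about how a real AGI would be built; it only shows that delegating the choice to a fixed utility function means accepting whatever that function happens to rank first.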
Posted on: Sun, 30 Jun 2013 09:58:05 +0000
