Since the creation of the first robots, humanity has considered the possibility of machines taking over the world. This prospect is being explored by Kevin Warwick at the University of Reading in England, and his research suggests that robots could take over the world. According to his current research, robots have been able to learn and to think creatively, though not as creatively as a human. The relationship we have with robots is a master-slave relationship, and this unusual idea of robots taking over the world would only come about if we allowed robots to be equal to us in stature and respect. However, it seems highly unlikely that robots would ever rise to a position from which they could take over the world, for two reasons. Firstly, Warwick’s idea only seems a possibility if robots have free will; if we limit their capabilities, they should never be in a position to take over the world. Secondly, robots would never reach the necessary level of power because the human workforce would resist losing its jobs to machines.
Robots in this century are limited in what they can do, and it is believed that some of these limitations will be permanent. It is these limitations that hinder a robot’s ability to gain dominance over human beings. They are explained by Daniel Wolpert, a Royal Society Research Professor in the Department of Engineering, in a Phys.org article. He states that ‘there is no machine that can identify visual objects or speech with the reliability and flexibility of humans.’ Furthermore, he goes on to connect these abilities to creativity when he states that ‘these abilities are precursors to any real intelligence such as the ability to reason creatively and invent problems.’ These insightful quotes suggest that robots cannot outsmart humans now, and will probably not outstrip us in creativity for many years. Moreover, even if robots do reach our level of intelligence, humans can place laws within their coding to prevent them from ever having absolute free will. This concept has already been explored in science fiction books such as I, Robot. In that book, Isaac Asimov formulated three laws:
1. A robot may not injure a human being, or through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws are what control robots in Asimov’s fictional future, and they could also be incorporated into the robots humans create now, to ensure that robots never enslave us. This concept is also supported by other experts such as Martin Rees, Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge. He states that “we should ensure that robots remain as no more than ‘idiot savants’ – lacking the capacity to outwit us.” Robots cannot...
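For illustration only, the idea of placing such laws within a robot’s coding could be sketched as a priority-ordered check on candidate actions. Everything below is a hypothetical toy model, not a real robotics API: the function names, the dictionary fields, and the idea of describing an action by a handful of boolean flags are all assumptions made for the sake of the example.

```python
# Hypothetical sketch: Asimov-style laws as a priority-ordered filter
# on a robot's candidate actions. All names here are illustrative.

def violates_first_law(action):
    # First Law: the action must not harm a human, and inaction must
    # not allow a human to come to harm.
    return action["harms_human"] or action["allows_harm_through_inaction"]

def violates_second_law(action):
    # Second Law: disobeying a human order is forbidden, unless obeying
    # the order would break the First Law.
    return action["disobeys_order"] and not action["order_conflicts_with_first_law"]

def violates_third_law(action):
    # Third Law: endangering the robot's own existence is forbidden,
    # unless self-preservation would break the First or Second Law.
    return action["endangers_self"] and not action["self_preservation_conflicts"]

def is_permitted(action):
    # An action is allowed only if it breaks none of the three laws;
    # the laws are checked in their strict priority order.
    return not (violates_first_law(action)
                or violates_second_law(action)
                or violates_third_law(action))

# A harmless, obedient action passes the check.
safe_action = {
    "harms_human": False,
    "allows_harm_through_inaction": False,
    "disobeys_order": False,
    "order_conflicts_with_first_law": False,
    "endangers_self": False,
    "self_preservation_conflicts": False,
}
print(is_permitted(safe_action))  # True
```

Even this toy version shows why such constraints argue against a robot takeover: the highest-priority law always overrides the robot’s self-interest, exactly as the essay suggests.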