Should People Make Intelligent Robots?

The case concerns a robotic boy who is indistinguishable from human boys. He is designed to experience human-like emotions, which implies that he needs caretakers. The boy is adopted by a couple who later abandon him. The case is complex because the robot is a computer artifact that shares human attributes. The indication that the robot experiences emotions implies that it is conscious of the people around it and, especially, of the actions that affect its well-being. Although it is a computer artifact, the robot can take part in conversations and social interactions. After being abandoned by the parents who adopted him, the robotic boy feels betrayed and neglected, just as a human child would. Did his parents do anything wrong by abandoning him?

The central stakeholders in this case are the people who designed and created the artificial boy, the boy himself, and the couple who adopts him. Other stakeholders include the society in which the boy lives and the mediators of his sale. This analysis will focus on the manufacturer, the artificial boy, and his adopters.

The technical problem in this case is finding a caretaker for the artificial boy. Although the boy is a robot that can be dismantled, he is indistinguishable from other boys, meaning that people who see him will assume he is alive. Additionally, he is designed with the capability to experience emotions. Emotions cause people to take actions that can have either a positive or a negative impact on their society. If the artificial boy takes an inappropriate action, somebody needs to be accountable for his deeds. He needs to be taught to cope with emotions, assuming that he has the ability to learn new behaviors from his experiences.

Although the artificial boy finds a couple to adopt him, his caretakers later abandon him. The abandonment raises an ethical problem. For one, the boy's actions are unpredictable. Because of the emotional void created after his parents abandon him, he may harm those around him. Secondly, the fact that this boy is capable of participating in life by processing what people say and contributing to their discussions makes him 'alive' to some extent. Some would therefore argue that abandoning him is equivalent to neglecting any other child.

The existence of the boy as a computer artifact is one perspective from which the case can be argued. He was designed by humans and runs on software, which makes him a computer artifact regardless of his human attributes. If he becomes unbearable to his caretakers and to society at large, the best course would be to dispose of him properly by dismantling him. Moral responsibility is another factor that can be used to argue the case. The couple who adopted the boy is answerable for his behavior; ethically, people who buy a computer artifact are responsible for its use. The adopters should therefore explore better ways of solving the problem instead of abandoning him. If the sociotechnical systems paradigm is used to assess the case, the manufacturers are the ones to blame. The manufacturers should have foreseen the complexity of interacting with a robot that has emotions and, to uphold ethical standards, avoided building it.

The first rule states that people who design a computer artifact are morally responsible for its foreseeable effects. Additionally, those who deploy it as part of a sociotechnical system share responsibility for the artifact's effects. Under this rule, both the designers and the adopters of the robot bear blame for the crisis. The designers could solve the problem by recalling the robot and dismantling it completely, or by removing the component that gives it emotional responsiveness. The adopters need to devise strategies that ensure the boy does not disturb them or the surrounding society; they may need to return him to the manufacturer for the necessary modifications. The second rule states that everybody involved in designing and deploying a computer artifact is responsible for its outcome. The responsibility for solving the problem should therefore be shared by all stakeholders, who need to collaborate and agree on a solution instead of blaming one another. The third rule states that people who use a computer artifact are morally responsible for that particular use. The parents therefore bear a heavier share of the blame for abandoning their robotic boy. Before adopting him, they should have learned about his behaviors and capabilities to reassure themselves that they could live with him comfortably.

The solutions proposed for the technical and ethical problems in this case can withstand criticism for several reasons. The first solution, in which the adopters are advised to return the artificial boy to his creators for dismantling, is the most feasible of the approaches. It could be argued that even humans must die at some point, particularly when it becomes difficult for society to bear with them. As long as the artificial boy is allowed to continue existing, he will create more problems, because his ability to process emotion makes him unpredictable. His creators can decide whether to dismantle him completely or to modify his design so that he can no longer 'feel'.

I would recommend that manufacturers of robots avoid creating machines with unpredictable behavior. All computer artifacts should have foreseeable outcomes so that problems are easy to solve. If creating an unpredictable artifact is unavoidable, the manufacturer needs a contingency plan that outlines the process of disposing of the artifact. I would consider it unethical to create a robot that processes emotions and interacts with conventional society.