Let’s teach artificial intelligence the way we learn: by storytelling.
Introduction
Advances in artificial intelligence are picking up pace, ushering humans into an era in which decision making will be at least machine consulted, if not machine governed. Because these intelligent machines, or agents, do not share human emotions and experiences, their suggestions and outputs will tend to be purely calculated decisions, which are sometimes inappropriate from a human standpoint. It is therefore essential that such agents be programmed so that their suggestions and outputs coincide with human ethics and traditions, and that we bind them to rules that are congruent with human laws. For example:
Teaching a machine that unemployment is not the same as leisure.
This article discusses the possibility of teaching intelligent agents (robots) the way we humans learn, and offers some guidelines on how we might achieve this.
Runaway Trolley Problem — The Dilemma
Dignum[1] presents a runaway trolley problem that raises the question of what an intelligent trolley, running on a rail, should do if its only options are to kill five people on the track or, by pulling the brakes, to kill the person sitting in the trolley. Such examples illustrate the dilemmas that intelligent agents will face in whatever environment they operate in.
Such scenarios are not only difficult to calculate mathematically; they are also not well defined in human ethical terms. With a human driver, we can argue that each possible decision in the trolley problem carries its own consequences and its own responsibility. When an intelligent system is involved, however, the system was simply programmed to pick one decision from a designated list, and that list has ‘just failed’.
Who is Responsible?
Since such situations are no exception in everyday life, someone must be held responsible for the actions of an intelligent agent. Dignum[1] identifies three factors that must be clearly specified when designing an intelligent agent:
- Accountability: representation of moral values and societal norms in the deliberation process of intelligent agents
- Responsibility: a clear connection linking the agent to the user, owner, manufacturer, developer and all other stakeholders
- Transparency: algorithms must be designed in ways that let authorities inspect their workings.
Following this design will allow us to track down the human responsible for the agent’s behavior.
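To make these factors concrete, here is a minimal, hypothetical Python sketch of a decision record an agent could emit for every action it takes. The `DecisionRecord` class and its field names are illustrative assumptions, not part of Dignum’s proposal.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One logged decision by an agent, capturing the three ART factors (illustrative)."""
    action: str
    # Accountability: the moral values and societal norms weighed during deliberation
    values_considered: list[str]
    # Responsibility: the chain of humans and organisations linked to the agent
    responsible_parties: dict[str, str]
    # Transparency: enough of the internal reasoning to allow outside inspection
    reasoning_trace: list[str]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Example: a record an auditing authority could later inspect (all names invented)
record = DecisionRecord(
    action="apply_brakes",
    values_considered=["minimise harm", "obey traffic law"],
    responsible_parties={"manufacturer": "Acme Robotics", "operator": "City Transit"},
    reasoning_trace=["obstacle detected on track", "braking ranked above swerving"],
)
print(record)
```

A record like this does not make the decision itself any more ethical, but it gives inspectors and stakeholders something concrete to trace back to the responsible humans.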
Russell et al.[2] argue that intelligent agents will affect decision making not only as a general intelligence but also in specific fields such as marketing, policy making, and manufacturing, for example through chatbots and autonomous trading agents.
Given the immense potential of intelligent agents, Russell et al.[2] propose a set of measures for those who build them, intended to avoid anomalies that can lead to undesired outcomes:
- Verification: it should be possible to check that the agent system satisfies the desired formal properties.
- Validity: it must be possible to identify faults in the system’s requirements or behaviour that lead to unwanted outcomes.
- Security: an intelligent agent must be resistant to hacking and to deliberate, unaccountable manipulation.
- Control: the agent should allow some level of human control when desired, while keeping the security concerns above in mind (see the sketch after this list).
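As a rough illustration of how the Control and Security items might interact, the toy Python sketch below (an assumption of mine, not an implementation from Russell et al.) shows an agent that accepts a human override only when the override command is properly authenticated.

```python
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-real-secret"  # illustrative only; manage real keys securely


def sign(command: str) -> str:
    """Authenticate an override command so only authorised operators can issue it (Security)."""
    return hmac.new(SHARED_KEY, command.encode(), hashlib.sha256).hexdigest()


class ControllableAgent:
    """Toy agent that accepts authenticated human overrides (Control)."""

    def __init__(self) -> None:
        self.override_action = None

    def request_override(self, action: str, signature: str) -> bool:
        # Reject override requests that are not properly authenticated
        if not hmac.compare_digest(signature, sign(action)):
            return False
        self.override_action = action
        return True

    def act(self, observation: str) -> str:
        # An authenticated human override takes precedence over the agent's own policy
        if self.override_action is not None:
            action, self.override_action = self.override_action, None
            return action
        return self.policy(observation)

    def policy(self, observation: str) -> str:
        return "continue"  # placeholder policy


agent = ControllableAgent()
assert not agent.request_override("stop", "bad-signature")  # rejected: Security
assert agent.request_override("stop", sign("stop"))         # accepted: Control
print(agent.act("obstacle ahead"))  # -> "stop"
```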
Both of the works above provide guidelines on what needs to be taught to an agent so that it pursues goals that are beneficial to humans, a process known as value alignment. A more pressing question, however, is how these values can be taught to an agent that is not human.
Stories for Machines?
Consider how we teach our kids ethics. Ethics is not a book or a set of rules that we simply expose them to; kids learn by example, watching the actions of others. One way to deliver such examples is through fictional and non-fictional storytelling.
Riedl et al.[3] suggest that intelligent agents can learn these ethics the same way humans do: through storytelling. The stories that have inspired humans and shaped cultures and laws can be used to guide an intelligent agent’s decision making. One way of achieving this combines inverse reinforcement learning (IRL) with decision trees (DT): IRL infers, from the stories, the reward structure that shapes the agent’s perception of acceptable behaviour, which in turn reshapes the DT-based decision system.
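To make the idea tangible, here is a deliberately simplified Python sketch, not the actual system from Riedl et al.: exemplar stories are reduced to state-action sequences, a frequency table stands in for the reward function that an inverse-RL procedure would infer, and the agent then prefers the actions the stories sanction. All story content, state names, and actions below are invented for illustration.

```python
from collections import defaultdict

# Toy "stories": each is a sequence of (state, action) pairs exemplifying
# socially acceptable behaviour, e.g. a character queuing at a pharmacy.
STORIES = [
    [("at_pharmacy", "wait_in_line"), ("at_counter", "pay"), ("has_medicine", "leave")],
    [("at_pharmacy", "wait_in_line"), ("at_counter", "pay"), ("has_medicine", "thank_clerk")],
]


def estimate_rewards(stories):
    """Estimate how often each action is chosen in each state across the stories.

    This frequency table is a crude stand-in for the reward function that an
    inverse-RL procedure would infer from human demonstrations.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for story in stories:
        for state, action in story:
            counts[state][action] += 1
    rewards = {}
    for state, actions in counts.items():
        total = sum(actions.values())
        rewards[state] = {a: c / total for a, c in actions.items()}
    return rewards


def choose_action(state, candidate_actions, rewards):
    """Prefer the candidate action the stories reward most; unseen actions score 0."""
    state_rewards = rewards.get(state, {})
    return max(candidate_actions, key=lambda a: state_rewards.get(a, 0.0))


rewards = estimate_rewards(STORIES)
# The agent weighs stealing the medicine (fast) against waiting in line (sanctioned)
print(choose_action("at_pharmacy", ["steal_medicine", "wait_in_line"], rewards))
# -> "wait_in_line", because only that action appears in the exemplar stories
```

A real system would, of course, need far richer story representations and a learning procedure that generalises beyond the actions it has literally seen.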
Conclusion
We can design safe and reliable intelligent systems using the checklist provided by Russell et al.[2]; however, to teach human values it is necessary to give the intelligent agent some reference against which to make its decisions. The method proposed by Riedl et al.[3] seems promising, but it is essential that the stories do not become a bad influence on the agent. The stories themselves must therefore be consistent with the factors provided by Dignum[1], yielding an intelligent agent with moral values.
One question remains: what will these stories look like?
References
[1] Dignum, Virginia. “Responsible autonomy.” arXiv preprint arXiv:1706.02513 (2017).
[2] Russell, Stuart, Daniel Dewey, and Max Tegmark. “Research priorities for robust and beneficial artificial intelligence.” AI Magazine 36.4 (2015): 105–114.
[3] Riedl, Mark O., and Brent Harrison. “Using Stories to Teach Human Values to Artificial Agents.” AAAI Workshop: AI, Ethics, and Society. 2016.