February 5, 2018
This may be a speculative blog post from the technological perspective, but its behavioral background is solid; at least I believe so. The idea presented here is somewhat funny and very serious at the same time. The simple question I have on my mind is: how do we teach manners to AI? It is not only about a polite AI; the problem scales up to spheres of human behavior and culture as high as we can see.
Seeing only a gloomy AI future?
We have read and heard Elon Musk and Stephen Hawking painting a scary future of a potentially destructive AI that escapes our control and starts running wild. Some may think that ‘we can always pull the plug’ or that ‘AI has no will’. However, the recent false missile attack alarm in Hawaii made it clear to everyone how simple human errors in using technology can cause devastating effects. It was a wake-up call to me too, especially when I noticed the official reaction to the error. The person who made the mistake was fired, a strange solution to the catastrophe, while the designers of the UI go free, I assume.
In AI the risks can become much worse than in Hawaii, especially when human errors can trigger complex, difficult-to-follow chains of AI-based actions and the design of human-technology relationships is unfit to prevent this. The Hawaii example was extremely simple: the operator chose the wrong function from a simple, easily understandable set of alternatives. The mistake was taken seriously in the relevant organizations, but I have not seen the incident spark much discussion on how we should prepare for the coming of AI, where similar ‘human errors’ (actually they are design errors) will become possible.
In a recent panel discussion at FiRe 2017 (Future of Work for Humans and Machines), the participants Joseph Smarr from Google and David Brin from Future Unlimited seemed to agree that it will take some time, perhaps some years, before the risk of an AI running dangerously wild becomes real. However, they did not discuss the ways in which AI could start living its own dangerous life in the net already today. Brin did imagine, for example, an AI that would be able, and would have the chance, to scan all the movies in the net and learn whatever human behaviors are on display there. What it would learn from the movies would not always be the best of humanity, so we need to find human-controlled ways to teach the future AI manners and good behaviors. We should start today.
Making AI a better person
We must teach AI manners. It is no different from educating our children and showing them how to behave in different life situations. At the moment there is no unique, scalable way to achieve this for AI.
The FiRe 2017 panel discussion and some of the comments from the audience made me think about the following: how do we teach AI such manners and to do good, or, as someone from the audience suggested, even to nudge us to be better humans? AI cannot do anything like this unless it has a chance to learn behaviors that are good in nature, in some agreed-upon, human sense.
I have earlier introduced the concept of the Internet of Behaviors (IoB), where the idea is to introduce individual behavior data into the net and to make it (globally) available for a number of purposes, from health care, entertainment and education to marketing. The psychological thinking behind IoB is described here. It is like IoT, but the idea is to assign addresses to ongoing (or historical or fictional) behaviors, which makes it possible to address and follow such a behavior and everything physical, digital and virtual related to it. This would also make it possible to contact the person showing that specific behavior (in case he/she is willing to allow it; I will not deal with the privacy issue here).
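To make the analogy with IoT concrete, here is a minimal sketch of what an addressable behavior record might look like. Everything here is hypothetical: the `iob://` address scheme, the field names and the `BehaviorRecord` type are my own illustration, not an existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class BehaviorRecord:
    """A hypothetical IoB entry: one addressable instance of a behavior."""
    behavior_type: str                    # e.g. "greeting", "helping"
    context: str                          # the situation in which it occurred
    # a unique net address, by analogy with an IoT device identifier
    address: str = field(default_factory=lambda: f"iob://{uuid.uuid4()}")
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    # related physical, digital and virtual data linked to the behavior
    attachments: list = field(default_factory=list)

# Registering a behavior makes it addressable and followable in the net
record = BehaviorRecord(behavior_type="greeting", context="formal meeting")
print(record.address)  # a unique iob:// address for this behavior instance
```

The key design point the sketch tries to capture is that the address belongs to the behavior occurrence itself, not to the person, so related data can be attached and followed without exposing an identity.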
What if there were a systematic way to offer models of good behavior for AI to follow, to teach it behaviors we know and define as good? In many cases it would be easy to define the criteria and to use such behaviors as models for AI to follow and learn from. With an Internet of good Behaviors (IogB) approach we could offer AI access to behaviors (and companionships) we think are good for its development, just as we do for our children. By allowing this we would let it use all the relevant data related to that behavior and learn from it. It is quite possible we could learn from that too, but that is another matter.
Of course there is no IogB system as of yet, but the potential already exists and is in use wherever personal data is collected by various devices. We know that deep neural nets already learn from examples, but it is still some way from that to teaching them manners. IogB would take their learning to a higher level.
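As a toy illustration of "learning from examples", the fragment below scores a new behavior description by how much its words overlap with labeled good versus bad examples. This is a deliberately naive word-counting sketch, not a deep neural net; the labeled examples are invented, and a real IogB learner would of course be far richer.

```python
from collections import Counter

# Invented labeled examples of behavior descriptions (hypothetical IogB data)
good = ["thanked the host politely", "helped carry the bags",
        "shared food with others"]
bad = ["shouted rudely at the waiter", "pushed past the queue"]

def word_counts(examples):
    """Count how often each word appears across a set of examples."""
    counts = Counter()
    for text in examples:
        counts.update(text.split())
    return counts

good_counts, bad_counts = word_counts(good), word_counts(bad)

def score(text):
    """Positive score -> more similar to the 'good' examples (very naive)."""
    return sum(good_counts[w] - bad_counts[w] for w in text.split())

print(score("helped the host politely") > 0)  # True: resembles good examples
```

The point of the sketch is only that any learner, simple or deep, is shaped by the examples it is given access to, which is exactly the lever IogB would pull.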
My suggestion to the AI community is to start a trial within a well-defined AI context where we know the criteria for good behavior and where good manners are relevant. It can be as simple as being polite in certain cultural situations, or certain ways of speaking and interacting, or as ambitious as getting food and support to the poor, looking at the various ways people are already doing this globally, helping those in trouble all over the world. Only imagination sets the limits here.
To run such a trial, we should arrange for people to adopt a coding (addressing) system for their detailed behaviors, using a simple app and monitoring system. Of course they should be willing to reveal their behavior (without revealing their identities) to the specific AI we want to teach manners. A feasible coding system for such behaviors is needed; you can think of this as a process of addressing specific behaviors in the same way as objects are addressed in IoT. The addressed behaviors can be, for example, verbal or bodily expressions or emotional states, but they can also be physical or virtual transactions relevant to the specific entity of good behavior: practically anything related to human internal or external behavior. My main point is that the occurrence of an addressed behavior makes it available in the net, and the AI can then use it as learning data. There is much to do here to build a ‘Teach manners to AI’ framework.
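A minimal sketch of what such a coding step might look like, under heavy assumptions: the taxonomy codes (e.g. "POLITE.GREETING"), the field names and the pseudonymization scheme are all invented for illustration. Note also that simply hashing a low-entropy identifier is not strong anonymization in practice; a real trial would need proper privacy engineering.

```python
import hashlib
import json

def encode_behavior_event(user_id: str, behavior_code: str, payload: dict) -> str:
    """Encode one behavior occurrence for the net, hiding the person's identity.

    behavior_code is a hypothetical entry from an agreed taxonomy,
    e.g. "POLITE.GREETING" or "HELP.FOOD_DONATION".
    """
    # Replace the identity with a pseudonymous handle (illustration only;
    # a bare hash of an identifier is not robust anonymization)
    anon = hashlib.sha256(user_id.encode()).hexdigest()[:16]
    event = {"who": anon, "what": behavior_code, "data": payload}
    return json.dumps(event)

msg = encode_behavior_event("alice@example.com", "POLITE.GREETING",
                            {"channel": "speech", "culture": "fi"})
print("alice" not in msg)  # True: the identity does not appear in the event
```

Once encoded this way, each occurrence of a coded behavior becomes a piece of learning data the AI can consume, which is the core of the trial proposed above.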
We can imagine the huge scope and scale of the approach by considering all possible contexts of good human behavior, from documents and movies to real, ongoing human behaviors. Then there is the scary counterpart: an Internet of bad Behaviors. It is possible that we cannot stop it unless we teach good behaviors first, and even that may not be enough. Without going deeper into this, I see a real and important possibility for building and educating a humane AI. We have time to do this. In the IoB blogs I have explained the background of this behavioral approach in more detail.