A friend of mine works with puppies being trained as guide dogs for the blind. She had her daughter help her choose a pup’s name. “What about Jarvis?” she suggested. When asked why, the girl seemed astounded that her mother hadn’t already made the connection. “You know. After Tony Stark’s J.A.R.V.I.S. He looks out for Tony. Even though he does what Tony tells him to, there are times he won’t, but he does it for Tony’s own good.” I had to chuckle at how readily today’s generation understands artificial intelligence and sees how it can be applied to help humankind. It made me wonder, can intelligent disobedience in IoT work similarly?
Intelligent partnerships can increase safety
Guide dogs are trained to obey their master’s commands. However, intrinsic to a seeing eye dog’s training is a behavior termed “intelligent disobedience.” Intelligent disobedience occurs when a service animal goes against the owner’s instructions in an effort to make a better decision. The behavior is built into their training and is essential to a service animal’s success on the job.
Imagine, for instance, a blind man who walks the same path every day. He is familiar with the points where he needs to cross the street. His heightened senses of hearing and smell help him gauge things like traffic and crowds, so he can make appropriate decisions. Not everything can be foreseen, however. A hazard he has never encountered may await him around a corner. The dog, which sees what its owner cannot, perceives the danger and disobeys the command to move forward.
Intelligent disobedience: Intelligent disobedience stems from the concept of teaching a guide dog to go directly against the owner’s instructions if the commands given risk the owner’s safety.
When the IoT system knows more than you do
This sort of behavior may be similarly beneficial in the world of IoT, with its AI and machine learning. Intelligent systems should be designed to push back on instructions when they are not in the best interests of the job at hand — particularly when there is danger involved.
Sensors act as the nervous system of an IoT implementation, collecting continuous streams of data and feeding them to computers that process the data faster than any human can. The intelligence derived through an IoT system therefore has a broader view than any individual operator. It may perceive danger sooner than an operator can and determine that precautionary measures are needed. While the operator tells it to do one thing, the system "knows" that doing so is too dangerous. This is when intelligent disobedience can kick in, alerting the worker that the original instructions will have negative consequences under the current conditions.
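The refusal pattern described above can be sketched in a few lines of code. Everything here is hypothetical: the sensor names, thresholds, and the toy risk heuristic are invented for illustration, and a real system would derive them from its own telemetry and trained models.

```python
# Hypothetical sketch of intelligent disobedience in an IoT controller.
# Sensor names, thresholds, and the risk rule are illustrative only.

def assess_risk(readings: dict) -> float:
    """Combine sensor readings into a 0..1 risk score (toy heuristic)."""
    temp_risk = max(0.0, (readings["temperature_c"] - 80.0) / 40.0)
    vib_risk = max(0.0, (readings["vibration_mm_s"] - 7.0) / 7.0)
    return min(1.0, max(temp_risk, vib_risk))

RISK_THRESHOLD = 0.6

def execute_command(command: str, readings: dict) -> str:
    """Refuse a command when the sensed risk is too high, and say why."""
    risk = assess_risk(readings)
    if risk >= RISK_THRESHOLD:
        return (f"REFUSED: '{command}' blocked at risk {risk:.2f} "
                f"(threshold {RISK_THRESHOLD}); alerting operator")
    return f"EXECUTED: '{command}' at risk {risk:.2f}"

print(execute_command("increase_speed", {"temperature_c": 72, "vibration_mm_s": 3}))
print(execute_command("increase_speed", {"temperature_c": 110, "vibration_mm_s": 9}))
```

The key design point is that the refusal carries an explanation, so the operator learns why the system pushed back rather than simply seeing a command fail.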
The override switch
Of course, there will be times when the instructions need to be followed in spite of the system's pushback. Imagine again that the blind man comes around the corner and the dog notices two large, unfamiliar men standing in the way. It stops its master from proceeding forward.
However, when one man speaks, the blind man recognizes the sound of his brother-in-law's voice and reassures his dog that the man is friendly. In this way, the intelligent disobedience has been overridden. Similarly, the operator may know something that the IoT system has no way of knowing. An escalation procedure should be in place so that the worker can countermand the system's challenge and proceed with the original instructions.
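An escalation path like this can be modeled as an explicit second step: the system challenges the command, then accepts it only when the operator attaches an override. This is a hedged sketch, with invented names and thresholds, of how such a countermand might look.

```python
# Hypothetical escalation sketch: a challenged command can be countermanded
# by an operator who explicitly accepts responsibility for the override.

RISK_THRESHOLD = 0.6

def execute(command: str, risk: float, operator_override: bool = False) -> str:
    if risk >= RISK_THRESHOLD and not operator_override:
        # Intelligent disobedience: refuse and ask the operator to confirm.
        return f"CHALLENGED: '{command}' (risk {risk:.2f}); override required"
    if risk >= RISK_THRESHOLD and operator_override:
        # The operator knows something the system does not; log and proceed.
        return f"OVERRIDDEN: '{command}' executed despite risk {risk:.2f}"
    return f"EXECUTED: '{command}'"

print(execute("open_valve", risk=0.8))
print(execute("open_valve", risk=0.8, operator_override=True))
```

Requiring the override to be explicit, rather than silently retrying, leaves an audit trail showing that a human took responsibility for the decision.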
Intelligent disobedience in action
At Red Hat, we created an Industry 4.0 demo with our partners, Cloudera and Eurotech, featuring a predictive maintenance application. In it, historical and real-time data are fed through a data hub for analysis, modeling and machine learning. New business rules can be established based on this data, and machine learning models can be executed at the edge to solve problems and react to unpredicted events. Models like this can be used to statistically analyze and predict when a machine may fail and when to service it, and even push back if something is amiss.
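The sketch below is not the demo's code; it only illustrates the general shape of an edge-side predictive check, in which a rolling average of a health metric drifting past a limit triggers a service recommendation. The metric, window size, and limit are all invented; in practice the limit would come from a trained model.

```python
# Illustrative edge-side predictive maintenance check (not the actual
# Red Hat/Cloudera/Eurotech demo): advise service when a rolling average
# of a health metric drifts past a learned limit. Numbers are invented.

from collections import deque

class EdgeMonitor:
    def __init__(self, window: int = 5, limit: float = 85.0):
        self.samples = deque(maxlen=window)
        self.limit = limit  # would come from a trained model in practice

    def observe(self, temperature_c: float) -> str:
        self.samples.append(temperature_c)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.limit:
            return f"SERVICE ADVISED: rolling avg {avg:.1f}C exceeds {self.limit}C"
        return f"OK: rolling avg {avg:.1f}C"

monitor = EdgeMonitor()
for t in [70, 74, 78, 90, 120]:
    status = monitor.observe(t)
print(status)  # the upward drift pushes the average past the limit
```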
Are digital twins the best way to implement this?
As systems become more intelligent, they reach a point where the user or administrator should integrate them into the decision-making process, so that decisions are made in consultation with them. Taking this feedback into account before acting can help ensure better decisions all around. One way to do this is through a virtual factory, essentially a software representation of a machine, which can serve both as a testbed and as a way to queue up changes.
Eclipse Kapua offers digital twins that do this. When a device is unavailable, its twin presents the last known state, a virtual representation of the physical device. Users can query the state of operation, ask for measurements, and even send changes to the device. But the twin does not comment on the effect of those changes.
Another concept is a virtual copy of an engine that the user can test changes against. It employs AI and machine learning to predict what would happen if the changes were made, before they are deployed to the physical machine.
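The test-before-deploy idea can be sketched as a simple loop: apply the change to the virtual copy first, evaluate the predicted effect, and deploy to the physical machine only if the prediction stays within limits. The twin "physics" below is a toy linear stand-in, not Eclipse Kapua's API or a real simulator.

```python
# Hypothetical test-before-deploy loop against a digital twin.
# The twin's "physics" is a toy linear model, invented for illustration.

class EngineTwin:
    """Virtual copy of an engine that predicts the effect of a change."""
    def __init__(self, rpm: int, temperature_c: float):
        self.rpm = rpm
        self.temperature_c = temperature_c

    def predict(self, new_rpm: int) -> float:
        # Toy assumption: temperature rises ~1C per extra 100 rpm.
        return self.temperature_c + (new_rpm - self.rpm) / 100.0

def safe_to_deploy(twin: EngineTwin, new_rpm: int, max_temp: float = 95.0) -> bool:
    """Deploy only if the twin predicts the engine stays within limits."""
    return twin.predict(new_rpm) <= max_temp

twin = EngineTwin(rpm=3000, temperature_c=80.0)
print(safe_to_deploy(twin, 4000))  # predicted 90.0C -> True
print(safe_to_deploy(twin, 6000))  # predicted 110.0C -> False
```

The value of the pattern is that a rejected change costs nothing but a simulation run, whereas deploying it blindly could damage the physical machine.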
There are few products on the market today that do the latter. For the industry to reach a state where this can be done more broadly, open standards should be in place so that information can be shared. To create the complex models required, there should be standards for simulating these devices in software. A drive toward standardization around the digital twin model is one potential way of getting there.
I wonder if it should be named J.A.R.V.I.S.