Meet the Researcher: Shiwali Mohan Talks About Human-Aware AI Systems
Shiwali Mohan is an AI researcher with an interest in the design of interactive, communicative agents. Her research spans artificial intelligence, agent design and learning, cognitive systems and architectures, and psycholinguistics.
Shiwali, please tell us about your research at PARC.
My research focuses on what I’m calling human-aware AI systems. Traditionally, AI algorithms and systems are built without any reasoning about the people who use them. Because they are not designed with humans in mind, adopting AI systems can be challenging. Often, we do not clearly define the problem upfront when designing interactive AI systems: what are the humanistic goals we are trying to achieve, what defines success, and how can we evaluate the usefulness of these systems?
I feel that our AI algorithms have now reached a point where they are very robust at reasoning about the world. At the same time, there has been a lot of progress in the human sciences, such as cognitive psychology, neuroscience, and educational psychology, and we have a good understanding of human behavior. It’s a good time to bring the two together and focus on domains that are humanistic in nature, such as supporting human learning.
At PARC, we are exploring how AI systems can support humans in learning novel tasks, such as building an artifact or repairing a machine. For an AI system to help you learn something, it has to reason about how you are learning; it can’t be centered solely on the lesson plan. Because each person has unique strengths and weaknesses, we want to leverage their strengths and compensate for their weaknesses to help them learn in the best possible way. Typically, a lesson plan is generated and executed independently of the person who is learning. My research aims to incorporate the human into the lesson plan so that we adapt the lesson to the person involved.
We recently hosted a joint event that was centered around machine learning and user experience. Can you explain how these two fields intersect at PARC?
Yes, there are different ways in which our machine learning group connects to the user experience/social sciences group. First, the AI algorithms themselves are built with scientific insights from social sciences and psychology. These disciplines study how people reason about the world and learn from experience. Theories and models from these disciplines are at the center of AI system design. Once we build the system, my group collaborates with the user experience team and social scientists to understand if the system works the way we intended it to. The user experience team also helps us determine how to best structure our studies and design our experiments around the AI systems.
Can you give examples of some projects that are centered around human-aware AI systems?
One of my projects focused on AI systems for health behavior change, where we incorporated insights from how physiotherapists help their patients increase exercise. We also looked at how certain health habits are linked to the way people remember things. We tried to understand why these behaviors occurred and what the causal models for them were.
In another project funded by the Advanced Research Projects Agency-Energy (ARPA-E), we partnered with Xerox and Virginia Tech to explore why people take certain transportation methods in the city of Los Angeles and what would motivate them to shift to a more sustainable mode. We ran several simulations and designed AI algorithms to better understand this problem.
Currently, I’m working on a cognitive learning project. The goal is to help end users of technology such as printers or cars perform certain maintenance and repair tasks without the need for a technician. We are developing an augmented reality system that aids a user’s task performance using visual and conversational guidance. Our solution incorporates a model of how humans reason and learn about novel tasks.
These projects illustrate that a large class of problems can’t be solved unless we reason about the humans that we are trying to influence. We have to put how humans think about the world at the center of the AI technology.
What do you see as the potential opportunities for human-aware AI systems?
Making humans an integral part of AI systems will help us define the right evaluation metrics and open up a new class of problems for AI systems to address.
I think the core of humanity is being able to learn new things, and I am greatly motivated to design intelligent systems that support that ability. When we consider AI for learning, we tend to focus only on academics, which is just a small part of human learning. I think AI learning approaches should also support the routine, everyday tasks that occur outside the classroom. In the workplace, for example, there’s an opportunity to help workers learn the physical practice of certain tasks and acquire new skills. With computer vision and conversational technologies now so advanced, these are great platforms to support this new type of learning.
View Shiwali’s recent talk on human-machine collaboration at the joint PARC, MLUX SF and Ladies that UX SF event: https://www.youtube.com/watch?v=z9c8iU8l7C0
To learn more about Shiwali’s research, read these recent publications: