Explainable AI: An Overview of PARC’s COGLE Project with DARPA

This article originally appeared on MSJBlog and was written by PARC Research Fellow Mark Stefik, who leads PARC’s Human-Machine Collaboration Area.

By Mark Stefik

The Project

The COGLE Project (COmmon Ground Learning and Explanation) is developing science and technology for explainable AI as part of DARPA’s XAI program. Why do we need explainable AI? Here’s how DARPA explains it:

• Machine learning is the core technology.
• Machine learning models are opaque, non-intuitive, and difficult for people to understand.
• The current generation of AI systems offers tremendous benefits, but their effectiveness will be limited by the machine’s inability to explain its decisions and actions to users.
• Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners.


COGLE is part of PARC’s larger research program in artificial intelligence. It is also part of our research on Human-Machine Collaboration.

PARC is the lead organization for the COGLE project. We are partnering with several universities and research organizations on the project. Our partners for the machine learning research are the University of Edinburgh and the University of Michigan. Our partners in cognitive modeling research are Carnegie Mellon University and West Point. Our partner in user modeling and evaluation is the Institute for Human Machine Cognition.

COGLE is initially being developed using an autonomous Unmanned Aircraft System (UAS) test bed that uses reinforcement learning (RL) to improve its performance. COGLE will support user sensemaking of autonomous system decisions, enable users to understand autonomous system strengths and weaknesses, convey an understanding of how the system will behave in the future, and provide ways for the user to improve the UAS’s performance.
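To make the reinforcement-learning setting concrete, here is a minimal toy sketch of tabular Q-learning on a grid-world stand-in for a drone simulator. The grid, actions, and reward values are illustrative assumptions, not COGLE’s actual test bed or learning algorithm.

```python
import random

# Toy grid-world stand-in for a UAS test bed: states are (x, y) cells and the
# agent is rewarded for reaching a goal cell while paying a small cost per step.
ACTIONS = ["north", "south", "east", "west"]
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
GRID_SIZE, GOAL = 5, (4, 4)

def step(state, action):
    dx, dy = MOVES[action]
    nxt = (min(max(state[0] + dx, 0), GRID_SIZE - 1),
           min(max(state[1] + dy, 0), GRID_SIZE - 1))
    reward = 10.0 if nxt == GOAL else -1.0  # step cost encourages fuel-efficient routes
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: the agent improves its action-value estimates over many
# simulated missions, exploring occasionally and otherwise exploiting what it knows.
Q = {}
alpha, gamma, epsilon = 0.1, 0.95, 0.1
for episode in range(500):
    state, done = (0, 0), False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt
```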

We frame explanation in terms of sensemaking and common ground. Herb Clark developed the term “common ground” to describe how people who collaborate must establish a shared vocabulary and understanding before they can work together effectively. His research studied how people engage in nuanced conversational dialog and negotiate the meanings of terms through their discourse. Each party observes the world they share and develops representations of it in its own mind.

We are interested in enabling common ground between people and machine-learning systems. Rather than requiring computers to master natural language, we use a shared database as external memory for common ground in human + computer teams. Because sharing of experience via common ground is two-way, it can enable humans to understand what machines have learned, and also enable machines to understand what humans have learned.

In COGLE, the human users and COGLE operate on a common and observable world. For COGLE, that world is a simulated world (“Game of Drones”). The database has representations of actions (e.g., “turning left” or “landing”), domain features (e.g., “mountains” or “lost hikers”), goals (e.g., “find the lost hiker” or “drop a package” or “use fuel efficiently”), and also abstractions of these for conceptual economy in reasoning (e.g., “taking the shortest route” or “efficient foraging patterns” or “avoiding obstacles”). Each party translates its internal representation to and from representations in the external memory.
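As a rough illustration of such a shared external memory, the hypothetical sketch below stores named entries for actions, features, goals, and abstractions. The schema and example entries are assumptions for illustration, not COGLE’s actual database.

```python
from dataclasses import dataclass, field

@dataclass
class CommonGroundEntry:
    """One shared concept that both the human and the machine can refer to."""
    name: str                 # e.g., "landing", "mountains", "find the lost hiker"
    kind: str                 # "action" | "feature" | "goal" | "abstraction"
    definition: str           # human-readable gloss negotiated with the user
    examples: list = field(default_factory=list)  # references to episodes in the test bed

# Illustrative entries (hypothetical, not COGLE's actual vocabulary).
common_ground = [
    CommonGroundEntry("turning left", "action",
                      "Rotate heading 90 degrees counterclockwise"),
    CommonGroundEntry("lost hikers", "feature",
                      "Mission targets whose location is unknown"),
    CommonGroundEntry("use fuel efficiently", "goal",
                      "Minimize fuel spent per mission objective"),
    CommonGroundEntry("efficient foraging patterns", "abstraction",
                      "Search routes that cover an area with little revisiting"),
]

# Each party translates between its internal representation and these shared
# entries, e.g., the learner tags an episode trace with the abstractions it uses.
```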

The humans and computers learn from each other through demonstrations and explorations using the external memory. Given the different cognitive abilities of humans and machines, we expect human plus computer teams with common ground to work better and learn faster than humans or machines alone.

Explanations for Casual versus Cognitive Analyst Users

We distinguish two classes of users for explainable AI systems. Most users are casual users. Consider a driver using a GPS-enabled map and route planning system for guidance on a trip. At some point the app says “turn at the next exit and head south.” This can be confounding if the destination is clearly to the north. A useful “explanation” for a casual user might be “we are routing you south to avoid an accident five miles ahead”. Casual users want an AI system to explain the gist of the reasons behind its actions or advice. Casual users do not need much more than that, although they could in principle learn about a domain from the explanations provided by a computer.

In contrast to casual users, we distinguish cognitive analyst users. Cognitive analysts are responsible for evaluating or improving system performance. They have much more than a casual interest in the competencies of the autonomous system. Their job is to assess the circumstances under which the autonomous system may fail and where it is highly competent.

A Curriculum for Robots

We have created a curriculum for COGLE to guide its learning. In the “Game of Drones” simulation world of COGLE, the first simple lesson is about “taking off”. The second lesson is about taking off and landing. Each lesson builds on the previous ones and requires the autonomous system to develop particular new competencies. Later lessons involve increasingly difficult choices that resemble dilemmas. In later lessons, for example, COGLE needs to carry out efficient foraging or search for lost hikers in a mountainous area, deliver packages, avoid flight hazards, and so on. Difficult choices arise, for example, when more than one hiker is lost. There can be choices with (virtual) life-and-death consequences in deciding what to do. An autonomous system could decide to “sacrifice itself” in order to help a hurt hiker. It may need to prioritize one group of hikers over another. It may be on a time-sensitive mission but must decide whether to get more fuel before embarking on a potentially long search.
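One way to picture such a curriculum is as an ordered set of lessons with prerequisite competencies, as in the hypothetical sketch below; the lesson names and prerequisite structure are illustrative assumptions rather than COGLE’s actual lesson plan.

```python
from dataclasses import dataclass

@dataclass
class Lesson:
    name: str
    requires: list   # competencies the learner must already have
    teaches: list    # new competencies this lesson is meant to develop

# Illustrative slice of a "Game of Drones" curriculum (not COGLE's actual lessons).
curriculum = [
    Lesson("take off", requires=[], teaches=["takeoff"]),
    Lesson("take off and land", requires=["takeoff"], teaches=["landing"]),
    Lesson("search a mountainous area", requires=["takeoff", "landing"],
           teaches=["efficient foraging", "obstacle avoidance"]),
    Lesson("rescue with competing priorities", requires=["efficient foraging"],
           teaches=["triage between hikers", "fuel-versus-time tradeoffs"]),
]

def next_lessons(mastered):
    """Lessons whose prerequisites are met but whose competencies are not yet mastered."""
    return [l for l in curriculum
            if set(l.requires) <= set(mastered) and not set(l.teaches) <= set(mastered)]
```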

By supporting the creation of common ground, COGLE’s explanation interface provides users with explanations and insights into COGLE’s reasoning. Since common ground is fundamentally two-way, it will also enable users to influence COGLE’s own learning. COGLE’s reinforcement learning enables it to learn from its interactions and mission successes in the simulated environment of COGLE’s test bed. Deep learning enables it to generalize from its learning and explore a space of learning abstractions. In addition, interaction with the explanation layer enables it to use knowledge about missions and abstractions from human trainers as a guide. In this way COGLE not only learns from observing the performance of teachers, but also learns on its own, creating a partnership around common ground where “teachers” and “students” learn from each other. In analogy with pedagogy, we call this two-way human-in-the-loop partnership mechagogy.

We find the “teacher and student” perspective useful in guiding our thinking about these human-machine partnerships. Teachers are familiar with a domain and the experiences and challenges that it can present. On the one hand, their testing and evaluation of students can have an analytic quality; they may devise ways to probe the limits of a student’s knowledge and experience. On the other hand, as gifted teachers know, teachers can also learn from the novel approaches tried by their students.

Our research is exploring the basis for faster learning by machines when machines share common ground abstractions previously discovered by people. We are also exploring the creation of effective partnerships and teams of humans and machines, which can take advantage of their differences and their asymmetric cognitive strengths. In this way, sensemaking, common ground, and mechagogy have become our inspirations for COGLE, and for how we can extend machine learning into practical ways of creating autonomous systems and human-machine teams.

COGLE’s Layered Approach

COGLE uses a three-layer cognitive architecture. The explanation layer is the part of the system that interacts directly with users and supports the creation of common ground. The bottom layer runs missions in the simulated world and learns from its interactions, successes, and failures in that world. A cognitive layer in between translates the memory structures of the learning layer into representations more suitable for human understanding.
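The hypothetical sketch below illustrates this division of responsibilities as three cooperating components; the class and method names are assumptions for illustration, not COGLE’s actual interfaces.

```python
class LearningLayer:
    """Bottom layer: runs missions in the simulated world and learns from outcomes."""
    def run_mission(self, mission):
        trace = []           # state/action/reward trajectory collected from the simulator
        self.update_policy(trace)
        return trace

    def update_policy(self, trace):
        pass                 # reinforcement-learning update from successes and failures


class CognitiveLayer:
    """Middle layer: translates learned structures into terms suited to human understanding."""
    def interpret(self, trace, common_ground):
        # Map low-level decisions in the trace onto shared concepts such as
        # "avoiding obstacles" or "taking the shortest route".
        return [entry for entry in common_ground if self.applies(entry, trace)]

    def applies(self, entry, trace):
        return False         # placeholder for matching a shared concept against a trace


class ExplanationLayer:
    """Top layer: interacts with users, presents explanations, and accepts guidance."""
    def explain(self, interpreted_trace):
        pass                 # render mission and coverage explanations for the user

    def incorporate_feedback(self, feedback, learning_layer):
        pass                 # pass user guidance back down to shape future learning
```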

Users will explore COGLE’s actions and the salient reasons behind its actions through a mission explanation interface. They will explore the extent of COGLE’s experience through a coverage explanation interface. Other interfaces support training and the development of a domain ontology as users interact with and name the concepts, actions, and policies underlying COGLE’s performance.

To assess COGLE’s performance in the world, we conduct user studies using an explanation framework. Users interact with COGLE, explore its behavior and explanations on sample missions, and explore the coverage of its missions and training examples. We measure the quality of user predictions about system behavior and also the efficiency of user sensemaking.
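As a rough illustration of these measures, the sketch below scores how often users correctly predict the system’s next action and how quickly they form those predictions; the data format and example values are assumptions for illustration.

```python
def prediction_quality(predictions, actual_actions):
    """Fraction of trials where the user's prediction matched the system's action."""
    correct = sum(p == a for p, a in zip(predictions, actual_actions))
    return correct / len(actual_actions)

def sensemaking_efficiency(prediction_times_sec):
    """Mean time a user needed to form each prediction; lower suggests faster sensemaking."""
    return sum(prediction_times_sec) / len(prediction_times_sec)

# Hypothetical session: the user predicted 4 of 5 system decisions correctly.
quality = prediction_quality(
    ["land", "turn left", "refuel", "search", "land"],   # user predictions
    ["land", "turn left", "refuel", "climb", "land"],    # what the system actually did
)
mean_time = sensemaking_efficiency([12.0, 9.5, 14.2, 20.1, 8.3])  # seconds per prediction
```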

Learn more about PARC’s work in AI and Human-Machine Collaboration.
