A Framework for Systematically Applying Humanistic Ethics when Using AI as a Design Material

This post is an excerpt of an article published in Temes de Disseny – No 35 (2019): AI and Emerging Digital Technologies as a Design Material. The article describes the AI Ethics Committee recently created at PARC.

By Kyle Dent, Richelle Dumond, Mike Kuniavsky

AI Design Guidelines

Our approach when considering the potential effects of new technology is not to consult a list of simple ethical standards hoping to be nudged in the right direction. Real-world ethics are rarely simple. Ethical excellence begins with facing hard questions that lead to complex discussions about difficult trade-offs. Moreover, these questions and discussions must place the ethical considerations in the context of the expected use of a particular technology.

We propose a committee of relevant experts within the organization who can help researchers and designers negotiate the ethical landscapes of their projects. Committee members continue their own ethical education and refine their expertise through training, topical readings, and ongoing discussion of ethically relevant incidents and events in the world.

Below is a set of guiding questions that a committee can use to prompt project discussions, driving the conversation towards the complexity rather than away from it.

1. Preliminary Checklist

Respect for designers' time and attention requires that ethical reviews are not overly cumbersome and are not demanded when they are not needed. A simple checklist helps to assess the potential for harm and the degree of impact.

When the AI learns from existing data, ask yourself:

Does the data contain individual personal attributes (especially protected attributes such as race, sex, gender identity, ability status, socio-economic status, education level, religion, or country of origin)?

Does your project data represent individuals or populations of people?

Is the goal to make predictions about people’s behavior?

Is the goal to classify people or otherwise make predictions about them?

Is the goal to make decisions about people or populations of people that could have a significant impact on their lives (e.g., job performance, judicial sentencing, or fraud detection)?

When AI participates in physical space, as with cameras, microphones, sensors, or anything else that records human likenesses or activity, ask yourself:

Is it in a public place?

Is it hidden from anyone who might be recorded? In other words, could subjects be recorded without knowing they are being recorded?

Is it possible that any members of vulnerable populations (this could be any disadvantaged sub-segment of an overall population, e.g. children, prisoners, refugees, people facing discrimination) might be recorded?

Does the system involve heavy equipment, or does it operate at high speed?

Answering ‘yes’ to any of the above questions indicates that additional scrutiny is most likely needed.
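The checklist above can be sketched as a small screening helper. This is purely illustrative: the question texts are paraphrased from the checklist, and the structure and function names are our own, not a prescribed tool.

```python
# Illustrative sketch of the preliminary checklist. Question wording is
# paraphrased from the checklist above; names here are hypothetical.

DATA_QUESTIONS = [
    "Does the data contain individual personal attributes?",
    "Does the data represent individuals or populations of people?",
    "Is the goal to make predictions about people's behavior?",
    "Is the goal to classify people or otherwise make predictions about them?",
    "Could decisions based on the system significantly impact people's lives?",
]

PHYSICAL_QUESTIONS = [
    "Is the system in a public place?",
    "Could subjects be recorded without knowing they are being recorded?",
    "Might members of vulnerable populations be recorded?",
    "Is there heavy equipment, or does it operate at high speed?",
]

def needs_additional_scrutiny(answers):
    """A single 'yes' answer flags the project for a comprehensive review."""
    return any(answers.values())

# Example: a project that might record subjects without their knowledge.
answers = {q: False for q in DATA_QUESTIONS + PHYSICAL_QUESTIONS}
answers["Could subjects be recorded without knowing they are being recorded?"] = True
print(needs_additional_scrutiny(answers))  # True: escalate to the committee
```

The point of the sketch is the screening rule itself: one affirmative answer is enough to trigger the comprehensive guidelines below.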

2. Comprehensive Guidelines

When a project requires more careful consideration, we propose a comprehensive set of questions to provoke discussion and exploration of potential harms. Not all of them are relevant to all projects. They are designed to prompt ideas that committee members will use to formulate questions related to the project under consideration. The subsequent discussions are meant to lead designers to consider many aspects of the technology. We organize the questions along the dimensions of human data, social impact, environmental impact, physical interaction, misuse and malicious intent, and post-deployment responsibility.

Some of these questions have been adapted from the FAT/ML organization’s (Fairness, Accountability, and Transparency in Machine Learning) set of principles (FAT/ML n.d.).

2.1. Human Data

Respect for people as individuals is essential. Collection of personal information and behavioral data may occur without fully informed consent from the subjects. Think about how the analysis might affect each subject involved. People should have their individuality and autonomy respected, and steps should be taken to protect them from harm.

Have you taken appropriate data protection steps given the sensitivity of the data? 

If the project is for a public entity, can you disclose the sources of your data? 

Have you confirmed that your data accurately reflects the real-world situation for the problem you are trying to solve? Have you considered alternative data sources?

Have you checked the internal consistency of the data (through random sampling, for example)? Should you report any issues to stakeholders? How will you assess the data for both explicit and implicit biases? 

Will the benefits of your design extend to the entire population or could there be subgroups who are inadvertently excluded? 
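One way to make the subgroup question above concrete is to compare favorable-outcome rates across groups. The sketch below applies the four-fifths (80%) rule, a common disparate-impact heuristic; this rule and all names in the code are our illustration, not part of the original guidelines.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the fraction of favorable outcomes per subgroup.

    `records` is a list of (group_label, outcome) pairs, where outcome
    is True for a favorable decision. The structure is hypothetical.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(rates, threshold=0.8):
    """Heuristic: every group's rate is at least 80% of the highest rate."""
    best = max(rates.values())
    return all(r >= threshold * best for r in rates.values())

# Hypothetical outcomes: group A favored 8/10 times, group B only 4/10.
records = ([("A", True)] * 8 + [("A", False)] * 2
           + [("B", True)] * 4 + [("B", False)] * 6)
rates = selection_rates(records)
print(rates)                       # {'A': 0.8, 'B': 0.4}
print(passes_four_fifths(rates))   # False: group B falls below 80% of A's rate
```

A failed check does not prove unfairness, and a passing check does not prove fairness; it is one quantitative prompt for the committee discussion the guidelines call for.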

For projects that have a significant risk of causing human rights abuse, is it possible to submit them for independent third-party audits? 

Can stakeholders audit the system? At a minimum, you should not be delivering black-box systems that cannot be inspected and verified by those involved.

Is there a plan to communicate the decisions about trade-offs, project assumptions, shortcomings, error rates, etc. to stakeholders? 

Is there a process for people to correct data or contest erroneous decisions? 

If appropriate, is there a way for data to be removed when necessary (e.g., the right to be forgotten as practiced in the EU and Argentina)?

2.2. Social Impact

AI is becoming an increasingly prominent part of many aspects of our lives, including decision-making processes that could alter their course. Successful designers do not view possible social impact as an afterthought or a hindrance to a project; they actively think about how to prevent harm while the system is being developed.

2.2.1. AI decisions that could bias or alter societal norms:

Are there any subgroups who benefit more or less from the system? 

Think about what services, products, and industries – and perhaps jobs – might be replaced by the deployment of the algorithm/system. How will this impact society? 

When using human data, do the benefits outweigh the risks to those involved? Consider the benefits of a project and the potential harm to those affected by it.

Are the decisions produced by an algorithmic system explainable to the people affected by those decisions? Explanations help with the validation of results and build trust in the system.

Useful exercise: imagine yourself subject to all possible outcomes of a system. Are they all equally fair and just from all points of view?

2.3. Environmental Impact

The resource consumption of a deployed system is itself an easily overlooked aspect of the design.

Consider the energy impact of the computational resources necessary for the project. Is there a way to minimize that impact? Can the computation be handled differently to use less energy? Do the resources provisioned for the project match its requirements, or are they excessive?

Does the algorithm/system/design encourage unsustainable behavior?
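A rough way to start answering the energy questions above is a back-of-the-envelope estimate from power draw, runtime, and datacenter overhead. Every figure below, including the PUE (power usage effectiveness) value and the function name, is a hypothetical assumption for illustration.

```python
def training_energy_kwh(avg_power_watts, hours, num_devices=1, pue=1.5):
    """Rough energy estimate for a compute job, in kilowatt-hours.

    Multiplies per-device power draw by runtime and device count, then
    scales by PUE to account for datacenter cooling/overhead. All inputs
    are assumptions supplied by the estimator, not measured values.
    """
    return avg_power_watts * hours * num_devices * pue / 1000

# Hypothetical example: 4 accelerators at 300 W for 48 hours, PUE of 1.5.
print(training_energy_kwh(300, 48, num_devices=4, pue=1.5))  # 86.4 kWh
```

Even a crude estimate like this makes it possible to compare provisioning options and ask whether the computation could be handled differently to use less energy.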

2.4. Physical Interaction

Much of the concern related to AI has to do with respecting individuals’ privacy and rights. Risk of physical injury may also be a concern. Broadly speaking, users should be aware of possible risks and developers must think through potential unintended consequences of interacting with the technology.

Is there the equivalent of informed consent for the usage? Informed consent means that users fully understand the facts and consequences of their participation.

Will individuals interacting with the technology understand the capabilities and limits of it?

Does the design minimize potential risk?

Do the warning signals match the severity of the danger?

2.5. Misuse and Malicious Intent

An often-overlooked concern is how a technology might be misused. Misuse can happen by mistake, through overconfidence in the technology, or through bad actors trying to subvert it. In all cases, care should be taken to mitigate potential harm and to provide appropriate restrictions that limit misuse.

Are there guardrails in place to prevent off-label usage (the intentional or accidental misuse of the design)?

Has the system been adequately secured to prevent manipulation from outside? If physical devices are used, are they adequately secured from hacking or subversion?

2.6. Post Deployment

Responsibility for a design or system doesn't end at handoff or deployment. Although every effort should be made to prevent unexpected outcomes, considering a sunset plan before delivering the project to the client can prevent confusion about how to address unforeseen consequences.

Do individuals and groups have access to meaningful remedy and redress? 

What will the reporting process and process for recourse be?

Is there a plan for what to do if the project has unintended consequences? 

Through these principles and guidelines, we aim to make the discussion of AI ethics a natural component of AI projects designed and engineered at PARC.
