Harnessing the Demon: Artificial Intelligence, Ethics and Product


Stephen Hawking warned that full artificial intelligence could spell the end of the human race, and Elon Musk compared AI to summoning a demon. But, after attending a robotics challenge in which most robots couldn’t open a door, much less carry out other similarly simple tasks, MIT professor and pioneering roboticist Rodney Brooks recommended, “If you’re worried about The Terminator, just keep the door closed.”

Regardless of whether you’re an AI believer or already have your off-the-grid safe house prepped, there’s no arguing that technology is developing at a pace that can’t be ignored—making it critical that those who develop AI-based products have a strong understanding of the broader implications.

The Product Management/AI Collaboration

Military strategist and U.S. Air Force Col. John Boyd developed the OODA loop cycle:

  • Observe: actively absorb the situation
  • Orient: understand blind spots and biases
  • Decide: form a hypothesis for action
  • Act: test your hypothesis

This is what product managers do daily, according to Drew Dillon, a freelance product leader who has presented on the topic of “Product Management Ethics in AI.”

“The product manager’s job is to observe the problem, orient themselves to the value they can provide, decide what to do, and then do it,” he said.

But when AI enters the equation, there is less product-manager intuition involved, Dillon said. In fact, a machine-learning capability might be able to surface problems itself based on data it’s been fed, and a human decides what to do from there. A more autonomous system may go beyond simply surfacing problems to acting on its own. Then it’s up to a human to fine-tune the output.
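
To make that concrete, here is a minimal sketch of what “surfacing problems from data” can look like at its simplest. Everything in it is hypothetical: the ticket counts are invented, and a basic statistical outlier check stands in for a real machine-learning model. The system flags candidates; a human still decides what to do.

```python
from statistics import mean, stdev

# Hypothetical daily support-ticket counts; in practice these would come
# from product telemetry or a data warehouse.
daily_tickets = [32, 35, 30, 33, 31, 95, 34, 29, 36, 33]

def surface_anomalies(values, threshold=2.0):
    """Flag points more than `threshold` standard deviations from the mean.
    The system only surfaces candidates; a human decides what to do next."""
    mu, sigma = mean(values), stdev(values)
    return [(i, v) for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

for day, count in surface_anomalies(daily_tickets):
    print(f"Day {day}: {count} tickets looks unusual; route to a PM for review")
```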

“I don’t think product managers need to be technical, in the same way that a painter doesn’t need to be a chemist,” Dillon said. “But, like a painter, product managers need to understand the nature of the tools they’re using and the capabilities of those tools. It’s about understanding the inputs and outputs and how we can be effective with them.”

The Challenge

While simpler expert systems may make life easier by automating certain tasks, things become stickier when machine learning enters the picture.


For example, judges and probation and parole officers across the nation use algorithm-based risk-assessment tools to estimate a criminal defendant’s likelihood of becoming a repeat offender. A group of researchers analyzed the commercial risk-assessment tools developed by Northpointe Inc., a consulting and research firm that offers software products, training, and implementation services to criminal justice systems and policymakers. The researchers found Northpointe’s tools to be biased: black defendants were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism, and white violent recidivists were 63 percent more likely than black violent recidivists to have been misclassified as a low risk of violent recidivism.
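
Bias of this kind is measurable. The sketch below is not the researchers’ methodology; it is a minimal, hypothetical audit with a handful of invented records, showing how false-positive and false-negative rates can be compared across groups to surface exactly the disparity described above.

```python
from collections import defaultdict

# Hypothetical audit records: (group, predicted_high_risk, reoffended).
# A real audit would use thousands of case records, not six.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", True, True), ("B", False, False),
]

def error_rates_by_group(records):
    """False-positive rate: labeled high risk but did not reoffend.
    False-negative rate: labeled low risk but did reoffend.
    Large gaps between groups are the disparity auditors look for."""
    c = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for group, predicted, actual in records:
        c[group]["pos" if actual else "neg"] += 1
        if predicted and not actual:
            c[group]["fp"] += 1
        elif actual and not predicted:
            c[group]["fn"] += 1
    return {g: (v["fp"] / v["neg"], v["fn"] / v["pos"]) for g, v in c.items()}

print(error_rates_by_group(records))  # per-group (FP rate, FN rate)
```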

In another example from earlier this year, the U.S. Department of Housing and Urban Development (HUD) announced it was charging Facebook with violating the Fair Housing Act by “encouraging, enabling, and causing housing discrimination through the company’s advertising platform.” HUD alleged that the social media company unlawfully discriminated based on race, color, national origin, religion, familial status, sex and disability by restricting who could view housing-related ads on Facebook’s platforms and across the internet. Further, HUD claimed Facebook mines extensive data about its users, then uses that data to determine which users see housing-related ads based, in part, on those protected characteristics.

In yet another example, 25 AI researchers issued a statement in April calling on Amazon Web Services (AWS) to stop all sales of its Rekognition facial-recognition technology to law enforcement agencies until legislation and safeguards are in place to protect civil liberties from “the often-biased algorithms used in such systems,” wrote GeekWire’s Tom Krazit. The researchers’ request came after a similar May 2018 request from the American Civil Liberties Union.

Matt Wood, general manager of deep learning and AI for AWS, and Michael Punke, the company’s vice president for public policy, responded by defending the technology and arguing for Rekognition’s benefits. But the researchers pushed back, saying that “Caution, concern and rigorous evaluation—sensitive to the intersecting demographics that affect human-centric computer vision for images—are even more pressing when considering products that are used in scenarios that severely impact people’s lives, such as law enforcement.”

Each of these examples showcases biases that machine-learning systems developed as they continued to learn. But how did these biases arise? What causes an unemotional piece of technology to exhibit unethical tendencies?

“All of these systems are created by humans, and we’re inherently flawed,” Dillon said. “The data in and of itself is biased because it was collected by a human, and that human had to make choices.”

Otis Anderson, a data scientist and colleague of Dillon’s, said it succinctly: “All data has an opinion. You can’t trust the opinions of data you didn’t collect.”

Dillon offered the example of time-stamping data. A multinational organization may choose to time-stamp transactions in Pacific Standard Time, so an analyst may see data that appears to show random transactions happening in the middle of the night. In reality, those transactions may have occurred during regular business hours in another time zone. The choice to time-stamp in Pacific Standard Time was made by a human.

“There are so many things in the data that you can’t understand because the people who programmed the systems don’t work there anymore or you can’t know the rules around how or why data was formatted in a certain way,” Dillon said.
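
The time-stamp problem is easy to reproduce. In the hypothetical sketch below, a transaction stamped in Pacific time looks like a 3 a.m. anomaly until it is converted to the customer’s local zone (Berlin is an assumed example, as are the date and time); it turns out to be ordinary midday activity.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# A transaction stamped in Pacific time looks like a 3 a.m. anomaly.
stamped = datetime(2019, 6, 10, 3, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# Converted to the customer's (assumed) local time zone, it is ordinary
# business-hours activity.
local = stamped.astimezone(ZoneInfo("Europe/Berlin"))

print(stamped.strftime("%H:%M %Z"))  # 03:00 PDT
print(local.strftime("%H:%M %Z"))    # 12:00 CEST
```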

The business implications of taking ethics lightly are significant, regardless of intent. In his paper, “The Ethics of AI: How to Avoid Harmful Bias and Discrimination,” Brandon Purcell, principal analyst serving customer insights professionals for Forrester, noted that customer insights professionals who lead data science teams that create biased models risk:

  • Damaging their organizations’ reputations
  • Incurring regulatory fines and legal action
  • Seeing a dip in revenue as customers make decisions with their dollars

And ethical and responsible AI development is a top concern for IT decision-makers, too, according to recent research from SnapLogic, a commercial software company that provides integration-platform-as-a-service tools. Among the U.S. and UK IT professionals surveyed, 94 percent said more attention needs to be paid to corporate responsibility and ethics in AI development, and 87 percent said AI development should be regulated. (See “Ethical AI: Technology Leaders Call for More Accountability.”)

“AI is the future, and it’s already having a significant impact on business and society,” said SnapLogic CEO Gaurav Dhillon in a statement. “However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious or simply unintended purposes. We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way.”

Ethical AI in Action

Axon, which develops technology and weapons products for law enforcement and civilians, is taking steps to mitigate ethical risks.

“Our company’s foundation is around helping society and doing it ethically,” said Moji Solgi, Axon’s vice president of artificial intelligence and machine learning. “As technology gets more powerful and more capable, the importance of ethics also increases.”

Axon created an AI and Policing Technology Ethics Board in early 2018 to advise the company on the responsible development of AI technologies. The board is composed of external experts from a variety of fields including AI, computer science, privacy, law enforcement, civil liberties, and public policy. Its members provide guidance to Axon on the development of its AI products and services—and look closely at the effect of those products and services on communities.

Based on a recommendation from its ethics board, Axon announced earlier this year that it is implementing full-day implicit-bias training for its employees, similar to the training police receive. Working with a third-party provider, Axon will first familiarize employees, including Solgi’s AI team, with the kinds of procedures and work that law enforcement personnel go through every day. Staff will then be trained to recognize implicit-bias situations and the mechanisms that counter them.

“Law enforcement officers go through this training so that their subconscious and implicit biases don’t affect their work,” Solgi said. “We thought our employees, researchers, and technologists should go through a similar bias training so our subconscious and implicit biases won’t reflect in the way we build these technologies.”

Axon doesn’t have a set list of ethical guidelines that developers refer to when building new products. Rather, the company relies on a set of operating principles that drives the type of transparency it delivers to the ethics board. Beyond that, the company has a set of cultural values with ethics woven into each.

“Making something that is ethical is really something intangible and deep down in the way that people operate and think,” Solgi said. “Converting something like that to a set of checklists isn’t going to be the best approach. It has to be deep in the culture, and then you have to rely on people to be ethical in their work.”

Though law enforcement personnel don’t have many AI tools in their hands today, as more automated technologies become part of the workflow, the issue of bias will be important, Solgi said. “When these machines or models are trained on data sets that are biased—be it trained on certain demographics or ethnicities—when you deploy this in the real world, given the diversity we have in the real world, it’s not going to perform well in certain situations.”
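
Solgi’s warning can be demonstrated with a toy experiment. The sketch below is entirely hypothetical; the two groups’ feature distributions, the deliberately naive threshold “model,” and the ground-truth rule are all invented, but they show how a model trained almost exclusively on one group’s data performs markedly worse on another.

```python
import random
random.seed(0)

# Hypothetical: a single feature is distributed differently in two groups.
CENTER = {"A": 0.0, "B": 1.5}  # invented distribution shift

def sample(group, n):
    return [(random.gauss(CENTER[group], 1.0), group) for _ in range(n)]

train = sample("A", 500)                # training data contains only group A
test = sample("A", 200) + sample("B", 200)

# Naive "model": predict positive when the feature exceeds the training mean.
threshold = sum(x for x, _ in train) / len(train)

def predict(x):
    return x > threshold

def truth(x, group):
    # Invented ground truth: the real cutoff sits 0.5 above each group's center.
    return x > CENTER[group] + 0.5

for g in ("A", "B"):
    points = [(x, grp) for x, grp in test if grp == g]
    acc = sum(predict(x) == truth(x, grp) for x, grp in points) / len(points)
    print(g, f"accuracy = {acc:.2f}")   # accuracy drops sharply for group B
```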

A Way Forward

“I think most people and most organizations are well-intentioned,” Solgi said. “At the same time, when we are developing these technologies, you always have a lot of blind spots and that is, again, innocent mistakes. As someone who has a purely technical background, I can tell you that, in our training and in our backgrounds, most of the people who develop—engineers, researchers, technologists—they have not thought too deeply about the ethical aspects of their work. So, what’s important for leaders and executives and boards of directors is how to infuse these ethical principles into the day-to-day work so that, once a product is out there, it’s gone through different levels of due diligence in terms of ethics.”

Dillon asserted that inclusion must be a fundamental rule of any AI system: if a system starts to single out a group and treat it less favorably than the broader user base, that should be examined as a potential ethical violation. Feedback loops are critical for this, he said.

“We can’t teach the machine about broader racial or gender issues, but we can reason about those things if it provides that data back to us,” he said. The problem, he went on to say, is that many machine-learning systems aren’t storing data or giving much feedback to analyze—yet.
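
A minimal sketch of such a feedback loop, assuming the system does log outcomes alongside a group label (every name and number below is invented), might compare outcome rates across groups and flag wide gaps for human review:

```python
# Hypothetical feedback log: each entry pairs a group label with the
# outcome the system produced for that user.
feedback_log = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": True},
]

GAP_TOLERANCE = 0.2  # invented threshold for flagging a disparity

def approval_rates(log):
    """Per-group approval rate; a wide gap between groups is the signal
    Dillon describes: a group being treated less favorably than the rest."""
    totals, approvals = {}, {}
    for entry in log:
        g = entry["group"]
        totals[g] = totals.get(g, 0) + 1
        approvals[g] = approvals.get(g, 0) + int(entry["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(feedback_log)
gap = max(rates.values()) - min(rates.values())
if gap > GAP_TOLERANCE:
    print(f"Possible ethical violation: approval gap of {gap:.2f} across groups")
```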

Solgi offered his thoughts on how organizations should approach the implementation of ethical practices in AI-based products:

  • Create an advisory board: Assign a group to review the ethical aspects of technology development. “It’s important for these boards to be composed of people outside of the company,” he said. “Employee boards are also beneficial, but not nearly as much as when you get a fresh perspective from people who are completely independent.”
  • Be mindful: Starting at the C-suite, it’s important for companies to be aware of ethical issues and to look for ways to mitigate them. This can include implicit-bias training, courses, book clubs, or anything else that raises awareness of the ethical aspects of technology. “This movement is already big in the tech industry, but we still have a long way to go,” Solgi said.
  • Educate the public and consumers: Many people hear the words “machine learning” and “artificial intelligence” in the news, but that’s different from having a basic understanding of how these technologies are developed, what they mean, and what their limits and shortcomings are. “The technology industry could do a better job of being proactive in terms of that education so that the citizens who are participating in this open democracy can decide for themselves what product and what technology they should be using,” he said.

At the end of the day, the No. 1 thing to remember when attempting to harness the AI demon is that any system built to learn about audiences or to build better products ultimately comes down to the people doing the work.

“The only way you combat the conversion of a human into a bunch of numbers is with more humanity,” Dillon said. “And that’s kind of what we need.”
