An integral, pragmatic approach to explainable AI

We approach the challenges and opportunities in the space from a humanity- and responsibility-centered school of thought.

Unpacking the rationale behind any given decision the human brain makes is no small feat. We still cannot fully explain how the firing of neurons across our brains produces a particular decision. So when it comes to interpreting the decisions of a complex machine learning model, whose architecture is loosely inspired by the brain's neural networks, it's little wonder that the tech industry is abuzz with interest in achieving explainability in artificial intelligence (essentially, the study of how an AI model came to its conclusion).

For many organizations, the goal of explainable AI centers on satisfying legal and/or ethical requirements. Those considerations certainly apply to us, but the ability to better explain our AI systems is also critically important through an even broader lens.

When we look at technology, and specifically AI/ML, we approach the challenges and opportunities in the space from a humanity- and responsibility-centered school of thought. We ask ourselves whether the technology we develop will better serve our current and future customers, as well as our associates. For us, achieving better explainability in AI is a crucial means to ensuring responsible and transparent outcomes for all of our stakeholders.

For example, when we’re building technology systems and products such as Eno, our intelligent assistant, we know that every decision we make on the back end needs to contribute to an accurate, helpful, conversational, appropriate, and transparent interaction with the customer on the front end. To achieve this, we’re aiming to embed levers of explainability throughout the entire AI/ML lifecycle, from data collection to model execution and monitoring.

This is a critical challenge we’re working on day in and day out, and while we don’t have all the answers yet, I want to share with you some key approaches and considerations we’re focusing on as we build on our Responsible and Explainable AI efforts across Capital One.

Humanity- and responsibility-centered philosophy

Put simply, we want to ensure that humans are at the forefront of every decision that goes into the AI/ML lifecycle. From a macro perspective, the autonomous systems we develop and deploy should positively contribute to our customers’ lives—helping them spend more time living and doing what means the most to them, not more time banking. For example, we believe the application of AI/ML to a critical area like fraud can take an experience that is often a source of fear and anxiety for customers, and turn it into something that engenders trust and confidence in us and our ability to have our customers’ backs.

Interdisciplinary collaboration

Achieving success in explainable AI—and ultimately understanding how we can ensure that the technology can mitigate unintended harm and contribute to furthering positive impacts—requires collaboration across disciplines, life experiences, and industries. We believe that an approach comprising viewpoints and contributions from all walks of life, and from diverse disciplines—from social scientists to designers, legal scholars, data scientists, engineers, community advocates and more—can help us institute levers of responsibility and transparency into the systems we build.

Working from a framework

In a field as nuanced as Explainable AI, a framework from which to work, measure success, and reach incremental milestones over a fixed period of time helps focus priorities and vision. While there’s no universal standard yet, much foundational and emerging work is already underway at leading conferences and gatherings like FAT/ML, where the community is coalescing around efforts to achieve fair, accountable, transparent, explainable, safe, and secure applications of the technology.

For us, many components contribute to our approach to thoughtful, safe, and responsible technological innovation, but it requires that we ask ourselves hard questions and make decisions about where to prioritize our energy. For example: How can we institute positive accountability across our engineering teams? How can we expand that beyond engineering? What questions are we not asking ourselves, and what are we not including that we should?

Trying new approaches, within reason

With so much energy around explainable AI, we’re regularly experimenting with new techniques to advance explainability, like visualization of word embeddings and attention layers, natural language explanation and example generation, and generative adversarial networks (GANs) for assessing the strengths and weaknesses of dialogue responses.
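As a concrete illustration of the first of those techniques, the sketch below projects a handful of word embeddings into two dimensions so their neighborhood structure can be inspected visually. The vocabulary and vectors here are placeholders for illustration only, not our actual models or tooling; in practice the embeddings would come from the model under inspection.

```python
# Illustrative sketch: projecting word embeddings to 2D for visual inspection.
# The vocabulary and vectors are random placeholders, not production artifacts.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical vocabulary with 300-dimensional placeholder embeddings.
vocab = ["balance", "payment", "fraud", "dispute", "reward", "travel"]
embeddings = rng.normal(size=(len(vocab), 300))

# Reduce to two dimensions for plotting; PCA keeps the directions of greatest
# variance, which is often enough to spot clusters of related terms.
coords = PCA(n_components=2).fit_transform(embeddings)

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(vocab, coords):
    ax.annotate(word, (x, y), textcoords="offset points", xytext=(4, 4))
ax.set_title("2D projection of word embeddings (illustrative)")
plt.tight_layout()
plt.show()
```

With real embeddings, plots like this make it easier to spot when terms that should be treated similarly land far apart, or when unrelated terms cluster together, which is one small way a model's internal representations can be surfaced for human review.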

But at the end of the day, the most sophisticated and exciting techniques may not be the most effective for the interpretability challenges an organization may need to solve for a given use case. It’s important to be clear-eyed about which techniques work and which don’t, and to make decisions about what will realistically move the needle on the organization’s goals.

Grounding in reality

While we must approach the changes that AI/ML will bring with caution, care and thoughtfulness, I’m optimistic about its transformative power to enhance human well-being. Achieving this, though, means that companies, governments, organizations and society broadly will have to make important decisions, like weighing the tradeoffs between explainability and transparency on the one hand and accuracy on the other. And it means that we will have to be realistic about what AI/ML can and cannot do, including how that reality may evolve in the near and long term.

It’s a fascinating time to be working on the expansion and evolution of artificial intelligence, and it’s important that we don’t squander the opportunities to get it right. If you have thoughts, learnings, and/or insights on your experience in the Explainable AI space, I invite you to engage and continue the conversation with us!

***

Margaret Mayer is a Vice President of Software Engineering for Conversational AI Platforms at Capital One. She sets the technology strategy for Eno, Capital One’s banking virtual assistant, as well as enterprise messaging capabilities for email, push and SMS. Mayer is also an advocate for closing the gender gap in technology as a member of Capital One’s Women in Technology working group; she has been with Capital One for 20 years.

Mayer is a featured speaker at this year’s Capital One House at SXSW. To learn more, visit Capital.One/SXSW.


