Feature

AWS's Diya Wynn: Embed Responsible AI Into How We Work

11 minute read
By Siobhan Fagan
As senior practice manager for responsible AI at AWS, Diya Wynn believes responsible AI should become part of the fabric of every organization working with AI.

Artificial intelligence is just a tool. How we use it is up to us, for better or for worse. Responsible AI, which seeks to bring fairness, accuracy and trustworthiness into every step of AI development, is one approach organizations are taking to avoid those potentially negative consequences.

Diya Wynn joined AWS over six years ago as the senior practice manager for responsible AI, following a nearly two-decade career in technology. In her role, she helps AWS customers understand where there is potential for risk or unintended impact in AI and provides specific approaches to help them avoid those negative impacts. Her hope is that organizations will view responsible AI as a strategic initiative rather than a checkbox in a technology deployment.

We caught up with Diya during the AWS Summit in New York City.

This interview has been edited for clarity.

Siobhan Fagan: How do you define responsible AI?

Diya Wynn: Responsible AI is an operating approach that is people-centered. It takes into consideration the people, process and technology in order to minimize the risk and unintended impact of using AI and to better maximize its benefit. That's my grounding definition. As much as responsible AI is operational or needs to be part of the technology, it's not technology or technical solutions alone. It requires a much more comprehensive look. And hopefully, responsible AI gets embedded into the way in which organizations work, not just as a point solution or a set of checkboxes, even though they may use checklists as part of their process. But essentially, you want it to be interwoven into a culture of responsibility that people commit to and understand throughout the entire lifecycle.

How does responsible AI differ from ethical AI? Or does it?

Diya Wynn, senior practice manager of responsible AI at Amazon Web Services (AWS)

People use them interchangeably, but I do think there is some difference. As a tool or service provider, we're not making judgment calls or moral decisions on the products that our customers are building, or the ways in which they are creating those products. The notion of ethics can feel subjective.

The notion of responsibility gives us a much more concrete structure for approaching and engaging with the challenges of AI, the areas of concern and how we address those. When we say ethical AI, it feels like more of a moral question of who's right and who's wrong. With responsible AI, there's some of that too, but I think the term responsible AI removes that subjective connotation. It also helps to minimize the visceral reaction that some people have when they hear ethics.

When I hear responsible AI, my first thought is where does responsibility start?

It starts at the beginning. When we are thinking about our AI strategy, most companies are making some pretty substantial commitments. A Gartner study recently found that generative AI is driving businesses' investment in AI overall and that we're going to continue to see increases in AI use in companies. We're already seeing that now. So when we think about our strategy and approach, we really should be leaning into an AI strategy that includes responsible AI, and that is the responsibility of the organization. It should be inherent and embedded in an organization's AI strategy from the very beginning.

Responsible AI should also be infused into the way we think about how we solve problems with AI. AI is a tool, right? So how do we employ it? Which ways are best for us to do that? What does that mean for our people resources? How do we think about the implications for all of our stakeholders or consumers? What does that mean in terms of the data that we use and how we obtain that data? How do we then build and ensure that the outputs are fair? It's all of that. So it should start at the beginning and be infused or baked into the way that they're building: their product development, their machine learning lifecycle.

When you think about your customers, and working with their developers who are creating these systems, do you think there's some kind of training those developers should go through to help them think in these terms?

Yes. I've been using a framework to talk and engage with customers that touches on seven distinct areas, or pillars, of responsibility. One of them is training and education. If we're saying that everyone has some responsibility in this, then we need to equip them with the tools. I don't mean technical tools, but the tools, the skills and the structure to be able to do that. Developers absolutely need to be appropriately trained to understand where bias occurs, what other kinds of risks they should be mindful of and how to address those in their specific area of code or influence.

And I assume that training would go across the board. I mean, if everybody's responsible ....

Sure, but I think you have to have training at different levels for each role. What our technical resources focused on development need may be different from what's needed for our security teams that are looking at how to minimize risks related to vulnerability or the security of the infrastructure platform. That training is also going to be different from what we're going to recommend for our product managers, or product teams that are thinking about the design. Part of product design at the very early stages is thinking about the customer, so from the start, they need to define those customer personas, understand their characteristics and how we're going to serve them.

The way that we get at unintended impact is by identifying the anti-personas, or the people that we intend not to serve, so that we can make sure there are no implications or impacts for them. Those kinds of activities are things that your product teams or your designers might be a part of. Some organizations now have UX teams that are focused on the user experience when there is an interface with an end user. Many of them already employ participatory design or user-centered design practices. Those are some of the things that we would recommend as well for product managers to think about. Are we including the right people? Are we getting the right voice? Are we thinking about the risks?

Who ultimately holds responsibility for the outcomes of the AI?

That's an interesting question that is being debated quite a bit among the public as well as with folks from a legal perspective. I think when you say responsibility, there's an element of accountability.

Ultimately, humans make the decisions about how we use AI. The places where we allow AI to make recommendations and inferences from the product would be low-risk environments. There are other instances where humans are more engaged in using AI and are ultimately the ones making and owning the decisions. Looking at these different situations and use cases could give some degree of context for where responsibility lies. That question, however, is still being explored in terms of what that looks like because of legal considerations.

There's a lawsuit going on now where a job candidate, rather than suing the organizations that didn't hire him for discrimination, is suing Workday because of its AI product.

Those lawsuits are important ones to watch, because they may provide some degree of precedent about where the law will end up falling and set an expectation of accountability. But I think there is a real distinction to keep in mind. Although we provide tools, you also have the organizations and the people that are using them. There's a degree of responsibility they have to have as well. We take our responsibility seriously in terms of the way in which we build and develop our services. And we are also leaning in to provide additional tooling support, best practice guidance and resources. AI experts are working alongside our customers to help them be responsible in their endeavors. After that, they own the end products. And that would ultimately mean that there is some degree of responsibility they should maintain as well.

Amazon recently entered into a partnership with the White House and a number of the other tech giants. That was a voluntary commitment.

Yes.

And the guidelines that partnership set up were great, but there's no real accountability. Do you think that it went far enough? Or do you think it's just the first step of many?


I think this is the first step of many, but I think it's the first right step. It's very much in alignment with what we already had established in terms of our overall strategy. AWS has a four-part strategy that talks very much about our responsibility, and what we believe our responsibility is, in terms of the way in which we are building services and then supporting the overall ecosystem. The commitments made in partnership with the White House are consistent with that. I think it's a good step in that direction, in terms of being able to meet those needs.

But there's also a call for collaboration: having the ability to collaborate around areas where we see vulnerability and risk, and around practices for sharing where there might be potential areas of risk or vulnerability, in order to work toward common safeguards. This collaboration is important and a necessary step. If we want to see AI be a force for good, then we need that kind of collective partnership. Some have felt that we're in a bit of a race to prominence in terms of our technology, but we see that we are able to partner so there's not one player that is going to advance or do something that might be harmful without the collective good being considered.

Do you think other stakeholders should be involved in this collaboration?

I think this is a first step, certainly with some of the top tech giants being in that conversation. We need others to make similar commitments and work toward the kinds of safeguards that are necessary for us to have more trust in the technology.

When I was thinking of stakeholders, I was thinking of civic organizations and groups outside of the tech world as well.

That's a great point. I think we're going to see other voices, from civil rights organizations as well, that are going to continue to encourage or advocate for safe use of the technology. These partnerships across the public and private sectors, and other communities like academia, all need to play a part in us having the right kind of guardrails or safeguards in place to make sure this is a technology that is safe for us.

A lot of the talk around AI currently is either full-on doomsday (the sky is falling, it's going to wipe out all jobs, it's going to incite hatred, etc.) or that it's going to solve all of the world's problems. Where do you see it falling on that spectrum?

So I think there is always the potential that any technology can be used in good or nefarious or bad ways, right? There is always that duality. But I am hopeful that with the right governance in place, with the right sort of regulation in place and with organizations being committed to responsible practice and use of AI, we will see more of the good than the negative.

If we're realistic, we see this every day, irrespective of the technology of the day. We see evidence of both good and bad happening in and around us. We should in some ways probably expect that what we as humans do will continue to exist, and we'll continue to see that as well.

Humans are going to keep humaning?

Unfortunately! I wish we had some technology that could just make us all good. But I am hopeful, and I recognize that there are some pretty tremendous AI use cases and opportunities. We have a customer, for instance, that is working with AI and has used it to predict the indications of heart disease up to 15 months in advance. Heart disease is the number one killer of people broadly. Imagine what they can do now to reduce the number of individuals that are dying from heart disease. That's extremely good, right? If we can get that kind of preventative and predictive care to individuals, that will actually make a difference in the number of people that die every day. That's one of the use cases that says to me: we need this, and this is why we should be doing it.

Another example is Yellow Credit, a financial institution that is providing banking to the unbanked. These are populations that are being underserved or that are less represented. By using technology, they now have access to resources and finances that weren't available to them before from traditional banks, including in remote regions of Africa. These are good use cases that show some of the opportunity for AI or technology to level the playing field and provide service to those that have been marginalized. These are the things that actually excite me about wanting us to lean in and use it more.

I'm curious, with the medical situation, where would the guardrails exist there? Because the medical applications of AI are exciting. But when you're thinking with your responsible AI hat on, and again, humans being humans, can you see the health insurance companies using that information? Where would you put the guardrails in?

Yes, that's exactly one of the places where responsible AI should be, right? This is where education is necessary too. Are we thinking about it properly, and how do we roll this out so that predictive care, that kind of resource or access, will be provided to everyone? Can we ensure that there are ways for us to deliver and provide preventative care so that it's not just certain communities that might benefit? For instance, the difference between being on Medicaid versus having private insurance is a benefit that somebody's getting and somebody else may not get. Those will be the places where we need to make sure that in the rollout or release of that sort of opportunity, people are thinking well about the process and the structure so that, even though it's beneficial, some people aren't limited in their access to it.

These are some examples of where responsibility or responsible elements need to come in. We also need to make sure that as we look at this, [we ask] do we have the right sort of representation across the population so that what we're able to predict is consistent across all demographics and not just for the majority population?

Data is always a bit of a challenge in the arena of trust because certain individuals or populations have been underrepresented. And there are a number of steps throughout that process that the healthcare industry is taking. We even have a health equity initiative that is helping to provide support and assistance for companies that are working toward advancing health equity. It's those sorts of things that help us get over some of the challenges that we have and hopefully provide a better outcome for our healthcare. Healthcare is complicated. There are so many social determinants of health that impact all of those stages. But I think the right things are being done that actually give us some opportunity and help. The continuation of guardrails, and having these conversations to unpack these challenges by looking at the potential for risk and making sure we have the right inclusion, is what needs to be in place so that we can see the outcomes we expect.

About the Author
Siobhan Fagan

Siobhan Fagan is the editor in chief of Reworked and host of the Apex Award-winning Get Reworked podcast and Reworked's TV show, Three Dots.

Main image: Alan De La Cruz | Unsplash