Building trust in AI systems and AI systems that deserve our trust: A conversation with Miriam Vogel, president and CEO of EqualAI

AI, Data & Analytics

Miriam Vogel shares what it means to her to lead in an AI-enabled world and how she believes that leaders must balance guardrails and innovation to remain competitive, building and using AI tools that earn and deserve trust.
June 10, 2025

In this interview, as part of our ongoing series examining how leaders across functions are managing and incorporating AI, Heidrick & Struggles’ Julian Ha spoke to Miriam Vogel, the president and CEO of EqualAI, a nonprofit organization created to promote artificial intelligence governance.

Miriam, who has extensive experience working with C-suites, boards, and governments to help them establish best practices for legal and regulatory compliance, as well as operationalize those best practices, discusses the ways that leaders of all kinds have become much more AI literate with respect to potential harms and litigation risks, and how they are mitigating liability and avoiding those harms to companies, individuals, and communities. She also shares what she’s excited about when it comes to the future of AI, what it means to her to lead in an AI-enabled world, and how she believes that leaders must balance guardrails and innovation so that their organizations remain competitive, build trust in AI systems, and ensure those systems deserve that trust.


Below is a full transcript of the episode, which has been lightly edited for clarity.


Welcome to The Heidrick & Struggles Leadership Podcast. Heidrick is the premier global provider of diversified solutions across senior level executive search, leadership assessment and development, team and organizational effectiveness, and culture-shaping. Every day, we speak with leaders around the world about how they're meeting rising expectations and managing through volatile times, thinking about individual leaders, teams, organizations, and society. Thank you for joining the conversation.

Julian Ha: Hi, I'm Julian Ha, a partner in Heidrick & Struggles’ Washington, DC, office and the global managing partner of the Legal, Risk, Compliance & Government Affairs Practice.

This interview you'll hear today is one in an ongoing series exploring AI: its impacts on leaders across functions, how they are embedding AI tools into their teams, how they are managing its use to drive performance, and how they're adapting their leadership skills and capabilities to best address the challenges AI presents and seize its opportunities.

Today I'm thrilled to be joined by Miriam Vogel, the President and CEO of EqualAI, which is a nonprofit organization created to promote artificial intelligence governance. Miriam also serves as chair of the National AI Advisory Committee, mandated by Congress to advise the President and White House on AI policy.

Previously, she served as Associate Deputy Attorney General at the Department of Justice, where she advised the Attorney General and the Deputy Attorney General on a broad range of legal, policy, and operational issues. Miriam has extensive experience working with C-suites, boards of directors, policymakers, lawyers, and other key stakeholders to establish best practices for legal and regulatory compliance, as well as operationalizing best practices and AI governance within Fortune 100 companies across multiple industry sectors.

Miriam, thank you so much for joining us today.

Miriam Vogel: Julian, thank you for having me. Always a pleasure.

Julian Ha: Well, let's kick off. Can you share with us a bit about EqualAI—how it came about, what is your mission, and who are your primary stakeholders?

Miriam Vogel: Thank you for that question. About six and a half years ago, we started noticing that there were significant risks with AI. People were starting to adopt it, starting to invest, getting excited about it, but there was not enough conversation about what the perils could be, what the potential harms could be, what the liability could be in investing in this new, exciting, shiny object.

And so, we spoke to different audiences, mostly executives and policy makers, about understanding the landscape of potential risks and harms. And as you know, Julian, I like to be proactive. I'm an optimist and I can't talk all the time about the negatives. I really needed to focus on what we could do proactively to make sure that AI was adopted.

Our end goal is pro-adoption and I am net positive on AI, as long as we put the right guardrails in place. I want to make sure everyone is using it and using it effectively. And so, about five and a half or six years ago, we started focusing on AI governance.

It's really been a wonderful journey as this has gone from a conversation that was quite nascent to one that is global, where there are increasingly clear best practices across the board and across industries on what it means to do this well—what it means to be an AI company, because I believe most companies and organizations are AI companies, AI organizations now, whether they realize it or not.

That means, in turn, they need to be ready. They need to have a governance system in place. What does that mean in terms of our audience? A lot of our work is with the C-suite, making sure that they are ready, that their companies are ready, that there's alignment and preparation for being an AI company; talking to their boards about what this looks like; and talking to their lawyers in particular, because lawyers play a key role here.

When we started this journey, it was mostly up to the computer scientists, the engineers within a company, or the CTOs to understand the AI capabilities and aspirations. Part of what we do at EqualAI is make sure that it's a C-suite conversation, that it's a multi-stakeholder, cross-enterprise conversation, because that's really the only way to approach AI. As everyone knows, AI touches everything now; it will only increasingly be that type of innovation across the board.

And so, that has been our primary audience. We touch those audiences in a variety of different ways: we have a C-suite program, we have a program for senior executives to align on AI governance, we have a badge program, we have a CLE for lawyers, and the companies we work with have been asking for new and different programs. We love that. We've become a safe space for them to align on these best practices.

And then in doing so, policymakers increasingly wanted to talk to us and our members. They want to know from those companies who are really committed to this work, what does it mean? What does it look like in practice? How do we navigate that all-important balance between guardrails and innovation, so that we get this right, remain competitive, and, no matter what our end goal is, ensure there's trust in AI systems and that they deserve our trust?

Julian Ha: That's fantastic. I think you've sort of touched on this a little bit already, but maybe we could just double click a bit. You obviously have extensive experience working with C-suites, boards, and the government to help them establish best practices for legal and regulatory compliance, as well as operationalizing those best practices, especially in AI governance.

When you approach these folks, how do you begin the conversation? What's your pitch, although I imagine these days they're probably pitching you for advice? How does that conversation unfold, and how does that go from your perspective?

Miriam Vogel: It has been nice. Increasingly, we've been trying to figure out how to adapt to the hugely increased and scaled interest in our work. That's really been a privilege in helping to answer the call on what it means to be a thoughtful, prepared AI actor.

But, you know, we always have to be ready to make the pitch and to follow up with a variety of different stakeholders on why they should partner with us, and I'm very happy to always have that conversation. It comes down to making sure that this company is prepared to succeed—making sure this organization’s or this policymaker's goal is for success, and success includes AI adoption.

The piece that we just talked about, though—the trust component—cannot be overstated. Right now, many (if not most) companies are investing a significant amount of their infrastructure, employee time, and resources in AI adoption, making huge investments in AI.

And yet, there's a huge gap in the actual adoption within companies. A lot of executives are not interested in using AI, not turning to these tools in a way that will actually ensure that the investment is being realized and that the company has the competitive advantage that it's hoping for.

And so, what we try to tell companies is that there are really four reasons why you need to make sure you are doing all you can to have a strong AI governance system in place.

First of all, you need to act quickly. This takes some time and investment, so you really can't wait until the global landscape is clear. You need to get started. This can take 18 months to two years, and so it's something you need to start yesterday. If you were around when cybersecurity was a nascent idea, you know that if you waited, it was too late by the time you realized it was a problem for your company. That's really where we are with AI governance, for a variety of reasons: the EU AI Act, liability, other things I know we'll dive into.

Then we take a step back and we say, look at your employees. They're so important to you. They're an invaluable resource. They don't want to be part of a company that is using AI in ways that are nefarious or even negligent.

They want to make sure that the way that you're using AI is aligned with your mission and is aligned with your values as a company. If you're not making sure that you have good AI governance, there’s a potential risk of losing the trust of your employees.

Second of all, it's about brand integrity. If it is discovered that your AI use—whether it's in your HR systems, in financial determinations, or otherwise—is harmful or not to be trusted, that will absolutely jeopardize your brand. Integrity is something that every company needs; how do you rebuild that once that's been put in question?

Next is product viability. Each of these steps also has a carrot and a stick; if you're thoughtful, you get more trust with your employees, with your brand integrity. Likewise, you build a broader consumer base. You are making sure that your AI-driven products are safer and more effective for a broader group.

And if you are not thoughtful in your AI governance, that creates more risk, more harms to populations, and use cases that you have not thought through because you did not go through the proper AI governance infrastructure and framework.

And finally, if those reasons are not compelling enough, there's the liability. Julian, you know well from all your clients that there's an increasing field of liability, whether it's being on the front page of The Wall Street Journal or the increased number of legal actions. There was a sixfold increase in litigation brought in the US alone over a six-year period. I expect that to increase significantly in the coming year. So, people want to make sure that they are known for being a responsible and thoughtful actor in this space, and not the defendant in litigation.

Julian Ha: So, just picking up on that point around liability: I think in a recent article, you and your colleagues wrote that as AI adoption expands, so does the landscape of related legal liability. You discuss the ways that leaders of all kinds have become much more AI literate with respect to those potential harms and litigation risks.

Can you maybe expand a little bit on that and share with us some of those insights on how leaders are able to mitigate liability—and, as you wrote, avoid those associated harms to companies, individuals, and communities?

Miriam Vogel: So, first of all, it's about grouping what these potential harms are; it's about looking at the landscape. We've been watching for several years where the potential harms and liabilities are, and we categorize those as nine different issues.

First of all, accuracy and reliability. I'm happy to delve further into each of these, but we know that if you're talking about AI, generative AI, you need to be checking to make sure that the outputs it's recommending to you or your consumers are accurate and reliable.

Second of all, you need to be looking at fairness and bias. Third of all, interpretability, explainability, transparency—all issues that, again, are important for building trust with your consumers, but also for making sure that you can be responsive if it ever does come to litigation.

Fourth, accountability. Fifth, privacy, security, and IP and confidentiality. I know you've been monitoring the significant uptick in IP cases that are playing out in the courts. There are still some unknowns. We know that an AI system cannot, on its own, create a copyrightable work; that's been asked and answered. You need to have human intervention. How much human intervention is an open question, and we're seeing some judges start to give us answers in that space.

There are workforce issues. That is both, again, a trust-building issue and your insurance: it's making sure that you are AI ready, that you have a workforce that's ready for your AI economy, for your AI future. And finally, there are environmental impacts, which people need to be thinking about, both for accountability and for the liability that I think will ultimately be forthcoming.

In terms of mitigation strategies, awareness is key. Making sure you know the steps that you need to be taking, making sure you know the potential harms that could come from your AI use, is all important.

First of all, you need to survey your AI use. Sometimes, one of the best functions we serve at EqualAI is showing up and making sure the C-suite is having that conversation—surveying the landscape of where they're using AI today, where they're planning to use it (and, for any company that doesn't think they're using it, talk to your HR teams, because you're using it).

There was an interesting survey recently that asked HR professionals how many were using AI products in their HR systems. 82% said they were, but what was interesting is that when they asked the CEOs, only 17% said they were. And only just over half, 52%, of the general counsels and chief legal officers believed they were.

So, it's happening. You want to make sure you know where.

Julian Ha: That's quite a disconnect—something important to bridge. That's fascinating.

Miriam Vogel: Exactly.

Julian Ha: So, Miriam, are there further mitigation strategies that you see that could be effective in this?

Miriam Vogel: Yes, thank you so much for asking, Julian.

What's nice about good governance, big picture, is that good governance with AI is good governance. Most of it will be familiar to the clients you work with. It is not a new concept, but it's making sure that you're being intentional and applying it to AI.

So, what do I mean by that? Five easy steps—or, rather, five known steps, is what I should say. I do not want to pretend this is easy, but it is familiar and it should be happening.

First of all, you need to make sure you have a framework in place. What is going to be your approach to AI governance? We're fortunate that we're far enough along that there are many good examples. One that we use at EqualAI is the NIST AI Risk Management Framework. It is law agnostic—wherever you are, it will be applicable—and it is organization and industry agnostic, and it can guide you through whatever stage you're in. It can help determine the questions you should be asking to interrogate your systems, to make sure that they are operating in the way that you expect and that you're not missing potential harms or flaws.

So, we also have on our EqualAI website—what I just learned recently is the highest-trafficked area of our website—our impact assessment tool based on the NIST AI Risk Management Framework. People are free to use it. It's available for anyone to understand how to use an impact assessment tool, which is a really important part of AI governance. And again, this one is based on that well-done framework.

Second of all, accountability. So, I'm sure, Julian, you have this conversation in any good governance discussion, but here again, accountability is key. Here it means you need to have someone in your C-suite who is accountable for your AI determinations, for your AI systems. 

Too often it is put to, you know, the IT person or the IT help desk somewhere in the company. No. This needs to be front and center. There are going to be questions about budget, questions about priorities, as well as questions about integration, and you need to know throughout your enterprise that there is someone in the C-suite that is holding everyone accountable and who herself is accountable for these decisions.

Third, you want to make sure you have a clear process in place. There are different models, whether it's a system that sits close to your product development, if you are a company that creates a product. Do you have interaction and people responsible for your AI governance? Do you have a committee with multiple stakeholders from across your enterprise? Does it also involve outside stakeholders? Do you have a committee, at the end of the day, that is answering what your process is going to look like?

And when you have that process in place for what your AI governance system looks like, you want to make sure that it's communicated across the company. Again, that's building trust as well as accountability so people know what their role is and that they can take confidence in the fact that you have a clear process in place. 

The other reason that's really important is that we're talking about AI. So much of what happens is going to play out with your employees and with your consumers. They're your front lines; building trust with them that you have a process in place, letting them know they play a very important role, and letting them know there's accountability so that it's safe to tell you if there's a potential problem, is key to making sure that your AI governance works well.

Fourth, you want to make sure you're documenting what you're testing and when. Your AI systems will cross numerous hands across the enterprise, and probably across other companies and organizations as well. You want to be very explicit about what you've tested, how you're defining it, and when. Then you can get back to the key question of, “for whom could this fail?”

And finally, you need to audit and iterate, audit and iterate: rinse, repeat; rinse, repeat. AI constantly iterates and learns new patterns, and so you need to have a clear cadence in place for how routinely you're going to audit to make sure that it's giving you the outcomes you're expecting.

Julian Ha: Let's turn to opportunities. Let's turn to the upside of all this. What are you excited about regarding the future of AI? 

Miriam Vogel: So much! So much. I mean, on a daily basis, I'm using AI in ways that are propelling my work. It becomes a fun thought partner, writing partner. It is never the final draft of anything I do; it's not even the second draft. But for a first draft, it's so helpful, whether it's in writing, in thinking through prospects, in communications, et cetera.

So personally, I am certainly a huge beneficiary. But in terms of what it'll do for society, the opportunities are endless. Particularly when we think about healthcare, there's already been significant, important developments in this space. 

For instance, in my family, I've had two aunts die of lung cancer. We've seen that it's on the uptick for women non-smokers. My aunts were also non-smokers and were taken from us way too early. Well, Mass General and MIT got together to create this program, Sybil, which in early testing demonstrated tremendous potential. One early study showed, I believe, 87% to 94% success in detecting lung cancer one year before incidence—one year before it presented.

So, to think of all the lives that tool alone could save. We think about, in education, the ways that we can create a personalized tutor if we're thoughtful, if we're intentional, if we're making sure that students and workers have access to this learning tool.

Part of the work is empowering people to know that it is there to support them. I don't think that's currently happening. There's a gender gap in adoption of AI. There are gaps regionally in who across the country is using AI. So, we have a lot of work to get there, but if we get this right, so many of us can have a learning partner.

Again, we have to be AI literate. We have to know when we're using AI; we have to know how it can propel us. We have to know the risks.

But if you are prepared, you have examples like Mason Grimshaw. He is Lakota and has an amazing story of how he went to MIT and became a computer science major. And now, he's bringing that back to his Lakota tribe and creating coding camps for his students. Some of it is very specific to his people: making sure that it is only Lakota teachers who are training the students, because that is one of their keys to success. They found that that is hugely vital to building that trust—again, it always goes back to trust.

They're also finding projects that are important for their students. So, for instance, preserving native language, teaching them to go out in nature and identify medicinal plants, as well as ways that any other kids could benefit from learning AI programs. Again, it's ways that are deeply personal and particular to communities.

But I think there are a lot of broad lessons, even from that example, as to how we can all benefit by building that trust, by bringing in new students, by finding ways to support our community with AI opportunities.

Julian Ha: I think you've touched on some really exciting areas—healthcare, education, empowerment. That gives me a lot of hope. 

So, Miriam, what does it mean to you to lead in an AI-enabled world? Are there any specific skills or mindsets that you think leaders will need to thrive in this AI-driven world that we have before us?

Miriam Vogel: Thank you for that question—I'd love to hear your answer too, Julian, because I know it's something you think a lot about. 

What's exciting to me about this moment is that, as much as there is uncertainty and so many novel questions and applications, increasingly I see that it still gets back to the basics. It still gets back to mindsets that are required in all times in order to propel yourself, in order to ensure success.

Those are elements like curiosity. You will not succeed in this AI world unless you are curious—unless you're open to trying new things, you're open to exploring what this new AI-propelled world is going to look like, what it does look like. Because we're there, the future is here. And so, it requires that curiosity.

And with that, it requires humility. We have to be comfortable knowing that we don't know all the answers. That's really hard. It's really hard to know that. When we hear the leaders who are creating the AI tell us they're not always certain why a certain function works, that's really hard to hear. That really challenges our need to know that somebody understands all of what's happening. So, we have to go through this with humility.

But I think third, that means that we also can all take confidence. We're learning this together. We all have questions. We all can benefit equally from AI use, from AI adoption, from our curiosity and understanding how this innovation will propel us and serve us. So we should go forward with confidence knowing that it's going to take the same basic skills to succeed in an AI future that it has always taken.

As long as we are clear on our values and how they apply to our AI use, as long as we are clear on our intent to thrive through AI and to make sure that all of our communities are thriving through our AI use, then we can take confidence in how we approach AI.

But again, it has to come with curiosity and humility as we go forward with the strength of confidence in ourselves.

Julian Ha: I really appreciate that. As you were answering the question, Miriam, I put the same question into an AI chatbot. I wanted to see what that would spit back out.

It said, “Mindsets and skills leaders will need: Curiosity over certainty.” AI is evolving fast. Leaders who remain curious will be able to adapt more effectively. You’ve got to have digital data literacy. You don't need to code, but you need to understand how the data's collected and analyzed and used. “You must have EQ, emotional intelligence.” As AI takes over some of these routine tasks, I think empathy and communication are going to be core to building teams and resolving tensions.

“Agility and experimentation. Leaders need to foster a test-and-learn culture.” And then, finally, it says “Ethical foresight. Leaders need to not just ask ‘can we use this technology?’ but ‘should we?’”

So I think it's really interesting, posing that question to AI in real time. It comes back with some really cogent recommendations. 

Miriam Vogel: Julian, I'm so glad you did that, because it's such a helpful demonstration of how it propels a conversation. It gives you additional data points to think about.

It's studying the patterns of conversations across the world, across time, across populations, across subject matter. So, it's going to be really helpful in adding a different point for you to be thinking about. 

It doesn't mean it's right, and it doesn’t mean it's the final answer. But again, with this curiosity, it can really propel us and help us think about points we've missed or people we might've forgotten to take into account, which is hugely critical.

If there's one piece of advice I would share with every executive, lawyer, or person who is helping to navigate AI out there, it is one that Cathy O'Neil teaches us: for whom could this fail?

Julian Ha: Well, actually, that is our last question: what's one piece of advice you'd have for business leaders in making the most of AI? I think you just gave us that, but would you like to expand on that at all?

Miriam Vogel: If we're thinking about for whom could this fail, it gives us so much opportunity. It gives us the opportunity, first of all, to avoid liability and risk. Who can be harmed by this use?

It's also an opportunity. Like I said, it's all a carrot and a stick. If you're thinking about new use cases and new users, you’re considering a whole different group who can benefit from your products, who you might not have been serving otherwise. It's a new consumer base. It's new opportunities, it's new ways to solve for seemingly intractable issues or issues that might not have been within your purview previously.

And so, “For whom could it fail?” opens up the aperture for you to make sure that you are thriving and that those around you are a part of the success and a part of your success.

Julian Ha: Well, Miriam, you shared some incredible insights, and we can't thank you enough for joining us on this podcast on AI governance—an incredibly important topic. Again, thank you for spending time with us, and we really appreciate it. 

Miriam Vogel: I really enjoyed it. Thank you, Julian.

Thanks for listening to The Heidrick & Struggles Leadership Podcast. To make sure you don’t miss the next conversation, please subscribe to our channel on your preferred podcast app. And if you’re listening via LinkedIn or YouTube, why not share this with your connections? Until next time.


About the interviewer

Julian Ha (jha@heidrick.com) is a partner in Heidrick & Struggles’ Washington, DC, office and global managing partner of the Legal, Risk, Compliance & Government Affairs Practice. He also founded and co-leads the Association Practice and is a member of the CEO & Board of Directors Practice.
