From Strategy to Execution: The Board’s Role in Ensuring Effective AI Adoption
- Frank Wander
- Sep 23
- 10 min read
Updated: Sep 25

Introduction
Artificial intelligence (AI) is the latest in a series of transformational technologies that have emerged over the last sixty years, dating back to the mainframe revolution. The internet and search engines were transformational; distributed computing was transformational; cloud computing was transformational; and so on. Personally, I think the emergence of the internet will remain more consequential than AI. This is not to diminish AI - it is very impressive. But connecting every business, person, and application in the world over high-speed communication is difficult to surpass.
Having spent a career in IT, mostly as a Fortune 250 turnaround CIO, I believe AI is just the latest transformational technology. We’ve been here before. Consequently, artificial intelligence should fold into the Board’s existing oversight structure – either committees or standing agenda items. But Boards are generally not technology savvy, so that structure is lacking. If you would like to learn more about this opportunity, please read When Technology Demands a Dedicated Board Committee.
Successful oversight of AI requires strong technology, operational risk, and compliance oversight. If you are wondering, as a Board, how you are going to oversee AI, then you should capitalize on this moment and proactively embrace it as an opportunity. Use AI as the driving force to finally create a technology oversight model that will accommodate whatever transformational technologies come after this.
Like the preceding generations of disruptive technology, AI is not just a technology initiative — it will be a core driver of business strategy, competitive advantage, and risk exposure. For Boards of Directors, this means the executive team must move quickly beyond the experimental phase and integrate AI effectively, responsibly, and at scale into the business architecture. It is therefore incumbent on the Board to evaluate whether AI initiatives have a clear ROI and are not a response to the latest technology hype cycle; that they are delivering measurable business outcomes; that they are aligned with enterprise strategy; and that they are governed in a way that mitigates risk.
Reader Guidance: This article is written from the perspective of a Board that oversees a large, complex enterprise. If you are from a small or midsize company, your Board will operate without a large committee structure. However, that doesn’t eliminate your oversight responsibilities. Simply apply the oversight framework provided here to craft your Board agenda so it delivers comprehensive oversight of AI and technology.
Table of Contents
Key Takeaways
Board Education
Understanding the Board’s Fiduciary Responsibilities
Understanding the Firm’s AI Strategy and Implementation Success
Weaving AI Oversight into Your Committee Structure (or Board Agenda)
The Legal Landscape
Strategic Questions for Management
Conclusion
About the Author
Key Takeaways
Board governance over technology has been a long-standing gap.
AI represents an opportunity to fill this gap by creating a technology committee.
Boards should not set up an AI committee. That is not an enduring solution to technology oversight.
Boards require tech-savvy members.
Boards require AI literacy to carry out their oversight function.
Technology goes beyond AI, so Boards need technology literacy first and foremost.
AI oversight needs to be woven into each committee or, absent committees, into the standing Board agenda.
Board Education
Boards require both AI and technology literacy. Effective governance will only occur if the Board has the right level of AI and technology knowledge. This can be accomplished by recruiting individuals with broad expertise in technology, including AI, or establishing the right committee structure to advise the Board, thereby enabling informed decision-making.
The Board needs to understand the risks of both using AI and not using it, and should not assume AI is an immediate competitive necessity simply because of the technology’s overpowering hype cycle. Additionally, the Board has the responsibility to fully understand AI and technology so it can ask informed questions and evaluate the risks AI poses. This understanding cannot be delegated to management. A lack of understanding leads to a lack of effective oversight.
Accordingly, Boards should avail themselves of outside experts and independent briefings to raise their literacy and comfort level, eliminating dependence on internal experts.
Understanding the Board’s Fiduciary Responsibilities
As with any transformational technology, the Board must expand its fiduciary lens. This is where a properly staffed risk committee comes in. The risk committee should oversee the legal, regulatory, and technology risks associated with artificial intelligence.
Legal and Regulatory Risks
The legal and regulatory landscape for AI is rapidly changing, creating significant compliance challenges. This includes the current and emerging risks associated with:
Compliance Monitoring: The Board should oversee management's efforts to monitor and comply with an increasingly complex patchwork of regulations. This includes rules related to data privacy (e.g., GDPR), anti-discrimination laws, and new AI-specific acts like the EU AI Act.
Intellectual Property (IP): Directors must be aware of the IP risks associated with AI, particularly regarding the use of data for training models and the potential for AI-generated content to infringe on copyrights.
Accountability: The Board should ensure that clear accountability structures are in place for AI development and deployment. This is critical for navigating potential legal challenges related to biased outputs or harmful decisions made by AI systems.
Disclosure and "AI Washing": Boards must ensure that public statements about the company's use of AI are accurate and not misleading. Regulatory bodies like the SEC are actively pursuing cases of "AI washing," where companies overstate their AI capabilities or benefits.
Technology and Operational Risks
AI introduces a new layer of technical and operational vulnerabilities that the Board must oversee.
Operational Risks: As AI is integrated into transaction processing and customer service, continually monitoring how AI makes its decisions is imperative. This type of transparency is both a design and an operational requirement. Management must ensure that decision making is transparent and that controls are in place to oversee it on an ongoing basis. AI cannot be a black box – that is a significant operational risk.
Data Governance: The Board must ask management to demonstrate how they know the data used to train and operate AI models is accurate, secure, and used responsibly. Poor data quality can lead to biased or incorrect AI outputs with serious consequences.
System Integrity and Security: Directors need to ensure that robust controls are in place to manage the integrity and security of AI systems. This includes protecting against cyberattacks, data breaches, and inadequate control over change management.
Third-Party Risks: Many companies rely on third-party AI vendors. The Board must oversee the due diligence process for these vendors to ensure their models are not biased or unreliable and that their data privacy practices are sound.
Bias: “Trusted AI” is a more rigorous standard than AI alone. Trusted AI means that management has mechanisms to detect, mitigate, and prevent bias in its AI systems. As always, trust is earned, and it is the result of making unbiased models an operational priority. That means providing the funding and time it takes to train models correctly.
Understanding the Firm’s AI Strategy and Implementation Success
Boards should require clarity and alignment on management’s AI vision.
How does AI adoption:
Directly support business objectives? These include efficiency gains, revenue growth, and customer experience improvements.
Integrate into the strategic roadmap? What is the multi-year AI adoption roadmap, including key milestones and success criteria? These must be updated annually because the rapid evolution of technology continues to shorten the half-life of strategic plans.
Integrate organization-wide? Confirm that AI is not siloed within IT as a technology responsibility, but rather is co-owned by business units, HR, legal, and risk management.
What are the key metrics to monitor? ROI and payback period, workforce adoption rates, model accuracy, and business KPIs.
What are the leading indicators? For instance, percent of decisions augmented by AI, percent of workforce trained, or the number of AI-driven process redesigns implemented.
What frequency of updates is required? Initially, quarterly AI progress reports tied to measurable outcomes should be the minimum. At each meeting, management should share progress against plans and what they have learned since the last meeting.
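Two of the financial metrics above — ROI and payback period — can be made concrete with a small sketch. The figures and function names below are hypothetical, chosen purely for illustration; they are not benchmarks for any real AI initiative.

```python
# Illustrative calculations for two Board-level AI metrics: ROI and payback period.
# All dollar figures are hypothetical examples.

def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment, expressed as a fraction of total cost."""
    return (total_benefit - total_cost) / total_cost

def payback_period_months(upfront_cost: float, monthly_net_benefit: float) -> float:
    """Months until cumulative net benefit recovers the upfront investment."""
    return upfront_cost / monthly_net_benefit

# Example: a $1.2M AI initiative returning $150k/month in net benefit.
print(f"ROI over 24 months: {roi(150_000 * 24, 1_200_000):.0%}")                  # 200%
print(f"Payback period: {payback_period_months(1_200_000, 150_000):.0f} months")  # 8 months
```

In a quarterly progress report, management would track the realized versions of these figures against the plan, so the Board can see whether the investment thesis still holds.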
Weaving AI Oversight into Your Committee Structure (or Board Agenda)
I see people recommending that Boards establish an AI Committee. This is not a good recommendation. If you don’t have a technology committee, you need one – you do not need an AI committee.
The oversight of AI implementation is a collaborative effort, with each Board committee focusing on its area of expertise. While some overlap exists, a clear division of responsibility ensures comprehensive governance.
These committees provide reports to the full Board on issues that could materially impact the company, thereby ensuring that all directors are aware of significant risks.
Let’s examine the committees and how AI weaves into them.
HR
This committee oversees management's plans for developing a workforce with the necessary talent and AI skills. In addition, they need to see evidence that management is creating an AI-Ready culture.
Specifically:
Workforce Planning: Oversight of management's strategy for how AI will affect the company's workforce, including talent acquisition, upskilling, and potential displacement.
Ethical Use of AI: Oversight that AI is being used ethically in employment decisions, such as hiring, performance reviews, and promotions, and that these systems do not create biases.
Employee Well-being: Oversight of the psychological and social impact of AI on employees, including job satisfaction, stress, and the need for new skills.
Organizational Culture: Oversight that the culture intentionally supports human-machine teaming by being psychologically safe, agile, adaptable to change, questioning, and innovative. It should also be risk-aware, so every employee is on the lookout for risks, which are in turn reported up to that department’s risk team and placed onto the risk register.
Technology
The Technology Committee focuses on the technical and strategic aspects of AI implementation. Their primary responsibilities include:
Technology Assessment: They oversee the technical feasibility, scalability, and integration of AI systems into the company's existing architecture and infrastructure.
AI Strategy: They oversee the company's AI roadmap, ensuring that the technology is being leveraged to drive innovation, competitive advantage, and long-term business goals.
Resource Allocation: They are apprised of the budget and resources allocated for AI development, research, and talent acquisition, and how effectively it is being invested based on actual metrics.
Third-Party and Vendor Oversight: They are apprised of the risks posed by third-party AI providers, in terms of security, performance, and ethical standards.
Risk Committee
This committee’s role is to identify and mitigate the broad spectrum of risks that AI introduces. Their key oversight areas are:
Risk Framework: Oversight to ensure management has a robust framework to identify, assess, and manage AI-related risks, from data security breaches to model failures. This should fold into the firm’s enterprise risk governance.
Bias and Ethical Risks: Oversight to ensure policies and procedures are in place to detect and mitigate algorithmic bias, ensuring that AI systems are fair, transparent, and accountable.
Operational and Systemic Risks: Oversight to ensure the potential for AI to cause unintended consequences, system failures, or disruptions to business operations is being monitored. This is particularly true for agentic AI solutions riding on top of the company's existing business process infrastructure.
Reputational Risk: Oversight to monitor how the use of AI could impact the company's brand and public trust, particularly in sensitive applications.
Audit and Compliance
This committee is responsible for overseeing the financial integrity, internal controls, and regulatory adherence of AI systems. Their duties include:
Financial Integrity: They oversee the use of AI in financial reporting, auditing, and fraud detection, to ascertain that management’s application of AI in these areas is reliable and accurate.
Legal and Regulatory Matters: They monitor the company's compliance with an evolving landscape of AI-specific regulations and laws, such as data privacy acts, and see that the company is prepared for future legal requirements. They serve as the liaison between the Board and the General Counsel, Internal Audit, and Chief Compliance Officer, receiving regular updates on legal proceedings, investigations, and emerging regulatory requirements. The committee evaluates management's response to these issues and oversees that they are being handled appropriately.
Internal Controls: They are apprised of the effectiveness of internal controls and governance around the development and deployment of AI models to prevent errors, manipulation, and unauthorized use.
Data Governance: They maintain oversight of management's data governance efforts, to learn whether the data used to train AI models is secure, accurate, and free of legal or regulatory risks.
The Legal Landscape
Per Cornerstone Research’s report, Securities Class Action Filings – 2024 Year in Review, the number of AI-related filings more than doubled, from seven in 2023 to 15 in 2024.
These artificial intelligence filings involve companies that develop AI models, make products used in AI infrastructure, and use AI models for business purposes. The allegations concern misrepresentations or failures to disclose the risks related to the use of AI.
I will be releasing a separate post on the AI Legal and Compliance risks faced by Boards.
Strategic Questions for Management
- “Which parts of our business are most AI-ready?”
- “How does the AI roadmap advance shareholder value?”
- “What metrics are you tracking, and what ones are you unable to track but would like to, or need to?”
- “How are your internal governance structure and governance mechanisms being changed to accommodate AI?”
- “How are you monitoring AI’s present and future impact on the competitive landscape?”
- “How are you moving from the experimental phase to mainstreaming AI into business operations?”
Conclusion
It’s been sixty years since mainframe computing emerged as the backbone of business technology. Since that time, companies have navigated wave after wave of technological change — from client–server to cloud to mobile — yet many Boards still struggle to oversee technology-driven change, this time the oversight and governance of AI.
How is this possible, given how far we’ve traveled into the digital era? The reality is that most Boards remain largely non-technical, leaving them ill-equipped to evaluate the risks and opportunities of transformative technologies like AI.
Today, technology is no longer just a support function — it is a primary driver of competitive advantage. This makes it imperative for Boards to treat AI as a catalyst for change and establish a governance framework that ensures enduring, disciplined oversight of technology in a world where the digital and physical are now inseparable.
About the Author
Frank Wander is an accomplished senior technology executive, cybersecurity Board member, technology company founder, author, keynote speaker, CIO coach, and was featured in the Wall Street Journal in The View from the CIO’s Office.
Wander spent most of his career in technology, led IT departments and divisions across Fortune 250 companies, and went on to found a B2B SaaS technology company in 2014. He has deep experience using talent, culture, and governance to drive turnaround transformations of underperforming IT organizations.
In each turnaround, he transformed IT into a partner that delivered on its promises, and built a high performing, collaborative, and inspired IT culture that moved faster and got much more done. Because of these turnarounds, Wander researched and mastered how to use the human factors of productivity and innovation to tap the hidden potential of those organizations. Those experiences led him to author Transforming IT Culture under contract with Wiley Publishing as part of their CIO Series.
Frank also spent many years intimately involved in cybersecurity, with direct ownership of this function as a Fortune 250 CIO, as a Board member where he oversaw cybersecurity governance at a midsize Insurance company for seven years, and as a SaaS software company founder and CEO with responsibility for oversight of cybersecurity practices.
Currently, Wander produces a weekly show, Creating Cultures that Outperform, and founded Boardroom Edge, a site that focuses on the business rationale for stronger technology oversight by Boards.