Nov 20, 2023

How boards can help manage AI risks and opportunities

Rich Harper and Travis Wofford explain steps boards can take to help their company make the most of the new technology, including paying attention to D&O liability

Although 95 percent of directors acknowledge that artificial intelligence (AI) will have a significant future impact on their businesses, only 28 percent say AI is a regular feature of their board’s conversations, according to a survey by the National Association of Corporate Directors (NACD).

It is imperative that boards assess the opportunities and risks the technology presents and establish processes that address AI’s impact on issues ranging from directors’ and officers’ (D&O) insurance to the oversight responsibilities for management’s adoption and implementation of AI. The imminent rise in government regulations and disclosure requirements makes this assessment urgent, but it remains fundamentally an opportunity for boards that can confront and successfully manage the issues AI presents.

First steps
A first step to mitigating risk is to consider establishing a board-level advisory committee, staffed by management members selected to work with the board on a clear governance policy and oversight plan for the adoption and implementation of AI. Where possible, the board should look for employee representatives with the requisite expertise in IT, finance, operations, legal and other relevant fields.

Boards should work with their advisers to assess the potential risks associated with the use of AI, including data privacy, security and regulatory compliance. Separately, companies should develop an AI-use strategy focused on identifying and prioritizing potential and existing uses of AI tools within the organization.

Some specific steps include:

  • Invest in AI literacy and education so directors have a basic understanding of AI concepts, capabilities and limitations. Understanding AI and its implications is crucial for making informed decisions
  • Develop a clear AI strategy that aligns with the company’s goals. The strategy should include guidelines for responsible AI use, ethical considerations and risk management
  • Regularly conduct comprehensive risk assessments to identify potential AI-related risks, including those related to data privacy, security, bias, transparency and compliance with regulations
  • Consider third-party audits of AI systems to identify potential vulnerabilities, risks and areas for improvement
  • Prepare a crisis management plan that outlines steps to be taken in the event of AI-related issues, including data breaches, ethical concerns or legal disputes
  • Collaborate with external AI experts, legal counsel or consultants to stay informed about emerging AI risks and mitigation strategies.

Attention should also be paid to mitigating D&O liability, which can arise if directors and officers fail to exercise proper oversight and due diligence in AI implementation. For example:

  • Ensure that AI systems and applications adhere to ethical and legal standards and comply with laws and regulations, including those regarding anti-discrimination and data privacy, among others
  • Take steps to protect data against breaches and reinforce compliance with data-privacy regulations
  • Memorialize establishment and monitoring of board-level AI oversight efforts in board agendas, minutes and presentations, as appropriate.

Efficiency and revenue
Despite the challenges AI presents, it also creates opportunities to increase revenue and enhance the efficiency of a company’s operations. Here are some ways in which that can happen:

  • Using AI to conduct due diligence on third-party relationships, such as those with customers, vendors, partners and service providers. AI is an excellent tool for analyzing datasets to help identify potential risks associated with these relationships and inform decision-making
  • Improving customer acquisition and retention, increasing revenue and ensuring customer loyalty using AI-driven marketing and personalization techniques
  • Reducing the risk of non-compliance by using AI to automate data collection, analysis and reporting
  • Considering how AI can augment or transform existing business lines and processes, such as research and development, by reducing development time and accelerating the innovation, iteration and testing of products and customer experiences.

Directors should be proactive in addressing concerns related to AI. Importantly, companies working with the technology must also stay informed about the rapidly evolving regulatory landscape, as non-compliance can lead to legal and financial consequences. By doing so, boards can help ensure AI adoption within their company is both beneficial and safe.

Rich Harper is partner-in-charge of Baker Botts’ New York office and partner in the firm’s litigation department. Travis Wofford is chair of Baker Botts’ corporate department in Houston and vice chair of the firm’s global M&A practice
