How Scared of AI Should Corporate Boards Be? Tactics for Mitigating the Risk and Getting the Governance Right.


Artificial Intelligence (AI) is just the latest risk related to the use of technology. Still, the incredible speed of its development (and widespread media attention) and its potential for misuse should put it close to, if not at, the top of every Board’s “pay attention” list.

Over the past ten years, we’ve seen an explosion in how AI, like machine learning, is being used, along with a growing list of potential risks. Navigating the AI landscape is crucial to corporate governance and risk management. Many boards already have teams focusing on risk oversight (72%, according to a survey by Deloitte), with members skilled in managing various business risks. But AI brings unique challenges. Whether protecting sensitive data, ensuring fair AI decisions, or deploying AI systems responsibly, there’s much at stake for companies, consumers, and society. Boards have a crucial role in helping tackle these challenges.

If Boards were already struggling with basic AI, Generative AI brings game-changing challenges with its incredible, ever-increasing capabilities. Generative AI intensifies the risks associated with AI and compresses the timeline for implementing crucial risk-mitigation strategies. The risks are real and present today, and they will multiply as Generative AI matures. With barriers to using AI falling away and its potential growing by the day, it’s essential for companies to manage the accompanying risks carefully.

The board of directors plays a critical role in identifying strategic emerging opportunities and overseeing risks, especially those associated with AI. The board’s responsibility is to supervise management, assess how AI might influence the corporate strategy, and consider how the company can effectively handle risks, particularly those that threaten the company’s clients, employees, and reputation. As such, company boards need to assess and understand the implications of AI for their strategy, business model, and workforce.

Board directors need to start at least thinking about AI’s impacts on corporate governance, even if this only leads to some overarching principles that can be developed into supporting guidance designed to be more flexible and pragmatic.

For board members wanting to stay ahead of the curve with Generative AI, here are five things to consider:

Improve your AI know-how: Understanding AI is critical to asking the right questions and making informed decisions. Whether it’s through workshops, reading, or even using AI tools, there’s a lot you can do to get up to speed.

Encourage AI savvy in top management: It’s just as crucial for company leaders to be fluent in AI. They need to grasp not just the opportunities but also the risks, ensuring AI is used responsibly.

Consider adding AI expertise to the board: Having someone with hands-on AI experience can be invaluable for guiding strategic decisions and governance.

Plan for the future: Setting up dedicated groups or incorporating AI governance into existing committees can help keep focus on this fast-evolving area.

Keep a big-picture perspective: While board members might not deal with AI directly, they play a critical role in guiding the company’s AI strategy and ensuring it’s used ethically and effectively.

The Generative AI landscape is still emerging. These are the early days of a new technology that will profoundly impact business and society. In addition to the five steps above, companies and their board members should turn to advisors who are already developing the tactics and standards for Generative AI governance, oversight, and reputation management.

While risk management is always a moving target, boards with literacy, professional experience, dedicated attention, and a vision for the future are positioned to guide their organizations into this new era of AI.

So far, regulators and policymakers have struggled to keep pace with the rapid development of AI, while the more recent evolution of GenAI has added further pressure to act now. Many countries have already published national AI strategies, and several laws and regulations are being developed to provide guardrails. It took years for regulations around cybersecurity to catch up to its threats and risks, but the SEC’s cybersecurity disclosure rules, adopted in July 2023, could serve as a guide for AI. Requiring companies to describe the board of directors’ oversight of risks from AI, as well as management’s role and expertise in assessing and managing material risks, could be an excellent place to start.

Companies must establish ethical guidelines and robust risk management frameworks to ensure safe AI adoption. Given the technology’s somewhat unpredictable development, real-time, human-led oversight will be more critical than ever.

Purposeful Advisors delivers premium communications advisory, crisis communications, and public affairs services to founders and senior management teams, empowering companies to move minds, bodies, dollars, media, and governments to reach their business objectives.

Purposeful Advisors’ roots are in the boardrooms of the biggest brands in the world and the war rooms of some of the most complicated political and business campaigns in recent memory. With its senior advisors, the team brings over a century of hard-earned experience to start-ups, scale-ups, and midsize enterprises. Purposeful also leverages its expertise to help clients through crises, litigation, and other critical situations.
