Published on 27 October 2023

KPMG Belgium Board Leadership Center | Artificial Intelligence Boardroom questions

What's the issue and why does it matter for Boards?

-----

In a digital world, the value of Artificial Intelligence (AI) can be huge: you can capture the exact context of decisions through data, use AI to optimize those decisions and, through digital pathways, immediately turn decisions into actions. This promises to bring speed, precision and scalability to the most sensitive processes. The rise of Generative AI in recent months, e.g. ChatGPT, has only brought these considerations further front and center.

Given the speed of change, it’s not credible to think that any organization will have the same look and feel, or even the same fundamental way of working, in three or five years’ time. So, companies need to answer the question: where do we want to be in five years and how will we get there?

At the same time, there are many issues to consider: how do you set an ambition level? How do you reach the point where you can create or use such solutions as a matter of course? If they take over your most sensitive processes, how do you ensure this happens in a fully trustworthy way? And does all this sufficiently account for the distinction between the now-popular Generative AI and other AI applications?

This requires a dedicated strategy with solid execution, governance, a cultural shift, workforce changes, an ethical framework, etc. It’s the Board’s role to ensure that there is a clear vision and strategy, and proper governance, including model validation, ethical guardrails and the identification of risk points and control measures. The Board must also provide oversight of how this vision is achieved. Not only does this require a solid understanding of the modern topics of data and AI, but it also requires Boards to form a balanced view of the opportunities and risks, fully accounting for the costs.


Political action, regulation and relevant frameworks and guidelines

-----

In Belgium, overall attention to privacy and AI risks is increasing, not least driven by an understanding of the potentially huge impact of AI solutions. Both the public and private sectors are looking at how to use generative AI to improve the business and make it more efficient and, at the same time, at how to approach the risks around it.

At the EU level, regulatory initiatives are more concrete. Core regulations regarding “data” are relatively well-known and accepted by most companies, for example the General Data Protection Regulation (GDPR), applicable since 2018, which regulates the use of personal data. Since then, further developments have been made in a number of areas:

  • First, regulators have moved beyond regulating data and their use to also regulating the AI products that are built on top of those data. In this context:
     
    • AI Act: On 14 June 2023, the European Parliament adopted its negotiating position on the AI Act and, as of the date of writing (17 October 2023), the text of the legislation, which will regulate the use of AI in the EU, is being finalized. The Act aims to clarify what an AI system is, define the relevant actors and their obligations, and introduce a risk-based approach for the creation, use and governance of AI solutions. Local authorities are also taking action through policies and recommendations. For more information on the AI Act, read: AI Act: the great divider of AI practice? and Linking the AI Act with Privacy & Ethics.
    • AI Liability Directive: In September 2022, the European Commission published a proposal for a directive on adapting civil liability rules to artificial intelligence (the 'AI Liability Directive'). The new rules intend to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU. The proposal is currently being discussed by the European Parliament and the Council.
       
  • Second, sector-specific regulations and guidelines are being launched, e.g. the Dutch Guideline for high-quality diagnostic and prognostic applications of AI in healthcare[1].
  • Third, a great many standards are currently being developed by professional organizations, e.g. the International Organization for Standardization (ISO), the National Institute of Standards and Technology (NIST) and the Institute of Electrical and Electronics Engineers (IEEE).

While these efforts are not yet finalized, the fog is lifting and the future playing field for AI is becoming clearer. Nevertheless, while these initiatives are being introduced, the evolution of Generative AI is already heightening concerns among citizens and experts alike, raising the question of whether even more regulatory initiatives are needed. Still, even the currently planned regulations pose a great many challenges to companies: how do you position your activities in light of these regulations? Do they apply? How do you deal with the many technical and organizational difficulties?

[1] Guideline for high-quality diagnostic and prognostic applications of AI in healthcare


What questions should Boards consider asking?

-----

While AI is merely a tool, it is a transformative one, akin to the introduction of computers in the second half of the 20th century. As a Board, you must receive clarity on a number of topics:

What is the vision of management?

  • What is AI being used for today, and what will it be used for?
  • How deeply will it be embedded in our processes? Which customer needs will be met? Which portions or components of the value chain will be affected?
  • What is the timeline attached to this?
  • What is the ethical baseline of the company with regard to AI, and is it in line with our mission and values?

What is the exact ambition level?

  • Do we want to be frontrunners in the market?
  • Do we want to be able to create our own solutions, or merely integrate vendor solutions?
  • How much of our revenue will be affected by AI? How much will be driven by AI?

How is management preparing for this?

  • What are the main challenges that were identified for the organization, and have these been independently reviewed?
  • How is the organizational structure adapted to support the ambition level?
  • Will AI expertise be centralized in one team, will it be spread over separate teams close to the individual business lines, or will a federated model be applied?
  • Is AI set up as a cost center, a profit center or something else, and is this in line with the vision and ambition level?
  • Are Audit, Risk and Compliance functions engaged with AI projects to ensure control requirements are appropriately considered and embedded from the start?
  • What is the company’s strategy in seeking external support on AI (vendor solutions, consultancy)?
  • What will be the impact of AI on our people, employee value proposition and future workforce needs? Is management considering retraining and upskilling programs?

How does management intend to deal with the risks of this new technological opportunity?

  • Does the organization have and apply a framework around “trusted” AI, addressing issues like bias, security, transparency, etc.?
  • Are risks (in the broadest sense) clearly identified?
  • Is the risk mitigation strategy in line with the ambition level, has it been independently reviewed and is it implemented through a governance model that includes risk assessment and internal controls?
  • Is there a validation approach for the algorithms and an audit strategy for the entire AI ecosystem?

Where do we currently stand?

  • What are concrete examples of existing AI solutions that deliver value to the company today? If there aren’t any: why not? What are the obstacles?

What actions can the Board consider?

-----

  1. Evaluate the level of knowledge about AI, individually and across the Board. Strong expertise is typically not needed within the Board, but sufficient knowledge is required to challenge the vision and risk appetite of the executives and ensure appropriate governance processes are in place.
  2. Set up dedicated training for the Board, tailor-made to the specific use cases and challenges faced by the company. This can involve external experts, but if the company makes rich internal use of AI, internal experts should also clarify the current way of working and the way forward. Frontrunners should set up extensive full-day training for their Board, discussing technological approaches, opportunities, risk angles, etc. This often provides a healthy mix of external views and internal strategies and tactics.
  3. Clarify the vision and ambition level of senior executives.
  4. Ask questions (see above) of the executive level and strongly challenge their answers.
  5. Ensure that qualitative and appropriate risk governance over AI is in place.
  6. Monitor progress, and follow up on a selected list of concrete summaries with regard to value creation through AI and the evaluation of AI-related risks.
  7. Use your external network to bring in speakers who can calibrate and challenge internal AI developments and create inspirational moments.

About the Board Leadership Center

-----

KPMG’s Board Leadership Center (BLC) offers non-executive and executive board members – and those working closely with them – a place within a community of board-level peers. Through an array of insights, perspectives and events – including topical seminars and more technical Board Academy sessions – the BLC promotes continuous education around the critical issues driving board agendas.