
Additional risks and different measures (part 5)


By Tim Beswick

We are often asked whether becoming an AI-driven enterprise requires something different from becoming a data-driven enterprise. In this series of blog posts, Anderson MacGyver shares its point of view on this topic. For those who want to start from the beginning, you can read part 1: How do we become an AI-driven enterprise?, part 2: Data-to-AI-to-Value journey, part 3: theme 1: The generative / general-purpose AI model buzz, and part 4: theme 2: Business process redesign requiring even more attention for people change. Now, let's dive into part 5: the third underestimated theme.

3. Additional risks and different measures 

Additional compliance requirements 

Organizations that are already on the journey of leveraging data to create business value are most likely aware of the need to govern their data and are making progress in doing so. Data governance is aimed, among other things, at ensuring compliance with applicable legislation.

When you additionally pursue AI to unlock business value, you need to consider the EU AI Act. Given its intent, and because of the broad definition of an AI system in the act, it is inevitable that this legislation becomes part of your norms, and most organizations will need additional measures to comply.

At the bare minimum, the act requires all organizations to have oversight of, and transparency about, their usage of AI. The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited and minimal risk. Each level comes with different regulations and requirements for organizations developing or using AI systems. Even if you expect to fall into the lowest risk categories only, you still need oversight and transparency regarding all AI systems that you use. Without this, you cannot assess which category each of your AI systems falls into, and therefore whether and which regulations apply to it.

Oversight and transparency are therefore the bare minimum, and they require a mechanism to identify, administer and classify your AI systems. That oversight may, in turn, lead to the conclusion that some of your AI systems fall into risk categories where significant additional measures are required.
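To make this tangible, the sketch below shows one possible shape for such a mechanism. It is a minimal, illustrative Python example, not something the EU AI Act prescribes; all names and fields are our own assumptions. The point is simply that a single inventory recording every AI system together with its assessed risk level is what makes the questions "which category does this system fall into?" and "which regulations apply?" answerable at all.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class RiskLevel(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystem:
    """One entry in the AI system inventory (illustrative fields)."""
    name: str
    owner: str                              # accountable business owner
    purpose: str                            # intended use; input for classification
    risk_level: Optional[RiskLevel] = None  # unknown until assessed


@dataclass
class AIRegistry:
    """A minimal mechanism to identify, administer and classify AI systems."""
    systems: List[AISystem] = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        self.systems.append(system)

    def unclassified(self) -> List[AISystem]:
        # Systems still awaiting a risk assessment: until they are
        # classified, you cannot tell which obligations apply to them.
        return [s for s in self.systems if s.risk_level is None]

    def by_risk(self, level: RiskLevel) -> List[AISystem]:
        return [s for s in self.systems if s.risk_level == level]


# Usage: register a system, record the outcome of its assessment,
# then query the inventory.
registry = AIRegistry()
cv_tool = AISystem(name="CV screening assistant", owner="HR",
                   purpose="rank job applications")
registry.register(cv_tool)
cv_tool.risk_level = RiskLevel.HIGH  # outcome of a risk assessment
print([s.name for s in registry.by_risk(RiskLevel.HIGH)])
```

In practice such an inventory would live in a tool or register rather than in code, but the structure is the same: identification, administration and classification in one place.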

Additional or other measures 

In many organizations, the management of, for instance, privacy, security, regulatory, ethics and operational risks relies at least partly on humans. Increasingly, an AI fulfils part of a role that a human traditionally fulfilled. This implies that the human may no longer be there to carry out the measures that were stipulated to manage the risk.

Let us again look at the example of driving a car to make this more tangible. 

To achieve an acceptable risk around operating a car, we rely on measures that are attached to humans: being healthy, being sober, not being excluded by insurance companies, and holding a valid driving license.

So, what happens if (parts of) operating the vehicle shift to an AI? To keep it simple, let us ignore the legal implications, as AI under current law does not have legal personality.

Can we simply consider that AI to be a replacement for the human and apply the same measures to control risk? If so, what defines a healthy AI, and who attests to this? Probably not your GP. Should we register and classify an AI's historical behaviour to enable exclusion? Does an AI need to take a driving test? Or do we need to go back to the drawing board, reassess the risk, and implement additional and/or completely different measures? In this example of autonomous driving, we could for instance limit the autonomy, and with that the role of the AI, by retaining a human factor in the process, as sketched below. Or, in the future, we could accept that AIs control cars and implement an overarching control layer that supersedes the individual cars.
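To illustrate the "retain a human factor" option, here is a deliberately simplified, hypothetical sketch; the structure, names and threshold are our own invention, not an actual automotive design. The AI acts autonomously only on manoeuvres it is highly confident about, and hands everything else back to a human.

```python
from dataclasses import dataclass


@dataclass
class ManoeuvreProposal:
    """A driving action proposed by the AI (hypothetical structure)."""
    description: str
    confidence: float  # AI's self-assessed confidence, 0.0 - 1.0


def execute(proposal: ManoeuvreProposal) -> None:
    print(f"Executing: {proposal.description}")


def ask_human(proposal: ManoeuvreProposal) -> bool:
    # Placeholder for a real hand-over to the human driver.
    answer = input(f"Approve '{proposal.description}'? [y/n] ")
    return answer.strip().lower() == "y"


def human_in_the_loop_gate(proposal: ManoeuvreProposal,
                           threshold: float = 0.95) -> None:
    """Limit the AI's autonomy: only high-confidence actions run
    unattended; everything else requires human approval."""
    if proposal.confidence >= threshold:
        execute(proposal)
    elif ask_human(proposal):
        execute(proposal)
    else:
        print("Action rejected; control stays with the human.")


# Usage: a low-confidence manoeuvre triggers the human hand-over.
human_in_the_loop_gate(ManoeuvreProposal("overtake on a narrow road", 0.60))
```

The design choice here is the essence of the measure: instead of attesting to the AI's "health" or "license", you constrain its autonomy so that a human remains accountable for the risky decisions.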

We discuss the last theme separately in the next blog post. So, stay tuned!

Follow us on LinkedIn to be notified when we publish a new blog: Anderson MacGyver LinkedIn
