
Operationalising ethics in AI

Posted 20 Mar 2024

Like any other function within a business, the adoption and use of AI involves risk management. Unlike many other functions, though, AI’s potential repercussions and impacts are wide-ranging, extending to ethical as well as legal and practical issues.

The earlier a company can understand the potential impacts of AI and put best practices and policies in place, the better. The specifics will depend on the business and its application, but issues of inaccuracy and bias are already well documented, along with the associated risks to physical safety, human rights, and even political fairness.

It’s always cheaper – and easier – to bake risk management into operational processes during development than to deal with remedial actions and problem-solving further down the line. This isn’t just a question of embedding standards and setting up governance systems; it also involves company culture and attitudes. As well as embracing the benefits of AI and its value to the business, employees need to understand and recognise the implications and pitfalls of its application, so that they can use it with the right mindset and feel empowered to take appropriate measures to avoid ethical conflicts.

Of course, tech leaders don’t need to be experts on all the ethical considerations of their work, as long as they recognise that they will have blind spots. They should be actively engaging with ethics experts who can help illuminate areas of risk and forecast potential problems, not just for their own business, but also for the wider industry and society in general. Working with these experts, the development team needs to be regularly testing their systems, to ensure that ethical standards are continually met and sustained.


There are already multiple indicators, standards and frameworks available, many of which have been established and approved by leaders in the industry.

For example, the AI Standards Hub in the UK, which is supported by the UK Government, is led by The Alan Turing Institute in partnership with the British Standards Institution (BSI) and the National Physical Laboratory (NPL). Digital Catapult has also published its Ethics Framework, Digital Transformation Strategy Framework and Data Strategy and Governance Framework, providing further tools and resources for navigating and ensuring ethical AI use. The Innovate UK BridgeAI programme, in collaboration with Trilateral Research, has released a report highlighting the importance of responsible and trustworthy AI. The International Organisation for Standardisation has also established an AI management system standard, ISO/IEC 42001 (adopted in the UK as BS ISO/IEC 42001), developed by experts from 50 countries, including the UK. So there’s usually no need for a business to build an evaluation system from scratch. What is important is that their chosen set of standards and frameworks works for the needs of their industry and is appropriate for their use case.

Throughout the development stage, companies should use their chosen framework to undertake regular assessments and checks to see how the technology is performing against their ethical standards. Doing this regularly helps to identify emerging problems and root causes, so that they can be addressed before dependent parts of the system are built.

When problems arise that can’t easily be eliminated from the system, there are options for managing these risks in line with ethical standards – here are three examples, followed by a short illustrative sketch.

  1. Transparency
    • Ensuring that intended users are aware of the limitations of the product so that they can apply their risk management processes (for example, not relying on it to inform certain decisions).
  2. User interface design
    • Designing the user experience of the product to drive actions that mitigate risk or prevent actions that may lead to risk (for example, preventing certain requests from being processed).
  3. User policies
    • Preventing the use of the product by high-risk users – for example, only allowing verified businesses to open accounts. Recent examples include Synthesia’s policy regarding avatar consent, Microsoft’s account vetting for Azure OpenAI, and Snap’s addition of parental controls for using its AI assistant.
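As a rough sketch of how options 2 and 3 might work in practice, the snippet below (Python, with hypothetical names, categories and policy rules – none of it drawn from the products mentioned above) shows a request-handling layer that refuses certain requests before they ever reach the underlying AI model.

    # Minimal sketch of mitigation options 2 and 3. All names, categories and
    # policy rules are hypothetical, for illustration only.
    from dataclasses import dataclass

    BLOCKED_TOPICS = {"medical_diagnosis", "legal_advice"}  # request types the product is not fit for
    VERIFIED_ACCOUNT_REQUIRED = True                        # example user policy

    @dataclass
    class Request:
        user_id: str
        is_verified_business: bool
        topic: str
        text: str

    def check_request(req: Request) -> tuple[bool, str]:
        """Return (allowed, reason); called before the request reaches the model."""
        # User policy: only verified business accounts may use the product.
        if VERIFIED_ACCOUNT_REQUIRED and not req.is_verified_business:
            return False, "Account not verified for this product."
        # Interface design: refuse request types the product should not handle.
        if req.topic in BLOCKED_TOPICS:
            return False, f"Requests about '{req.topic}' are not supported."
        return True, "ok"

    if __name__ == "__main__":
        allowed, reason = check_request(Request(
            user_id="u1", is_verified_business=False,
            topic="marketing_copy", text="Draft a product description"))
        print(allowed, reason)  # False Account not verified for this product.

In a real product, rules like these would come out of the company’s governance process, and refusals would be logged so that patterns of blocked requests feed back into the monitoring described below.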

Once the product or service goes live, it is important for the company to regularly monitor how it is being used and to facilitate ways for users to raise concerns. When concerns are raised, they should be put through the company’s governance process, so that those accountable for ensuring the product meets the required ethical standards are regularly informed and equipped to make decisions. Businesses need to allocate resources to make sure that problems can be managed and necessary changes implemented throughout the product’s lifespan.


Chanell Daniels

Responsible AI Manager

Learn more about our activities in BridgeAI


About Innovate UK BridgeAI

Innovate UK BridgeAI empowers UK businesses in high-growth sectors, driving productivity and economic growth through the adoption of Artificial Intelligence. We bridge the gap between developers and end-users, fostering user-driven AI technologies.

With a focus on ethics, transparency, and data privacy, we aim to build trust and confidence in the development of AI solutions. Strengthening AI leadership, supporting workforces, and promoting responsible innovation, BridgeAI shapes a collaborative and AI-enabled future.

BridgeAI is an Innovate UK-funded programme, delivered by a consortium including Innovate UK, Digital Catapult, The Alan Turing Institute, STFC Hartree Centre and BSI.