Ensuring trust in AI to unlock £6.5 billion over next decade


  • UK’s AI assurance market set to grow six-fold by 2035, unlocking more than £6.5 billion
  • New support for businesses unveiled to help develop and use trustworthy AI products and services
  • UK AI Safety Institute signs new agreement with Singapore, deepening international AI safety collaboration

The UK’s market for ensuring the trustworthiness of AI systems is poised to grow six-fold over the next decade – unlocking more than £6.5 billion as the government uses AI to kickstart growth. AI is central to the government’s plans for reforming the country’s public services and wider economy, and this goes hand in hand with ensuring public trust in the innovations that will ultimately deliver those reforms.

Assurance ensures AI systems work as intended – with a particular focus on making sure they are fair and transparent and that they protect individual privacy – in turn boosting public trust in the technology.

Around 524 firms currently make up this slice of the UK’s AI sector, employing more than 12,000 people and generating more than £1 billion. These businesses provide organisations with the tools they need to develop or use AI safely – demand for which is growing as AI is increasingly adopted by businesses and organisations across the country.

On the back of a new report published today (Wednesday 6 November), the government is unveiling targeted support for businesses across the country to ensure they can develop and deploy safe, trustworthy AI to kickstart growth and improve productivity.  

Key to this will be a new AI assurance platform, giving British businesses a one-stop shop for information on the actions they can take to identify and mitigate the potential risks and harms posed by AI. The platform will help the UK capitalise on the growing demand for AI assurance tools and services, with the government also partnering with industry to develop a new roadmap to help firms navigate international standards on AI assurance.

Secretary of State for Science, Innovation and Technology, Peter Kyle, said:

AI has incredible potential to improve our public services, boost productivity and rebuild our economy but, in order to take full advantage, we need to build trust in these systems which are increasingly part of our day to day lives.

The steps I’m announcing today will help to deliver exactly that – giving businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise.

The platform brings together guidance and new practical resources that set out clear steps for businesses, such as how to carry out impact assessments and evaluations and how to review the data used in AI systems to check for bias, ensuring trust in AI as it is used in day-to-day operations.

Further support will see businesses, particularly small and medium-sized enterprises (SMEs), able to use a self-assessment tool to implement responsible AI management practices in their organisations and make better decisions as they develop and use AI systems. A public consultation, also launching today, will gather industry feedback to ensure the tool is as effective as possible.

The safe development and deployment of AI is also central to the UK’s vision for AI on the global stage, driven by the AI Safety Institute (AISI) – the world’s first government body dedicated to AI safety. In the last two months, the Institute has launched its Systemic AI Safety Grants programme, with up to £200,000 of funding available to researchers across academia, industry and civil society, and has seen both its Chief Technology Officer Jade Leung and its Chief Scientist Geoffrey Irving named in TIME Magazine’s ‘100 Most Influential People in AI’ list.

The UK AI Safety Institute continues to work closely with international partners, including supporting the first meeting of members of the International Network of AI Safety Institutes in San Francisco later this month. Further strengthening its global reach, the AI Safety Institute has today announced a new AI safety partnership with Singapore. Signed by Peter Kyle and Singapore’s Minister for Digital Development and Information Josephine Teo in London this afternoon, the agreement will see the two institutes work closely together to drive forward research and work towards a shared set of policies, standards and guidance.

Singapore Minister for Digital Development and Information, Josephine Teo, said: 

We are committed to realising our vision of AI for the Public Good for Singapore, and the world. The signing of this Memorandum of Cooperation (MoC) with an important partner, the United Kingdom, builds on existing areas of common interest and extends them to new opportunities in AI.

Of particular significance is our joint support of the International Network of AI Safety Institutes (AISI). Through strengthening the capabilities of our AISIs, we seek to enhance AI safety, so that our peoples and businesses can confidently harness AI and benefit from its widespread adoption.

This will advance AI safety and strengthen a common approach to the responsible development and deployment of advanced AI models across the globe. The partnership builds on commitments made between the two countries at the AI Safety Summit last November, and the ambitions of the International Network of AI Safety Institutes to align their work on research, standards and testing. 

AI Safety Institute Chair Ian Hogarth said: 

An effective approach to AI safety requires global collaboration. That’s why we’re putting such an emphasis on the International Network of AI Safety Institutes, while also strengthening our own research partnerships.

Our agreement with Singapore is the first step in a long-term ambition for both our countries to work closely together to advance the science of AI safety, support best practices and norms to promote the safe development and responsible use of AI systems.

Today’s announcements come as the Science Secretary addresses the opening day of the Financial Times Future of AI Summit, gathering government, business, and technology leaders together for talks on how companies are investing in AI while navigating the technology’s potential risks. 
