
Government’s trailblazing Institute for AI Safety to open doors in San Francisco


  • UK AI Safety Institute set to expand across the Atlantic to broaden technical expertise and cement its position as a global authority on AI safety.
  • Expansion unveiled as the AI Safety Institute publishes its first ever AI safety testing results on publicly available models and agrees a new collaboration with Canada.
  • Announcement comes ahead of the co-hosted AI Seoul Summit, demonstrating the UK AI Safety Institute’s continued leadership in global AI safety.

The UK government’s pioneering AI Safety Institute is set to broaden its international horizons by opening its first overseas office in San Francisco this summer, Technology Secretary Michelle Donelan has announced today (Monday 20th May). 

The expansion marks a pivotal step that will allow the UK to tap into the wealth of tech talent available in the Bay Area, engage with the world’s largest AI labs headquartered in both London and San Francisco, and cement relationships with the United States to advance AI safety for the public interest.  

The office is expected to open this summer, recruiting the first team of technical staff headed up by a Research Director. 

It will be a complementary branch of the Institute’s London HQ, which continues to go from strength to strength and already boasts a team of over 30 technical staff. The London office will continue to scale and acquire the necessary expertise to assess the risks of frontier AI systems. 

By expanding its foothold in the US, the Institute will establish a close collaboration with its American counterparts, furthering the two countries’ strategic partnership and shared approach to AI safety, while also sharing research and conducting joint evaluations of AI models that can inform AI safety policy across the globe.

Secretary of State for Science and Technology Michelle Donelan said:  

This expansion represents British leadership in AI in action. It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety. 

Since the Prime Minister and I founded the AI Safety Institute, it has gone from strength to strength, and in just over a year, here in London, we have built the world’s leading government AI research team, attracting top talent from the UK and beyond. 

Opening our doors overseas and building on our alliance with the US is central to my plan to set new, international standards on AI safety which we will discuss at the Seoul Summit this week.

The expansion comes as the UK AI Safety Institute releases a selection of recent results from safety testing of five publicly available advanced AI models, making it the first government-backed organisation in the world to unveil the results of its evaluations.  

While only a small part of the Institute’s wider approach, the results show the significant progress the Institute has made since November’s AI Safety Summit as it builds up its capabilities for state-of-the-art safety testing.  

The Institute assessed AI models against four key risk areas, including how effective the safeguards that developers have installed actually are in practice. As part of the findings, the Institute’s tests have found that: 

  • Several models completed cyber security challenges, while struggling with more advanced ones. 
  • Several models demonstrate PhD-level knowledge of chemistry and biology. 
  • All tested models remain highly vulnerable to basic “jailbreaks”, and some will produce harmful outputs even without dedicated attempts to circumvent safeguards. 
  • Tested models were unable to complete more complex, time-consuming tasks without humans overseeing them. 

AI Safety Institute Chair, Ian Hogarth said: 

The results of these tests mark the first time we’ve been able to share some details of our model evaluation work with the public. Our evaluations will help to contribute to an empirical assessment of model capabilities and the lack of robustness when it comes to existing safeguards.

AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing. Our ambition is to continue pushing the frontier of this field by developing state-of-the-art evaluations, with an emphasis on national security related risks.

AI safety remains a key priority for the UK as it continues to drive forward the global conversation on the safe development of the technology. 

This effort was kickstarted by November’s AI Safety Summit at Bletchley Park, and momentum continues to grow as the UK and the Republic of Korea gear up to co-host the AI Seoul Summit this week. 

As the world prepares to gather in Seoul this week, the UK has committed to collaborating with Canada, including through their respective AI Safety Institutes, to advance their ambition to create a growing network of state-backed organisations focused on AI safety and governance. Confirmed by UK Technology Secretary Michelle Donelan and Canada’s Science and Innovation Minister François-Philippe Champagne, this partnership will serve to deepen existing links between the two nations and inspire collaborative work on systemic safety research.   

As part of this agreement, the countries will aim to share their expertise to bolster existing testing and evaluation work. The partnership will also enable secondment routes between the two countries, and work to jointly identify areas for research collaboration. 

Notes for editors

The Institute safety tests have been carried out this year on five publicly available large language models (LLMs) which are trained on large amounts of data. The models tested have been anonymised. 

The results provide a snapshot of model capabilities only, and do not designate systems as “safe” or “unsafe”. The tests which have been carried out represent a small portion of the evaluation techniques AISI is developing and using, as outlined in the Institute’s approach to evaluations which was published earlier this year.

Today’s publication can be found on the AI Safety Institute website.

Today also marks the publication of Institute Chair Ian Hogarth’s latest progress update, which can likewise be found on the AI Safety Institute website.
