
Google, Meta Criticise U.K. and E.U. AI Regulations


Google and Meta have both openly criticised European regulation of artificial intelligence this week, suggesting it will quash the region’s innovation potential.

Representatives from Facebook’s parent company, along with Spotify, SAP, Ericsson, Klarna, and others, signed an open letter to Europe expressing their concerns about “inconsistent regulatory decision making.”

The letter says that interventions by European data protection authorities have created uncertainty about what data the companies can use to train their AI models. The signatories are calling for consistent and quick decisions surrounding data regulations that allow for the use of European data, similar to GDPR.

The letter also highlights that the bloc will miss out on the latest “open” AI models, which are made freely available to all, and “multimodal” models, which accept input and generate output in text, images, speech, videos, and other formats.
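For readers unfamiliar with the jargon, the short Python sketch below, which is illustrative and not taken from the letter, shows what those two terms mean in practice. It assumes the Hugging Face transformers library and uses BLIP, an openly released model whose weights anyone can download and which crosses modalities by taking an image as input and producing text as output.

```python
# A minimal sketch, assuming the Hugging Face transformers library, of what
# an "open" and "multimodal" model looks like in practice. BLIP is an openly
# released model whose weights anyone can download, and it is multimodal in
# the sense of accepting an image as input and generating text as output.
from transformers import pipeline

# Download the open model weights and build an image-to-text pipeline.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Pass in an image (a sample URL used in the transformers documentation)
# and get a text caption back.
result = captioner("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
print(result[0]["generated_text"])
```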

By preventing innovation in these areas, regulators are “depriving Europeans of the technological advances enjoyed in the U.S., China and India.” Plus, without free rein over European data, the models “won’t understand or reflect European knowledge, culture or languages.”

“We want to see Europe succeed and thrive, including in the field of cutting-edge AI research and technology,” the letter reads. “But the reality is Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making.”

SEE: Businesses Seek to Balance AI Innovation and Ethics, According to Deloitte

Some AI policy experts do not share the view that the E.U.’s existing AI policies are detrimental. Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, told TechRepublic in an email: “The European approach derives from a civil-rights perspective. Its key benefit is that it provides a clear classification of risk based on potential harms to consumers from use of AI. In this way, it provides protection to citizens’ rights by enforcing strict regulations for ‘high-risk systems’ — that is, those used in educational or vocational training, employment, HR, law enforcement, migration, asylum and border control management, etc.

“In my view, the E.U. law doesn’t go too far; it supports innovation through regulatory sandboxes, which create a controlled environment for the development and testing of AI systems. It also provides legal clarity for businesses. SMEs benefit from clarity in law and regulation, not from lack of it. By creating a level playing field, the E.U. law helps smaller companies.”

Google suggests copyrighted data could be allowed to train commercial models

Google has separately spoken out about the laws in the U.K. that prevent AI models being trained on copyrighted materials.

“If we do not take proactive action, there is a risk that we will be left behind,” Debbie Weinstein, Google’s U.K. managing director, told The Guardian.

“The unresolved copyright issue is a block to development, and a way to unblock that, obviously, from Google’s perspective, is to go back to where I think the government was in 2023 which was TDM being allowed for commercial use.”

TDM, or text and data mining, is the automated analysis of large volumes of text and data, which in practice involves copying the works being mined, including copyrighted ones. In the U.K., it is currently only allowed for non-commercial purposes. Plans to allow it for commercial purposes were dropped in February after being widely criticised by creative industries.
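As a rough illustration of the mechanics, and not a depiction of Google’s pipeline, the minimal Python sketch below shows what text and data mining amounts to at its simplest: a program copies a body of text into memory and extracts statistics from it. Training an AI model on copyrighted text involves this kind of automated copying and analysis at a vastly larger scale.

```python
# A minimal, illustrative sketch of text and data mining (TDM): copy a body
# of text into memory and extract statistical patterns from it.
import re
from collections import Counter

# Stand-in corpus; in real TDM these would be documents scraped or licensed
# from publishers, many of them under copyright.
corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "A lazy afternoon, and the dog sleeps while the fox hunts.",
]

# Tokenise each document and count word frequencies across the corpus.
counts = Counter()
for doc in corpus:
    counts.update(re.findall(r"[a-z']+", doc.lower()))

# Print the five most frequent tokens mined from the corpus.
print(counts.most_common(5))
```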

Google also released a document this week called “Unlocking the U.K.’s AI Potential,” in which it makes a number of policy suggestions, including allowing TDM for commercial use, setting up a publicly funded mechanism for computational resources, and launching a national AI skills service.

SEE: 83% of U.K. Businesses Increasing Wages for AI Skills

It also calls for a “pro-innovation regulatory framework” that takes a risk-based and context-specific approach and is managed by existing public regulators such as the Competition and Markets Authority and the Information Commissioner’s Office, according to The Guardian.

Dr Marc Warner, CEO of Faculty AI, a company assisting the U.K. government’s AI Safety Institute, said that the debate around regulating AI demands nuance regarding the specific technology being covered. So-called ‘narrow AI’ has been used safely for decades to perform very specific tasks like predicting consumer habits, but frontier AI is much newer.

“We don’t fully understand it, can’t be sure what it will do, and therefore cannot be sure it is totally safe,” he told TechRepublic in an email.

“For existing AI systems, a lighter-touch approach remains the right path. For frontier AI, international agreements on restrictions, inspections, and investments in safety research and technology are imperative now, and in the future.”

The E.U.’s regulations have impacted Big Tech’s AI plans

The E.U. represents a huge market for the world’s biggest tech companies, with 448 million people. However, the implementation of the rigid AI Act and Digital Markets Act has deterred them from launching their latest AI products in the region.

In June, Meta delayed the training of its large language models on public content shared by adults on Facebook and Instagram in Europe after pushback from Irish regulators. Meta AI, its frontier AI assistant, has still not been released within the bloc due to its “unpredictable” regulations.

Apple will also not initially be making its new suite of generative AI capabilities, Apple Intelligence, available on devices in the E.U., citing “regulatory uncertainties brought about by the Digital Markets Act,” according to Bloomberg.

SEE: Apple Intelligence EU: Potential Mac Release Amid DMA Rules

According to a statement that Apple spokesperson Fred Sainz provided to The Verge, the company is “concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security.”

Thomas Regnier, a European Commission spokesperson, told TechRepublic in an emailed statement: “All companies are welcome to offer their services in Europe, provided that they comply with E.U. legislation.”

Google’s Bard chatbot was released in Europe four months after its U.S. and U.K. launch, following privacy concerns raised by the Irish Data Protection Commission. It is thought that similar regulatory pushback led to the delayed arrival of its second iteration, Gemini, in the region.

This month, Ireland’s DPC launched a new inquiry into Google’s AI model PaLM 2 over a possible GDPR violation. Specifically, it is looking into whether Google adequately completed an assessment identifying the risks of processing Europeans’ personal data to train the model.

X has also agreed to permanently stop processing personal data from E.U. users’ public posts to train its AI model Grok. The DPC took Elon Musk’s company to the Irish High Court after finding it had not applied mitigation measures, such as an opt-out option, until a number of months after it had started harvesting data.

Many tech companies have their European headquarters in Ireland, as it has one of the lowest corporate tax rates in the E.U. at 12.5%, so the country’s data protection authority plays a primary role in regulating tech across the bloc.

The U.K.’s own AI regulations remain unclear

The U.K. government’s stance on AI regulation has been mixed, partly because of the change in leadership in July. Some representatives are also concerned that over-regulating could push away the biggest tech players.

On July 31, Peter Kyle, Secretary of State for Science, Innovation, and Technology, told executives at Google, Microsoft, Apple, Meta, and other major tech players that the incoming AI Bill will focus on the large ChatGPT-style foundation models created by just a handful of companies, according to the Financial Times.

He also reassured them that it would not become a “Christmas tree bill,” with more regulations added throughout the legislative process. He added that the bill would focus primarily on making voluntary agreements between companies and the government legally binding and on turning the AI Safety Institute into an “arm’s length government body.”

As seen in the E.U., AI regulation can delay the rollout of new products. While the intention is to keep consumers safe, regulators risk limiting consumers’ access to the latest technologies, which could bring tangible benefits.

Meta has taken advantage of this lack of immediate regulation in the U.K. by announcing it will train its AI systems on public content shared on Facebook and Instagram in the country, which it is not currently doing in the E.U.

SEE: Delaying AI’s Rollout in the U.K. by Five Years Could Cost the Economy £150+ Billion, Microsoft Report Finds

On the other hand, in August, the Labour government shelved £1.3 billion worth of funding that had been earmarked for AI and tech innovation by the Conservatives.

The U.K. government has also consistently indicated that it plans to take a strict approach to regulating AI developers. July’s King’s Speech said that the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”
