UK authority calls for integral data protection in AI amid increasing breaches


The Information Commissioner’s Office (ICO), the UK authority responsible for overseeing the use and collection of personal data, has revealed that it received reports of more than 3,000 cyber breaches in 2023.

This figure highlights an urgent concern in the world of technology: the need for strong data protection measures, particularly in the development of AI technologies. The UK’s data watchdog has issued a warning to tech companies, demanding that data protection be ‘baked in’ at all stages of AI development to ensure the highest level of privacy for people’s personal information.

According to the watchdog, AI processes that use personal data must follow current data protection and transparency standards. This includes the usage of personal data at various phases, such as training, testing, and deploying AI systems. John Edwards, the UK Information Commissioner, will soon speak to technology leaders about the need for data protection. 
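Building data protection into each of those phases can be as basic as stripping personal identifiers from records before they reach a model. The following is a minimal sketch, not the ICO's prescribed method: it assumes simple regex patterns for emails and phone numbers, where a production system would rely on a vetted PII-detection tool and a documented lawful basis for any remaining processing.

```python
import re

# Illustrative patterns for two common kinds of personal data.
# Real pipelines would use a dedicated PII-detection library
# rather than hand-rolled regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def prepare_training_records(records: list[str]) -> list[str]:
    """Redact personal data from raw records before they enter
    the training, testing, or deployment stages of an AI pipeline."""
    return [redact_pii(r) for r in records]

raw = ["Contact jane.doe@example.com or call +44 20 7946 0958."]
print(prepare_training_records(raw))
```

Running the redaction once, at ingestion, means every later stage of the pipeline only ever sees the sanitised records, which is one practical reading of data protection being 'baked in' rather than bolted on.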

“As leaders in your field, I want to make it clear that you must be thinking about data protection at every stage of your development, and you must make sure that your developers are considering this too,” he will emphasise in his upcoming speech focused on privacy, AI and emerging technologies.

Sachin Agrawal, MD of Zoho UK, agreed that as AI revolutionises business operations, data protection must be embedded by design. Agrawal highlighted Zoho’s Digital Health Study, which found that 36% of UK businesses polled considered data privacy vital to their success. A concerning finding, however, is that just 42% of these businesses fully comply with all applicable legislation and industry standards. This gap underlines the need for better education so that businesses can improve how they secure customer data across every area of use, not just AI.

He also criticised the prevalent industry practice of exploiting customer data, labelling it unethical. He promotes a more principled approach, where companies recognise customer data ownership. “We believe a customer owns their own data, not us, and only using it to further the products we deliver is the right thing to do,” he stated. This approach not only ensures compliance with the law, but it also fosters trust and deepens client relationships.

As AI technology adoption increases, the demand for ethical data practices is expected to intensify. Businesses that do not prioritise their customers’ best interests in their data policies risk losing customers to more ethical alternatives.

The importance of GDPR and data security

Given these challenges, it is clear that existing legislative frameworks, such as the GDPR, must evolve to keep pace with technological advancements.

The GDPR was introduced six years ago to standardise European privacy and data protection frameworks. With the burgeoning interest in AI, it is now seen as a vital line of defence against the uncertainties brought about by new technologies, business models, and data processing methods.

However, data privacy concerns have become more complex with the surge in generative AI applications. Companies like OpenAI have been criticised for not being transparent about their training data collection methods and how they manage privacy issues with their AI models.

For instance, Italy’s data protection regulator initially halted the launch of OpenAI’s ChatGPT over privacy concerns. It permitted the service to resume a few weeks later, only for fresh reports of privacy violations to surface in early 2024. Privacy concerns are not confined to large AI providers; enterprises are increasingly integrating newer LLMs with their own processes and data, posing unique challenges.

Addressing these concerns is crucial not just for meeting regulatory compliance but also for enhancing trust in AI technologies. Balancing rapid technological innovation with a framework that protects fundamental rights can create a trusted and confident environment for AI technology.

See also: GitHub enables secret scanning push protection by default

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

