Google’s lead privacy regulator in the European Union has opened an investigation into whether the tech giant complied with the bloc’s data protection laws when using people’s personal information to train generative artificial intelligence models.
Specifically, the question is whether Google needed to carry out a data protection impact assessment (DPIA) to proactively consider the risks its AI technologies might pose to the rights and freedoms of the individuals whose data was used to train the models.
Generative AI tools are notorious for producing plausible but fabricated information. That tendency, combined with the ability to surface personal information on demand, creates a raft of legal risks for their makers. The Data Protection Commission in Ireland, responsible for overseeing Google’s compliance with the General Data Protection Regulation (GDPR), has the power to impose penalties of up to 4% of Alphabet’s global annual turnover for confirmed breaches.
Google has developed a range of generative AI tools, including a whole portfolio of large language models (LLMs). The technology powers its AI chatbots and is used to enhance web search. Underpinning these consumer-facing AI tools is a foundational Google LLM, PaLM 2, which the company unveiled at its annual I/O developer conference.
The Irish Data Protection Commission (DPC) is scrutinizing how Google developed that foundational AI model, with the probe brought under Section 110 of Ireland’s Data Protection Act 2018, which transposed the GDPR into national law.
Training generative AI models typically requires vast quantities of data, and the kinds of information LLM makers have acquired, as well as how and where they obtained it, have come under increasing scrutiny over a range of legal concerns, including copyright and privacy.
Any data used for AI training that contains the personal information of people in the EU falls under the bloc’s strict data protection rules, regardless of whether it was scraped from the public internet or obtained directly from users. Several LLM makers, including OpenAI, the creator of ChatGPT, have already faced questions over privacy compliance, particularly around the collection of user data.
Elon Musk’s X has likewise drawn GDPR complaints and the DPC’s ire over using people’s data for AI training, leading to legal action that ended without a penalty. However, X could still face consequences if the DPC determines that its processing of personal data to train its AI tool, Grok, breached the regulatory framework.
The DPC’s DPIA probe into Google’s GenAI is the latest regulatory action in this sphere.
The DPC said it is examining whether Google complied with any obligation to carry out such an assessment, under Article 35 of the General Data Protection Regulation, before processing the personal data of EU/EEA data subjects in connection with the development of its foundational AI model, Pathways Language Model 2 (PaLM 2).
The inquiry underscores the crucial role a DPIA can play in ensuring that individuals’ fundamental rights and freedoms are adequately considered and protected where the processing of personal data is likely to result in a high risk.
The statutory inquiry forms part of the DPC’s broader efforts, in collaboration with its EU/EEA peer regulators, to regulate the processing of the personal data of EU/EEA data subjects in the development of AI models and systems. The bloc’s community of GDPR enforcers is still working toward consensus on this issue.
Google declined to answer questions about the sources of the data used to train its GenAI tools, instead issuing a statement through spokesman Jay Stoll: “We take our obligations under the GDPR seriously and are willing to work constructively with the DPC to address their questions.”