Summary

Research conducted by Google DeepMind and several universities examines how easily an outsider, with no prior knowledge of the data used to train a machine learning model, can recover that data simply by querying the model. The researchers found that an adversary can extract large amounts of training data, even gigabytes, from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT.

Key Findings:

- Vulnerabilities in Language Models: The research identifies extraction vulnerabilities across every access level of Language Model, from open-source (Pythia) through semi-open (LLaMA) to closed models (ChatGPT). The vulnerabilities in semi-open and closed models are particularly concerning because their training data is not public.

- Focus on Extractable Memorization: The study focuses on the risk of extractable memorization, where an adversary can efficiently recover training data from a machine learning model without prior knowledge of the training dataset (see the sketch after this list).

- Enhanced Data Extraction Capabilities: The attack developed by the researchers causes models to emit training data at a rate roughly 150× higher than during normal Language Model usage.

- Ineffectiveness of Data Deduplication: The research indicates that deduplication of training data does not significantly reduce the amount of data that can be extracted.

- Uncertainties in Data Handling: The study highlights that it remains poorly understood which parts of their training data Language Models retain and can be prompted to reproduce.
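
To make extractable memorization concrete, here is a minimal, hypothetical probe in the spirit of the paper's methodology: sample continuations from an open model whose training data is public, then check whether any 50-token span of the output appears verbatim in a reference slice of that corpus. The checkpoint name is a real Hugging Face model trained on The Pile; the corpus file, prompt, and brute-force substring check are illustrative placeholders, not the researchers' actual pipeline.

```python
# Minimal sketch of an extractable-memorization check (illustrative only).
# "EleutherAI/pythia-1.4b" is a real open checkpoint trained on The Pile;
# the reference-corpus file and prompt below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-1.4b"
K = 50  # a 50-token verbatim match is the paper's working definition of "extracted"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sample_continuation(prompt: str, max_new_tokens: int = 256) -> str:
    """Sample a continuation the way an adversary with query access would."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        output = model.generate(
            input_ids,
            do_sample=True,
            top_p=0.95,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Drop the prompt tokens and decode only the newly generated text.
    return tokenizer.decode(output[0, input_ids.shape[1]:], skip_special_tokens=True)

def is_extracted(generation: str, corpus_text: str, k: int = K) -> bool:
    """Return True if any k-token window of the generation occurs verbatim
    in the reference corpus (a naive check; the paper indexes with suffix arrays)."""
    token_ids = tokenizer.encode(generation)
    return any(
        tokenizer.decode(token_ids[i : i + k]) in corpus_text
        for i in range(len(token_ids) - k + 1)
    )

# Usage (file path and prompt are placeholders):
# corpus_text = open("pile_slice.txt", encoding="utf-8").read()
# print(is_extracted(sample_continuation("My email address is"), corpus_text))
```

A naive substring scan like this is far too slow for a gigabyte-scale training set, which is why the researchers index the corpus with suffix arrays; the sketch only conveys the definition of an extraction match.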


Seamus Larroque

CDPO / CPIM / ISO 27005 Certified
