Scalable Extraction of Training Data from (Production) Language Models
A research study demonstrates training-data extraction attacks against several categories of language models, from open-source models (Pythia) and semi-open models (LLaMA) to closed models (ChatGPT). The vulnerabilities in semi-open and closed models are particularly concerning because their training data is not public.
CNIL's Recommendation: AI Providers & Legal Responsibilities
The French Data Protection Authority, the Commission Nationale de l'Informatique et des Libertés (CNIL), has published a guide on the legal qualification of AI system providers.