On November 1st, the United Kingdom (UK) published the Bletchley Declaration on the opening day of the Artificial Intelligence (AI) Safety Summit, hosted at Bletchley Park in central England. The declaration, issued by the UK and joined by 28 countries and international organizations, aims to promote cooperation among them on AI.

Why Bletchley? It was at Bletchley Park that the Enigma code was successfully deciphered during World War II.

International efforts are underway to examine and address the potential impact of AI systems, especially on principles such as the protection of human rights, data protection, and ethics.

👉🏻 The primary focus of this declaration is Frontier AI, defined as highly capable general-purpose AI models, as well as specific narrow AI, that could exhibit capabilities causing harm.

The core concerns of the organizations revolve around the risks posed by Frontier AI, particularly in the domains of cybersecurity and biotechnology. Through international cooperation, they aim to deepen their understanding of these risks and identify actions to address them.

A regulatory approach will be considered by the participating organizations. This includes making, where appropriate, classifications and categorizations of risk based on national circumstances and applicable legal frameworks. International codes of conduct may result from this cooperation.

All AI actors are affected by this Declaration. Actors who develop Frontier AI have a significant responsibility to ensure the safety of these systems. The organizations encourage these actors to provide context-appropriate transparency and accountability in their plans to measure, monitor, and mitigate potentially harmful capabilities and associated effects, particularly to prevent misuse and issues of control, as well as the amplification of other risks.


