Summary

The article compares U.S. and EU approaches to AI regulation in life sciences, highlighting the U.S.’s sector-specific, innovation-driven model versus the EU’s rights-based framework. It emphasises the growing body of U.S. state laws on AI accountability and the need to balance privacy, ethics, and innovation globally.

I. Introduction

In our previous article, we explored the expanding role and transformative potential of Artificial Intelligence (AI) within the life sciences sector, with particular emphasis on clinical trials. We illustrated this potential through specific real-world use cases, demonstrating how AI is reshaping research and development processes. Additionally, we discussed the foundational conditions that have enabled these advancements, notably the surge in data acquisition capabilities and the growing number of regulatory approvals.

Crucially, we placed a strong emphasis on the ethical considerations and data protection challenges that arise from these AI applications, particularly within the framework of EU regulations. We outlined the key principles necessary for achieving compliance with data protection requirements, offering practical guidance for navigating this evolving regulatory landscape.

Building on this foundation, Part 2 of our AI in Life Sciences article series shifts the focus to regulatory developments in the United States. Here, we examine the specific obligations placed on developers and deployers of AI solutions in the life sciences sector, with particular attention to privacy requirements and AI-specific legislative frameworks.

II. Meeting the Data Protection Requirements under US Law

As an introductory note, the United States and the European Union approach personal information governance from fundamentally different philosophical and legal perspectives. In the United States, privacy is rooted in protecting individual autonomy, often framed as freedom from unwarranted government intrusion. It is closely tied to civil liberties enshrined in the U.S. Constitution, particularly the First and Fourth Amendments. American privacy law tends to focus on preventing abuses of power by the government, while offering individuals protection against corporate misuse of information in a manner that preserves economic and business innovation. In contrast, the European Union treats data protection as a fundamental human right. Enshrined in Article 8 of the Charter of Fundamental Rights of the EU, data protection is about ensuring that individuals maintain ownership and control over their personal data, emphasising dignity, transparency, and the right to informational self-determination.

This divergence is reflected in the distinct legal frameworks adopted by the two regions. In the EU, data protection is governed centrally through a comprehensive, cross-sectoral regulation - the GDPR - which applies to all businesses regardless of size, sector, or revenue thresholds. By contrast, privacy regulation in the US operates on two separate levels - federal and state - with the majority of laws being enacted at the state level (e.g., the California Consumer Privacy Act, or "CCPA") and being sector-specific in nature. Moreover, many of these state laws impose applicability thresholds based on business volume or revenue, or carve out exemptions for certain activities, often failing to grant specific rights to individuals impacted by those activities. Conversely, the EU's approach is holistic, rights-driven, and rooted in the principle of individual control over personal data.

Following from the above, American citizens enjoy limited and sector-dependent rights regarding their personal information. For instance, individuals may have the right to access and delete data under specific laws like the CCPA, but there is no universal right across all industries or types of data.

Building on this discussion, this section turns to the U.S. privacy frameworks relevant to the life sciences sector. More specifically, the regulation of personal health data is primarily governed by federal laws, supplemented by significant contributions from state legislation. Three federal frameworks emerge as especially important:

US – HIPAA Compliance

In the United States, the Health Insurance Portability and Accountability Act (HIPAA) establishes national standards for the protection of certain health information. Specifically, the HIPAA Privacy Rule regulates the use and disclosure of "protected health information" (PHI) held by "covered entities" - including health plans, healthcare clearinghouses, and healthcare providers conducting certain electronic transactions - and their "business associates." PHI broadly includes individually identifiable health information transmitted or maintained in any form or medium.

HIPAA imposes stringent requirements on how PHI is collected, stored, shared, and secured, aiming to ensure the confidentiality, integrity, and availability of sensitive health data. It sets conditions under which PHI can be used for purposes such as treatment, payment, and healthcare operations, as well as when additional authorisation is necessary, such as for marketing or research activities.

HIPAA has important implications for the use of AI in healthcare. Any AI application that processes PHI must comply with HIPAA’s Privacy, Security, and Breach Notification Rules - a task that can prove challenging given the inherent data-sharing and dynamic data flow requirements of AI systems.

For example, consider an AI-powered health analytics firm that develops an algorithm to predict patient risks for chronic diseases based on electronic PHI (e-PHI). One major compliance challenge arises from HIPAA’s Security Rule, which in practice expects e-PHI to be encrypted both in transit and at rest. In AI environments, where data is constantly entering, being processed, and exiting systems in real time, ensuring consistent, end-to-end encryption and maintaining security protocols can become complex and resource-intensive.
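To make the challenge concrete, the sketch below shows one common pattern - keeping records encrypted at rest and decrypting them only transiently inside the scoring step. It is a minimal illustration only, assuming the Python cryptography package and a hypothetical toy risk model; a real deployment would also need managed keys, transport encryption (TLS), access controls, and audit logging.

```python
# Minimal sketch (not a compliant implementation): e-PHI stays encrypted at rest
# and is decrypted only for the duration of model scoring.
# Assumes the third-party "cryptography" package; the "risk model" is a toy placeholder.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keys come from a managed KMS, not inline generation
fernet = Fernet(key)

def store_record(record: dict) -> bytes:
    """Encrypt a patient record before it is written to storage (encryption at rest)."""
    return fernet.encrypt(json.dumps(record).encode())

def score_risk(ciphertext: bytes) -> float:
    """Decrypt only transiently for scoring; plaintext is never persisted."""
    record = json.loads(fernet.decrypt(ciphertext))
    # Hypothetical placeholder "model": a toy weighted sum, not a real clinical risk score.
    return 0.02 * record["age"] + 0.1 * record["hba1c"]

encrypted_blob = store_record({"age": 62, "hba1c": 7.2})
print(round(score_risk(encrypted_blob), 2))  # 1.96
```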

Unlike the GDPR’s broader approach to personal data, HIPAA takes a narrower and more structured approach by specifically identifying 18 types of identifiers (such as names, addresses, Social Security numbers, and biometric data) that, when linked to health information, constitute PHI. If all 18 identifiers are removed according to HIPAA’s de-identification standards, the data is considered de-identified and is no longer subject to HIPAA’s protections. This clear, prescriptive delineation helps organisations determine whether HIPAA requirements are triggered.
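For illustration only, the short sketch below shows what identifier stripping along these lines might look like at the start of an AI data pipeline. The field names are hypothetical examples and do not map exhaustively or authoritatively onto the 18 HIPAA identifier categories; it is a conceptual aid, not a de-identification tool.

```python
# Conceptual sketch of "Safe Harbor"-style de-identification: strip direct identifiers
# before a record enters an AI training pipeline. The field names are hypothetical
# examples and do not cover all 18 HIPAA identifier categories.
IDENTIFIER_FIELDS = {
    "name", "street_address", "phone_number", "email",
    "ssn", "medical_record_number", "ip_address", "biometric_id",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record without identifier fields, with dates coarsened to the year."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    if "date_of_birth" in cleaned:
        # Safe Harbor permits only the year of dates directly related to an individual.
        cleaned["birth_year"] = cleaned.pop("date_of_birth")[:4]
    return cleaned

patient = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "date_of_birth": "1984-07-19",
    "hba1c": 7.2,
    "diagnosis": "type 2 diabetes",
}
print(deidentify(patient))  # {'hba1c': 7.2, 'diagnosis': 'type 2 diabetes', 'birth_year': '1984'}
```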

Nevertheless, in practice, particularly during AI development, it can be exceedingly difficult to guarantee that developers or systems have no access to residual identifiers or that data remains fully de-identified throughout the AI lifecycle. As a result, the safest and often necessary approach is to assume HIPAA’s applicability during AI model training, testing, and deployment.

In addition, HIPAA establishes specific contractual obligations between "Covered Entities" (such as health plans, healthcare clearinghouses, and healthcare providers) and their "Business Associates" (third parties performing services involving PHI on behalf of Covered Entities). These contracts, known as Business Associate Agreements (BAAs), are essential to clearly allocate responsibilities for safeguarding PHI in AI development and deployment activities.

Therefore, when deploying AI solutions within the U.S. healthcare sector, it is essential to ensure full end-to-end compliance with HIPAA’s Privacy and Security Rules, covering every stage of data processing and system operation.

However, HIPAA’s scope is not universal. Its requirements apply only when PHI is processed by covered entities (i.e., a health plan, a healthcare clearinghouse, or a healthcare provider that transmits health information in electronic form in connection with the provision of medical or health services) or their business associates (i.e., an individual or entity, other than a member of a covered entity’s workforce, that creates, receives, maintains, or transmits PHI in order to deliver services to a covered entity). When health-related information is collected, used, or disclosed outside this context - particularly in fields like scientific research - HIPAA protections may not apply. More specifically, HIPAA generally does not apply to scientific research because its rules cover only covered entities and their business associates, not independent research organisations or companies. Additionally, if the data used in research is properly de-identified according to HIPAA’s strict standards (removing all 18 specified identifiers), it no longer qualifies as PHI and falls outside HIPAA’s scope.

US – HITECH Act

The Health Information Technology for Economic and Clinical Health (HITECH) Act promotes the adoption and meaningful use of health information technology. In substance, it addresses the privacy and security concerns associated with the electronic transmission of health information, particularly underlining the conditions under which data should be encrypted - a point that is crucial for AI platforms handling e-PHI. For example, a cloud service provider acting as a business associate may use AI to optimise data storage and access patterns for multiple healthcare providers' electronic health records. Under HITECH, this provider must adhere to the same stringent data protection standards as the healthcare providers themselves and faces significant penalties for non-compliance. The complexity of these AI systems can make it difficult to fully document and prove compliance, particularly when handling large and diverse data sets.

US – ADPPA

The American Data Privacy and Protection Act (ADPPA) is proposed federal data privacy legislation that aims to establish a comprehensive data protection framework across the United States. The bill sets out requirements that vary depending on the size and type of entities processing personal data, reflecting a risk-based approach similar to that of the GDPR. Although the ADPPA received bipartisan support and marked significant progress toward a federal privacy standard, it ultimately stalled in Congress and was not enacted into law. Its future remains uncertain, with renewed legislative efforts and broader federal privacy reform still heavily influenced by political dynamics.

US – State laws

In addition to the federal developments discussed above, individual U.S. states have been especially active in passing their own data privacy laws - a trend that is already reshaping the privacy landscape for AI applications. These state-level initiatives introduce varying compliance obligations that directly impact the AI privacy considerations outlined earlier.

Many of these state privacy laws adopt a risk-based approach, applying only when organisations process significant volumes of personal data or derive substantial revenue from such activities. Under this model, companies offering AI-powered health solutions - often handling large datasets - are likely to be subject to these obligations. In contrast, smaller entities or those managing more limited datasets may face lighter regulatory burdens, reflecting a proportional alignment between the scale of data processing activities and the level of oversight required.

III. Regulatory Responses to AI in Health in the US

"Hard law" Approach

While data privacy remains a critical concern, states are also moving beyond privacy to directly address the broader challenges posed by AI technologies. This proactive regulatory activity marks a shift toward comprehensive frameworks that seek to govern the responsible development and deployment of AI systems across sectors, including the health sector.

In this context, rather than waiting for federal action, states have adopted a remarkably active stance, passing regulations that respond to concerns around AI systems and collectively aim to govern their responsible development and deployment, with particular emphasis on mitigating risks and ensuring transparency, accountability, and fairness.

The main characteristics common to the states’ approaches are their consumer-oriented and cross-sectoral character, as well as their focus on regulating what is defined as a “high-risk AI system” or “automated decision-making tool”. The definition of high-risk AI systems under US law is closer to that of stand-alone high-risk AI systems under the EU AI Act, encompassing AI systems which, when deployed, make or are a substantial factor in making a consequential decision - that is, a decision significantly impacting individuals’ livelihood and life opportunities in various areas, including healthcare.

At the same time, recently enacted and proposed legislation - most prominently the Colorado Artificial Intelligence Act and California Assembly Bill 2930 - aims to combat algorithmic discrimination and promote fairness, transparency, oversight, and accountability.

To achieve these goals, these laws establish common obligations for developers and deployers of high-risk AI systems, including:

  1. Providing Transparency: Ensuring that the workings, decision-making processes, and potential impacts of AI systems are clear and understandable to individuals and the public. This includes providing information about the intended use of the AI system, a general description of how it functions, and guidance on how individuals can exercise their rights.
  2. Documentation Requirements: Requiring developers to disclose certain documentation to deployers - including information about the system’s intended use, known or foreseeable risks of algorithmic discrimination, and limitations - to enable deployers to meet their legal obligations.
  3. Risk and Impact Assessments: Obliging deployers to conduct impact assessments before deploying high-risk AI systems, particularly where these systems are used to make consequential decisions about individuals.
  4. Audits: Implementing regular audits to evaluate the AI system’s performance, particularly regarding its impact on fairness, non-discrimination, and compliance with governance policies.
  5. AI Governance Programs: Establishing governance programs that consist of structured policies, procedures, and controls designed to oversee and manage the development, deployment, and use of AI systems. These programs must be "reasonable" considering factors such as the size and complexity of the organisation, the nature and risks associated with the AI system, the volume and sensitivity of the data processed, and alignment with best practices like the NIST AI Risk Management Framework.

Furthermore, to ensure transparency and fairness in AI operations, these laws grant individuals specific rights, most notably:

  1. The Right to Notice and Explanation: Requiring AI developers - entities that create or substantially modify AI systems - and AI deployers - entities that use AI systems to make consequential decisions affecting individuals - to provide clear information about the functionality of the system and the logic behind decisions.
  2. The Right to Correction and Opt-Out: Enabling individuals to challenge decisions made by high-risk AI systems and, where appropriate, to opt out of solely automated decisions that have a significant impact on their lives.

Although AI systems approved, certified, or cleared by the FDA; AI systems used by HIPAA-regulated entities for non-high-risk healthcare recommendations; and AI systems employed solely for public interest or scientific research (without making consequential decisions about individuals) are excluded from the scope of these state AI laws, many health-related AI applications remain subject to regulation. State laws may still apply to a wide range of health-related AI use cases, including consumer health apps (e.g., fitness trackers, mental health chatbots), employer health screening tools, health insurance underwriting and claims processing algorithms, AI-driven wellness and lifestyle management platforms, AI tools predicting health risks for marketing purposes, and health-focused analytics services processing behavioural or wearable device data outside of HIPAA-regulated contexts. Given this broader reach, organisations developing or deploying AI solutions in the health sector - particularly outside traditional clinical or regulatory frameworks - must carefully assess their exposure to emerging state-level AI compliance requirements.

To conclude, as with the state privacy laws, the U.S. state AI laws are also consumer-oriented, focusing primarily on protecting individual users and excluding various regulated activities from their scope, whereas the EU AI Act adopts a broader, risk-based approach that emphasises systemic oversight and governance across sectors, rather than centering solely on consumer protection.

While states have taken the lead in regulating AI at the local level, federal initiatives have also sought to shape national AI governance, particularly through executive action aimed at establishing principles for the safe and responsible development of AI technologies. In October 2023, President Joe Biden signed Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." This order established eight guiding principles for AI design and deployment within federal agencies, emphasising safety, security, transparency, and the protection of privacy and civil liberties. Although primarily directed at federal agencies, it signaled the administration's broader regulatory priorities in AI governance.

However, in January 2025 President Donald Trump revoked Executive Order 14110 and subsequently issued Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence". This new order aims to promote AI development by reducing regulatory constraints, emphasising innovation, economic competitiveness, and national security. It mandates the creation of an AI Action Plan within 180 days to bolster America's global AI leadership. The revocation of EO 14110 and the introduction of EO 14179 represent a significant shift in U.S. federal AI policy, moving from a framework focused on risk mitigation and ethical considerations to one prioritising rapid innovation and reduced oversight. While this change may accelerate AI development, it also raises concerns about the potential for insufficient safeguards against risks such as bias, privacy violations, and other unintended consequences.

"Soft Law" Approach

In addition to the aforementioned points, the US soft law approach to AI regulation offers valuable guidance on both current and anticipated expectations for the development and deployment of AI systems under US jurisdiction, shaping industry standards and practices through flexible frameworks and non-binding principles.

The Blueprint for an AI Bill of Rights represents a non-binding yet influential framework which targets AI systems with the potential to meaningfully impact rights, opportunities, or access to critical services. The framework establishes five core principles:

  1. Safe and Effective Systems: Ensuring that AI systems undergo thorough pre-deployment testing and ongoing monitoring to avoid harm or inappropriate use.
  2. Protection against Algorithmic Discrimination: Safeguarding against unjustified disparate treatment based on characteristics such as race, gender, or age through equity assessments and continuous testing.
  3. Data Privacy: Prioritising user consent and protecting sensitive data, especially in areas like healthcare.
  4. Notice and Explanation: Providing clear, accessible explanations of AI system operations, outcomes, and the role of automation in decision-making.
  5. Human Alternatives and Fallback: Offering individuals the option to opt out of automated decisions and ensuring access to human oversight where necessary, particularly in sensitive domains like healthcare.

Moreover, though not legally binding, the National Institute of Standards and Technology (NIST) AI Risk Management Framework provides best practices for AI governance. It supports developers and deployers in managing AI-related risks by offering a structured approach to identifying, evaluating, and mitigating potential harms. The framework is referenced in various state laws, such as the Colorado Artificial Intelligence Act, which requires developers and deployers to adhere to it as part of their AI governance programs.

IV. Conclusion

While the European Union adopts a highly legalistic and rights-driven approach to regulating AI and data privacy - with a strong focus on transparency, individual rights, and informed consent - the United States favours a more pragmatic, sector-specific regulatory framework. U.S. regulation emphasises risk management, data security, and innovation enablement, especially regarding the safeguarding of sensitive health information. This pragmatic approach often results in more flexible but also more fragmented protections, creating practical challenges around consistent compliance, especially in sectors like healthcare where sensitive personal data is central to AI development and deployment.

Despite these differences, the two regulatory models can be seen as complementary. The EU model helps ensure that individual autonomy, fairness, and transparency are respected throughout the AI lifecycle, particularly in how individuals are informed and empowered regarding automated decision-making. Meanwhile, the U.S. focus on technical safeguards, cybersecurity, and operational resilience provides critical layers of protection against evolving risks such as data breaches, algorithmic exploitation, and systemic vulnerabilities in AI systems. Together, these approaches offer a broader, more holistic framework to address the full spectrum of ethical, legal, and operational challenges associated with the use of health data in AI — from protecting individuals' rights to ensuring the robustness and security of the underlying systems.

Athanasia Dogouli

Compliance Associate
