AI as a medical device – What are the regulatory considerations?

By Laura Friedl-Hirst, Founder and Managing Director of LFH Regulatory

Artificial intelligence (AI) is revolutionising the healthcare industry by introducing innovative solutions in diagnostics, treatment, and patient care.

However, the rapid integration of AI into medical devices presents significant regulatory challenges.

In Europe, the EU AI Act 2024/1689 and the UK AI Roadmap for Medical Devices are key frameworks shaping the future of AI in healthcare.

Additionally, the standard IEC 62304, “Medical device software – Software life cycle processes”, plays a crucial role in ensuring the safety and reliability of software used in medical devices.

This article explores how these frameworks intersect and their implications for the development and future regulation of AI-driven medical devices.

The EU AI Act and Medical Devices

The AI Act is a legal framework on artificial intelligence, created for the development, deployment and use of AI within the European Union (EU).

It provides legal certainty and ensures the protection of fundamental rights. The AI Act aims to promote the development, innovation and uptake of safe and trustworthy AI across the EU, in both the private and public sectors.

To encourage innovation, the Act provides for AI regulatory sandboxes, enabling a controlled environment for the development, validation and testing of AI systems under real-world conditions.

Who does the AI Act apply to?

The AI Act applies to providers, deployers and other economic operators of AI systems operating within, or supplying into, the EU. It does not apply to the UK or US markets.

It is essential to check which specific AI legislation or rules are applicable for each region during regulatory planning.

What does this mean for AI Medical Devices?

Let’s first look at the definition of AI. It is defined under Article 3 of the Act as follows:

“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Under the Act, all AI systems are classified based on their risk level:

  • Unacceptable risk: Prohibited. Examples include social scoring systems and manipulative AI. For further examples and explanations, please refer to Chapter II, Article 5 of the AI Act.
  • High-risk AI: The most heavily regulated category. Examples include biometrics, critical infrastructure and medical devices.
  • Limited-risk AI: Subject to lighter transparency obligations. Developers and deployers must ensure that end users are aware that they are interacting with AI. Examples include chatbots and deepfakes.
  • Minimal risk: Largely unregulated. Examples include AI-enabled video games and spam filters, although this category is under review as regulation adapts to generative AI.
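
To make the tiering concrete, here is a minimal Python sketch that maps the example systems above onto the Act’s four tiers. The enum and the mapping are illustrative assumptions only; real classification requires a legal assessment against the Act itself (for example, Article 5 and Annex III).

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act (Regulation 2024/1689)."""
    UNACCEPTABLE = "prohibited (Chapter II, Article 5)"
    HIGH = "most heavily regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping only: not a substitute for a legal
# classification assessment under the Act.
EXAMPLE_SYSTEMS = {
    "social scoring system":      RiskTier.UNACCEPTABLE,
    "AI-based diagnostic device": RiskTier.HIGH,      # a medical device
    "customer service chatbot":   RiskTier.LIMITED,
    "spam filter":                RiskTier.MINIMAL,
}

for name, tier in EXAMPLE_SYSTEMS.items():
    print(f"{name}: {tier.name} ({tier.value})")
```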

The AI Act is horizontal legislation, meaning it is applied in addition to the EU Medical Device Regulation (MDR) 2017/745, and both regulations will need to be considered for medical devices that contain an AI function, as determined under the medical device definition.

All the EU MDR requirements will still need to be followed, but the additional AI Act requirements will also need to be considered and implemented during your regulatory planning.

[Table: overlapping and differing requirements under the EU MDR and the EU AI Act]

Here is the timeline from approval of the Regulation through to the end of the transition period:

  • 12 July 2024: The AI Act is published in the Official Journal of the EU.
  • 1 August 2024: The Act enters into force.
  • 2 February 2025: Prohibitions on unacceptable-risk AI and the AI literacy obligations begin to apply.
  • 2 August 2025: Obligations for general-purpose AI models and the Act’s governance provisions apply.
  • 2 August 2026: The majority of the Act’s provisions, including most high-risk requirements, apply.
  • 2 August 2027: Obligations apply to high-risk AI systems embedded in products covered by existing EU harmonisation legislation, such as medical devices under the MDR.

What about General Purpose AI (GPAI)?

GPAI means an AI system that is based on a general-purpose AI model and that has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems.

The first General-Purpose AI Code of Practice will detail the AI Act rules for providers of general-purpose AI models and general-purpose AI models with systemic risk.

All GPAI model providers must produce technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.

GPAI models that present a systemic risk must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.

UK Software and AI as a Medical Device Change Programme – Roadmap

The UK AI Roadmap for Medical Devices outlines the government’s strategy for fostering innovation while ensuring patient safety.

Key elements of the roadmap include:

  1. Regulatory Sandboxes: The UK plans to establish regulatory sandboxes to allow developers to test AI-based medical devices in a controlled real-world environment. This approach encourages innovation while maintaining regulatory oversight.
  2. Inclusive innovation: The MHRA acknowledges that Software as a Medical Device (SaMD) and AI as a Medical Device (AIaMD) must function effectively across all populations for their intended use and address the needs of diverse communities. Broader efforts to address health inequalities in medical device regulation will need to be expanded, so that specific initiatives for SaMD and AIaMD are built on this foundation. This will involve focusing on the unique challenges posed by AIaMD that go beyond those of traditional medical devices.
  3. Adaptive Regulatory Approach and Reviewing Legislation: The UK aims to create a flexible regulatory framework that can adapt to the rapid pace of AI advancements. This includes updating existing regulations, such as the Medical Devices Regulations 2002, to address the unique challenges posed by AI.
  4. Collaboration with Industry: The UK government is working closely with industry stakeholders, healthcare providers, and regulatory bodies to develop guidelines and standards for AI in medical devices.

IEC 62304 and Its Role in AI-Based Medical Devices

The IEC 62304 standard, titled “Medical Device Software – Software Life Cycle Processes,” is a critical framework for ensuring the safety and reliability of software in medical devices.

It applies to all software used in medical devices, including AI-driven systems. The key aspects of IEC 62304 are:

  • Software safety classification (Classes A, B and C, based on the severity of harm the software could contribute to)
  • Lifecycle processes (planning, development, maintenance, configuration management and problem resolution)
  • Risk management (aligned with ISO 14971)

For AI-based medical devices, IEC 62304 provides a foundation for ensuring that the software components are safe, reliable, and compliant with regulatory requirements.
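
As an illustration, here is a hypothetical Python sketch of the Class A/B/C decision logic in the current edition of the standard. The function and parameter names are my own, and in practice the inputs must come from the product’s ISO 14971 risk management process, not from code.

```python
from enum import Enum

class SafetyClass(Enum):
    """IEC 62304 software safety classes (current edition)."""
    A = "no injury or damage to health is possible"
    B = "non-serious injury is possible"
    C = "death or serious injury is possible"

def classify(contributes_to_hazardous_situation: bool,
             risk_acceptable_after_external_controls: bool,
             severe_harm_possible: bool) -> SafetyClass:
    """Simplified rendering of the standard's classification decision.

    Hypothetical helper: the standard expresses this as a decision
    tree, and classification is performed within the product's
    ISO 14971 risk management process.
    """
    if not contributes_to_hazardous_situation:
        return SafetyClass.A
    if risk_acceptable_after_external_controls:
        return SafetyClass.A
    return SafetyClass.C if severe_harm_possible else SafetyClass.B

# Example: an AI diagnostic module whose failure could contribute to a
# missed serious condition would typically fall into Class C.
print(classify(True, False, True).name)  # -> C
```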

One thing to note is that IEC 62304 is in the process of receiving a much-needed update, with an approval stage due to start on 22 May 2026 and publication due to start on 12 August 2026.

The updated standard will apply to the following types of health technologies:

  • Software that’s part of a medical device
  • Software embedded in specific health hardware
  • Software as a Medical Device (SaMD)
  • Software-only products designed for health management, maintenance, or care delivery
  • Health software using AI or machine learning (ML)

New Software Process Rigor Levels:

It is proposed that the current software safety classification in IEC 62304 be replaced with a two-level set of Software Process Rigor Levels.
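
Because the revised standard is unpublished, any detail here is speculative, but based on the description of the proposal under “Other Notable Changes” below, a two-level decision might look like this sketch. The level names and the decision criterion are assumptions.

```python
from enum import Enum

class ProcessRigorLevel(Enum):
    """Placeholder names: the revised IEC 62304 is not yet published."""
    BASELINE = 1
    ENHANCED = 2

def rigor_level(contributes_to_hazardous_situation: bool) -> ProcessRigorLevel:
    """Assumed criterion, taken from the proposal's description:
    software that can contribute to a hazardous situation (per the
    product's risk management process) attracts the higher rigor level.
    """
    return (ProcessRigorLevel.ENHANCED
            if contributes_to_hazardous_situation
            else ProcessRigorLevel.BASELINE)
```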

Other Notable Changes:

  • Changes to the software development process
    • Changes include updates to the plan, Software Requirements Analysis, and Software Architecture Design
  • Changes to risk management
    • In line with the introduction of Software Process Rigor Levels.
    • The two-level classification has been proposed to simplify the software classification process, and to simplify determining whether the software contributes to a hazardous situation according to the risk management process for the product.
  • Maintenance updates
    • Updates to the definitions for software maintenance.
  • Updates to general requirements
    • Example: removal of the Quality System general requirements. IEC 62304 is not a product-level standard, so these aspects should remain in the manufacturer’s quality system as applied to their products.
  • Modifications for legacy software
    • This element is to be moved to an informative annex.
    • For legacy IEC 62304 software, the software development plan should be updated and reviewed to assess the impact of the change.

Linking the UK MDR, EU MDR, EU AI Act, UK AI Roadmap, and IEC 62304

All the regulations and standards mentioned in this article are intertwined; collectively, they address the regulatory and technical challenges of AI in medical devices.

Navigating the Future of AI as Medical Devices 

  • Embracing a Culture of Responsibility

The rapid progress of AI in healthcare offers incredible opportunities, but it also requires a strong sense of accountability.

We must cultivate a culture where ethical AI development is central to every decision we make. This means ensuring transparency, explainability, and fairness in AI algorithms, particularly when they directly affect patient care. 

  • Balancing Innovation and Regulation

One of the toughest challenges is finding the right balance between driving innovation and adhering to regulation.

The UK AI Roadmap’s approach allows for experimentation and iteration while safeguarding patient safety. It provides regulatory environments that foster innovation without hindering progress. 

  • The Role of Standards in Building Trust

Standards like IEC 62304 are essential. Adhering to these standards demonstrates to patients, healthcare providers, and regulators that products are developed with precision and care.

Compliance should not be seen as a hurdle but as a strategic advantage. A device that meets IEC 62304 standards isn’t just safer; it’s also more competitive in a global market where trust is everything.

Standards for AI regulatory compliance are still a work in progress, and many more will be published in the coming months.

Other relevant standards should also be considered during any AI medical device journey, such as ISO 27001 for information security and ISO/IEC 23053:2022, a framework for AI systems using machine learning (ML).

These standards should be incorporated into the regulatory strategy when developing a medical device with an AI function.

  • Preparing for the Future

The regulatory landscape for AI in medical devices is still evolving, and you must stay ahead of the curve.

This means investing in continuous learning for your teams, keeping up with emerging standards, and building agile systems that can adapt to new requirements.

The EU AI Act and UK AI Roadmap are just the starting point; you need to keep up to date and be prepared for what’s next.

  • A Call for Global Alignment

While the EU and UK are progressing, the global nature of healthcare calls for greater alignment.

Governments writing the regulations, standards organisations, Notified Bodies and Competent Authorities should push for harmonised standards and regulations that enable international collaboration and trade.

This is especially critical for AI-based medical devices, which often operate in global markets. Standards like IEC 62304 provide a common foundation, but broader regulatory alignment is needed to reduce fragmentation and accelerate innovation worldwide.

Conclusion

There is an opportunity and a responsibility to shape the future of AI in medical devices.

By prioritising ethical practices, collaboration, innovation and a focus on patient safety, we can ensure that AI technologies not only advance healthcare but also earn the trust of those who depend on them.

Let’s not just meet these regulations as a tick box exercise – let’s use them as a foundation to build a future where AI-driven medical devices are synonymous with safety, innovation and excellence.

About LFH Regulatory

LFH Regulatory is a trusted partner in navigating the complex world of medical device and In Vitro Diagnostic Regulations.

Founded in 2019 by Laura Friedl-Hirst, the company has rapidly grown into a dynamic team of passionate professionals. Together, they work closely with clients to make the process of regulation and compliance as stress-free as possible – Regulation made Simple.

What truly sets LFH Regulatory apart is its commitment to building strong, long-term relationships with clients.

The company believes that collaboration and open communication are vital in understanding each client’s unique challenges and goals.

By working closely with businesses and their products, LFH Regulatory provides tailored solutions that remove the stress from Medical Device Regulation and In Vitro Diagnostic Regulation.
