What the FDA’s New Guidance Signals: Regulatory Restraint in the AI Era

Public authorities face growing pressure to respond to rapid technological change without slowing it. In health care, that tension is especially visible in artificial intelligence, where products range from low-stakes lifestyle tools to high-risk software that directly influences diagnosis or treatment. Against this backdrop, U.S. Food and Drug Administration (FDA) Commissioner Marty Makary used the 2026 Consumer Electronics Show (CES) to deliver a deregulatory message:

“The government doesn’t need to be regulating everything,” and regulators should “get out of the way” where oversight is not warranted.

The CES Announcement: Two Guidance Documents, One Theme

On January 6, 2026, the FDA released two guidance documents focused on clarifying when digital health tools fall outside the agency’s device oversight. The stated aim, echoed in Makary’s public remarks, is to reduce unnecessary regulatory burden while keeping a clear pathway for “medical-grade” products that make clinical claims or pose higher risks.

Although the announcement emphasized “AI,” the practical impact is less about endorsing any specific model and more about drawing regulatory boundaries that affect many AI-enabled products, especially wearables and decision-support tools.

Guidance 1: General Wellness Products and Wearables

The first document, “General Wellness: Policy for Low Risk Devices” (January 2026), updates the FDA’s approach to low-risk “general wellness products,” including certain wearable devices and lifestyle-focused software. The guidance explains that products intended to maintain or encourage a healthy lifestyle, and that are unrelated to diagnosing, curing, mitigating, preventing, or treating disease, may fall outside FDA device regulation under the Federal Food, Drug, and Cosmetic Act as amended by the 21st Century Cures Act.

In practice, the publicly described policy direction is that non-medical-grade wearables providing general health information can proceed without the full weight of premarket review, while products marketed as clinically accurate, “medical grade,” or intended to inform disease-related decisions remain more likely to be treated as regulated devices.

Guidance 2: Clinical Decision Support Software

The second document, “Clinical Decision Support Software” (January 2026), addresses software that supports health care professionals in making clinical decisions. It focuses on clarifying which clinical decision support (CDS) functions are excluded from the statutory definition of a “device” and provides examples distinguishing non-device CDS, device CDS, and functions that may be subject to enforcement discretion.

This area matters because modern CDS can be powered by AI and can shape how clinicians interpret symptoms, labs, images, risk scores, or treatment options. The FDA’s revised framing seeks to reduce uncertainty for developers and users by clarifying when oversight applies and when it does not.

Why This Matters for Oncology and Cancer Care Pathways

Oncology is an especially relevant setting for these changes because the care journey often extends beyond the clinic. Wearables and patient-facing software can support symptom monitoring, activity and sleep tracking, detection of physiologic changes during treatment, and survivorship wellness efforts. Clearer regulatory boundaries may lower barriers for iterative, consumer-facing tools that aim to support healthier behavior without claiming to diagnose cancer, manage chemotherapy dosing, or substitute for clinician judgment.

At the same time, oncology is also a domain where “decision support” can be consequential, including tools that help clinicians assess adverse event risk, triage symptoms, or interpret complex clinical data. For higher-stakes CDS, particularly when outputs could reasonably be used to guide treatment, regulatory clarity is valuable not only to developers but also to hospitals that must evaluate safety, accountability, and clinical governance before adoption.

The Policy Trade-Off: Speed and Clarity vs. Safety and Trust

A lighter-touch approach can improve predictability for innovators and reduce time-to-market for low-risk tools. It can also help investors and health systems distinguish between wellness products and medical devices, a distinction that has often been blurred by marketing language and consumer expectations.

However, reducing oversight does not remove core challenges associated with AI in health contexts. Even when tools are positioned as “informational,” real-world use can drift toward clinical reliance, especially if interfaces present outputs with medical-sounding confidence. This makes transparency about intended use, limitations, and appropriate clinical escalation essential. It also underscores why the FDA continues to emphasize a separate lane for products that are genuinely medical-grade or that present meaningful safety risks.

Written by Nare Hovhannisyan, MD