Amazon launched Health AI, an agentic assistant that answers patient questions, explains medical records, and books appointments. Microsoft launched Copilot Health, bringing together health records and wearable data in a secure AI space. Epic rolled out AI Charting, which drafts clinician notes and recommends orders directly within the EHR. Seven health systems adopted it on day one.
These are not pilot programs. This is production.
Healthcare AI is being deployed at unprecedented speed. Ambient AI scribes listen to patient visits and generate clinical documentation in real time. Diagnostic algorithms flag tumors on CT scans. Predictive models identify patients at risk of sepsis or suicide. Smart wards use IoT sensors and AI analytics to monitor patients continuously.
And most hospitals cannot answer basic questions about any of it: Who approved this tool for clinical use? What data was it trained on, and does that data reflect our patient population? How do we know it’s still performing accurately? Who’s responsible when it makes a mistake?
The governance frameworks hospitals need—where they exist at all—are reactive, fragmented, and years behind the technology. This is not a future risk. It is a present liability.
The Governance Vacuum
When New South Wales Health decided to govern AI use across its public hospital system, it didn’t find a ready-made framework to adopt. It had to build one from scratch.
The NSW Health AI Framework, released this month, establishes a risk-based approval model and a new AI Advisory Service to review proposed projects. It covers seven priority areas: consumers; workforce; privacy and security; governance and regulation; safety, ethics and quality; research and development; and industry. As CIO Richard Taggart explained, “While AI presents great opportunities and benefits for patients and clinicians, it requires careful consideration and management of the potential risks around safety, ethics, privacy, security, and regulation.”
NSW Health is the exception. Most hospitals are operating on faith.
Christopher Congeni, a partner at Amundsen Davis law firm in Cleveland, put it bluntly to the Akron Beacon Journal: “Health care is very, very regulated, and that presents challenges because we’re still trying to figure out how to regulate AI.” Hospitals, physician groups, and private practices are all in the “risk-assessment stage,” he said, and minimizing risk requires comprehensive compliance plans that most don’t yet have.
The stakes are enormous. As we covered in our Diagnostic AI article, the FDA has cleared over 1,300 AI-enabled medical devices—but less than 2% were supported by randomized clinical trials, and there is no federal liability framework defining who is responsible when they fail. That gap is now manifesting in clinical practice.
The Risks Are Already Materializing
Bias in training data. Naomi Scheinerman, assistant professor of bioethics at Ohio State University, warned that AI models often reflect “disproportionate representation in the data of dominant, majoritarian groups.” When algorithms are trained on non-representative populations, they can amplify existing health disparities. Steve Worrell, CEO of Riverain Technologies, which creates algorithms used by the Cleveland Clinic and University Hospitals, acknowledged that ensuring diversity in training data is essential: “It’s really important when you train these systems that you have adequate representation of different patient populations.”
Privacy breaches from unsecured AI use. As physicians Francisco Torres and Purab Patel wrote this week, when clinicians input patient data into AI platforms without adequate safeguards, they create vulnerabilities. Many AI programs store input data or use it to enhance the model. “Therefore, when patients’ data is input into an unsecure platform, the data may be stored or examined without proper oversight.” They advise physicians to avoid inputting protected health information unless the program is specifically authorized and tested for clinical use—a standard that rules out most consumer-facing AI tools.
Upcoding risk from AI charting. Epic’s AI Charting tool and ambient scribes like Abridge and Suki promise to reduce documentation burden. But as BDO managing director Julie McGuire noted, they also create compliance exposure: “If healthcare organizations are using AI tools for billing and documentation, they need proper oversight before sending over a bill or a claim to avoid upcoding.” When AI generates the note, human reviewers may not catch subtle inflations of medical necessity.
Black box liability. Devora Shapiro, associate professor of medical ethics at Ohio University, raised a deeper concern: “There is a question of whether the use of artificial intelligence in practice over the long-term makes individuals, both in medicine, potentially, and in other areas, other professions, a little bit less quick with their critical thinking skills, with their precision and their attention.” If physicians come to rely on AI without understanding its limitations, they lose the ability to supervise it effectively. And as we explored in our AI Literacy Compliance Requirements article, untrained reviewers don’t provide oversight—they provide the appearance of it.
The Regulatory Patchwork
Hospitals deploying AI must navigate a fragmented regulatory landscape with no single source of truth.
FDA: Some AI tools are medical devices requiring clearance; others are not. Epic’s AI Charting, a native EHR tool, likely isn’t subject to FDA oversight. Diagnostic algorithms that flag abnormalities on imaging almost certainly are. As McGuire advised, “Make sure you know the tools that you’re using, and that you’re following the ever-changing FDA regulations we’re seeing, no matter who the administration is.”
HIPAA: The Privacy Rule applies to protected health information regardless of whether AI is involved. But HIPAA doesn’t address AI-specific risks like model training on patient data, algorithmic bias, or explainability requirements. As Venson Wallin, managing director at BDO USA, stressed, “It is not a one-size-fits-all environment, and just because you may meet HIPAA requirements does not mean you are ‘safe.’”
State laws: Colorado’s AI Act takes effect in 2026, requiring risk assessments for high-risk systems. Illinois regulates biometric information. California’s ICRAA adds state-level consumer reporting requirements. Hospitals operating across state lines face a compliance patchwork.
International frameworks: The EU AI Act’s high-risk provisions, which include many healthcare applications, take effect in August 2026. For US hospitals with European patients or research partnerships, compliance is not optional.
Emerging standards: SOC 2, ISO 42001, and model card frameworks are becoming de facto requirements, even where not legally mandated. As we detailed in our AI Vendor Evaluation Framework, the vendors winning enterprise contracts are those that build auditability into their architecture from day one.
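Model cards deserve a brief unpacking, since they are the least familiar of those three: a model card is a structured summary of what a model is for, what it was trained on, how it performs across patient subgroups, and where it should not be used. Below is a minimal, hypothetical illustration; the field names and values are invented for the example and are not taken from ISO 42001, SOC 2, or any vendor mentioned here.

```python
# Hypothetical, minimal model card for an imaging triage algorithm.
# All names and numbers are illustrative placeholders, not real products or results.
model_card = {
    "model_name": "chest-ct-nodule-triage",       # made-up example model
    "intended_use": "Flag suspected pulmonary nodules for radiologist review",
    "not_intended_for": "Autonomous diagnosis or treatment decisions",
    "training_data": {
        "source": "De-identified CT studies, 2018-2023",
        "demographics_documented": True,          # the key question for bias review
    },
    "performance": {
        "overall_auc": 0.91,                      # placeholder value
        "by_subgroup": {"age_65_plus": 0.89, "female": 0.90},  # report gaps, don't hide them
    },
    "limitations": ["Not validated on pediatric patients"],
    "monitoring_plan": "Quarterly drift review by the AI oversight committee",
}
```

The point is auditability: a document like this gives a review committee, a regulator, or a plaintiff's attorney something concrete to check the deployed system against.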
What Governance Looks Like in Practice
The market for clinical AI governance tools is projected to grow from $1.84 billion in 2025 to $55 billion by 2035—a 40.46% CAGR, according to Precedence Research. That explosive growth reflects a simple reality: hospitals cannot manage this manually. But tools alone aren’t enough. Governance requires structure, discipline, and accountability.
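As a quick sanity check on that projection, the implied growth rate can be recomputed from the two endpoints alone; a minimal sketch, using only the figures quoted above:

```python
# Sanity check: compound annual growth rate implied by $1.84B (2025) -> $55B (2035).
start, end, years = 1.84, 55.0, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # roughly 40.5%, consistent with the cited 40.46%
```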
Here’s what effective healthcare AI governance includes:
AI oversight committees. Multidisciplinary boards consisting of clinicians, data scientists, legal experts, privacy officers, and ethicists. These committees review proposed AI tools before deployment and establish criteria for ongoing monitoring.
Model validation before deployment. Not just vendor claims, but independent testing on local patient populations. As University Hospitals’ Leonardo Kayat Bittencourt described, his organization runs AI tools in “shadow mode” for weeks or months, monitoring performance before any official implementation decisions are made.
Continuous monitoring for drift and bias. AI models degrade over time as patient populations shift and clinical practices evolve. Governance platforms now offer automated monitoring tools that track performance metrics and flag anomalies in real time; a minimal sketch of what such a check looks like follows this list.
Updated patient consent forms. Most consent forms don’t mention AI. As McGuire advised, organizations should update them to indicate that AI tools may be used in diagnosis, documentation, or treatment planning.
Vendor vetting. Where does patient data go? Is it used for model training? What happens if the vendor is acquired or goes bankrupt? These questions must be answered before contracts are signed, not after.
Human oversight that’s actually trained. Dr. Po-Hao Chen, vice chair for AI at the Cleveland Clinic, emphasized that AI never makes a diagnosis alone: “A person oversees its process and makes the final decision.” But oversight requires training, and most clinicians haven’t received any. As we covered in our AI Literacy article, the EU AI Act now requires organizations to ensure “a sufficient level of AI literacy” among staff—a standard that applies to healthcare as much as any other sector.
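To make the monitoring item above concrete, here is a minimal, hypothetical sketch of the kind of check a governance platform might run: it compares the distribution of a model’s scores in recent production use against the baseline captured during shadow-mode validation, and flags the model for committee review when the shift crosses a threshold. The metric (population stability index), the 0.2 threshold, and every name in the code are illustrative assumptions, not details from any vendor or framework cited here.

```python
"""Minimal sketch of automated drift monitoring for a deployed clinical model.

Hypothetical example: thresholds, window choices, and names are illustrative only.
"""
import numpy as np


def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Measure how far the current score distribution has shifted from the validation baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero and log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


def check_for_drift(baseline_scores: np.ndarray, recent_scores: np.ndarray,
                    psi_threshold: float = 0.2) -> dict:
    """Flag the model for oversight-committee review if its scores have drifted."""
    psi = population_stability_index(baseline_scores, recent_scores)
    drifted = psi > psi_threshold  # 0.2 is a commonly used rule of thumb, not a clinical standard
    return {
        "psi": round(psi, 3),
        "drift_detected": drifted,
        "action": "escalate to AI oversight committee" if drifted else "continue monitoring",
    }


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Baseline: scores captured during shadow-mode validation.
    validation_scores = rng.beta(2, 5, size=5_000)
    # Recent production scores, simulated here with a shifted distribution.
    recent_scores = rng.beta(2.8, 4, size=5_000)
    print(check_for_drift(validation_scores, recent_scores))
```

The same pattern applies to subgroup-level performance metrics, which is how bias drift, not just input drift, gets caught before it reaches patients.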
The Bottom Line
NSW Health built its own framework because none existed. Most hospitals haven’t gotten that far.
The technology is already in use. Amazon and Microsoft are marketing AI agents directly to patients. Epic’s AI Charting is live in seven health systems. Ambient scribes are recording conversations in exam rooms across the country. The governance question is not whether to adopt AI—it’s whether to adopt it responsibly.
The market for governance tools is projected to grow at roughly 40% annually because hospitals are realizing they cannot manage this manually. But tooling alone isn’t enough. Governance requires culture, training, and accountability. It requires answering the hard questions before something goes wrong, not after.
Hospitals that invest now—in oversight committees, validation protocols, monitoring systems, and clinician training—will avoid the liability wave coming for early adopters who skipped governance. Those that don’t will learn the hard way that in healthcare, flying blind is never safe.
For more on related topics, see our coverage of Diagnostic AI Liability, AI Vendor Evaluation, and AI Literacy Requirements.