As AI grows, the fate of healthcare data standards like FHIR and USCDI is anything but settled.
Don’t have time to read the full brief now? Here’s the gist:
Will advances in Artificial Intelligence (AI) replace the need for standards in healthcare delivery, public health improvement, and healthcare data sharing?
No, far from becoming obsolete, healthcare data standards are evolving into the critical infrastructure for AI innovation. Their relationship is symbiotic, not competitive. Think of standards as the grammar of healthcare data and AI as the poet using that grammar to create something new.
Standards address many of AI’s challenges; see Table 1 below for representative examples.
Their core function—ensuring consistent, safe, and interoperable data—will remain indispensable. Over the next 18 months, standards will deepen their role in enabling trustworthy AI deployment. Beyond that, they will integrate with AI to enable advanced capabilities (e.g., real-time data curation).
You should simultaneously shore up foundational interoperability (FHIR + USCDI), build AI governance & privacy engineering (NIST SP 800-226), pilot AI tools under emerging guidance (HL7 AI/ML Data Lifecycle; FDA draft), and engage in the Trusted Exchange Framework and Common Agreement (TEFCA)–enabled federated networks. This phased approach ensures both immediate compliance and future-proof readiness for a truly hybrid AI + standards healthcare ecosystem.
For a role-specific breakout of phased recommendations, see What Should I and My Team Do to be Prepared for the Future?
In our role as frontline implementers of healthcare data and interoperability standards to solve current public health and healthcare data sharing needs, we are sometimes challenged by those who proclaim, “Standards will soon be irrelevant given the increasing capabilities of LLMs1 and agentic AIs2!” Are they right? This is an important question and one that requires a thoughtful response.
In the spirit of the moment, we asked an AI first. DeepSeek’s response: "Think of standards as the grammar of healthcare data—and AI as the poet. The poet needs grammar to be understood but can also enrich the language." Self-serving, sure! But it’s an expressive metaphor, and one that we will scrutinize in more detail in this brief.
The U.S. healthcare industry has made considerable headway in the adoption of standards over the last four decades, particularly since the influx of federal investment and attention after the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. It is a story of meaningful progress, even as the full promise of standards-based interoperability—true data liquidity—has always felt far off.
Today, standards remain a potent tool for aligning some of the industry’s conflicting priorities, including the following:
While the standards have changed and challenges remain (e.g., uneven adoption), they provide the common language and architecture allowing stakeholders—providers, insurers, public health agencies—to collaborate despite divergent goals.3
But, what of their role in an AI-powered future?
With their public introduction in 2022, LLMs sparked an AI renaissance and have since made significant contributions to healthcare delivery and public health. AI in other forms, however, has been contributing to healthcare delivery, public health, and healthcare data sharing since the 1960s.
In the U.S. healthcare system, AI is increasingly driving transformation by enabling more personalized, efficient, and data-driven care. One of AI’s strengths is its ability to help providers tailor treatments to individual patients. By analyzing vast datasets—including electronic health records, genetic profiles, and lifestyle factors—AI unlocks precision medicine approaches that move beyond the historical reliance on population averages. Clinicians will be able to make better-informed decisions about treatments that are most likely to be effective for a specific patient, improving outcomes and reducing trial-and-error prescribing. AI tools can predict how a patient will respond to different medications or interventions, identify comorbidities that might complicate treatment, and even suggest alternative therapies based on similar patient profiles—just for starters.
AI is also alleviating administrative and operational burdens that contribute to provider burnout and inefficiencies in care delivery. In U.S. hospitals and clinics, generative AI is automating clinical documentation, coding diagnoses and procedures, and assisting with prior authorization requests. Companies like Heidi Health and Notable are deploying AI-powered medical scribes that integrate with electronic health record (EHR) systems such as Epic and Cerner, helping doctors spend more time with patients and less on paperwork. Radiologists are leveraging AI to draft imaging reports and automate follow-up communication, streamlining workflows while maintaining diagnostic accuracy.
On a broader scale, AI is enhancing public health efforts and improving healthcare data sharing in ways that benefit population-level care. In the U.S., AI-powered analytics platforms are being used to detect emerging health trends, predict disease outbreaks, and stratify patients by risk to proactively target care. Remote patient monitoring tools that incorporate AI—such as wearable devices and conversational agents—enable real-time tracking of chronic conditions, reducing unnecessary hospitalizations and improving patient engagement. These tools also contribute to a more interoperable ecosystem by generating structured, actionable data that can be securely shared across health systems. By accelerating data integration and decision-making, AI is helping U.S. healthcare become more proactive and responsive to both individual and public health needs.
As AI continues to improve, meeting or exceeding human capabilities on some tasks, it is important to ask whether it will continue to need standards.
The answer is yes. AI has made great strides in improving population health and healthcare delivery, but on its own it is insufficient to meet our growing healthcare needs. AI requires standards to advance.
At the macro-level, consider the challenge of scaling AI solutions to meet healthcare’s expanding and shifting data models. Without standard ontologies4, such as those underpinning the United States Core Data for Interoperability (USCDI), every new AI integration would require custom training for each healthcare system’s data model and constant re-validation as it changes. This would quickly become costly and impact more than just healthcare. Given the energy required to train new models, it would likely impact energy usage and global sustainability.
Said simply: each AI prompt is not free. Standards can control the AI cost curve by avoiding unnecessary model retraining.
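To make the point concrete, here is a minimal, hypothetical sketch of how a shared ontology works in practice: the site names and local lab codes below are invented, while the LOINC code is real. Two systems’ local representations converge on one standard code, so a downstream model trained against the standard does not need retraining for every site-specific data model.

```python
# Hypothetical site-specific lab codes mapped once to a shared vocabulary
# (LOINC), so an AI model trained against the standard representation
# does not need custom retraining for each local data model.
LOCAL_TO_LOINC = {
    # Site A's homegrown code (hypothetical)
    ("site_a", "GLU_FAST"): "1558-6",  # LOINC: Fasting glucose, Serum/Plasma
    # Site B's homegrown code (hypothetical) for the same concept
    ("site_b", "LAB-0042"): "1558-6",
}

def normalize(site: str, local_code: str, value: float) -> dict:
    """Translate a site-specific lab result into a standards-aligned record."""
    loinc = LOCAL_TO_LOINC.get((site, local_code))
    if loinc is None:
        # Unmapped codes are flagged rather than guessed at.
        raise KeyError(f"No mapping for {site}/{local_code}; needs human review")
    return {"code_system": "http://loinc.org", "code": loinc, "value": value}

# Two different local representations converge on one standard code.
a = normalize("site_a", "GLU_FAST", 92.0)
b = normalize("site_b", "LAB-0042", 88.0)
assert a["code"] == b["code"] == "1558-6"
```

The mapping table is maintained once per site; everything downstream, including any AI model, consumes a single standard representation.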
Despite improvements in performance, many AI solutions are not ready to operate independently. Consider the use of LLMs to map clinical notes to medical codes. Researchers evaluating general-purpose LLMs’ ability to assign ICD and Current Procedural Terminology (CPT) codes correctly found that the models performed poorly, with exact-match accuracy under 50%. A later study demonstrated that a frontier model (ChatGPT 4) could achieve 99% accuracy for ICD-10 coding within a single specialty (nephrology), a significantly restricted use case. Another study demonstrated that the accuracy of coding with general-purpose models could be improved by using a tree-search method5 that exploits the hierarchical structure of the ICD-10 ontology.
Current commercial AI-coding products overcome these limitations by keeping humans in the loop; standards reduce the number of human-in-the-loop events.
LLMs have shown promise for achieving a generalized level of intelligence, and many developments are underway to realize that promise; however, LLMs struggle with ambiguous real-world data and assignments that require structured thinking. To address these gaps, model developers have created Large Reasoning Models (LRMs), which reason their way through a task step by step, mimicking human planning. However, a recent study by Apple showed that even LRMs experience a collapse in their ability to plan independently above a certain level of complexity.
Standards enforce structured context that LLMs cannot reliably infer, as the tree-search coding approach described above demonstrates, and they provide structure that helps LRMs navigate complex tasks.
If an LLM misinterprets something and causes harm, who is liable? Standards that create an audit trail (e.g., FHIR’s Provenance resource) enable compliance with legal and regulatory requirements, and observability and audit standards provide legal defensibility for AI outputs. Without such standards, AI becomes a black box that increases the liability of AI providers and practitioners.
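As one small illustration of what such an audit trail looks like, the sketch below assembles a minimal FHIR R4 Provenance resource recording that an AI tool produced a clinical artifact. The resource type, fields, and participant-type code follow the FHIR specification; the target and device references are hypothetical.

```python
import json
from datetime import datetime, timezone

def ai_provenance(target_ref: str, model_device_ref: str) -> dict:
    """Build a minimal FHIR R4 Provenance resource attributing an
    AI-generated artifact to the AI system (modeled as a Device)."""
    return {
        "resourceType": "Provenance",
        "target": [{"reference": target_ref}],       # what was produced
        "recorded": datetime.now(timezone.utc).isoformat(),
        "agent": [{
            "type": {"coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                "code": "assembler",
            }]},
            "who": {"reference": model_device_ref},  # the AI system
        }],
    }

# Hypothetical identifiers, for illustration only.
record = ai_provenance("DocumentReference/draft-note-123", "Device/llm-scribe-v1")
print(json.dumps(record, indent=2))
```

A record like this, attached to every AI-generated note or code, is what turns a black-box output into something auditable after the fact.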
To address these issues and others, government and regulatory bodies are developing standards for AI safety. For example, the National Institute of Standards and Technology (NIST) is leading efforts to establish new data-sharing standards (e.g., for anonymization and governance) specifically to enable AI development. These build upon existing frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and Health Level Seven (HL7). Similarly, the Food and Drug Administration (FDA) requires demographic diversity in AI training datasets to prevent bias, implicitly endorsing standards like USCDI to ensure representativeness, and the Assistant Secretary for Technology Policy (ASTP)/Office of the National Coordinator for Health Information Technology (ONC)'s HTI-2 rule (2025) mandates FHIR/USCDI v4 compliance by 2028, reinforcing standards' role in AI audits.
Alignment to standards reduces the risk and liability of adverse AI usage.
AI is poised to dramatically accelerate the standards development lifecycle, capable of identifying the need for a new standard and producing draft specifications in a fraction of the time that manual processes require. However, this technological acceleration does not eliminate a core function of standardization: building consensus.
A core aspect of the process is not technical, but human: reconciling the divergent business interests, competing clinical philosophies, and entrenched workflows of highly motivated stakeholder groups remains a fundamentally human task that AI cannot replace.
Far from becoming obsolete, the human-led consensus process will be elevated. By automating administrative groundwork, such as settling the proper format for a date of death (a debate the author once watched run for 60 minutes), AI will free human experts to focus their limited resources on higher-value strategic challenges. This shift enables standards organizations to finally address complex, thorny issues that were previously too resource-intensive.
AI will not replace human consensus building, but it will uplevel it.
As we have illustrated in the previous sections, the future of healthcare interoperability is a hybrid ecosystem, with AI and standards evolving together to better meet the needs of healthcare and public health.
Over the next 12-to-18 months, expect to see the following developments:
While looking beyond 18 months is challenging in a period as dynamic as this one, we expect the following longer-term developments:
While this period is dynamic, you can do the following now to prepare yourself and your team to take advantage of the emerging AI Era.
In summary, each leader should simultaneously shore up foundational interoperability (FHIR + USCDI), build AI governance & privacy engineering (NIST SP 800-226), pilot AI tools under emerging guidance (HL7 AI/ML Data Lifecycle; FDA draft), and engage in TEFCA–enabled federated networks. This phased approach ensures both immediate compliance and future-proof readiness for a truly hybrid AI + standards healthcare ecosystem.
The rise of artificial intelligence, including advanced large language models, will not render healthcare data and interoperability standards obsolete; instead, their importance will grow. As we have demonstrated in this brief, the relationship between AI and standards is symbiotic, with standards providing the foundation for AI's impressive capabilities. While AI offers transformative power in tailoring care, alleviating provider burnout, and accelerating interoperability, it faces significant challenges that standards are uniquely positioned to address. These include the "scalability trap" of training models on non-standard data, the "good enough" problem of AI accuracy, the risk of "context collapse" in complex situations, and the significant legal and regulatory liabilities of black-box AI systems.
Standards will evolve to become the critical infrastructure for trustworthy AI, moving from static rules to dynamic, AI-integrated frameworks that enable advanced capabilities like real-time data curation and agentic AI mediators. This evolution is already underway, with regulatory mandates for FHIR and USCDI pushing the industry toward a hybrid ecosystem. To navigate this new era, healthcare leaders must not wait. The imperative is to act now by shoring up foundational interoperability with standards such as FHIR and USCDI, establishing robust AI governance and privacy engineering, strategically piloting AI tools under emerging guidance, and engaging with TEFCA-enabled federated networks to prepare for a future where standards and AI collaboratively drive healthcare innovation.
Standards transformed care delivery by establishing uniform clinical protocols and accountability. Early efforts like the Hospital Standardization Program (1918) mandated surgical checklists and record-keeping, reducing errors. Later, the Joint Commission (1951) codified safety requirements (e.g., infection control), while diagnostic coding (ICD) and procedure coding (CPT) enabled precise treatment documentation. After Medicare’s creation in 1965, uniform billing standards (culminating in UB-92) forced administrative consistency, linking reimbursement to structured data. These frameworks shifted healthcare from fragmented practices to evidence-based systems, where adherence to standards became synonymous with quality.
Standards enabled population-level insights by harmonizing disease tracking and prevention. The adoption of ICD (beginning in 1893) created a universal language for mortality/morbidity data, revealing epidemics like influenza hotspots. Post-1965, Medicare claims data became a de facto public health tool, mapping chronic disease prevalence. The CDC’s Essential Services framework (1999) standardized core functions (e.g., outbreak investigation), while Public Health Information Network (PHIN) standards (2000s) ensured labs could electronically report notifiable diseases. Crucially, standards allowed disparate health departments to aggregate data during crises, from HIV to COVID-19, transforming raw data into actionable intelligence.
Standards broke down data silos by creating technical and legal guardrails for exchange. HL7 (1987) established the first clinical messaging format, allowing labs to send results to hospitals. HIPAA (1996) was pivotal: its Privacy/Security Rules standardized patient data protection (e.g., encryption), while Transaction Standards (ANSI X12) unified billing across insurers. The Consolidated Clinical Document Architecture (C-CDA) document standard (2012) later allowed summaries to follow patients between providers. Critically, these standards balanced access with confidentiality, ensuring data could flow securely—whether for treatment or research.
Regulation amplified standards’ impact by mandating adoption. HIPAA compelled compliance with privacy/transaction rules. HITECH (2009) used financial incentives to push EHR certification standards (e.g., C-CDA for care summaries). The 21st Century Cures Act (2016) then enforced FHIR APIs to combat information blocking. Each regulation turned voluntary standards into nationwide requirements, accelerating interoperability and embedding standards into healthcare’s legal fabric.
The following examples, while not comprehensive, illustrate the potential for AI to transform healthcare.6
AI systems now process vast medical literature to support evidence-based decisions. For example, tools such as ChatRWD integrate LLMs with medical databases to provide clinicians with sourced answers. In trials, it delivered relevant responses to 58% of complex medical queries, compared to <10% for standard LLMs like ChatGPT.
AI systems support early outbreak detection by processing diverse data sources to recognize critical signals, as illustrated by BlueDot's Pandemic Alert System, which detected COVID-19 outbreaks by analyzing airline data, climate records, and news reports. During monkeypox (Mpox), similar models predicted global spread patterns weeks in advance.
AI-powered tools help democratize health information access. Huma's Remote Monitoring Platform is one example. It combines wearable sensors with LLMs to provide real-time patient feedback, helping to reduce hospital readmissions by 30% and consultation time by 40%.
AI helps reduce documentation burden and improve resource allocation. For example, Microsoft's AI scribe saves an estimated 66 minutes a day per provider by automating documentation. Similarly, Oracle Cerner integrated AI algorithms into electronic health records to predict patient admission risks, reducing administrative errors and improving bed allocation efficiency.
AI advances are enhancing image interpretation and early disease detection. In one example, an AI model detected 94% of lung nodules in CT scans (vs. radiologists' 65%) and identified 64% of epilepsy lesions missed by humans. In another, a model analyzed 500,000+ health records to flag early Alzheimer's or kidney disease risks years before symptoms manifest.
We used the following sources to develop this brief: