Will AI Replace the Need for Data and Interoperability Standards?

As AI grows, the fate of healthcare data standards like FHIR and USCDI is anything but settled.

Mike Hunter

Mike is Health Tech Practice Director at Leap Orbit, with experience supporting CDC, VA, HRSA, SAMHSA, and health system clients such as Montefiore and Central Health. With roots in solution and software engineering in healthcare, he brings a thoughtful, problem-solving mindset to make a real impact. Known for his practical approach and his appreciation for simplicity, he embodies the trusted partner ethos central to Leap Orbit. Mike is just as comfortable designing comprehensive solutions to real-world problems as he is cruising on his skateboard after work.

TL;DR

Don’t have time to read the full brief now? Here’s the gist:

Will advances in Artificial Intelligence (AI) replace the need for standards in healthcare delivery, public health improvement, and healthcare data sharing?

No. Far from becoming obsolete, healthcare data standards are evolving into the critical infrastructure for AI innovation. Their relationship is symbiotic, not competitive. Think of standards as the grammar of healthcare data and AI as the poet using that grammar to create something new.

What AI challenges do standards address?

Standards address more AI challenges than can be listed here; Table 1 below gives a few examples.

Table 1 - AI Challenges and How Standards Address Them
| AI Challenge | Standards-Based Solution | Example |
| --- | --- | --- |
| Data Fragmentation | United States Core Data for Interoperability (USCDI) defines unified data elements | Data sharing via US Core profiles |
| Unreliable Large Language Model (LLM) Outputs | SNOMED CT and LOINC anchor semantic meaning for clinical terms and observations | Mapping “heart attack” to precise ICD-10-CM/SNOMED codes |
| Model Bias Risks | FHIR Provenance tracks sources and transformations for auditability | Auditing training data for representative coverage |
| Regulatory Compliance | HTI rules drive FHIR/USCDI adoption; information-blocking enforcement | Avoiding information-blocking penalties; documented data access policies |

How will standards evolve in response to current AI advancements?

Their core function—ensuring consistent, safe, and interoperable data—will remain indispensable. Over the next 18 months, standards will deepen their role in enabling trustworthy AI deployment. Beyond that, they will integrate with AI to enable advanced capabilities (e.g., real-time data curation).

Table 2 - Examples of AI-Era Standards Evolution

| Traditional Standard | AI-Era Evolution | Real-World Example |
| --- | --- | --- |
| Static terminologies (e.g., SNOMED, LOINC) | AI-curated dynamic mappings with human oversight | LLMs suggest new SNOMED codes for novel diseases (e.g., long COVID variants), validated by clinicians |
| Rigid FHIR profiles | Adaptive profiles using LLMs for context-aware data extraction | An LLM extracts medication data from unstructured notes, then structures it into a FHIR MedicationRequest resource |
| Manual coding | LLMs as co-pilots with standards as guardrails | Epic’s LLM suggests ICD codes during charting, but flags discrepancies against USCDI rules |
| Point-to-point interfaces | Agentic AI mediators using FHIR as a common language | AI agents negotiate lab orders between systems using FHIR, then translate legacy formats in real time |

What should I do to position my team to take advantage of the AI Era?

You should simultaneously shore up foundational interoperability (FHIR + USCDI), build AI governance and privacy engineering (NIST SP 800-226), pilot AI tools under emerging guidance (HL7 AI/ML Data Lifecycle; FDA draft guidance), and engage in Trusted Exchange Framework and Common Agreement (TEFCA)-enabled federated networks. This phased approach ensures both immediate compliance and future-proof readiness for a truly hybrid AI + standards healthcare ecosystem.

For a role-specific breakout of phased recommendations, see What Should I and My Team Do to be Prepared for the Future?

Does Healthcare Need Standards in the Age of AI?

As frontline implementers of healthcare data and interoperability standards for public health and healthcare data sharing, we are sometimes challenged by those who proclaim, “Standards will soon be irrelevant given the increasing capabilities of LLMs1 and agentic AIs2!” Are they right? It is an important question, and one that deserves a thoughtful response.

In the spirit of the moment, we asked an AI first.  DeepSeek’s response: "Think of standards as the grammar of healthcare data—and AI as the poet. The poet needs grammar to be understood but can also enrich the language." Self-serving, sure!  But it’s an expressive metaphor, and one that we will scrutinize in more detail in this brief.

The Role of Standards

The U.S. healthcare industry has made considerable headway in the adoption of standards over the last four decades, particularly since the influx of federal investment and attention after the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009.  It is a story of meaningful progress, even as the full promise of standards-based interoperability—true data liquidity—has always felt far off.

Today, standards remain a potent tool for aligning some of the industry’s conflicting priorities, including the following:

  • Clinical protocols balance quality standards, which ensure consistent, high-quality care delivery, against local variation, which enables innovation and learning.
  • Privacy regulations balance the needs of the complex, layered care teams that handle patient data for treatment, payment, and operations against the needs of individuals to protect their personal medical history.
  • Technical specifications balance the need for a shared understanding of what data means, how to package it for sharing, how to protect it, and how to address it to the intended recipient against the need to let data evolve with patient-care needs.

While the standards have changed and challenges remain (e.g., uneven adoption), they provide the common language and architecture allowing stakeholders—providers, insurers, public health agencies—to collaborate despite divergent goals.3

But what of their role in an AI-powered future?

The Power of AI in Healthcare and Public Health

With their public introduction in 2022, LLMs sparked an AI renaissance and have since made significant contributions to healthcare delivery and public health. Other forms of AI, however, have been contributing to healthcare delivery, public health, and healthcare data sharing since the 1960s.

🧬 Tailoring Care to Individuals

In the U.S. healthcare system, AI is increasingly driving transformation by enabling more personalized, efficient, and data-driven care. One of AI’s strengths is its ability to help providers tailor treatments to individual patients. By analyzing vast datasets—including electronic health records, genetic profiles, and lifestyle factors—AI unlocks precision medicine approaches that move beyond the historical reliance on population averages. Clinicians will be able to make better-informed decisions about treatments that are most likely to be effective for a specific patient, improving outcomes and reducing trial-and-error prescribing. AI tools can predict how a patient will respond to different medications or interventions, identify comorbidities that might complicate treatment, and even suggest alternative therapies based on similar patient profiles—just for starters.

🩺 Alleviating Provider Burden

AI is also alleviating administrative and operational burdens that contribute to provider burnout and inefficiencies in care delivery. In U.S. hospitals and clinics, generative AI is automating clinical documentation, coding diagnoses and procedures, and assisting with prior authorization requests. Companies like Heidi Health and Notable are deploying AI-powered medical scribes that integrate with electronic health record (EHR) systems such as Epic and Cerner, helping doctors spend more time with patients and less on paperwork. Radiologists are leveraging AI to draft imaging reports and automate follow-up communication, streamlining workflows while maintaining diagnostic accuracy.

🛡️ Accelerating Interoperability and National Threat Response

On a broader scale, AI is enhancing public health efforts and improving healthcare data sharing in ways that benefit population-level care. In the U.S., AI-powered analytics platforms are being used to detect emerging health trends, predict disease outbreaks, and stratify patients by risk to proactively target care. Remote patient monitoring tools that incorporate AI—such as wearable devices and conversational agents—enable real-time tracking of chronic conditions, reducing unnecessary hospitalizations and improving patient engagement. These tools also contribute to a more interoperable ecosystem by generating structured, actionable data that can be securely shared across health systems. By accelerating data integration and decision-making, AI is helping U.S. healthcare become more proactive and responsive to both individual and public health needs.

The Continuing Need for Standards

As AI continues to improve, demonstrating the ability to meet or exceed human performance on some tasks, it is important to ask whether it will continue to need standards.

The answer is yes. AI has made great strides in improving population health and healthcare delivery, but on its own it is insufficient to meet our growing healthcare needs. AI requires standards to advance.

🚧  The Scalability Trap

At the macro level, consider the challenge of scaling AI solutions across healthcare’s expanding and shifting data models. Without standard ontologies4, such as those underpinning the United States Core Data for Interoperability (USCDI), every new AI integration would require custom training for each healthcare system’s data model and constant re-validation as that model changes. This would quickly become costly, and the impact would extend beyond healthcare: given the energy required to train new models, it would also affect energy usage and global sustainability.

Said simply: each AI prompt is not free. Standards can control the AI cost curve by avoiding unnecessary model retraining.

🎯 The "Good Enough" Problem

Despite improvements in performance, many AI solutions are not ready to operate independently. Consider the use of LLMs to map clinical notes to medical codes. Researchers evaluating general-purpose LLMs’ ability to correctly assign ICD and Current Procedural Terminology (CPT) codes found that the models performed poorly, with exact-match accuracy under 50%. A later study showed that a frontier model (GPT-4, via ChatGPT) could achieve 99% accuracy for ICD-10 coding within a single specialty (nephrology), a significantly narrower task. Another study showed that coding accuracy with general-purpose models improves with a tree-search method5 that exploits the hierarchical structure of the ICD-10 ontology.

Current commercial AI-coding products overcome these limitations by keeping humans in the loop; standards reduce the number of human-in-the-loop events.  
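To make the tree-search idea concrete, here is a minimal sketch of hierarchy-guided coding. The `llm_select` helper is a hypothetical stand-in for any model call; the point is the traversal over the ontology, not the model itself.

```python
# Minimal sketch of hierarchy-guided ICD-10 coding in the spirit of the
# tree-search study cited above. `llm_select` is a hypothetical stand-in
# for an LLM call; only the traversal over the ontology is shown.
from dataclasses import dataclass, field

@dataclass
class ICDNode:
    code: str
    description: str
    children: list["ICDNode"] = field(default_factory=list)

def llm_select(note: str, candidates: list[ICDNode]) -> list[ICDNode]:
    """Hypothetical LLM call: return only the child codes whose
    descriptions are supported by the clinical note."""
    raise NotImplementedError("wire in your model of choice")

def assign_codes(note: str, root: ICDNode) -> list[str]:
    """Walk the ICD-10 tree top-down. At each level the model chooses among
    a handful of children instead of ~70,000 flat codes, which is what makes
    the hierarchy useful as structured context."""
    assigned, frontier = [], [root]
    while frontier:
        node = frontier.pop()
        if not node.children:  # leaf: a billable code
            assigned.append(node.code)
        else:
            frontier.extend(llm_select(note, node.children))
    return assigned
```

The design mirrors the studies’ finding: constraining each decision to the ontology’s branching structure keeps every prompt small, bounded, and verifiable.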

⚠️ Context Collapse

LLMs have shown promise for achieving a generalized level of intelligence, and many developments are underway to realize that promise; however, LLMs struggle with ambiguous real-world data and assignments that require structured thinking. To address these gaps, model developers have built Large Reasoning Models (LRMs), which reason step by step through a task in a way that mimics human planning. Even so, a recent study by Apple has shown that LRMs experience a collapse in their ability to plan independently above a certain level of complexity.

Standards enforce structured context that LLMs can’t reliably infer, as demonstrated by the tree-search coding approach in the prior section, and provide structure to help LRMs navigate complex tasks.

🏛️  Regulatory & Legal Risk

Without standards to create an audit trail (e.g., FHIR’s Provenance resource) and enable compliance with legal and regulatory requirements, who is liable when an LLM misinterprets something and causes harm? Observability and audit standards provide legal defensibility for AI outputs. Without them, AI becomes a black box that increases the liability of AI providers and practitioners.
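To illustrate what such an audit trail can look like, here is a minimal sketch of a FHIR Provenance resource linking an AI-generated resource to the model that produced it, the source note it read, and the clinician who verified it. All IDs and references are hypothetical placeholders; a production profile would constrain these fields further.

```python
# Sketch of a Provenance record for an AI-generated FHIR resource.
# Every ID and reference below is an illustrative placeholder.
import json
from datetime import datetime, timezone

provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "MedicationRequest/llm-output-001"}],  # what was produced
    "recorded": datetime.now(timezone.utc).isoformat(),
    "agent": [
        {   # the AI system that assembled the resource
            "type": {"text": "assembler"},
            "who": {"reference": "Device/llm-coding-assistant-v1"},
        },
        {   # the clinician who verified it (the human in the loop)
            "type": {"text": "verifier"},
            "who": {"reference": "Practitioner/reviewing-clinician"},
        },
    ],
    "entity": [{
        "role": "source",  # the unstructured note the model read
        "what": {"reference": "DocumentReference/visit-note-123"},
    }],
}

print(json.dumps(provenance, indent=2))
```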

To address these and other issues, government and regulatory bodies are developing standards for AI safety. For example, the National Institute of Standards and Technology (NIST) is leading efforts to establish new data-sharing standards (e.g., for anonymization and governance) specifically to enable AI development; these build upon existing frameworks like the Health Insurance Portability and Accountability Act (HIPAA) and Health Level Seven (HL7). Similarly, the Food and Drug Administration (FDA) requires demographic diversity in AI training datasets to prevent bias, implicitly endorsing standards like USCDI to ensure representativeness. And the Assistant Secretary for Technology Policy (ASTP)/Office of the National Coordinator for Health Information Technology (ONC)'s HTI-2 rule (2025) mandates FHIR/USCDI v4 compliance by 2028, reinforcing standards' role in AI audits.

Alignment to standards reduces the risk and liability of adverse AI usage.

🤷 The Human Consensus Bottleneck

AI is poised to dramatically accelerate the standards development lifecycle, capable of identifying the need for a new standard and producing draft specifications in a fraction of the time that manual processes require. However, this technological acceleration does not eliminate a core function of standardization: building consensus.

An aspect of the process is not technical, but human. Reconciling the divergent business interests, competing clinical philosophies, and entrenched workflows of highly motivated stakeholder groups remains a fundamentally human task that AI cannot replace.

Far from becoming obsolete, the human-led consensus process will be elevated. By automating administrative groundwork—like debating the proper format for a date of death (an example the author once watched play out for 60 minutes)—AI will free human experts to focus their limited resources on higher-value strategic challenges. This shift will let standards organizations finally address complex, thorny issues that were previously too resource-intensive.

AI will not replace human consensus building, but it will uplevel it.

How Will Standards Evolve in the AI Era?

As we have illustrated in the previous sections, the future of healthcare interoperability is a hybrid ecosystem, with AI and standards evolving together to better meet the needs of healthcare and public health.

The Next 12-to-18 Months

Over the next 12-to-18 months, expect to see the following developments:

  • LLMs will improve at converting unstructured healthcare data (e.g., provider notes, images) into structured data (e.g., FHIR resources); a sketch of this pattern follows this list.
  • Standards that provide validation frameworks will emerge and mature (e.g., HL7’s FHIRcast for real-time sync).
  • Regulators will begin to demand standards-based auditing of AI outputs (HTI-2 rule).
  • Regulatory mandates will tighten (e.g., HTI-1 (2023) requires FHIR/USCDI v3 for certification, with HTI-2 (2025) advancing to USCDI v4 by 2028; non-compliance risks penalties of up to $1M per violation).
  • AI projects will increasingly rely on standards for training data (e.g., the VA's AI initiatives use FHIR to aggregate veteran health data across sources).
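As a concrete illustration of the first bullet, here is a minimal sketch of the unstructured-to-structured pattern. The `extract_medication` helper is a hypothetical LLM call, and the RxNorm code in the comments is only an example; a real pipeline would validate the result against a US Core profile before use.

```python
# Sketch: LLM output (stubbed as `extract_medication`) shaped into a FHIR R4
# MedicationRequest. Codes and references in comments are illustrative only.

def extract_medication(note: str) -> dict:
    """Hypothetical LLM call returning {'rxnorm': ..., 'display': ..., 'sig': ...}."""
    raise NotImplementedError("wire in your model of choice")

def to_medication_request(note: str, patient_id: str) -> dict:
    med = extract_medication(note)
    return {
        "resourceType": "MedicationRequest",
        "status": "active",
        "intent": "order",
        "medicationCodeableConcept": {
            "coding": [{
                "system": "http://www.nlm.nih.gov/research/umls/rxnorm",
                "code": med["rxnorm"],      # e.g., "197361"
                "display": med["display"],  # e.g., "amlodipine 5 MG Oral Tablet"
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "dosageInstruction": [{"text": med["sig"]}],  # e.g., "one tablet daily"
    }
```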

Beyond

While looking beyond 18 months is challenging in a dynamic period such as this one, we expect to see the following developments in the future:

  • “Intelligent standards” will emerge (e.g., FHIR profiles with embedded AI rules: “If the LLM detects sepsis, require LOINC code X”); a toy sketch of such a rule follows this list.
  • New hybrid roles will emerge, blending the skills of standards developers (e.g., FHIR ontology engineers) and LLM prompt designers.
  • Decentralized AI agents will use FHIR/USCDI as a common grammar for negotiation.
  • Open-source projects blending LLMs and FHIR will emerge and mature (e.g., Microsoft's FHIR-Bot or Google's Med-PaLM 2 constrained by SNOMED).
  • Standards will evolve to address AI-specific gaps, such as:
    ◦ Dynamic data mapping: FHIR profiles will incorporate LLM-generated annotations (e.g., contextual notes on patient records) while retaining structured core data.
    ◦ Automated terminology services: AI will enhance code mapping (e.g., translating clinical notes to SNOMED codes) but will depend on underlying terminologies for validation.
    ◦ Decentralized data sharing: Agentic AI will use FHIR-based networks like TEFCA to query data across institutions, which requires uniform USCDI elements to function.
  • Regulations, standards, and AI will become more aligned and co-evolve (e.g., regulatory sandboxes will allow AI to propose new standards based on real-world data patterns).
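To ground the “intelligent standards” bullet, here is a toy sketch of an embedded rule of the form “if the AI asserts sepsis, require a supporting LOINC-coded result.” The logic is illustrative, and the codes used (SNOMED CT 91302008 for sepsis, LOINC 2524-7 for lactate) should be confirmed against a terminology service before any real use.

```python
# Toy guardrail: reject an AI-produced bundle that asserts sepsis without a
# LOINC-coded lactate observation. Codes are illustrative; confirm them
# against your terminology service.
SEPSIS_SNOMED = "91302008"   # Sepsis (disorder), illustrative
LACTATE_LOINC = "2524-7"     # Lactate in serum/plasma, illustrative

def codes_in(bundle: dict, resource_type: str) -> set[str]:
    """Collect the code.coding.code values of one resource type in a bundle."""
    return {
        coding.get("code")
        for entry in bundle.get("entry", [])
        if entry["resource"]["resourceType"] == resource_type
        for coding in entry["resource"].get("code", {}).get("coding", [])
    }

def sepsis_rule(bundle: dict) -> list[str]:
    """Return validation errors; an empty list means the rule passes."""
    if SEPSIS_SNOMED in codes_in(bundle, "Condition") and \
       LACTATE_LOINC not in codes_in(bundle, "Observation"):
        return ["AI asserted sepsis without a supporting lactate Observation"]
    return []
```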

What Should I and My Team Do to be Prepared for the Future?

While this period is dynamic, you can do the following now to prepare yourself and your team to take advantage of the emerging AI Era.  

Public Health Agency Director

Now (Next 0–3 months)

  1. Inventory & Gap Analysis: Map your agency’s key reportable conditions and surveillance data elements against USCDI v3, USCDI+ Public Health, and FHIR resource profiles. Certified health IT vendors must expose FHIR endpoints by December 2024 (HTI-1).
  2. Governance & Skills: Establish an AI-in-public-health working group, including epidemiologists, informaticians, and IT security, to interpret HL7’s AI/ML Data Lifecycle guidance (Edition 1, US Realm) for provenance and model auditability.
  3. Listening Sessions: Review your agency’s current use of AI, however nascent and unsophisticated it may be. Early adopters likely exist in your department, and their disconnected, unstructured use of chatbots to achieve effective public health action is prototypical of the workflows you will need to build.

Near Term (6–12 Months)

  1. Pilot FHIR-Based AI Workflows: Launch a proof of concept using FHIR’s Provenance resource and the draft AI Transparency on FHIR Implementation Guide (IG) (v0.1.0) to log model inputs/outputs for key surveillance algorithms.
  2. Privacy-Preserving Analytics: Integrate differential privacy techniques into data releases, using NIST SP 800-226 guidelines to quantify and control re-identification risk (see the sketch after this list).
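For item 2, here is a minimal sketch of the core primitive that NIST SP 800-226 surveys: the Laplace mechanism applied to a count query. The epsilon value is an illustrative privacy budget, not a recommendation.

```python
# Minimal sketch of the Laplace mechanism for a differentially private count.
# Epsilon here is illustrative; choosing a real budget is a policy decision.
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """A count query has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy."""
    scale = 1.0 / epsilon
    # The difference of two Exp(1) draws, scaled, is Laplace(0, scale) noise.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

# Example: release a weekly case count of 137 with epsilon = 1.0
print(round(dp_count(137), 1))
```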

Beyond (12+ Months)

  1. TEFCA Onboarding for Federated AI: Prepare to join a Qualified Health Information Network (QHIN) under TEFCA to support nationwide federated AI analytics and cross-jurisdictional outbreak detection. Use TEFCA to accelerate access to data, since AI will greatly increase your capacity to organize data for action.
  2. Standards-First AI Procurement: Require all future AI solutions to natively consume and produce USCDI v4 data elements (mandated for certification by January 1, 2028).

Health Plan CEO

Now (Next 0–3 months)

  1. Vendor Interoperability Assessment: Verify that your core claims and enrollment systems support at least USCDI v3 and FHIR®-based Application Programming Interfaces (APIs) (45 C.F.R. § 170.315(g)(10)).
  2. Data Governance Framework: Stand up a data governance council to classify member data (e.g., diagnoses, lab results) under LOINC/SNOMED mappings to ensure semantic consistency.

Near Term (6–12 Months)

  1. AI-Enabled Risk Stratification Pilot: Partner with an AI vendor to build a risk-scoring model that ingests FHIR Encounter and Observation resources, validating outputs against clinical terminologies (see the sketch after this list).
  2. Privacy Engineering: Embed NIST SP 800-226–compliant differential privacy evaluations into any member-level analytics pipeline to maintain HIPAA alignment.
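As a sketch of item 1, the snippet below pulls Observation resources over a standard FHIR search and applies a deliberately trivial scoring rule. The endpoint URL is a placeholder, the threshold rule stands in for whatever trained model the pilot actually uses, and LOINC 4548-4 (hemoglobin A1c) is the one real code.

```python
# Sketch: feed FHIR Observations into a (toy) risk score. The base URL is a
# placeholder; replace the threshold rule with your trained model.
import requests

FHIR_BASE = "https://fhir.example-payer.com/r4"  # placeholder endpoint

def a1c_values(patient_id: str) -> list[float]:
    """Fetch hemoglobin A1c results (LOINC 4548-4) for one member, oldest first."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,
            "code": "http://loinc.org|4548-4",
            "_sort": "date",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [
        entry["resource"]["valueQuantity"]["value"]
        for entry in resp.json().get("entry", [])
        if "valueQuantity" in entry["resource"]
    ]

def toy_risk_flag(patient_id: str) -> bool:
    """Illustrative rule only: flag members whose latest A1c exceeds 9%."""
    values = a1c_values(patient_id)
    return bool(values) and values[-1] > 9.0
```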

Beyond (12+ Months)

  1. AI-First Member Services: Transition care-management workflows to AI assistants that leverage FHIR Bulk Data for population health, while continuously auditing model decisions via AI Transparency on FHIR profiles.
  2. USCDI v4 Readiness: Begin schema updates by mid-2026 to align with the January 1, 2028 USCDI v4 certification requirement.

Community Health Clinic CEO

Now (Next 0–3 months)

  1. Baseline Interoperability: Ensure your EHR partner publishes a FHIR® Patient API and aligns core data (demographics, problems, meds) to USCDI v3 (certification against USCDI v1 expires January 1, 2026).
  2. Staff Training: Start basic workshops on FHIR and standard terminologies (SNOMED CT, LOINC) to build internal capacity for AI tool evaluation.

Near Term (6–12 Months)

  1. Augmented Documentation: Implement an AI-powered note-summarization tool that outputs coded FHIR Condition and MedicationRequest resources, with validation against your terminology service.
  2. Security & Privacy: Adopt differential privacy review processes per NIST SP 800-226 for any external data sharing with research partners.

Beyond (12+ Months)

  1. CDS Hooks & AI-Driven Care: Deploy AI-enhanced Clinical Decision Support via FHIR CDS Hooks, requiring all cards to reference FHIR subscriptions and Provenance metadata (a sketch of a card follows this list).
  2. Prepare for USCDI v4: By 2027, engage with your EHR vendor to expand to new USCDI v4 elements (e.g., social determinants) in advance of the 2028 mandate.
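For a sense of what item 1 involves, here is a minimal sketch of a CDS Hooks card that an AI-backed service might return. The model name, URL, and wording are illustrative, and a real service would also be listed in a CDS Hooks discovery document.

```python
# Sketch of a CDS Hooks response: one card carrying an AI-generated warning,
# with a source link a clinician can audit. All names/URLs are illustrative.

def sepsis_alert_response(lactate_mmol_l: float) -> dict:
    return {
        "cards": [{
            "summary": f"AI model flags possible sepsis (lactate {lactate_mmol_l} mmol/L)",
            "indicator": "warning",  # CDS Hooks allows info | warning | critical
            "source": {
                "label": "Example Sepsis Model v1",
                "url": "https://example.org/model-card",  # model documentation
            },
            "detail": "Review vitals and consider the sepsis order set. "
                      "Model inputs and outputs are logged via FHIR Provenance.",
        }]
    }
```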

Hospital CEO

Now (Next 0–3 months)

  1. Modernize Infrastructure: Mandate your IT team to enable secure FHIR® APIs for core clinical systems and integrate robust logging for AI model inputs/outputs.
  2. AI Governance Body: Charter an AI Oversight Committee to review all AI pilots against HL7’s Data Lifecycle guide and draft FDA AI-device guidance.

Near Term (6–12 Months)

  1. FDA-Aligned AI Device Pilots: Collaborate with device/software vendors to test AI-enabled imaging or decision-support tools under the FDA’s January 7, 2025, draft “AI-Enabled Device Software Functions” lifecycle guidance.
  2. Performance Monitoring: Establish real-time dashboards that track model drift, bias metrics, and Provenance resource footprints per HL7’s informatics guidance.

Beyond (12+ Months)

  1. Federated Learning via TEFCA: Participate as a QHIN or sub-participant in TEFCA to enable cross-institutional AI training on de-identified clinical datasets, ensuring compliance with evolving privacy standards.
  2. Standards-Based AI Ops: Fully integrate AI model registries into your enterprise interoperability layer, versioned against USCDI v4 and upcoming FHIR releases.

In summary, each leader should simultaneously shore up foundational interoperability (FHIR + USCDI), build AI governance & privacy engineering (NIST SP 800-226), pilot AI tools under emerging guidance (HL7 AI/ML Data Lifecycle; FDA draft), and engage in TEFCA–enabled federated networks. This phased approach ensures both immediate compliance and future-proof readiness for a truly hybrid AI + standards healthcare ecosystem.

Conclusion

The rise of artificial intelligence, including advanced large language models, will not render healthcare data and interoperability standards obsolete; instead, their importance will grow. As we have demonstrated in this brief, the relationship between AI and standards is symbiotic, with standards providing the foundation for AI's impressive capabilities. While AI offers transformative power in tailoring care, alleviating provider burnout, and accelerating interoperability, it faces significant challenges that standards are uniquely positioned to address. These include the "scalability trap" of training models on non-standard data, the "good enough" problem of AI accuracy, the risk of "context collapse" in complex situations, and the significant legal and regulatory liabilities of black-box AI systems.

Standards will evolve to become the critical infrastructure for trustworthy AI, moving from static rules to dynamic, AI-integrated frameworks that enable advanced capabilities like real-time data curation and agentic AI mediators. This evolution is already underway, with regulatory mandates for FHIR and USCDI pushing the industry toward a hybrid ecosystem. To navigate this new era, healthcare leaders must not wait. The imperative is to act now: shore up foundational interoperability with standards such as FHIR and USCDI, establish robust AI governance and privacy engineering, strategically pilot AI tools under emerging guidance, and engage with TEFCA-enabled federated networks to prepare for a future where standards and AI collaboratively drive healthcare innovation.

Appendix A. Acronyms

| Acronym | Full Term |
| --- | --- |
| AI | Artificial Intelligence |
| API | Application Programming Interface |
| ASTP | Assistant Secretary for Technology Policy |
| C-CDA | Consolidated Clinical Document Architecture |
| CPT | Current Procedural Terminology |
| EHR | Electronic Health Record |
| FDA | Food and Drug Administration |
| FHIR | Fast Healthcare Interoperability Resources |
| FHIR IG | Fast Healthcare Interoperability Resources Implementation Guide |
| HIPAA | Health Insurance Portability and Accountability Act |
| HITECH | Health Information Technology for Economic and Clinical Health |
| HL7 | Health Level Seven |
| HTI | Health Data, Technology, and Interoperability |
| ICD | International Classification of Diseases |
| LLM | Large Language Model |
| LOINC | Logical Observation Identifiers Names and Codes |
| LRM | Large Reasoning Model |
| NIST | National Institute of Standards and Technology |
| ONC | Office of the National Coordinator for Health Information Technology |
| PHIN | Public Health Information Network |
| QHIN | Qualified Health Information Network |
| SNOMED | Systematized Nomenclature of Medicine |
| USCDI | United States Core Data for Interoperability |

Appendix B. A Brief Overview of Healthcare-Related Standards in the United States

📈 Elevating Healthcare Delivery: Quality and Safety

Standards transformed care delivery by establishing uniform clinical protocols and accountability. Early efforts like the Hospital Standardization Program (1918) mandated surgical checklists and record-keeping, reducing errors. Later, the Joint Commission (1951) codified safety requirements (e.g., infection control), while diagnostic coding (ICD) and procedure coding (CPT) enabled precise treatment documentation. Medicare (1965) forced administrative consistency through uniform billing standards (eventually codified as the UB-92), linking reimbursement to structured data. These frameworks shifted healthcare from fragmented practices to evidence-based systems, where adherence to standards became synonymous with quality.

Advancing Public Health: Surveillance and Response

Standards enabled population-level insights by harmonizing disease tracking and prevention. The adoption of ICD (beginning in 1893) created a universal language for mortality/morbidity data, revealing epidemics like influenza hotspots. Post-1965, Medicare claims data became a de facto public health tool, mapping chronic disease prevalence. The CDC’s Essential Services framework (1999) standardized core functions (e.g., outbreak investigation), while Public Health Information Network (PHIN) standards (2000s) ensured labs could electronically report notifiable diseases. Crucially, standards allowed disparate health departments to aggregate data during crises, from HIV to COVID-19, transforming raw data into actionable intelligence.

Enabling Information Sharing: Interoperability and Trust

Standards broke down data silos by creating technical and legal guardrails for exchange. HL7 (1987) established the first widely adopted clinical messaging format, allowing labs to send results to hospitals. HIPAA (1996) was pivotal: its Privacy/Security Rules standardized patient data protection (e.g., encryption), while its Transaction Standards (ANSI X12) unified billing across insurers. The Consolidated Clinical Document Architecture (C-CDA) document standard (2012) later allowed summaries to follow patients between providers. Critically, these standards balanced access with confidentiality, ensuring data could flow securely—whether for treatment or research.

The Regulatory Catalyst

Regulation amplified standards’ impact by mandating adoption. HIPAA compelled compliance with privacy/transaction rules. HITECH (2009) used financial incentives to push EHR certification standards (e.g., C-CDA for care summaries). The 21st Century Cures Act (2016) then enforced FHIR APIs to combat information blocking. Each regulation turned voluntary standards into nationwide requirements, accelerating interoperability and embedding standards into healthcare’s legal fabric.

Appendix C. Examples – AI’s Transformational Power in Healthcare

The following examples, while not comprehensive, illustrate the potential for AI to transform healthcare.6

⚕️Clinical Knowledge Synthesis and Decision Support

AI systems now process vast medical literature to support evidence-based decisions. For example, tools such as ChatRWD integrate LLMs with medical databases to provide clinicians with sourced answers. In trials, it delivered relevant responses to 58% of complex medical queries, compared to <10% for standard LLMs like ChatGPT.

🌍 Public Health Surveillance & Epidemic Intelligence

AI systems support early outbreak detection by processing diverse data sources to recognize critical signals, as illustrated by BlueDot's pandemic alert system, which flagged the COVID-19 outbreak early by analyzing airline data, climate records, and news reports. During the mpox (formerly monkeypox) outbreak, similar models predicted global spread patterns weeks in advance.

🗣️ Patient Communication and Education

AI-powered tools help democratize health information access. Huma's Remote Monitoring Platform is one example. It combines wearable sensors with LLMs to provide real-time patient feedback, helping to reduce hospital readmissions by 30% and consultation time by 40%.

📊 Administrative Efficiency and Workflow Coordination

AI helps reduce documentation burden and improve resource allocation. For example, Microsoft's AI scribe saves an estimated 66 minutes a day per provider by automating documentation. Similarly, Oracle Cerner integrated AI algorithms into electronic health records to predict patient admission risks, reducing administrative errors and improving bed allocation efficiency.

🔬 Diagnostic Accuracy and Imaging Analysis

AI advances are enhancing image interpretation and early disease detection. In one example, an AI model detected 94% of lung nodules in CT scans (vs. radiologists' 65%) and identified 64% of epilepsy lesions missed by humans. In another, a model analyzed 500,000+ health records to flag early Alzheimer's or kidney disease risks years before symptoms manifest.

Appendix D. Sources

We used the following sources to develop this brief:

  1. Institute of Medicine (US) Committee for the Study of the Future of Public Health. (1988). A history of the public health system. In The future of public health. National Academies Press. NCBI Bookshelf. Retrieved June 9, 2025, from https://www.ncbi.nlm.nih.gov/books/NBK218224/
  2. Maternal and Child Health Bureau. (n.d.). MCH timeline text only. Health Resources and Services Administration (HRSA). Retrieved June 9, 2025, from https://mchb.hrsa.gov/about/history/timeline-text-only
  3. Centers for Disease Control and Prevention. (n.d.). Public health milestones through the years. Retrieved June 9, 2025, from https://www.cdc.gov/museum/timeline/index.html
  4. Kaul, V., Enslin, S., & Gross, S. A. (2020). History of artificial intelligence in medicine. Gastrointestinal Endoscopy, 92(4), 807–812. https://doi.org/10.1016/j.gie.2020.06.040
  5. Keragon Team. (2024, June 9). When was AI first used in healthcare? The history of AI in healthcare. Keragon. Retrieved June 9, 2025, from https://www.keragon.com/blog/history-of-ai-in-healthcare
  6. World Economic Forum. (2025, March). AI transforming global health. Retrieved June 9, 2025, from https://www.weforum.org/stories/2025/03/ai-transforming-global-health/
  7. Chumachenko, D., & Yakovlev, S. (2025). Artificial intelligence applications in public health. Computation, 13(2), 53. https://doi.org/10.3390/computation13020053
  8. Microsoft Corporation. (2025, March 3). Microsoft Dragon Copilot provides the healthcare industry’s first unified voice AI assistant that enables clinicians to streamline clinical documentation, surface information and automate tasks. Microsoft News. Retrieved June 9, 2025.
  9. Ardila, D., Kiraly, A. P., Bharadwaj, S., et al. (2019). End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nature Medicine, 25(6), 954–961. https://doi.org/10.1038/s41591-019-0447-x
  10. Soroush, A., Glicksberg, B. S., et al. (2024, April 19). Large language models are poor medical coders — benchmarking of medical code querying. NEJM AI, 1(5). https://doi.org/10.1056/AIdbp2300040
  11. Abdelgadir, Y., Thongprayoon, C., Miao, J., Suppadungsuk, S., Pham, J. H., Mao, M. A., Craici, I. M., & Cheungpasitporn, W. (2024). AI integration in nephrology: Evaluating ChatGPT for accurate ICD-10 documentation and coding. Frontiers in Artificial Intelligence, 7, Article 1457586. https://doi.org/10.3389/frai.2024.1457586
  12. Boyle, J. S., Kascenas, A., Lok, P., Liakata, M., & O’Neil, A. Q. (2023, November 13). Automated clinical coding using off-the-shelf large language models (arXiv:2310.06552). arXiv. Retrieved June 9, 2025, from https://arxiv.org/abs/2310.06552
  13. Shojaee, P., Mirzadeh, I., Alizadeh, K., Horton, M., Bengio, S., & Farajtabar, M. (2025, June). The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity. Apple Machine Learning Research. Retrieved June 9, 2025, from https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
  14. Health Level Seven International. (n.d.). AI Focus Team Project 11 (P11) – AI Data Lifecycle. HL7 Confluence. Retrieved June 9, 2025, from https://confluence.hl7.org/spaces/EHR/pages/154995452/AI+Focus+Team+Project+11+P11+-+AI+Data+Lifecycle
  15. Rigas, E. S., Kiourtis, A., Bamidis, P., et al. (2024, May 1). Semantic interoperability for an AI-based applications platform for smart hospitals using HL7 FHIR. ResearchGate. Retrieved June 9, 2025, from https://www.researchgate.net/publication/380649832_Semantic_interoperability_for_an_AI-based_applications_platform_for_smart_hospitals_using_HL7_FHIR
  16. Author unknown. (2025). Evaluating AI adoption in healthcare: Insights from the information governance professionals in the United Kingdom. International Journal of Medical Informatics. Retrieved June 9, 2025, from https://www.sciencedirect.com/science/article/pii/S1386505625001261
  17. Daniels, E. (Ed.). (2023, February). AI’s impact on healthcare data standards. CodeX (Medium). Retrieved June 9, 2025, from https://medium.com/codex/ais-impact-on-healthcare-data-standards-36b6347d70bb
  18. Roberts, L., Smith, J., & Wang, H. (2023). The value of standards for health datasets in artificial intelligence: A systematic review and stakeholder survey [Review]. Retrieved June 9, 2025, from https://pmc.ncbi.nlm.nih.gov/articles/PMC10667100/
  19. Wu, D. (2024, June 24). Establishing data-sharing standards for AI models in healthcare. Federation of American Scientists. Retrieved June 9, 2025, from https://fas.org/publication/data-sharing-standards-healthcare/
  20. Li, Q., Liu, H., Gu, C., Chen, D., Wang, M., Gao, F., & Gu, J. (2025, February 28). Merging clinical knowledge into large language models for medical research and applications: A survey (arXiv:2502.20988v1). arXiv. Retrieved June 9, 2025, from https://arxiv.org/html/2502.20988v1#S5
  21. Authors unknown. (2024). Opportunities and challenges for large language models in primary health care. Retrieved June 9, 2025, from https://pmc.ncbi.nlm.nih.gov/articles/PMC11960148/
  22. Li, Y.-H., Li, Y.-L., Wei, M.-Y., & Li, G.-Y. (2024, August 16). Innovation and challenges of artificial intelligence technology in personalized healthcare. Scientific Reports, 14(1), 18994. https://doi.org/10.1038/s41598-024-70073-7
  23. Bharel, M., Auerbach, J., & Nguyen, V. (2024, June). Transforming public health practice with generative artificial intelligence. Health Affairs, 43(6), 776–782. https://doi.org/10.1377/hlthaff.2024.00050
  24. Van Bakel, L. (2025, March). AI is rewriting the rules of healthcare: A 2024 investor’s playbook. Included VC (Medium). Retrieved June 9, 2025, from https://medium.com/included-vc/ai-is-rewriting-the-rules-of-healthcare-a-2024-investors-playbook-649abd6a0a96
  25. Wang, X., Tlili, A., Huang, R., et al. (2023, January 10). Application of artificial intelligence to public health education. Frontiers in Public Health, 10, 1087174. https://doi.org/10.3389/fpubh.2022.1087174
  26. Hattab, G., Irrgang, C., Körber, N., Kühnert, D., & Ladewig, K. (2025, February). The way forward to embrace artificial intelligence in public health. American Journal of Public Health, 115(2), 123–128. https://doi.org/10.2105/AJPH.2024.307888
  27. Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021, July). Artificial intelligence in healthcare: Transforming the practice of medicine. Future Healthcare Journal, 8(2), e188–e194. https://doi.org/10.7861/fhj.2021-0095
