AI may improve healthcare—but when it starts making decisions about who gets pain relief, we need to ask harder questions.
Artificial intelligence is making decisions about your healthcare. From predictive analytics to enhanced diagnostic accuracy, AI tools offer incredible promise. But when it comes to prescription drug monitoring, especially for controlled substances, the stakes are high and the implications far-reaching.
In this article, we’re exploring not just what AI can do, but what it should do.
AI-powered decision support tools are already influencing who qualifies for prescriptions. These prescription drug management tools analyze vast datasets to flag potential misuse, identify prescribing patterns, and assess addiction risk. On paper, this can reduce inappropriate prescribing and improve public health outcomes.
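To make the mechanics concrete, here is a deliberately simplified sketch of the kind of rule a flagging tool might apply. The thresholds, field names, and logic below are illustrative assumptions, not a description of how any actual product works.

```python
from dataclasses import dataclass

@dataclass
class PrescriptionRecord:
    patient_id: str
    prescriber_id: str
    pharmacy_id: str
    daily_mme: float  # morphine milligram equivalents per day

def flag_high_risk(records: list[PrescriptionRecord],
                   mme_threshold: float = 90.0,
                   max_providers: int = 3) -> bool:
    """Return True if a patient's recent prescriptions exceed these
    simplified, illustrative thresholds. Real tools weigh many more signals."""
    prescribers = {r.prescriber_id for r in records}
    pharmacies = {r.pharmacy_id for r in records}
    peak_mme = max((r.daily_mme for r in records), default=0.0)
    return (peak_mme >= mme_threshold
            or len(prescribers) >= max_providers
            or len(pharmacies) >= max_providers)
```

Even this toy rule shows how quickly a patient's history collapses into a yes-or-no flag, which is exactly why what follows matters.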
But in practice, the issue is much more complex.
Patients may be flagged as "high-risk" based on limited or biased historical data. This can trigger recommendations to reduce or deny pain medication or surgical procedures—sometimes regardless of a physician’s clinical judgment. The result? Care that feels less personalized.
Importantly, recent CDC research emphasizes that risk scores should be used as conversation starters, not conclusions. AI tools should open the door to dialogue between patients and providers—not replace physician decision-making.
Physicians remain the most qualified to make nuanced treatment decisions, especially in cases involving pain management, complicated patient histories, or co-occurring conditions. AI should empower providers by surfacing complete, accurate, and interpretable data—not replace their expertise.
Unfortunately, some of the most widely adopted tools don’t always operate that way.
Some providers report feeling pressure to comply with AI-generated risk scores out of fear of legal liability. What happens if they override the recommendation? Could they face investigation, litigation, or risk losing their medical license if a flagged patient later develops substance use issues? These are real concerns, especially given the lack of regulation and transparency around how risk scores are calculated.
Patients misclassified as high-risk can face serious consequences, including delayed treatment and outright denial of needed pain medication or procedures.

Worse, these delays and denials can drive patients toward illicit substances when they feel there's no legal or accessible path to pain relief. This, in turn, increases overdose risk and undermines the very public health goals these tools are meant to serve.
Bias in AI models is a serious concern, in part because the inputs used to generate a score are often opaque to the end user. For instance, would a prescriber feel more or less confident in a risk score if she knew that a patient's credit score was one of the inputs? What if an input was the fact that a patient lived in a high-crime ZIP code? Using such data, particularly when it's not transparent to the ultimate clinical decision maker, risks deepening existing inequities. Developers must actively build safeguards and monitor for fairness, as the CDC recommends.
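To illustrate what ongoing fairness monitoring can look like in practice, here is a minimal sketch that compares flag rates across patient groups. The data structure, group labels, and the 2x alert threshold are assumptions for illustration; real audits rely on more rigorous statistical methods.

```python
from collections import defaultdict

def flag_rate_by_group(patients: list[dict]) -> dict[str, float]:
    """Share of patients flagged as high-risk within each group.
    Large gaps between groups are a signal to investigate the model's
    inputs, not proof of bias on their own."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for p in patients:
        group = p["group"]  # e.g., a demographic or geographic category
        totals[group] += 1
        if p["flagged"]:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

# Illustrative audit: warn if one group's flag rate is far higher than
# another's (the 2x ratio here is an arbitrary example threshold).
rates = flag_rate_by_group([
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": False},
    {"group": "B", "flagged": False},
])
if max(rates.values()) > 2 * min(rates.values()):
    print("Flag rates diverge across groups; review model inputs.")
```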
Prescription Drug Monitoring Programs (PDMPs) have long been a cornerstone in the nation’s opioid response. By tracking controlled substance prescriptions across states, PDMPs help identify patterns of misuse, prevent doctor shopping, and support safer prescribing.
But as highlighted in our recent blog post, PDMPs are facing increasing scrutiny: Are they delivering value, or adding administrative burden without clear impact?
This is where AI and PDMPs intersect.
AI-powered tools rely heavily on the quality and completeness of PDMP data to make accurate risk assessments. Without high-quality data from PDMPs, even the most advanced AI systems can deliver skewed results—amplifying bias, misclassifying patients, and driving unintended consequences like inappropriate care denials.
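As a rough illustration of that dependence, the sketch below checks data completeness before trusting a risk score. The required fields and the 95 percent threshold are assumptions for the example, not an actual PDMP schema or a feature of any specific product.

```python
# Hypothetical required fields; real PDMP submissions follow far more
# detailed standards (e.g., the ASAP reporting format).
REQUIRED_FIELDS = ["patient_id", "prescriber_id", "drug_code",
                   "quantity", "days_supply", "fill_date"]

def completeness(records: list[dict]) -> float:
    """Fraction of records with every required field present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        all(r.get(field) not in (None, "") for field in REQUIRED_FIELDS)
        for r in records
    )
    return complete / len(records)

def score_with_guardrail(records: list[dict], score_fn, min_completeness=0.95):
    """Compute a risk score only when the underlying data is complete
    enough; otherwise return None so the gap is surfaced to the clinician
    instead of being hidden inside a misleading score."""
    if completeness(records) < min_completeness:
        return None  # "insufficient data" is not the same as "low risk"
    return score_fn(records)
```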
At Leap Orbit, we believe AI has a rightful place in the future of prescribing. We believe it so strongly that we built a product on it. But AI must be used responsibly to earn the trust of patients and prescribers. That responsibility includes engaging those affected by AI tools, which builds trust and improves outcomes, and committing to ongoing evaluation and monitoring so tools remain fair, accurate, and effective over time.
When AI is used to clearly present the right data, at the right time, in a way that’s clinically relevant, everyone benefits: patients, providers, and the healthcare system at large.
Leap Orbit’s RxGov platform is designed with these principles in mind. We support ethical prescribing with tools that increase transparency, provide holistic patient insights, and respect clinical judgment. Because real transformation doesn’t come from black-box risk scores. It comes from trust, data integrity, and smarter systems that put people first.