Artificial Intelligence has the potential to improve every aspect of health care. AI applications can accelerate scientific discovery, help physicians and nurses make better decisions, improve medical advice for patients, and reduce the sometimes-crushing burden of paperwork. But history suggests that the U.S. health sector struggles to put innovations like AI into practice, due in part to what economists call “switchover disruptions,” the costly phase-in period for new technologies that can upend profitable operations. To reduce switchover disruptions for AI and accelerate adoption, health care innovators must build trust in AI with three critical constituencies: providers, patients, and the public.
There are three things that innovators can do to build the requisite trust:
1. Change the narrative about the purpose of AI.
Instead of designing new technologies to substitute for human decision-making, innovators should aim to build tools that complement and augment the expertise of providers. For example, AI applications have the potential to support the patient-provider relationship by relieving providers of rote tasks — such as typing information into an electronic health record (EHR) — and enabling them to spend more of their limited time and attention on their patients and on higher-order tasks such as problem-solving and communication.
Some providers are even experimenting with AI as a tool to help them communicate more compassionately with patients. The purpose of these tools should be to enable providers to do more for more patients in more places than would be possible without them.
2. Pay careful attention to how AI applications are implemented.
Prior to implementation, AI applications — like all new diagnostic and therapeutic innovations — should demonstrably improve outcomes and provide better experiences for patients and providers. Payers, health systems, and providers need to come to a common understanding about when it is appropriate to use an AI application, how it should be used, and how potential side effects will be identified and mitigated.
For example, AI-driven online symptom checkers, predictive models, and diagnostic programs must be carefully curated by physicians to reduce the risks of hallucinations (invented facts) or diagnostic bias based on race or other characteristics. Payers and health systems should also rely on input from clinicians to adapt AI applications to clinical and administrative workflows.
3. Assure patients and the public that AI applications serve their needs without threatening their rights.
To address these concerns, innovators should look to emerging frameworks such as the European Commission’s Ethics Guidelines for Trustworthy AI or the Biden Administration’s Blueprint for an AI Bill of Rights. These frameworks offer design principles for trustworthy AI: AI systems should be safe and effective. AI algorithms should be unbiased and promote equitable health care outcomes. Data privacy should be maintained. Patients should be informed when an automated system is being used, and they should be able to opt out of automated systems where appropriate.
The contrasting examples of two earlier transformative technologies — EHRs and minimally invasive gallbladder surgery — illustrate why it is necessary, and urgent, to reduce switchover disruptions for AI in health care.
In 1991, a report by the Institute of Medicine (IOM) of the National Academy of Sciences identified EHRs (then known as computer-based patient records) as an essential technology for health care. But by 2007 only 4% of physicians and less than 2% of hospitals reported having a fully functional EHR. This was true at a time when most other sectors of the economy were rapidly digitizing and despite studies showing that EHRs were associated with lower costs and improved quality of care.
It wasn’t until the Obama administration included billions of dollars of subsidies for EHRs in its stimulus program during the Great Recession in 2009 — nearly two decades after the IOM report — that EHRs began to take off.
In contrast, minimally invasive surgical removal of the gallbladder — a method that transformed one of the most common surgical procedures — took just a few years from its first use in the United States in 1988 to nearly complete adoption.
Switchover disruptions were high for EHRs and low for the new surgical procedure. Why?
The introduction of EHRs required large initial expenditures on software and the purchase of computers for every clinical setting. Even more costly was training employees on the new system and the drop in productivity as they climbed the learning curve. Additional cost and disruption came from the redesign of clinical and administrative workflows needed to capture information for the EHR and to put that information to meaningful use.
The switchover to EHRs also involved hidden costs stemming from challenges to existing power relationships and professional identities. Many physicians saw EHRs as evidence of their increasing subordination to the demands of administrators and payers, particularly as the portion of their time devoted to feeding information into the system increased. Apart from the system modules that expedited billing and receiving, most physicians were not clamoring for EHRs and did not see them as solving a pressing problem. Many liked and trusted their paper records, and EHRs seem to have worsened the problem of physician burnout and early retirement.
Minimally invasive gallbladder surgery was also a big change from previous technology and required significant investment in costly new tools, training, and processes. But surgeons and hospitals were already in the business of removing gallbladders, and the changes were primarily limited to the surgical suite.
Changing to a new and better surgical technique did not challenge existing power relationships and professional identities. Many surgeons wanted to learn the new methods. In addition, the idea of minimally invasive surgery was attractive to payers, patients, and the public at large, which can greatly ease the transition to a new technology.
Some AI applications come with relatively low switchover disruptions. For example, AI can be used to analyze medical records to predict which patients are at elevated risk for falls in the hospital. High-risk patients can then be flagged in the EHR, so that anyone encountering the patient can take steps to reduce the risk of a fall. This application is easily incorporated into existing workflows and can even eliminate steps, such as a daily huddle for care teams to evaluate fall risks.
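To make this kind of low-disruption workflow concrete, here is a minimal sketch in Python. The features, weights, and threshold are illustrative placeholders only — a real fall-risk model would be trained and validated on clinical data — but the structure shows why the application slots easily into existing systems: score each patient from chart data, then flag those above a threshold in the EHR.

```python
# Minimal sketch of a fall-risk flagging workflow.
# All features, weights, and the threshold are hypothetical placeholders,
# not a validated clinical model.

def fall_risk_score(patient):
    """Toy additive risk score computed from a few chart-derived features."""
    score = 0.0
    score += 2.0 if patient["age"] >= 75 else 0.0
    score += 1.5 if patient["prior_falls"] > 0 else 0.0
    score += 1.0 if patient["sedative_meds"] else 0.0
    score += 1.0 if patient["mobility_aid"] else 0.0
    return score

def flag_high_risk(patients, threshold=2.5):
    """Return IDs of patients to flag in the EHR as elevated fall risk."""
    return [p["id"] for p in patients if fall_risk_score(p) >= threshold]

patients = [
    {"id": "A", "age": 82, "prior_falls": 1, "sedative_meds": False, "mobility_aid": True},
    {"id": "B", "age": 60, "prior_falls": 0, "sedative_meds": True,  "mobility_aid": False},
]
print(flag_high_risk(patients))  # patient A scores 4.5, patient B scores 1.0
```

Because the output is just a flag attached to the patient record, no clinical workflow has to change for the tool to be useful — which is precisely what keeps the switchover disruption low.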
However, much of the current excitement about AI comes from large language models (LLMs), like ChatGPT, that have the potential to automate decision-making about diagnoses and treatments.
These AI applications are likely to come with large switchover disruptions, threatening to devalue the hard-won human expertise — and even eliminate the jobs — of doctors, nurses, and other providers. Fear of this kind of automation creates resistance to change. The resistance is amplified by the tendency of LLMs to “hallucinate” (i.e., invent facts). Checking for hallucinations adds yet another task to providers’ already heavy workloads.
In addition, recent surveys reveal that most Americans are uncomfortable with the prospect of AI being used in their own health care. Most doubt that AI will improve health outcomes and worry that it may worsen the patient-provider relationship. These concerns of patients and the public are another potential source of resistance.
Fortunately, AI is a new technology and attitudes are not yet written in stone. There is time to act. However, high switchover disruptions reduce the incentives for firms to adopt innovations, particularly in markets — like those for physician and hospital services and health insurance — that are highly concentrated and protected from external competition by regulatory and other barriers. Without action, the health sector may delay or forego valuable AI applications much as it did with EHRs.
The United States is a world leader in the development of AI. But technology isn’t destiny. People choose how and when to put technology to use. It would be sadly ironic if the U.S. health sector lagged in reaping the benefits of this transformative new technology. The key is to design and implement AI applications so that they are worthy of our trust.