If there’s one technocultural trend shaping healthcare in 2024, it’s AI’s transformation from a distant, distrusted concept into an emerging field that could tangibly help solve existential challenges across the industry.

Furthermore, interest in adopting AI solutions is expected to grow: one survey found that a third of healthcare CIOs and IT executives plan to increase their AI investments over the next three years.

However, a disconnect persists between executives driven to jump head-first into AI implementation and patients and care teams hesitant to fully embrace everything healthcare AI has to offer. That hesitation is completely logical: individuals have sincere questions about how novel AI solutions will play out in reality, and in an industry that requires meticulous attention to safety, there is little room, and little tolerance, for error.

With these concerns in mind, it’s ultimately on leadership at healthcare organizations to instill confidence in AI among stakeholders across the enterprise. Let’s explore some often-cited sources of apprehension and how to ease fears among patients and clinicians.

Common AI-related concerns among patients and providers

Distrust in healthcare AI — both from patients and providers — stems from a complex interplay of factors, ranging from concerns about privacy and data security to uncertainties about reliability and ethical considerations.

  • Privacy and data security concerns: Patients are understandably cautious about sharing their medical information, especially with technologies that they might not fully understand. Results from one survey released in late 2023 revealed 63% of patients feared their personal health information could be compromised with increased AI use. Another study by the American Medical Association found 87% of physicians cite privacy as their main concern regarding AI adoption. There's apprehension regarding where data might end up, how it will be leveraged, and whether it will be adequately protected from breaches or unauthorized access.
  • Lack of transparency: AI algorithms often function as "black boxes" — meaning their inner workings are not readily understandable by most users. This lack of transparency can be alarming for both patients and providers, who may question how decisions are being made and whether biases or faulty information are being inadvertently encoded into algorithms. 
  • Reliability and accuracy: While AI has shown tremendous potential in certain healthcare applications, there are still concerns about its reliability and accuracy, particularly in critical decision-making processes. Some patients and providers worry about the consequences of relying too heavily on AI-driven diagnosis or treatment recommendations, especially if errors or misinterpretations occur. 
  • Fear of job displacement: One qualitative analysis found that healthcare workers across organizations may fear AI technologies will eventually replace human workers, leading to job loss or diminished roles. This anxiety can lead to resistance and skepticism towards AI initiatives, as providers may perceive them as threats to their livelihoods and professional autonomy.
  • Limited understanding and education: Many patients and providers might not fully grasp the capabilities and limitations of AI in healthcare. Misconceptions and misinformation abound, further fueling distrust and skepticism. Addressing this lack of understanding through education and awareness initiatives is essential for fostering trust in AI-driven healthcare solutions.

Making inroads to gain trust during healthcare AI adoption

As discussed, patients and providers have multiple rational reasons to hesitate before fully embracing AI technology in a healthcare context. But with the opportunities AI offers largely outweighing its potential pitfalls, it’s essential for leaders to have core strategies for choosing the right solutions and generating buy-in from stakeholders.

Invest in AI technology responsibly

Although AI is often discussed as a unified field of technology, it’s essential to understand that this area of innovation is anything but a monolith, in both purpose and function.

For instance, conversational AI in healthcare can be used for everything from automated note-taking to patient engagement. Functionally, this technology can be generative (using deep learning to create content from the data on which it’s trained), rule-based (selecting responses from a preset database, as with chatbots that follow a limited set of response paths), or somewhere between the two.
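To make that functional distinction concrete, here is a minimal, illustrative sketch of the two approaches. It is not Memora’s implementation or any vendor’s real API; names like `generative_model` are hypothetical placeholders.

```python
# Illustrative contrast: a rule-based responder can only return preset,
# clinician-approved content, while a generative one produces open-ended
# text that requires oversight.

APPROVED_RESPONSES = {
    "refill": "To request a refill, reply REFILL or call your care team.",
    "appointment": "Your care team will follow up to schedule a visit.",
}

def rule_based_reply(message: str) -> str:
    """Match the message against approved topics; fall back safely."""
    for keyword, response in APPROVED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return "Let me connect you with your care team."  # safe default

def generative_reply(message: str, generative_model) -> str:
    """A generative model (a hypothetical callable here) can discuss far
    more topics, but its output is open-ended rather than preset."""
    return generative_model(f"Reply to this patient message: {message}")
```

Even in this toy example, the tradeoff is visible: the rule-based path can never say anything outside its approved table, while the generative path’s output depends entirely on the model and the prompt.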

Whatever the use, the foundation of cultivating trust in your organization’s AI solutions lies in choosing platforms and products built with safety in mind. Weigh the advantages and risks of adopting specific applications. For example, a generative AI patient engagement tool might sound streamlined in theory. In practice, however, it might engage patient populations with unsuitable or problematic dialogue if left without consistent oversight to account for issues commonly associated with generative technology, such as hallucinations (incorrect or misleading outputs) and biases.

Although retrieval-based alternatives might not be able to converse on as wide a range of topics as generative solutions, they ensure patients only receive responses fetched from preset information, as is the case with Memora’s platform, which only sends clinically relevant messages. The more thoroughly your organization considers not just the intended outcomes of using AI, but also how solutions have been developed to reinforce secure user experiences, the more likely you are to choose a platform that satisfies your patient base and helps your care teams work with confidence that safety comes first.

Learn how Memora's platform was responsibly built to perform within client-validated pathways and engage patients with clinician-curated content.  

Understand where the data comes from

Data is the heartbeat of AI. “Garbage in, garbage out” (bad inputs yield bad outputs, just as good inputs yield good ones) is a core principle of developing accurate, reliable, and effective AI platforms. With 40% of organizations across industries experiencing inaccuracies or hallucinations in AI outputs due to data issues, at a cost of millions of dollars, the bottom-line ramifications of implementing technologies built on faulty or compromised data are clear.

Such challenges have also affected perceptions of AI among patients and providers. In one Medscape survey, 88% of physicians expressed concern about ChatGPT and other generative AI tools giving individuals inaccurate health information. And a Salesforce survey found 63% of patients fear AI could compromise data and spread inaccurate information.

Thus, any effective AI selection process needs to include verifying where a solution’s core information originates. How can healthcare leaders do this? One way to gauge data quality is to ask vendors about their data validation practices. Ideally, AI vendors in the healthcare space should have processes in place to ensure quality information informs their algorithms, to maintain care-related content, and to update data in tandem with the evolving industry landscape.
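As a thought exercise, here is a minimal, hypothetical sketch of the kind of validation gate a vendor might run before content reaches patients. The fields, checks, and thresholds are illustrative assumptions, not a description of any specific vendor’s pipeline.

```python
from datetime import date, timedelta

# Hypothetical checks: content must cite a source, carry clinician
# sign-off, and have been reviewed recently enough to stay current.
MAX_REVIEW_AGE = timedelta(days=365)

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    problems = []
    if not entry.get("source"):
        problems.append("missing provenance: no clinical source cited")
    if not entry.get("clinician_reviewed"):
        problems.append("not signed off by a clinician")
    last_reviewed = entry.get("last_reviewed")
    if last_reviewed is None or date.today() - last_reviewed > MAX_REVIEW_AGE:
        problems.append("stale: review date missing or over a year old")
    return problems

entry = {
    "text": "Take this medication with food.",
    "source": "client care pathway v3",
    "clinician_reviewed": True,
    "last_reviewed": date(2024, 1, 15),
}
print(validate_entry(entry))  # prints any failed checks (empty if none)
```

Asking a vendor to walk through checks like these, who defines them, how often they run, and what happens to content that fails, is a practical way to turn “data validation” from a buzzword into evidence.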

At Memora Health, our team of clinicians consistently curates a robust Care Program content database based on evolving clinical practice and our clients’ input, and has been building out this information for years. This not only ensures patients receive care guidance custom-built for their care journeys, but also holds our technology to predetermined parameters for the patient-facing resources it surfaces.

Clearly educate your workforce on the benefits and limits of your AI program

In healthcare — and many other industries, for that matter — workers across organizational levels are generally worried about job displacement with increased AI adoption. 

It’s important to note that current healthcare AI solutions are mostly purpose-built either to intelligently automate cumbersome tasks that staff have classically handled as adjacent responsibilities or to simplify clunky legacy workflows.

By and large, implementing and leveraging AI technology will still involve robust human intervention, meaning the importance of maintaining a sufficient workforce to actually use AI-driven platforms won’t diminish any time soon. In fact, one study found that a generative AI solution for pre-drafting clinician messages actually required more read time and oversight from healthcare workers than writing responses manually, suggesting this type of intelligent technology still needs significant care team input to be effective.

How do healthcare leaders get ahead of employees feeling as though their jobs are on the line with the advent of new and innovative technologies? Level with them about the “why” behind bringing in intelligent solutions: point out specifically which workflows will be simplified, how intelligently automating routine tasks will make their lives easier, and, importantly, the hours they’ll save for more important responsibilities. And how do leaders set up both their teams and their AI initiatives for successful adoption and appropriate use? Educate providers on what AI can and can’t do, how you are overseeing its responsible use and monitoring for safety, and where to go with questions.

Memora’s conversational AI platform is specifically designed to reduce the burden of manual and routine tasks on healthcare workers so that they have more time in their day to work at the top of their license. As a result, care teams can experience more autonomy and fulfillment on the job — hopefully leading to less burnout and employee churn.

Discover how one oncology provider used Memora’s conversational AI to support oral therapy adherence.

Ask if the AI developer has a responsibility framework

As developers navigate an evolving AI ethics landscape, they’re increasingly establishing concrete frameworks for responsible technology creation. When vetting vendors, healthcare decision-makers should ask upfront what principles guide their AI innovation, and whether those principles are generic or tuned specifically for healthcare.

At Memora Health, we have identified the following tenets as core principles to responsibly develop and design our AI-driven intelligent care enablement technology: 

  • Designed for safety & reliability. Healthcare AI systems are designed to benefit all stakeholders, including clinicians and patients; are predisposed toward safety by design; and are validated to perform reliably and as intended.
  • Human accountability & clinical oversight. Healthcare AI systems implement oversight systems that preserve human authentication, direction, and control, including human-AI interaction design principles and meaningful systems to empower clinician explainability, feedback, and review. 
  • Promote equity & fairness. We examine during design and deployment how opportunity, information, and access in healthcare AI systems might be allocated more fairly to promote equity across communities and stakeholders. We will seek to avoid unjust impacts on people, particularly those related to protected characteristics.
  • Safeguard privacy & security. We incorporate privacy, security, and HIPAA compliance principles and enhancements in the development and use of healthcare AI systems.

AI has the potential not only to streamline existing workflows, but also to transform how we think about delivering exceptional care experiences. However, innovative platforms will only be effective when stakeholders can be confident in their ability to produce reliable, stable results. Ensuring everyone’s voices are heard, considered, and advocated for while developing intelligent technology is crucial for generating trust from patients and care teams alike, and, ultimately, for maximizing the impact of emerging solutions.

Ready to learn more about conversational AI in healthcare? Download our whitepaper for a deeper dive.