At the recent HIMSS Global Health Conference and Expo in Orlando, I gave a talk focused on protecting yourself against some of the dangers of artificial intelligence in healthcare.
The goal was to encourage healthcare professionals to think deeply about the realities of AI transformation while also providing them with real-world examples of how to proceed safely and effectively. I wanted everyone present to join me in moving beyond the hype and toward a mature understanding of how to build this exciting future.
Fortunately, my message was well received. Attendees appreciated the potential that arises when we move beyond gimmicks and the fear of missing out. It represents a higher level of leadership, where thoughtful people collaborate across functions to set clear, actionable goals to improve results.
The appetite for this post-hype approach to AI was so substantial that I felt compelled to write a brief summary of my talk and share it widely with Healthcare IT News readers.
I’ll briefly describe the AI time bombs that have already exploded, offer ten tips to help you avoid similar failures, and share two examples of organizations I’m working with that are implementing AI correctly.
What not to do
Both inside and outside the healthcare sector, hastily launched AI initiatives are already showing signs of failure.
For example, Air Canada’s customer-facing chatbot incorrectly promised a discounted flight to a passenger. The company later attempted to claim it was not at fault, arguing that the AI was a separate legal entity “responsible for its own actions.” Unsurprisingly, a Canadian tribunal did not accept the “it wasn’t us, it was the AI” defense, and the airline was ordered to honor the mistakenly promised discount.
Last year, the National Eating Disorders Association intended to replace its highly experienced helpline staff with Tessa, a chatbot designed to help people seeking counseling for eating disorders. However, just days before Tessa was scheduled to launch, the chatbot was found to be providing problematic advice, including recommendations to restrict caloric intake, weigh oneself frequently, and set rigid weight-loss goals. Although Tessa never went live, the incident underscores the devastating consequences that can result from rushing to adopt AI solutions.
A recent article published in JAMA Network Open sheds light on multiple cases of biased algorithms perpetuating “racial and ethnic disparities in health and healthcare.” The authors detailed several biased and harmful algorithms that have been developed and implemented, negatively impacting “access or eligibility for interventions and services, and the allocation of resources.”
This is particularly worrying because many of these biased algorithms are still in operation.
Simply put, AI time bombs have already detonated and will continue to do so unless proactive measures are taken to mitigate these issues.
What to do
To help leaders address the risks associated with AI, I have developed ten tips to approach AI transformation in a safe and sustainable way. These tips are designed to ensure healthcare executives get the best possible return on their investments:
- Prioritize transparency and explainability. Choose AI systems that offer transparent algorithms and explainable results.
- Implement strong data governance. Ensuring high-quality, diverse, and accurately labeled data is critical.
- Collaborate early with regulatory and ethical bodies. Early understanding and alignment with ethical guidelines and regulatory requirements can avoid costly reviews and ensure patient safety.
- Foster interdisciplinary collaboration. An interdisciplinary approach ensures that the AI tools developed are practical, ethical and patient-centered.
- Ensure scalability and interoperability. AI tools should be designed to integrate seamlessly with existing healthcare IT systems and be scalable across different departments or even institutions.
- Invest in education and continuing training. Investing in continued education and training ensures that staff can use AI effectively, interpret its results and make informed decisions.
- Develop a patient-centered approach. Adopt AI practices that improve patient engagement, personalize healthcare delivery, and do not inadvertently exacerbate health disparities.
- Monitor performance and impact continually. Develop mechanisms for feedback from workers and patients, allowing for the continuous refinement of AI tools to better meet the needs of stakeholders (see the sketch after this list).
- Establish clear accountability frameworks. Define clear lines of responsibility for decisions made with the help of AI.
- Promote an ethical AI culture. Encourage discussions about the ethics of AI, promote the responsible use of AI, and ensure that decisions are made with the well-being of all stakeholders in mind.
Let these tips guide you on your AI journey. Use them to develop principles, policies, procedures and protocols to get AI working right the first time and skillfully navigate cases where things don’t go as planned. Proactively incorporating these tips at the beginning of your AI transformation will save time, money, and ultimately lives.
What are others doing?
AI transformation requires several critical components working in unison. As I mentioned in my HIMSS talk: as a Thanksgiving rite of passage, it’s time to move from the AI kids’ table (where the conversation obsessively centers on ChatGPT) to the adults’ table, where leaders are taking active steps to lay the foundation for a mature AI transformation.
Two of these essential elements that I have focused on, in partnership with large healthcare organizations, are taking a holistic approach to implementation and investing in a strong data-driven culture.
At one health system, we developed a plan to safely deploy large language models. This plan covers several impact areas, such as the economic and privacy implications of LLMs, and includes essential questions to ask in each of these domains.
The goal was to present all members of senior management with specific, interconnected questions about the risks and benefits associated with implementing LLMs. This approach helps highlight trade-offs, such as speed versus security or quality versus cost, and gives this diverse group of leaders a common language to identify opportunities and discuss risks.
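As a rough illustration of how such a plan can be made concrete, the sketch below captures impact areas and their essential questions as structured data that every leader reviews the same way. The two areas echo the economic and privacy implications named above; the specific questions and owner roles are hypothetical placeholders, not the actual plan.

```python
# Minimal sketch of an LLM deployment review plan as structured data.
# The impact areas echo the two named in the text; the questions and
# "owner" roles are hypothetical placeholders, not the actual plan.
LLM_REVIEW_PLAN = {
    "economic": {
        "owner": "CFO",
        "questions": [
            "What is the total cost of ownership, including monitoring?",
            "Which existing workflows does the LLM replace or extend?",
        ],
    },
    "privacy": {
        "owner": "Chief Privacy Officer",
        "questions": [
            "Does any prompt or output contain protected health information?",
            "Where is data stored, and who can access model logs?",
        ],
    },
}

def unanswered(plan, answers):
    """Return (area, question) pairs that still lack a documented answer."""
    return [
        (area, q)
        for area, spec in plan.items()
        for q in spec["questions"]
        if not answers.get((area, q))
    ]

# A deployment proceeds only once every area's questions have sign-off.
print(unanswered(LLM_REVIEW_PLAN, answers={}))
```

Treating the plan as data rather than a slide deck makes it auditable: a deployment can be blocked automatically until every question has a documented answer.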
At another health system, we developed ten key performance indicators to ensure their leaders, teams, and processes contribute to a data-driven, AI-ready culture of care. We also created a survey based on these KPIs to establish a baseline understanding of where data culture excels and where there is room for improvement.
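For a sense of how such a survey can yield a baseline, here is a minimal sketch that averages 1-to-5 responses per KPI and ranks the weakest areas first. The KPI names are invented for illustration; the actual ten indicators are specific to that organization.

```python
# Minimal sketch: turn 1-5 survey responses into a per-KPI baseline.
# The KPI names are invented for illustration only.
from statistics import mean

responses = {
    "data_accessibility": [4, 5, 3, 4],
    "leadership_buy_in": [2, 3, 2, 3],
    "training_frequency": [3, 3, 4, 2],
}

baseline = {kpi: round(mean(scores), 2) for kpi, scores in responses.items()}

# Rank KPIs from weakest to strongest to target improvement efforts first.
for kpi, score in sorted(baseline.items(), key=lambda kv: kv[1]):
    print(f"{kpi}: {score}")
```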
By focusing on understanding the data needs of its physicians and providing them with high-quality, relevant data when they need it, the organization has achieved rapid and impressive increases in “the good numbers,” such as employee engagement and patient satisfaction.
This serves as an excellent example of how AI transformation begins long before any emerging technology, and the hype around it, bursts onto the scene. By focusing on fundamentals like data, leaders can achieve quick results while setting their organizations up for lasting success.
What comes next
The future of healthcare requires a “leadership first, technology second” mindset. Executives must prioritize the needs of their people, as well as the challenges and opportunities inherent in their processes.
This approach involves using science to understand your organization in a systematic and predictable way and relying on high-quality data to generate accurate and reliable insights to guide change.
Adopting a leadership-first, technology-second mindset also means that decision-makers combine science and data with their hard-earned expertise to expertly craft solutions tailored to their specific context.
That’s why the American Medical Association defines AI as “augmented intelligence,” emphasizing its role in enhancing human intelligence rather than replacing it. Its definition highlights the importance of keeping our cognitive and emotional capabilities at the forefront of decision-making before turning to technology.
Executives who embrace these timeless human qualities will foster a mature AI-powered future.
Brian R. Spisak, PhD, is an independent consultant focusing on digital transformation in healthcare. He is also a research associate at the National Preparedness Leadership Initiative at the Harvard T.H. Chan School of Public Health, a faculty member of the American College of Healthcare Executives, and author of the book Computational Leadership.