Considerations of artificial intelligence (AI) in MedEd data
May 27, 2021
Last week there was a beautiful synchronicity in workshops and webinars, beginning with the final monthly session of the 2020 International Conference on Residency Education (ICRE), which featured a debate on learning analytics, followed by a talk on ethics in product management.
The ICRE session, “Learning Analytics at the cutting edge,” featured Drs. Stan Hamstra and Monica Cuddy debating the merits of aggregating resident data in competency-based medical education (CBME) at the national level, followed by Drs. Martin Pusic and Rachel Ellaway debating AI in resident selection.
While the debates were designed to be very polarized, they gave me plenty of food for thought.
I’m a data nerd who left the ivory tower for the world of MedEd technology, so it’s easy to guess which side of the fence I lean toward. And while Drs. Hamstra and Pusic respectively put forth great arguments for the relative merits of national data aggregation and AI, it was the persuasive arguments of Drs. Cuddy and Ellaway that stopped me in my tracks.
CBME data aggregation
In the early days of CBME, my deans and program directors were excited to see national numbers for entrustable professional activity (EPA) achievement. I have no doubt that many similar debates were had across the country in those days, but with four years of data now under our belts, it’s important to consider what we know today compared to then.
Dr. Cuddy offered three primary reasons to reconsider aggregating CBME data at the national level: impacts on ethics, privacy, and validity.
When it comes to ethics, residents in a given program or school may not have been asked for consent, and even now they may not be able to provide truly informed consent, because we don’t yet know how their data might be used in the future.
Working with IT departments and now Acuity, I know we all have a deep focus on privacy and security, but we may not be aware of individual contexts such as the ones Dr. Cuddy discussed. We don’t always know which programs are small relative to others, where aggregation might threaten the privacy of individual residents, especially those at smaller schools.
Finally, Dr. Cuddy questioned the validity of this type of aggregation: taking data intended as formative assessment for individual residents and using it to draw inferences about program or school performance is using it for a purpose it was never intended to serve.
While we rely on our experts to give us context, this discussion made me reconsider the types of questions we need to ask during product discovery, questions that go far beyond the surface. In Claire Woodcock’s talk on Ethics for Product Managers, hosted by ProductTank Waterloo, she noted a critical shift in the framework every product manager is taught early on. When we develop a product, we ask: Is it desirable? Is it feasible? Is it usable? Now we also need to ask: Should we do it? Will it have unintended harm or consequences?
While we have substantial in-house expertise, there are a plethora of contextual elements and nuances where we need the experts to guide us on unintended harm and consequences.
AI in resident selection
The session then turned to residency selection, with Drs. Ellaway and Pusic debating the possibility of using AI to select residents.
For those of you not involved in residency selection, it’s an intensive process for programs: hundreds of applications and interviews and, as with CBME, data overload. Dr. Eric Warm, Professor of Medicine, Associate Chair for Graduate Medical Education, and Internal Medicine Residency Program Director at the University of Cincinnati, introduced the topic by sharing his experience as a program director: he reviews 2,000 applications and interviews 413 applicants for a few dozen spots, all by himself.
It’s a staggering amount of paperwork for busy clinicians to wade through. But it’s also critical paperwork used to select the doctors a program will train – and who will eventually be responsible for the treatment of patients.
Dr. Ellaway acknowledged, like Dr. Pusic, that the current process is far from perfect, but her fear is that with AI, “It’s likely we would just replicate that and then encode it, basically lock it into a system rather than allow us to continue to debate and have those meaningful conversations.”
Ms. Woodcock discussed specific examples where there were issues of privacy and ethics, or unintended consequences, and one of the key takeaways was to ensure diversity on the team building the product. For me, this includes the experts and, in this case, would need to include diverse types of schools and stakeholders.
We need to make a more conscious effort to connect not just with customers who have a problem we’re trying to solve, but to think about that pool in terms of the insight their structure might provide, whether it’s the size of the school or a unique curriculum structure.
In the case of the Medical Student Performance Evaluation (MSPE) work that we’re doing, we have been lucky to partner with schools that help us see beyond the AAMC template. With the shifts in education this past year due to COVID-19, we’re eager to see how residency selection, and the MSPE, might evolve and how we can support that with good data presented the right way.
Final takeaway
The questions around technology and medical education are numerous, and these sessions reinforced how crucial our process of product discovery is. It’s also critical to see issues of ethics discussed by so many esteemed experts; after all, data doesn’t exist in a vacuum. It’s not just data, it’s people.
Thinking more specifically about AI in learning analytics, Dr. Ellaway’s key points made me think more deeply about who needs to be at the table when building products, since, as noted by Ms. Woodcock, “Every product we build has the potential to impact people who use and don’t use our product.”
I considered Dr. Ellaway’s fear that by working with tech companies, doctors could be ceding control, authority, and autonomy in key areas. However, by partnering with technology companies that have access to a wide range of schools, can see both differences and patterns with a wider lens, and have no vested interest in the outcomes beyond building a product that helps its users, I believe the work will be stronger for that collaborative effort.
We, the tech companies, are experts in building software, and we can offer guidance on what we see and understand in terms of privacy and protecting data, but we are not the experts in education. The doctors, researchers, educators, and learners in medical and healthcare professions are the experts in what is appropriate, valid, and ethical in education. Working closely with a variety of experts across the globe, we rely on them as our guiding stars to build the right products to solve the right problems, and to do it with a critical eye toward data stewardship.
This collaboration is critical to the work we do.