• Fred Pelzman is an associate professor of medicine at Weill Cornell, and has been a practicing internist for nearly 30 years. He is medical director of Weill Cornell Internal Medicine Associates.

Lately, as I’ve been thinking about ways to make our practice work better, I keep turning my attention to the portal messages we are all bombarded with every day and the types of situations, both clinical and administrative, that are often raised therein.

The portal has been both a blessing and a curse.

For many years, one of the major complaints at our practice was the inability to reach anybody by phone.

Our hospital-based practice is sorely understaffed, by any measure, and providers would often spend the first few minutes of every appointment apologizing for patients not being able to reach us on the phone.

The portal has helped bridge a little bit of that gap, overcome a little bit of that inequity, creating a different and sometimes more efficient way to reach us for many patients.

However, there are still bugs in the system, and still ways that we could improve on this model of healthcare delivery.

The portal provides an outstanding way to communicate test results to patients and discuss follow-up. It also aids in the ongoing management of chronic medical conditions like hypertension and diabetes: patients send us their home blood pressure or blood glucose readings, and we can adjust medications accordingly.

And oftentimes patients send us quite simple issues through the portal that we can handle with a quick response.

Yes, flu shots are available, you can schedule one here or at your local pharmacy.

I’ve placed a referral for your mammogram, you’ll see it ordered on the portal from your end and you will be able to schedule directly from there.

But intermixed with this, we’ve all started to see an enormous number of messages that are raising clinical care issues, a new problem that a patient wishes to address through the equivalent of an email.

Many years ago, before electronic medical records, pretty much the only way for patients to reach us about a new clinical issue was to call the office, and either leave a question with a receptionist or nurse, or schedule an appointment.

Now, as care has expanded, portal messages have become a place where patients often feel they can write in and get medical advice, get a bunch of questions answered, request some tests, and accomplish what we would often do during an office visit.

Many practices are starting to charge for these more complicated portal messages, saying anytime a patient is requesting a new prescription or voicing a new medical issue that needs to be addressed, they should create either an in-person visit or a billable scheduled telephone call or video visit.

What if we could harness the power of artificial intelligence (AI) to help handle some of this? AI could serve as a medical intermediary to do some analysis of the symptoms that patients are presenting with, walk them through a series of clinical screening questions, and suggest some medical advice or a next course of action.

Something akin to those large notebooks of clinical pathways that nursing uses to walk patients on the phone down an algorithm for chest pain, headache, or abdominal pain.
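Those pathway notebooks are, at heart, a fixed set of red-flag questions followed by a disposition rule. As a purely illustrative sketch (the complaint, questions, and dispositions below are hypothetical examples, not clinical guidance), a first-pass triage layer might look something like this:

```python
# Hypothetical sketch of a rules-based triage pass, in the spirit of the
# paper pathway notebooks: a fixed list of red-flag screening questions per
# chief complaint, and a simple escalation rule. All questions and
# dispositions here are illustrative, not clinical guidance.

HEADACHE_RED_FLAGS = [
    "Was this the worst headache of your life, with sudden onset?",
    "Do you have a fever and a stiff neck?",
    "Any new weakness, numbness, or trouble speaking?",
]

def triage(chief_complaint: str, answers: list[bool]) -> str:
    """Return a coarse disposition from yes/no answers to the
    red-flag questions for the given complaint."""
    flags = {"headache": HEADACHE_RED_FLAGS}.get(chief_complaint.lower())
    if flags is None:
        return "route to clinician"      # unknown complaint: don't guess
    if len(answers) != len(flags):
        return "route to clinician"      # incomplete screen: a human decides
    if any(answers):
        return "urgent: advise same-day evaluation"
    return "routine: offer portal reply or scheduled visit"
```

The key design choice, just as with the notebooks, is that anything the rules don't cover falls through to a clinician rather than to a guess.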

I’m not saying we’re anywhere near that yet, but think of the possibilities.

I can envision a time when a patient’s portal messages will be pre-screened by some AI system, which will have what it needs to understand the clinical question that a patient is asking.

Based on the patient’s past medical and surgical history, relevant lab data, allergies, and even social history and family history, the AI system can then walk the patient through a series of follow-up questions and come up with a rudimentary plan for what comes next.

Of course, there is the perennial worry that these systems will be either far too specific or far too sensitive.

Those of us in clinical practice who have been doing this for many years know that not every headache is a subarachnoid hemorrhage, meningitis, a brain tumor, or even temporal arteritis. And not all chest pain is an acute myocardial infarction.

I worry that we might build a system that acts like a first-year medical student.

I remember during medical school someone telling an apocryphal story about a wise old attending on teaching rounds, with his team of residents, interns, and medical students, starting to hear a presentation on a patient admitted with chest pain.

After starting with only the simplest first lines of the presentation, the chief complaint, and a few sparse details, the attending turns to the medical student and says, so, what would you like to offer up as a differential diagnosis?

The medical student thinks for a moment, and then states, this is clearly a case of acute aortic dissection as a complication of relapsing polychondritis.

Stunned, the attending asks, how could you possibly know that?

The medical student replies, what else causes chest pain?

If we set the sensitivity too high, we risk an enormous differential diagnosis that takes the AI system and the patient down an endless, ever-widening pathway.

If we make it too specific, we risk looking at things too narrowly, having the system limit its focus to certain diagnoses and miss others.

Perhaps we can create a system that does a first pass through these clinical messages for us, runs through a bunch of questions with the patient, and then presents its conclusions to us.

We could then prompt the AI to ask some further questions, until we’re all satisfied that we know what’s up, where things are going, and what’s best to do next.

The first generation of these programs should probably start out with really simple stuff.

The portal message from a patient might say that they'd like to get their flu shot, and the system would check whether they are eligible: that they haven't had one in the past year, that they have no allergy to the flu shot or any of its components, and that the flu shot is available at our practice. Then it would place the order for the flu shot in the system and send it to me for signing, simultaneously scheduling the patient for a vaccine visit with the nurse.
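That flu-shot flow is simple enough to sketch end to end. The record fields, the one-year interval, and the disposition strings below are assumptions made for the example; a real system would sit on an actual EHR, not a dictionary:

```python
from datetime import date, timedelta

# Illustrative sketch of the flu-shot portal flow described above. The
# record fields and the one-year rule are assumptions for this example,
# not a real EHR API or clinical policy.

def handle_flu_shot_request(record: dict, today: date) -> str:
    """Check eligibility and decide the next step for a portal request."""
    last = record.get("last_flu_shot")               # a date, or None
    if last is not None and today - last < timedelta(days=365):
        return "decline: already vaccinated this season"
    if record.get("flu_vaccine_allergy", False):
        return "route to clinician: documented vaccine allergy"
    if not record.get("vaccine_in_stock", True):
        return "suggest local pharmacy: not available at practice"
    # Eligible: pend the order for the physician's signature and book the
    # nurse visit in the same step.
    return "order pended for signature; nurse visit scheduled"
```

Note that the only branch requiring physician judgment (the allergy history) routes to a human, while the happy path still ends at a signature rather than an autonomous order.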

We could then move on to mammograms, routine follow-up referrals, refills of chronic medications that meet pre-specified criteria, and planned follow-up lab testing — the kinds of things that come at us in huge waves throughout the day while we’re trying to take care of our patients.

You know, doctoring.

The next generation of AI medical assistants could take on rashes, strep throat, UTIs, diarrhea, ankle sprains, the common cold, COVID-19, and on and on.

We could probably even teach it to take on some of our more mundane administrative tasks, such as faxing results to another provider when the patient requests it, or filling out home care or work forms.

I’m sure there are folks out there working on these kinds of things, and we welcome the help and look forward to partnering with these systems in the future.

My sincere hope is that the really smart computer people who are working on this are turning to those of us who do this stuff every day to make sure they’re developing a system that not only works for patients, that not only works for them, but that works for us as well.
