It’s time for your annual checkup. While you make the appointment on your cell phone, a “virtual assistant” asks a series of questions and, based on your responses, books the visit. In the days leading up to your appointment, the system leaves several reminders: “press 1 to confirm and 2 to reschedule.” The assistant is smart enough to know your medical history and does not offer “option 3 to cancel” because it knows you’re past due for the exam.
If you miss your appointment, the assistant sends you a text explaining in layman’s terms the healthcare ramifications of deferring your visit and the requirements for periodic evaluations. It auto-logs that admonition into your digital healthcare record and notifies your attending physician through an algorithm that codes messages by acuity and importance. Based on your personal medical history, the assistant tailors the medical content of the interaction specifically to you. At your office visit, your doctor asks if you’re feeling better today because the virtual assistant analyzed your voice pattern and noted that you sounded depressed and anxious when you made the appointment.
If you’re a fan of Star Wars or Star Trek, you know about robotic and holographic doctors. While we have not reached the complexity of a starship’s sick bay, you may be surprised to learn that artificial intelligence, or AI, is advancing at warp speed. The above scenario is very real; while it has not yet been deployed into the healthcare stream exactly as described, chat bots are already serving as intermediary healthcare conduits and providers. Kintsugi has an AI prototype that, from 20 seconds of unstructured speech, analyzes biomarkers and spectral inflections of the voice to diagnose anxiety and depression as accurately as a physician using the Patient Health Questionnaire and General Anxiety Disorder surveys (PHQ-9/GAD-7), and it can do so in 30 languages. A study at the Mayo Clinic is attempting to analyze the relationship between voice and arteriosclerotic heart disease (cardiac vocal biomarkers), in which a patient’s voice is sampled 40,000 times a second. Then there are smart socks, created for athletes, that record gait, speed, and running patterns. In geriatrics, these can provide data that allows AI to predict when a patient will fall based on changes in gait patterns.1
Meet Eva, an AI-powered chat bot capable of human emotional expressions and responsive interactions, from surprise to anger to disbelief. Add complex medical AI software to her interface and does she become your virtual doctor? What about your pharmacist?
Meet Melody, your virtual doctor2 app on your phone.
“Ping An Good Doctor”3 operates booths where the patient interacts only with a chat bot. The patient is diagnosed, and the machine dispenses medication. It can make referrals, and it has 400 million subscribers. Even Amazon is getting into the telehealth business with chat bots and mobile medics.
"OnMed" has a self-contained pod with sophisticated sensors to gather data. Patients are connected to a physician and the device can dispense a variety of medications or produce a script if not a stock med.⁴ OnMed is really a technically sophisticated telehealth model but add AI and Dr. Eva, and you have a game changer. Imagine one of these in every CVS?
Imagine your phone being able to diagnose an MI or a stroke and summon EMS to your exact GPS location within seconds of onset. Included in the data stream are your medical and prescription history and demographic profile. Paramedics confirm your identity by facial recognition, and an “EMS” chat bot, acting as a virtual emergency department, initiates treatment on the spot prior to transport.
The capability and extent of AI’s integration into healthcare’s future seem limited only by processor power and software design. AI has been a software enhancement in U.S. radiology for some time now. Once quantum computing breaks the current processing barriers, it’s foreseeable that the power of AI may become “explosive.” Will it be regulated? If so, how? AI’s computational power in 2019 was pegged at doubling every 3.4 months.5 The doubling time of medical knowledge in 2020 was projected to be 0.2 years, just 73 days.6 Its risks are unclear, but one thing is certain: it’s only going to get smarter and faster. Think about that the next time you have a question for Alexa or Siri.
Lee McMullin is a Senior Risk Management and Patient Safety Specialist for CAP. Questions or comments related to this article should be directed to LMcmullin@CAPphysicians.com.
1Deep learning-enabled triboelectric smart socks for IoT-based gait analysis
2https://www.mobihealthnews.com/content/chinese-web-company-baidu-launch…
3http://www.pagd.net/allPage/aboutUs/47?lang=EN_US
4https://newatlas.com/onmed-station-telepresence-doctor/58820/
5https://www.computerweekly.com/news/252475371/Stanford-University-finds…