Watson is an artificial intelligence computer system, a product of IBM’s R&D department, that answers questions posed in natural language. Much of the buzz surrounding Watson came when it handily defeated two Jeopardy contestants last week.
After the defeat, IBM announced they would be working with Nuance, Columbia University Medical Center, and the University of Maryland Medical School on health care analytics research. The goal of the collaboration is to develop a commercial offering in the next 18 to 24 months that will exploit Watson’s capabilities to aid in the diagnosis and treatment of patients.
The win on Jeopardy and the announcement of this healthcare initiative have led some in the media to believe Watson can actually replace the diagnosing and treating physicians do with their patients; a CNN anchor even postulated that “Watson could do everything but operate” [video included at the end of this post].
Does Watson have the potential to be helpful in healthcare? Yes, but only if we understand its limitations. The following exchange between Google’s CEO Eric Schmidt and surgeon Atul Gawande highlights the problems with using computer-based algorithms in medicine.
Last year at the President’s Council of Advisors on Science and Technology (PCAST) meeting, Google’s Schmidt was befuddled as to why physicians hadn’t adopted the use of computer algorithms to diagnose patients. He stated:
“So when you show up at the doctor with some set of symptoms, in my ideal world what would happen is that the doctor would type in the symptoms he or she also observes, and it would be matched against the data in this repository… As computer scientists, this is a platform database problem, and we do these very, very well, as a general rule. And it befuddles me why medicine hasn’t organized itself around these platform opportunities.”
Dr. Atul Gawande, a Harvard surgeon and author of The Checklist Manifesto, responded by saying:
“I think part of the bafflement occurs because the folks who know how to make such systems don’t understand how the clinical encounter actually operates.”
He went on to state that the bigger issue with these types of algorithm searches is they produce more information than needed for a physician, who usually has 15 minutes to manage six problems. Dr. Gawande didn’t dismiss this type of computer decision support though — and finished his response to Google’s Schmidt by saying he would welcome a smartphone app that could actually help with patient care.
This type of exchange, revealing a computer scientist’s understanding of clinical medicine, highlights why reports of Watson’s role in medicine are likely exaggerated. Medicine cannot be reduced to a set of complex algorithms because much of the data those algorithms would need cannot even be entered. Those without training in medicine do not understand the multifaceted “behind the scenes” analysis that actually occurs when talking to a patient.
When physicians ask patients about their symptoms, we’re analyzing a wealth of information that is intangible and cannot be spoken or entered into an algorithm: eye contact; subtle physical movements; how they respond to questions (does their tone change when describing a particular symptom, leading me to believe I’ll uncover more information if I ask about it further?); how they smell; how they are sitting; the reaction of family members when the patient responds to a particular question; what they are wearing; any signs of underlying trauma; and much more.
There are so many more things being analyzed that are not included in the above list — and it all occurs within seconds. And depending on each of the above and more, my questions for the History and Physical (H&P) will change, as will my treatment plan. It’s why we’re taught in medical school that the H&P is the most important part of the exam.
No matter how good you are at diagnosing and treating, unless you ask the right questions in a timely manner, all the knowledge in the world won’t be helpful. I’m sure an artificial intelligence program could produce a rudimentary H&P, but it would be far from the focused, disease-specific H&P a trained physician produces hundreds of times a month. Some would argue that’s why physicians have a minimum of seven years of post-graduate training (medical school plus residency) before we have sole responsibility for a patient.
At the end of the day, algorithms are only a guide; you have to use your own clinical judgment, because each patient is unique. And one of the points Dr. Gawande made in his response to Google’s Schmidt speaks volumes: time. You don’t necessarily have time to enter all the “data” needed to even use a computer support system.
One of the reasons I love Emergency Medicine is that you often don’t have time to talk to a patient for more than a few moments before starting some sort of life-saving treatment or procedure. You have a finite amount of time to save a person’s life, so you had better ask the right questions. No matter how well-tuned an artificial intelligence program is, adding another layer to treatment requires time, and with the acuity of some medical emergencies, minutes can be the difference between life and death.
So could Watson be used in healthcare? As a decision-support tool combined with an electronic medical record, sure. But to replace a physician? Negative.
CNN video of Watson in Medicine.