As part of our coverage of HIMSS 2010, we had the opportunity to check out a series of apps being introduced by Nuance Communications. Physicians who are familiar with Nuance generally know them as a company that provides dictation services. What many physicians may not know is that Nuance has some of the most advanced speech recognition technology on the market, and ambitions that go well beyond transcribing dictated summaries of clinic visits.

In Atlanta, they announced Medical Mobile Search and Medical Mobile Recorder, in addition to the Dragon Medical Mobile dictation iPhone app. These two apps were essentially voice-enabled search, with the former searching a variety of medical databases for reference information and the latter searching previous recordings for patient names. What these apps tell us is that Nuance’s aim is to move beyond clicks and taps, and allow users to interact with computers through voice. And a recent announcement from Nuance and T-Mobile suggests that Nuance could be bringing much more to the medical world.

On May 4th, Nuance announced the integration of their Voice Control Platform on the myTouch 3G Slide from T-Mobile. The obvious disclaimer here is that the capabilities that follow were listed in a Nuance press release. Basically, the “Genius” button on the device enables the user to give voice commands, which can be as complex as “Send text to John Smith. I’ve got tickets for the game tonight” or “Search for recipes for chocolate cake.” But given some of the impressive software Nuance demonstrated in Atlanta, I wouldn’t be surprised if this turns out to be mostly true.

So why is this relevant to medicine? One thing Nuance demonstrated early on was a desire to enable other developers to easily embed Nuance technology into their own products. At HIMSS, they announced the release of an SDK to allow healthcare IT vendors to integrate Dragon dictation software into their products, allowing seamless dictation directly into the EHR. As we reported then, Eclipsys was already piloting this feature, and other EHR vendors were expected to quickly follow suit. The obvious next step would be to integrate the Voice Control Platform, and given Nuance’s already expressed desire to integrate into EHR products, they can be expected to make this as easy as possible.
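To make the idea of SDK-based integration concrete, here is a minimal sketch of the pattern a vendor might follow: a speech engine hands transcribed text to a callback, which appends it to a note field in the EHR. Every class and method name here is hypothetical; Nuance has not published this API, and the sketch only illustrates the wiring, not their actual SDK.

```python
# Hypothetical sketch: none of these names come from Nuance's real SDK.
# They only illustrate the callback pattern a vendor might use.

class EHRNoteField:
    """Stands in for a text field in an EHR's note editor."""
    def __init__(self):
        self.text = ""

    def append(self, fragment):
        self.text += fragment


class DictationEngine:
    """Stands in for a speech-to-text engine; callers register a
    callback that receives each transcribed fragment."""
    def __init__(self):
        self._callbacks = []

    def on_transcription(self, callback):
        self._callbacks.append(callback)

    def feed(self, transcribed_fragment):
        # A real engine would consume raw audio; here we pass text
        # straight through to keep the sketch self-contained.
        for cb in self._callbacks:
            cb(transcribed_fragment)


# Wiring: dictated text lands directly in the EHR note field.
note = EHRNoteField()
engine = DictationEngine()
engine.on_transcription(note.append)

engine.feed("Patient is a 54-year-old male ")
engine.feed("presenting with chest pain.")
print(note.text)
```

The point of the pattern is that the EHR vendor never touches the recognition logic; it simply registers a destination for the text, which is what would make “seamless dictation directly into the EHR” a small integration rather than a large one.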

Demo videos on Nuance’s website suggest they are already moving in that direction with the use of voice commands during dictation. For example, a physician can, mid-dictation, say “Search WebMD for Claritin” to pull up Claritin dosing and then, with that page open, continue dictating. From the looks of the demo video, this takes a fair amount of practice to get used to, but looks pretty nice once you make it through that initial learning curve. Given their proven ability to recognize medical terminology, it’s not hard to imagine simply looking down at my iPhone and saying “Find Mr. Jones” to access his EHR record. Then, through a series of voice commands, I could review his labs and radiology results for the day, dictate a note with my plan, and so on.
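The workflow imagined above amounts to routing recognized phrases to chart actions. A toy sketch of that routing layer might look like the following; the command grammar, patient data, and function names are all invented for illustration and have nothing to do with Nuance’s actual products.

```python
import re

# Hypothetical sketch of routing recognized phrases to EHR actions.
# The phrases and the record data are invented for illustration;
# this is not Nuance's command grammar.

RECORDS = {
    "jones": {"labs": "K 4.1, Cr 1.0", "radiology": "CXR: no acute disease"},
}


def handle_command(phrase):
    """Map a recognized phrase to a chart action and return the result."""
    phrase = phrase.lower().strip()

    # "Find Mr. Jones" -> open that patient's chart
    match = re.match(r"find (?:mr\.|ms\.|mrs\.)?\s*(\w+)", phrase)
    if match:
        name = match.group(1)
        return f"Opened chart for {name.title()}" if name in RECORDS else "No chart found"

    # "Show labs for Jones" / "Show radiology for Jones" -> pull results
    match = re.match(r"show (labs|radiology) for (\w+)", phrase)
    if match:
        kind, name = match.groups()
        record = RECORDS.get(name)
        return record[kind] if record else "No chart found"

    return "Command not recognized"


print(handle_command("Find Mr. Jones"))
print(handle_command("Show labs for jones"))
```

Even this toy version hints at why the learning curve exists: the user has to speak phrases the grammar anticipates, and anything outside it falls through to “command not recognized.”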

However, just like the touchscreen, this kind of interaction will not succeed on a platform designed for other modalities. In the Nuance videos, awkward pauses are apparent as the user is forced to resort to the mouse for certain features. Just as the iPhone did with the touchscreen, designers will have to consider the workflow of physicians interacting with an EHR and build a platform with voice as the primary mode of interaction.

In any case, if Nuance’s Voice Control Platform proves to be as good as advertised, we may not be far from an EHR for which physicians need no mouse or even a finger. Nuance’s track record certainly suggests they will make it as easy as possible on their end to get there.