Sean Fenske, Editor-in-Chief | 05.03.18
“Help! I’ve fallen and I can’t get up.” That phrase represents the first experience or familiarity many people have with a voice-enabled medical device. In reality, the Life Alert system is triggered by a button worn around the neck of an elderly person, enabling them to communicate with someone who can notify emergency personnel if the situation requires it. More than anything else, it’s really a remote-controlled speakerphone with a single number on speed dial. On one hand, it certainly addresses a need, but as an innovative, voice-enabled medical device, it probably falls short.
Meanwhile, voice recognition technology has evolved significantly. I can still remember when a voice transcription program was hardly worth using because it required so much editing and review upon completion that the time saved up front was ultimately minimal. Now, that technology is incorporated into every smartphone, allowing users to speak a text message while driving or when their hands are otherwise occupied. Further, voice commands can control other aspects of a smartphone, such as making a call or performing a Google search. Of course, the technology still isn’t perfect and can result in some amusing text message results for the receiver.
The voice-enabled technology revolution has been further powered by the growth of home-based devices, such as Amazon’s Echo and Google’s Home products. In fact, Gartner forecast the worldwide virtual personal assistant-enabled wireless speaker market would generate $3.5 billion by 2021. The organization also stated that in 2019, artificial intelligence (AI) functions will run on the device itself rather than in the cloud, alleviating privacy and security concerns.
All of this offers tremendous opportunity for healthcare. Voice-enabled technology can be used by virtually any patient, young or old, and software can facilitate language translation as well. Further, AI could be leveraged to learn the speech patterns of those who are unable to speak clearly due to a medical condition, facilitating their use of the technology and/or giving them a voice. In addition, with the influx of the aforementioned personal assistant devices, medical applications could be developed that interface with patients in their homes to monitor how they are feeling. If connected with other technologies, such as a wearable, these devices could inquire about abnormal readings to determine whether the person is in distress, requires emergency assistance, or is simply performing an unusual activity that’s generating the odd readings but is otherwise fine.
Voice-enabled medical technology could be used at the point of care as well. Doctors, nurses, and EMTs could use a voice interface to record their actions, keeping their hands free while providing a valuable record of what’s occurring. At the same time, the system could serve as a back-up check, verifying a proper dose of a pharmaceutical is being administered, a treatment plan matches a patient’s diagnosis, or a surgical procedure is being performed on the correct patient and the right part of the body.
Another possible application for voice-enabled medical technology is in the diagnosis of diseases. Waqaas Al-Siddiq, founder and CEO of Biotricity, a biometric remote monitoring solutions company, recently provided an online exclusive for the MPO website on this very topic. In that article, he explained how speech samples could be captured by a smartphone or virtual personal assistant and analyzed for disease biomarkers.
“The Mayo Clinic has partnered with an Israeli company, Beyond Verbal, to test the voices of patients with coronary artery disease—the most common type of heart disease,” said Al-Siddiq. “Using machine learning, their team identified 13 different vocal features associated with patients at risk of coronary artery disease. One of those characteristics, which was related to the frequency of the voice, was associated with a 19-fold increase in the likelihood of coronary artery disease. Their research could be applied to create a vocal test app or medical wearable that would be used as a low-cost, predictive screening tool to help identify patients most at risk for heart disease, as well as to remotely monitor patients after cardiac surgery.”
The downside to leveraging this type of technology for healthcare applications mirrors the concern surrounding almost all consumer-level wearable medical devices (e.g., fitness trackers): they are unreliable and do not offer clinical-grade accuracy. Granted, the electronics for these two types of devices differ, and the shortcomings of one do not necessarily correlate directly with the shortcomings of the other, but accuracy (or lack thereof) is certainly a consideration. Developers of voice-enabled medical technologies that leverage home-based devices or smartphones must ensure the necessary level of accuracy is present before the technology can truly be trusted by medical professionals.