Waqaas Al-Siddiq, Founder and CEO, Biotricity | 03.20.18
With an estimated 22 million Amazon Echo smart speakers sold in 2017, the popularity of its virtual-assistant-powered devices is propelling Amazon to the forefront of the smart speaker market. Leveraging the company’s voice-controlled artificial intelligence (AI) assistant, Alexa, Amazon is making voice recognition an everyday reality. Speech is a powerful medium, and its application in technology offers ripe opportunities for healthcare.
Approximately 80 percent of older adults have at least one chronic disease, and 77 percent have at least two, according to the National Council on Aging. Yet most people who use health apps are under the age of 45, which means the population that could benefit most is not actively using them. This may be because many seniors contend with failing eyesight, limited hand dexterity, arthritis, and a lower rate of technology adoption. Voice technology could overcome these barriers by serving as a comfortable user interface for people who rely more on speech. Other work has focused on the viability of using voice samples to diagnose disease; with chronic disease on the rise, developing new and effective diagnostic tools is paramount. Harnessing the human voice both as a user interface and as a diagnostic tool could improve healthcare access, outcomes, and diagnostics for both patients and providers.
The Human Voice as a User Interface
Integration could well be the watchword for medical technology this year. Speaking to MIT Technology Review, Andrew Ng, chief scientist at Baidu, noted that “the fastest way for a machine to get information to you is via a screen,” even as a 2016 study led by Stanford researchers and his own team found that speech input is three times faster than typing on mobile devices. In other words, voice excels at input while screens excel at output, which makes the case for rules-based engines that pair integrated screens with voice recognition and analysis. Baidu developed its own smart-assistant device with a screen, called Little Fish: ask it a question, and it displays the results on screen, using built-in cameras to orient the information toward you. Recently, San Francisco startup Sense.ly raised $8 million in a Series B round of venture funding to bring its virtual-nurse technology to patients and physicians. Sense.ly asks patients to briefly “check in” with a nurse avatar and report on their health status; patients simply talk to the app, and the information they share is compiled into a medical record that only authorized health providers can access. Over time, such data will further refine the internal rules engines that providers are building on years of data collection.
Healthcare companies are already using rules-based engines, with data sets licensed from research groups such as the Mayo Clinic and the American Heart Association, to triage symptoms for diagnostic purposes. Applied Pathways recently launched Curion Go, a symptom self-triage solution that pairs Mayo Clinic content with Applied Pathways’ rules technology. With Curion Go, people can access health guidance 24/7 by answering a series of triage questions tailored to the symptoms they are experiencing. Other medical technology companies may follow suit, augmenting rules-based engines with integrated screens and voice technology to create a more natural, interactive patient experience.
Perhaps the clearest medical application of a rules-based system with a voice-activated interface is HealthTap’s Doctor AI, which the company built as a skill for Amazon’s Alexa. When a user asks Alexa what a symptom may indicate, HealthTap’s artificial intelligence parses data from the user’s medical records to weigh probable causes, then conveys that information in a natural, conversational manner. Depending on the patient’s answers to follow-up questions, the app routes the patient down one of several recommended pathways, such as scheduling an in-person office visit with the right specialist. Used with Alexa, Doctor AI makes high-quality, early-intervention healthcare immediately available to the elderly, the disabled, and the frail. With a hands-free, voice-activated interface, the device-and-app duo ensures that people who cannot easily use their hands or eyes get the same access to healthcare as everyone else.
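At its core, this kind of symptom triage is a rules engine: match reported symptoms against clinically authored rules and route the patient to a recommended pathway. The sketch below is purely illustrative; the rules, symptom names, and pathways are invented for the example and are not clinical guidance, and the actual Curion Go and Doctor AI engines are far more sophisticated.

```python
# Hypothetical sketch of a rules-based symptom triage engine.
# Rules are checked in priority order: the first rule whose required
# symptoms are all present determines the recommended pathway.
# All rules and pathways below are invented for illustration.

TRIAGE_RULES = [
    ({"chest pain", "shortness of breath"}, "call emergency services"),
    ({"fever", "stiff neck"}, "urgent in-person visit"),
    ({"fever"}, "schedule office visit"),
    ({"cough"}, "self-care and monitor"),
]

def triage(reported_symptoms):
    """Return the first pathway whose rule matches the reported symptoms."""
    symptoms = {s.lower() for s in reported_symptoms}
    for required, pathway in TRIAGE_RULES:
        if required <= symptoms:  # all required symptoms are present
            return pathway
    return "no rule matched; refer to a clinician"

print(triage(["Fever", "stiff neck"]))  # urgent in-person visit
print(triage(["cough"]))                # self-care and monitor
```

Ordering the rules from most to least urgent ensures that a serious combination (fever plus stiff neck) is never shadowed by a milder single-symptom rule; a voice interface would simply sit in front of `triage()`, collecting symptoms through follow-up questions.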
Voice-activated applications can also help solve deep-rooted problems that plague the healthcare industry and pull valuable physician time away from patients. Physicians spend hours documenting patient health information, time that could otherwise go to seeing more patients or to longer appointments. Edward-Elmhurst Health, a Chicago-area health system, incorporated an electronic medical record (EMR) system with integrated dictation into its workflow, letting physicians add notes directly to a patient’s medical record in real time as they speak, rather than hours after the visit. The change ultimately saved physicians up to two hours per shift. Efficiency gains aside, these systems can also improve the overall patient experience: with more information recorded in real time, nurses and physicians have critical health information at their fingertips, expediting the delivery of treatment and helping to catch life-threatening situations before they escalate.
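The dictation workflow described above amounts to a simple pipeline: a speech-recognition service produces a transcript, and the transcript is appended to the patient’s record as a timestamped, attributed note. The sketch below assumes a transcript has already been produced upstream; the record structure and function name are invented for illustration and do not reflect any particular EMR vendor’s API.

```python
# Hypothetical sketch of dictated notes flowing into an EMR record in
# real time. The transcript is assumed to come from an upstream
# speech-recognition service; the record layout here is invented.

from datetime import datetime, timezone

def append_dictated_note(record, transcript, author):
    """Append a timestamped, attributed note to a patient's record."""
    note = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,
        "text": transcript.strip(),
    }
    record.setdefault("notes", []).append(note)
    return note

patient_record = {"patient_id": "12345", "notes": []}
append_dictated_note(
    patient_record, "Patient reports mild dizziness on standing.", "Dr. Lee"
)
print(len(patient_record["notes"]))  # 1
```

Because each note is written the moment it is spoken, the record reflects the visit as it happens, which is what lets downstream clinicians see critical information immediately instead of hours later.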
Diagnosing Diseases with Voice Analysis
While the human voice can deftly ask questions and issue instructions, recent research has focused on leveraging the benefits of voice analysis to diagnose diseases. Voice samples can offer a wealth of information about a person’s health. In the future, smartphones and medical wearables may be used to monitor a person’s health remotely by recording short speech samples and analyzing them for disease biomarkers. The Mayo Clinic has partnered with an Israeli company, Beyond Verbal, to test the voices of patients with coronary artery disease—the most common type of heart disease. Using machine learning, their team identified 13 different vocal features associated with patients at risk of coronary artery disease. One of those characteristics, which was related to the frequency of the voice, was associated with a 19-fold increase in the likelihood of coronary artery disease. Their research could be applied to create a vocal test app or medical wearable that would be used as a low-cost, predictive screening tool to help identify patients most at risk for heart disease, as well as to remotely monitor patients after cardiac surgery. If a patient stopped taking their medication, small changes in the voice that may only be discernible to the software would alert the physician that remedial action was necessary.
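Conceptually, a vocal screening tool like the one Mayo Clinic and Beyond Verbal are exploring maps a handful of measured vocal features to a risk score. The sketch below uses a logistic model over invented feature names and weights; the actual 13-feature model is not public, so everything here is an illustrative assumption, not the real method.

```python
# Hypothetical sketch of vocal biomarkers feeding a risk score via a
# logistic model. Feature names, weights, and bias are invented for
# illustration; the real Mayo/Beyond Verbal model is not public.

import math

WEIGHTS = {"mean_pitch_hz": 0.004, "jitter_pct": 1.2, "shimmer_db": 0.8}
BIAS = -3.0

def cad_risk_score(features):
    """Map measured vocal features to a 0-1 risk score."""
    z = BIAS + sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

low = cad_risk_score({"mean_pitch_hz": 120, "jitter_pct": 0.5, "shimmer_db": 0.3})
high = cad_risk_score({"mean_pitch_hz": 180, "jitter_pct": 2.5, "shimmer_db": 1.5})
print(low < high)  # True: more abnormal vocal features, higher score
```

A wearable or phone app would compute the features from a short speech sample and flag the patient when the score crosses a clinically validated threshold, which is exactly the kind of small, software-only change a physician could act on remotely.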
Voice applications are also being integrated at the sensor level. Researchers at the University of Colorado Boulder and Northwestern University have developed a tiny, flexible acoustic sensor that is worn on the skin to monitor the heart and assist in diagnosing heart-related conditions. The sensor gathers continuous physiological data, including acoustic signals such as the opening and closing of heart valves, and can integrate electrodes to record ECG signals as well as EMG signals, which measure the electrical activity of muscles at rest and during contraction. In preliminary testing with elderly volunteers at a private medical clinic in Arizona, the device successfully detected heart murmurs. The human voice may prove to be an effective diagnostic tool, and, since every patient already carries one, an economically feasible one.
Conclusion
The case for voice applications in medical technology is compelling. We have become so accustomed to the sound of human speech that we have overlooked how such a common, readily available resource could be put to work in medical technology. The range of applications and their benefits is broad, and they are coming through loud and clear (pun intended) as another tool to advance care for patients and providers alike.
Waqaas Al-Siddiq is the founder and CEO of Biotricity, a biometric remote monitoring solutions company. He is a serial entrepreneur, a former investment advisor, and an expert in wireless communication technology. He has broad experience in executive roles at start-ups, mid-sized companies, and non-profits.