Otherwise, the discussion typically turns to the (most likely hostile) takeover by super-intelligent robots, à la “The Terminator.” Elon Musk’s well-publicized concerns about the dangers of artificial intelligence tend to revolve around this theme. A related fear, that robots will supplant human workers, mainly in manufacturing and service jobs, also comes up fairly frequently. (“If robots can physically and mentally outperform humans and think for themselves, why would anyone hire humans?”)
I can’t intelligently comment on the particulars of automating those industries. However, I can say with near certainty that doctors can expect job security, with no threat of advanced robot replacements for quite some time. While it’s true that the practice of medicine is becoming more automated, with simple surgical procedures now performed using robotic instruments, those procedures still require a trained surgeon at the helm. No matter how advanced these instruments become, doctors remain necessary to supply the critical thinking that machines currently lack when making diagnosis and treatment decisions.
What artificial intelligence can contribute to the medical profession, however, is one heck of an efficient assistant to help physicians optimize diagnosis and treatment. Ultimately, doctors still have the final say on a patient’s diagnosis and treatment plan, but adding machine learning strategies may help to get them there faster.
“Artificial intelligence used in the hospital’s computer system will be able to help doctors make safer choices for the good of the patient,” commented Dr. Tor Oddbjørn Tveit, a senior consultant at Sørlandet Hospital HF, which is currently experimenting with a new algorithm for allergies. “It will function similar to an X-ray or a blood test. If it is to be of any value, the doctor has to interpret the X-ray or blood test—and the information from the computer—before deciding on a potential treatment.”
Machine learning in healthcare mostly focuses on the analysis of large amounts of health data (i.e., “big data”) to provide clinicians with evidence-based treatment options. Perhaps the most notable example of this application is Watson for Oncology, IBM Watson’s program for helping oncologists determine the most effective cancer treatments. Watson’s intelligence in this context lies in its advanced ability to analyze the meaning and context of both structured and unstructured data in clinical documents, which can be integral to deciding on a treatment option. The patient’s medical records are combined with this external research and data, plus a healthy dose of clinical expertise from the oncologist, to produce potential treatment plans.
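To make that general idea concrete, here is a deliberately toy sketch in Python: structured patient features (age, tumor stage) are combined with unstructured note text, and a simple classifier ranks treatment options. This is not Watson’s actual pipeline; every note, feature, and treatment label below is invented purely for illustration.

```python
# Toy sketch: ranking treatment options from structured + unstructured data.
# NOT Watson's pipeline; all data and treatment names here are invented.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: clinical notes, structured features,
# and the treatment that was ultimately chosen for each patient.
notes = [
    "stage II adenocarcinoma, no prior chemotherapy, good performance status",
    "stage IV disease with metastases, prior chemo failed, frail patient",
    "early stage tumor, surgical candidate, no comorbidities",
    "advanced disease, EGFR mutation positive, candidate for targeted therapy",
]
structured = np.array([[62, 2], [71, 4], [55, 1], [68, 4]])  # [age, stage]
treatments = ["chemotherapy", "palliative care", "surgery", "targeted therapy"]

# Turn free text into features and stitch them to the structured columns.
vectorizer = TfidfVectorizer()
X = hstack([vectorizer.fit_transform(notes), csr_matrix(structured)])
model = LogisticRegression(max_iter=1000).fit(X, treatments)

# Score a new (invented) patient: the model ranks the options,
# but the oncologist still makes the final call.
new_note = ["stage IV disease, EGFR mutation positive, prior chemo"]
X_new = hstack([vectorizer.transform(new_note), csr_matrix(np.array([[66, 4]]))])
probs = model.predict_proba(X_new)[0]
for treatment, p in sorted(zip(model.classes_, probs), key=lambda t: -t[1]):
    print(f"{treatment}: {p:.2f}")
```

The point of the sketch is the shape of the workflow, not the model: the output is a ranked list of suggestions for the physician to weigh, never an automatic decision.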
To flex Watson’s diagnostic potential, in 2015 Australian researchers enlisted him, or it (not quite sure on that one), to analyze 88,000 de-identified retina images accessed through EyePACS (a “Picture Archive Communication System” linking primary care providers with eye care specialists). Over the course of Watson’s training, he (sticking with that pronoun) was taught to streamline manual processes, including distinguishing between left and right eye images, evaluating retina scan quality, and ranking possible indicators of glaucoma. In late February, the researchers revealed that Watson could measure the ratio of the optic cup to the optic disc, a key glaucoma marker, with up to 95 percent accuracy.
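For a rough sense of what measuring a cup-to-disc ratio looks like under the hood, here is a minimal sketch of a small convolutional network that regresses the ratio directly from an image. It is an illustrative toy, not IBM’s actual architecture, and the random tensors below stand in for real de-identified retina scans.

```python
# Minimal sketch of a CNN regressing the cup-to-disc ratio from a retina
# image. An illustrative toy, not IBM's model; random tensors stand in
# for real (de-identified) retina scans.
import torch
import torch.nn as nn

class CupDiscRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),          # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),          # 64 -> 32
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (N, 32, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 1),
            nn.Sigmoid(),  # the ratio is constrained to (0, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CupDiscRegressor()
images = torch.rand(8, 3, 128, 128)  # stand-in batch of retina images
targets = torch.rand(8, 1) * 0.8     # stand-in cup-to-disc ratios
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()  # one gradient step of the usual training loop
print(f"toy training loss: {loss.item():.4f}")
```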
“Medical image analysis with cognitive technology has the capacity to fundamentally change the delivery of healthcare services,” Dr. Joanna Batstone, vice president and lab director at IBM Research Australia, said in a press release detailing the research results. “Medical images represent a rich source of data for clinicians to make early diagnosis and treatment of disease, from assessing the risk of melanomas to identifying eye diseases through the analysis of retinas. Cognitive technology holds immense promise for confirming the accuracy, reproducibility, and efficiency of clinicians’ analyses during the diagnostic workflow.”
Large healthcare OEMs have begun investing in cognitive computing programs as well. Last November, GE Healthcare announced a partnership with UC San Francisco’s Center for Digital Health Innovation to develop a library of “deep learning algorithms,” layered neural networks that learn to solve complex problems from large volumes of data. The first round of algorithms will attempt to accelerate differential diagnosis in acute situations, and GE will eventually deploy them worldwide via the GE Health Cloud and smart GE imaging machines. Initially, the collaboration will use high-volume, high-impact imaging to develop algorithms that can reliably distinguish between a normal result and one requiring follow-up or acute intervention. For example, one algorithm currently under development aims to teach machines to differentiate between normal and abnormal lung scans to speed treatment of pneumothorax (collapsed lung), a life-threatening condition.
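As an illustration of where such an algorithm would sit in the workflow, here is a hedged sketch of the triage step: the model emits an abnormality probability for a chest scan, and simple thresholds route the study to a queue. The thresholds, study IDs, and function names are all invented for this sketch; a real deployment would calibrate them clinically.

```python
# Hedged sketch of a triage step downstream of an imaging model.
# Thresholds and names are invented; real systems calibrate these clinically.
from dataclasses import dataclass

@dataclass
class ScanResult:
    study_id: str
    p_abnormal: float  # model's probability that the scan is abnormal

def triage(result: ScanResult,
           acute_threshold: float = 0.90,
           review_threshold: float = 0.40) -> str:
    """Route a scan to a reading queue based on the model's confidence."""
    if result.p_abnormal >= acute_threshold:
        return "acute: alert radiologist immediately"
    if result.p_abnormal >= review_threshold:
        return "follow-up: place at top of reading worklist"
    return "routine: read in normal order"

for scan in [ScanResult("CXR-001", 0.97),   # e.g., suspected pneumothorax
             ScanResult("CXR-002", 0.55),
             ScanResult("CXR-003", 0.08)]:
    print(scan.study_id, "->", triage(scan))
```

Note that even in the acute branch, the algorithm only reorders the queue and raises an alert; the diagnosis itself still belongs to the radiologist.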
If all of these applications sound rather broad and vague at the moment, it’s because they are. Machine learning is of more limited use in healthcare than in other industries because healthcare data is more complex and difficult to aggregate. The algorithms can only be as good as the volume and quality of the data fed into them. Furthermore, embedding these algorithms into physicians’ workflows may prove to be a lengthy endeavor. Once the technology proves viable, though, doctors will hopefully embrace cognitive computing as something useful for their practices, rather than as a cumbersome new system to learn.