Michael Barbella, Managing Editor | 05.06.15
The warehouse is dark, quiet, remotely located. Inside the building, a dark-haired young man sits behind two open laptops, intently typing computer code on one as rolling data scrolls quickly down the side of his screen, rushing by in a relentless blur of letters, numbers and symbols. The young man is unaffected by this digital deluge—without skipping a beat, he continues entering code.
Correlate_Access_Model_P1501VR
Somewhere in a nearby house, two well-dressed older men exchange pleasantries. They soon delve into serious discussion.
More code is added:
<w>accForm>bo§expan-abb=”ACTIV”>ir</ex <expForm><e§er;”m med krok.”>M</hi></ex
The two men’s conversation becomes heated. The older of the pair appears agitated.
The programmer is almost done. Only a few (crucial) lines left:
Active Connection
File_Source
Therapies_Off
Remote § <standby>vsupi/jenMart>>
RTS422256H_eptest_fibrillation-on-1.0
He hits the return key, then turns to the second computer screen. And waits.
Almost instantly, the house conversation is interrupted. The older man—visibly angered by something his younger counterpart said—experiences some chest discomfort. He is momentarily winded.
“You okay?” the younger man asks.
“I don’t know,” the older gentleman responds, half-chuckling. He quickly recovers, however, and the conversation resumes, becoming significantly more cantankerous.
Back at the warehouse, the techie intently monitors his second computer screen, eagerly awaiting the results of his carefully written code. The screen is filled with colors and numbers: an electrocardiogram (in green) running in the upper left corner, a blood pressure reading (in orange) below that, heart rate stats (in green) in the upper right corner and body temperature (in light purple) on the bottom. All vital signs are normal.
Meanwhile, the discussion inside the house continues to deteriorate. Barbs are traded. Accusations fly. For a moment, the gray-haired man forgets about his earlier malaise.
The programmer starts typing on the first computer. And slowly, the numbers on the second monitor begin to change. Blood pressure rises to 150/100 and heart rate spikes to 124.
Once again, the house conversation is disrupted, only this time it’s not temporary. Cut off mid-sentence, the older man suddenly collapses into a chair, clutching his chest. “Call a doctor,” he tells his comrade.
The other fellow pauses. He notices his cohort struggling for breath and purposely blocks his attempt to summon help. “What are you doing, huh??” the older man asks incredulously. “What are you doing?!?!” He begins to fall to the floor, but is caught by the younger guy, who whispers, “You still don’t get it do you? I’m killing you.”
It’s murder in the most passive way, though. Calling a doctor wouldn’t have saved the victim; his heart attack was triggered not by arterial plaque or a blood clot but by a defibrillator purposely (and wirelessly) programmed to malfunction. The true assassin in this case is the less obvious one: the anonymous techie who hacked into a life-saving medical device from a far-away warehouse.
Fortunately, such high-tech hijackings exist only in Hollywood (for now, anyway). In a 2012 episode of the Emmy Award-winning Showtime series “Homeland,” al-Qaeda terrorists hack into the U.S. vice president’s defibrillator and induce a fatal heart attack (with the help of a war hero turned U.S. congressman).
Television critics scoffed at the plot; one commentator even called it “preposterous.” Former Vice President Dick Cheney, however, found the storyline disturbing.
Cheney, who suffered the first of five heart attacks at age 37, received a defibrillator in 2001 but disabled its wireless feature six years later for his own protection. “I was aware of the danger that existed...I found it credible,” Cheney told television news program “60 Minutes” of the “Homeland” plot. “I know from the experience we had and the necessity for adjusting my own device that it was an accurate portrayal of what was possible.”
Almost anything is possible, really, as medical and consumer health equipment join the billions of devices that already make up the Internet of Things: Gamblers looking for fail-safe payoffs might target a professional athlete’s health records, for example; extortionists could sabotage a hospital’s operating system or unleash malware within a medical device; and addicts might hack their own infusion pumps to increase drug flow.
“As [medical] devices become more of a part of the Internet of Things, and patients start to report more from home, maybe using a new Bluetooth stack, could they get infected with malware at some point in the future? Maybe, malware exists on healthcare networks and by adding these devices they can become exposed and ultimately susceptible,” said Steve Penn, senior director of common security framework (CSF) development and education programs for the Health Information Trust Alliance (HITRUST). The Frisco, Texas-based organization, in collaboration with healthcare, technology and information security leaders, has established a CSF available to all groups that create, access, store or exchange sensitive or regulated data.
“Is it possible to send out phishing emails or messages from a medical device that eat up the battery on that device unintentionally so that all of a sudden, the battery is going bad on a defibrillator? Could that happen sometime in the future? Maybe,” Penn continued. “There are a lot of maybes here because we know from research that these kinds of things can be done. We’re just not sure how it’s going to be used or how it’s going to be exploited. We just know that it can happen as information security researchers have shown. There are a lot of ‘what ifs’ when it comes to the Internet of Things and medical devices.”
But there are just as many certainties: Connected medical devices are revolutionizing healthcare delivery and treatment, allowing clinicians to better evaluate, track and care for patients. They also bring operational and cost efficiencies to healthcare providers. Yet these devices are particularly vulnerable to cyberattacks because, unlike IT systems, they generally are not designed with security in mind. Thus, even basic safeguards like password protections are missing from many products.
Including, amazingly, life-saving devices. A two-year study of all medical equipment used in Essentia Health facilities (Idaho, Minnesota, North Dakota and Wisconsin) discovered wirelessly connected infusion pumps with either no password protections or hardcoded passwords that were weak and universal to all customers. The analysis also found several defibrillator vendors that used default (“admin” or “1234”) or simple passwords for the Bluetooth stack where test shock configurations were written.
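The danger of such default credentials is easy to illustrate: an attacker needs nothing more than the short, publicly known list of factory passwords. The sketch below is purely illustrative (the device records, password field and login routine are hypothetical stand-ins, not any real product’s interface):

```python
# Sketch: why factory-default passwords fail. Every device name and
# credential here is hypothetical, for illustration only.
DEFAULT_CREDENTIALS = ["admin", "1234", "password", "0000"]

def try_login(device, password):
    # Stand-in for a real network login attempt against a device.
    return password == device["password"]

def scan(devices):
    """Return (name, password) pairs for devices that accept any
    entry from the public default-password list."""
    vulnerable = []
    for device in devices:
        for password in DEFAULT_CREDENTIALS:
            if try_login(device, password):
                vulnerable.append((device["name"], password))
                break
    return vulnerable

devices = [
    {"name": "infusion-pump-01", "password": "1234"},    # default never changed
    {"name": "defib-station-02", "password": "x9!Tr4q"}, # unique, strong
]
print(scan(devices))  # only the pump with the default password is flagged
```

A four-entry list compromises the first device in microseconds; no exploit, no cryptography, just the user manual.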
There was little or no protection for cryogenics or blood/pharmaceutical refrigerated storage systems, either. And, those guarded by hardcoded passwords were easily accessed, potentially exposing the systems to error or deliberate manipulation.
Storage systems for X-rays and other images at Essentia Health facilities were equally vulnerable. Many images were backed up in centralized storage units that required no authentication for access, and while some of the front-end systems required hardcoded passwords for image access, the backup was completely unprotected.
Such defensive gaps are just the tip of the cybersecurity iceberg. An estimated 300 medical devices have hard-coded password vulnerabilities, according to the U.S. Department of Homeland Security (DHS). The products include surgical and anesthesia devices, ventilators, patient monitors, laboratory and analysis equipment, defibrillators, and of course, drug infusion pumps.
“The affected devices have hard-coded passwords that can be used to permit privileged access to devices such as passwords that would normally be used only by a service technician,” read a June 13, 2013, DHS alert on medical device hard-coded passwords. “In some devices, this access could allow critical settings or the device firmware to be modified.”
The alert was issued only months after two researchers successfully hacked a Philips information management system that directly interfaced with X-ray machines and other medical products. The pair purchased a Philips Xper system from a reseller and used a computer program called Rainbow Crack to gain access to three previously configured, password-protected user accounts. The well-known program breaks cryptographic hash values that systems give to user names and passwords.
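The principle behind tools like Rainbow Crack can be sketched in a few lines. A rainbow table is essentially a precomputed, compressed map from hash values back to the passwords that produced them; the simplified version below uses a plain dictionary instead of hash chains, and MD5 stands in for whatever unsalted hash the target system might use (both are assumptions for illustration):

```python
import hashlib

# Simplified illustration of the idea behind precomputed-hash attacks:
# if passwords are stored as unsalted hashes, an attacker can compute
# hash -> password for a dictionary of candidates in advance and reverse
# any matching stored hash instantly. (Real rainbow tables compress this
# table using hash chains; the candidate words here are made up.)
candidates = ["admin", "1234", "letmein", "service"]

lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

def crack(stored_hash):
    # Instant reversal by table lookup; None if not precomputed.
    return lookup.get(stored_hash)

stolen = hashlib.md5(b"1234").hexdigest()
print(crack(stolen))  # recovers "1234" without any online guessing
```

Salting each stored hash defeats this precomputation, which is one reason the unsalted schemes found on older systems are such soft targets.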
“The most vulnerable are actually medical machines—MRI [magnetic resonance imaging] machines, X-ray machines and other medical instruments that are now on hospital networks,” noted John Pescatore, director of emerging security trends at the SANS Institute, a private company based in Bethesda, Md., that specializes in information security and cybersecurity training. “Those tend to run embedded Windows software (including no longer supported versions such as Windows XP) that is hard to patch and often goes unpatched for years. Those are also attractive targets because they can often be used to directly capture electronic health records. The press attention tends to focus on implanted devices or personal fitness devices but those are actually more difficult to attack since they use a variety of operating system software and they don’t store any valuable data to an attacker. They are vulnerable mainly because most of them have been built without considering security.”
The 2013 DHS alert advised healthcare facilities to examine their systems for problems and implement controls to better protect them from unauthorized users. Sage advice, certainly, but even stronger defenses are liable to attack as well.
Essentia Health’s surgical robots, for instance, hid behind firewalls but were still accessible through an off-the-shelf vulnerability scanner.
Security Out the Window
Most networked medical devices contain software and firmware that evolved similarly to other technologies—as an uneven and inconsistent mix of different versions, standards and implementation approaches, cybersecurity experts claim. The development of these digital programs was driven by patients’ needs and manufacturer preferences rather than an overarching set of security standards or best practices.
Consequently, connected devices employ various operating environments, architectures, communications methods and networking back ends. They are an amalgamation of technologies, using bits and pieces of the systems best suited for individual device size and type.
Large devices typically are more standardized, with off-the-shelf hardware and software components akin to those in a doctor’s office, medical and security specialists noted in a recent report. “An MRI, for instance, might run a UNIX subsystem on the device, with a Windows front-end for controlling and viewing images,” states the analysis prepared by the Cyber Statecraft Initiative at the Brent Scowcroft Center on International Security at the Washington, D.C.-based Atlantic Council. “Smaller devices tend to be more specialized. Since a pacemaker needs an extremely long battery life and a low-consumption processor, it would more likely use a custom operating environment.”
A device’s communication technology system is likely to be more standardized than its other components, the Atlantic Council report said. A bedside infusion pump needs long-range connectivity capability, as it potentially must link with a hospital’s Wi-Fi, nurses’ station and electronic medical records system. A pacemaker, on the other hand, operates quite effectively with Bluetooth technology, the networking vehicle that connects mobile phones to wireless earpieces and tablets to wireless keyboards.
Complicating this grab-bag of connectivity technology are older medical devices, which often use outdated or obsolete operating system software. Such archaic programs can remain in use for years, or even decades, exposing both hospitals and manufacturers to old security risks. Case in point: Conficker, an unusually virulent computer worm that has sickened Microsoft Windows operating systems for more than half a decade, infected 104 devices at the James A. Haley Veterans Hospital in Tampa, Fla., in February 2012. The infected systems included components of a GE Precision MPI X-ray machine, a Hologic Inc. mammography viewing device and a Siemens e.cam gamma camera for nuclear medicine studies.
Hospital officials never determined the cause of the Conficker outbreak, although such viruses often spread via infected thumb drives that vendors plug in during software updates. All affected devices were repaired without delays to patient care.
Some hospitals aren’t so lucky, though. A 2010 Conficker infection at a U.S. Department of Veterans Affairs facility in New Jersey briefly closed a catheterization lab there, forcing the institution to undergo a $40,000 software reformatting fix. The remedy involved clearing computer memory of all code the Conficker worm downloaded from the Internet and saved in each computer’s memory—a treatment that cannot be administered by simple virus scans.
Malware infections have become particularly troublesome for the nation’s VA hospitals due to their older equipment. VA records show that malware has infected at least 327 devices at the department’s hospitals since 2009, contaminating products like X-ray machines, imaging equipment, eye-exam scanners, electrocardiograph stress analyzers and lab equipment.
“Some hospital systems have malware that we haven’t seen on our computers in a long time simply because they don’t have adequate protection built in,” said Melissa Masters, director of electrical, software and systems engineering at Battelle, a global nonprofit research and development organization headquartered in Columbus, Ohio. “Many of the devices in hospitals are older. They are connected to networks, but these devices might be so old that they don’t have the proper protection and that makes them vulnerable to very old (and new) malware. Medical devices have a very long use life—some might be in use for five, 10, 15, sometimes 20 years. I’ve seen devices that have been in use for 25 years. Obviously, when those products were designed, cybersecurity was not really a concern like it is today. They wouldn’t have the proper architecture and requirements needed for security to be baked in at the point of design, so they are vulnerable.”
The most endangered medical devices are those running variants of Windows, a common hacker target. The Beth Israel Deaconess Medical Center in Boston, Mass., (another Conficker victim) uses 664 such devices, according to National Institute of Standards and Technology data from October 2012. Of that total, 90 percent (600) still run the original Windows XP program from 2001, which no longer receives technical support from Microsoft. Seventeen machines have no security support, including an MRI loyalist to Windows 95 (support for that system expired Dec. 31, 2001).
A software update would be the most logical solution here, but the hospital (certainly not the only institution dependent on old operating systems) has frequently quibbled with manufacturers over the proper regulatory channels for modifications. U.S. Food and Drug Administration (FDA) regulations, however, place the onus on vendors to ensure their systems are secured with encryption and authentication before selling them to customers, and to fix older programs already in the field. The guidelines, finalized last fall, also include a cybersecurity clause that allows post-market devices to be patched without requiring FDA recertification.
“... the FDA recommends that medical device manufacturers and healthcare facilities take steps to assure that appropriate safeguards are in place to reduce the risk of failure due to cybersecurity threats, which could be caused by the introduction of malware into the medical equipment or unauthorized access to configuration settings in medical devices and hospital networks,” the agency’s guidelines state. “Manufacturers are responsible for remaining vigilant about identifying risks and hazards associated with their medical devices, including risks related to cybersecurity, and are responsible for putting appropriate mitigations in place to address patient safety and assure proper device performance.”
Hence, manufacturers must act as both security watchdogs and repair technicians, building resistance to potential malware infections and/or hacks in new devices while also patching the holes in their legacy products. It’s a daunting task fraught with challenges.
Perhaps the most taxing issue to address in device cybersecurity is the lack of universal solutions. The FDA requires companies to incorporate cybersecurity functionality into new product designs and even recommends the kind of capabilities that should be addressed, including layered authentication levels and timed usage sessions (the latter ensures the device is connected to the network only as long as necessary). But the agency gives firms the freedom to devise their own security strategies based in part on the device, its intended use, overall vulnerability concerns, and patient risk.
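The "timed usage session" idea can be sketched simply: the connection carries its own expiry, and commands are refused once it lapses, so the device is networked only as long as needed. The class, timeout value and injectable clock below are illustrative assumptions, not any agency-specified design:

```python
import time

# Sketch of a timed usage session: a device connection that expires
# automatically. All names and the 300-second window are hypothetical.
class TimedSession:
    def __init__(self, timeout_seconds, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock          # injectable for testing
        self.opened_at = self.clock()

    def is_active(self):
        return (self.clock() - self.opened_at) < self.timeout

    def send(self, command):
        if not self.is_active():
            raise PermissionError("session expired; re-authenticate")
        return f"sent: {command}"

# With an injectable clock we can demonstrate expiry without waiting:
t = [0.0]
session = TimedSession(timeout_seconds=300, clock=lambda: t[0])
print(session.send("read_telemetry"))  # within the window: allowed
t[0] = 301.0
print(session.is_active())  # False; further commands are refused
```

The design choice worth noting is that expiry is enforced at every command, not just at connection time, so a hijacked but idle session confers nothing once the window closes.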
Those strategies, accordingly, are likely to be as diverse as the products they protect: A defibrillator, for example, will require more stringent security than a blood pressure monitoring device.
Manufacturers also must balance security needs with emergency access and credential-management controls. It is important for life-saving medical devices to be both tamper-resistant yet still accessible in critical situations, security gurus note. Such an ideal is difficult to achieve, since strict safeguards potentially can block emergency medical treatment, while hard-coded passwords (typically listed in the device user manual) are easily hackable.
Device security is exceptionally challenging for older models. A quick fix like anti-virus software can help mitigate certain risks but also can introduce its own dangers. One-third of Rhode Island’s 19 hospitals cancelled elective surgeries and stopped treating minor injuries in emergency rooms after an automatic anti-virus software update in 2010 accidentally misclassified a critical Windows DLL (dynamic link library) as malicious. The problem with anti-virus software is that by definition, it is a post-market afterthought to rectify design flaws.
“Medical device manufacturers have to start thinking like IT organizations and adopt a cradle-to-grave mentality. They need to say, ‘I own this device from cradle to grave and I have to make sure it is secure and stays up to date for its entire life cycle,’” HITRUST’s Penn said. “All those things that have become part of IT development and application development now have to be built into medical device design. Manufacturers need to take ownership of this and not leave it up to hospitals to figure it out on their own. These [devices], for all intents and purposes, are computers and should be treated that way.”
A Collaborative Effort
If only the “Homeland” writers had consulted Barnaby Jack.
The late cybersecurity expert/computer programmer, who died of a drug overdose in July 2013, had only one objection to the show’s defibrillator hack storyline: The device serial number.
“I watched the TV show ‘Homeland’ for the first time a few months ago,” Jack wrote in a February 2013 blog post. “This particular episode had a plot twist that involved a terrorist remotely hacking into the pacemaker of the vice president of the United States ... My first thought after watching this episode was ‘TV is so ridiculous! You don’t need a serial number!’”
Jack proved as much through his research into radio-frequency medical implants. During his tenure at computer security firm IOActive, Jack helped create software for research purposes that can wirelessly scan for new model implantable cardioverter defibrillators (ICDs) and pacemakers without the need for a serial or model number. The software allows a programmer to rewrite the device firmware, modify settings and parameters, and in the case of ICDs, deliver high-voltage shocks remotely.
Regardless of its intended purpose, however, the software created by Jack and other “white hat” hackers concerns U.S. regulators for the security vulnerabilities it exposes in medical devices as well as the harm it potentially can inflict upon patients. The medtech industry has mostly avoided criminal hackers’ crosshairs thus far (“Homeland”-style attacks remain blessedly exiled to Hollywood), but regulators fear the sector is living on borrowed time.
Considering the recent spate of cyberattacks on healthcare systems, time may indeed be running out. Over the past year, hackers targeted Boston Children’s Hospital with a distributed-denial-of-service attack, and stole non-medical patient data from 4.5 million Community Health Systems records. And, the DHS is now investigating roughly two dozen cases of reported cybersecurity flaws in medical devices made by Hospira Inc., Medtronic plc and St. Jude Medical Inc.
“I view it as [us being] in an entire village of houses with no locked doors,” Kevin Fu, a computer scientist focused on medical devices and cybersecurity at the University of Michigan, told Scientific American after the FDA released its draft guidance in 2013. “It doesn’t take a rocket scientist to think we should have some risk mitigation strategies in place, because usually the bad guys are a couple steps ahead of the good guys.”
Regulators, however, are attempting to reverse the order by encouraging manufacturers to consider cybersecurity risks as part of the design and development process. Security is difficult to add on retroactively, experts contend, and is most effective when designed in.
It is virtually impossible for medical devices to be truly threat-free in the ever-expanding world of mobile health. But their vulnerabilities can be managed and significantly reduced through better collaboration among industry, regulators, IT professionals and medical practitioners, a change in the regulatory approval paradigm, and more feedback from patients, security experts contend.
“No one person or organization maintains sole responsibility for securing medical devices,” said Scott Erven, who spearheaded the Essentia Health security study and now works as an associate director of IT consulting for Protiviti, a global business consulting and internal audit firm. “It takes collaboration between the various stakeholders. Those include providers, device manufacturers, and regulatory agencies such as the FDA. Each stakeholder has to continue to mature its security practices in order to ensure patient safety remains a top priority. We don’t have all the answers for every challenge facing the industry. But what we can learn from other industries is not to go on doing what we have already determined has failed. By avoiding known failures, we can accelerate progress on embedding security in medical devices.”
Or risk the consequences.
Correlate_Access_Model_P1501VR
Somewhere in a nearby house, two well-dressed older men exchange pleasantries. They soon delve into serious discussion.
More code is added:
<w>accForm>bo§expan-abb=”ACTIV”>ir</ex <expForm><e§er;”m med krok.”>M</hi></ex
The two men’s conversation becomes heated. The older of the pair appears agitated.
The programmer is almost done. Only a few (crucial) lines left: Active Connection
File_ Source
Therapies_ Off
Remote § <standby>vsupi/jenMart>>
RTS422256H_eptest_fibrillation-on-1.0
He hits the return key, then turns to the second computer screen. And waits.
Almost instantly, the house conversation is interrupted. The older man—visibly angered by something his younger counterpart said—experiences some chest discomfort. He is momentarily winded.
“You okay?” the younger man asks.
“I don’t know,” the older gentleman responds, half-chuckling. He quickly recovers, however, and the conversation resumes, becoming significantly more cantankerous.
Back at the warehouse, the techie carefully monitors his second computer screen, eagerly awaiting the results of his carefully written code. The screen is filled with lots of colors and numbers: There’s an electrocardiogram (in green) running in the upper left corner; a blood pressure reading below that in orange, heart rate stats in the upper right corner (in green) and body temperature (in light purple) on the bottom. All vital signs are normal.
Meanwhile, the discussion inside the house continues to deteriorate. Barbs are traded. Accusations fly. For a moment, the gray-haired man forgets about his earlier malaise.
The programmer starts typing on the first computer. And slowly, the numbers on the second monitor begin to change. Blood pressure rises to 150/100 and heart rate spikes to 124.
Once again, the house conversation is disrupted, only this time it’s not temporary. Cut off mid-sentence, the older man suddenly collapses into a chair, clutching his chest. “Call a doctor,” he tells his comrade.
The other fellow pauses. He notices his cohort struggling for breath and purposely blocks his attempt to summon help. “What are you doing, huh??” the older man asks incredulously. “What are you doing?!?!” He begins to fall to the floor, but is caught by the younger guy, who whispers, “You still don’t get it do you? I’m killing you.”
It’s murder in the most passive way, though. Calling a doctor wouldn’t have saved the victim, for his heart attack was triggered not by arterial plaque or a blood clot but rather a defibrillator purposely (and wirelessly) programmed to malfunction. The true assassin in this case is the less obvious one—namely, the anonymous techie who hacked into a life-saving medical device in some far-away repository.
Fortunately, such high-tech hijackings exist only in Hollywood (for now, anyway). In a 2012 episode of the Emmy award-winning Showtime series “Homeland,” al-Qaeda terrorists hack into the U.S. vice president’s defibrillator and induce a fatal heart attack (with the help of a war hero-turned U.S. congressman).
Television critics scoffed at the plot; one commentator even called it “preposterous.” Former Vice President Dick Cheney, however, found the storyline disturbing.
Cheney, who suffered the first of five heart attacks at age 37, received a defibrillator in 2001 but disabled its wireless feature six years later for his own protection. “I was aware of the danger that existed...I found it credible,” Cheney told television news program “60 Minutes” of the “Homeland” plot. “I know from the experience we had and the necessity for adjusting my own device that it was an accurate portrayal of what was possible.”
Almost anything is possible, really, as medical and consumer health equipment join the billions of devices that already comprise the Internet of Things: Gamblers looking for fail-safe payoffs might target a professional athlete’s health records, for example; extortionists could sabotage a hospital’s operating system or unleash malware within a medical device; and addicts might hack their own infusion pumps to increase drug flow.
“As [medical] devices become more of a part of the Internet of Things, and patients start to report more from home, maybe using a new Bluetooth stack, could they get infected with malware at some point in the future? Maybe, malware exists on healthcare networks and by adding these devices they can become exposed and ultimately susceptible,” said Steve Penn, senior director of common security framework (CSF) development and education programs for the Health Information Trust Alliance (HITRUST). The Frisco, Texas-based organization, in collaboration with healthcare, technology and information security leaders, has established a CSF available to all groups that create, access, store or exchange sensitive or regulated data.
“Is it possible to send out phishing emails or messages from a medical device that eat up the battery on that device unintentionally so that all of a sudden, the battery is going bad on a defibrillator? Could that happen sometime in the future? Maybe,” Penn continued. “There are a lot of maybes here because we know from research that these kinds of things can be done. We’re just not sure how it’s going to be used or how it’s going to be exploited. We just know that it can happen as information security researchers have shown. There are a lot of ‘what ifs’ when it comes to the Internet of Things and medical devices.”
But there are just as many certainties: Connected medical devices are revolutionizing healthcare delivery and treatment, allowing clinicians to better evaluate, track and care for patients. They also bring operational and cost efficiencies to healthcare providers. Yet these devices particularly are vulnerable to cyberattacks because—unlike IT systems—they generally are not designed with security in mind. Thus, even basic safeguards like password protections are not embedded in many products.
Including, amazingly, life-saving devices. A two-year study of all medical equipment used in Essentia Health facilities (Idaho, Minnesota, North Dakota and Wisconsin) discovered wirelessly connected infusion pumps with either no password protections or hardcoded passwords that were weak and universal to all customers. The analysis also found several defibrillator vendors that used default (“admin” or “1234”) or simple passwords for the Bluetooth stack where test shock configurations were written.
There was little or no protection for cryogenics or blood/pharmaceutical refrigerated storage systems, either. And, those guarded by hardcoded passwords were easily accessed, potentially exposing the systems to error or deliberate manipulation.
Storage systems for X-rays and other images at Essentia Health facilities were equally as vulnerable. Many images were backed up in centralized storage units that required no authentication for admission, and while some of the front-end systems required hardcoded passwords for image access, the backup was completely unprotected.
Such defensive gaps are just the tip of the cybersecurity iceberg. An estimated 300 medical devices have hard-coded password vulnerabilities, according to the U.S. Department of Homeland Security (DHS). The products include surgical and anesthesia devices, ventilators, patient monitors, laboratory and analysis equipment, defibrillators, and of course, drug infusion pumps.
“The affected devices have hard-coded passwords that can be used to permit privileged access to devices such as passwords that would normally be used only by a service technician,” read a June 13, 2013, DHS alert on medical device hard-coded passwords. “In some devices, this access could allow critical settings or the device firmware to be modified.”
The alert was issued only months after two researchers successfully hacked a Philips information management system that directly interfaced with X-ray machines and other medical products. The pair purchased a Philips Xper system from a reseller and used a computer program called Rainbow Crack to gain access to three previously configured, password-protected user accounts. The well-known program breaks cryptographic hash values that systems give to user names and passwords.
“The most vulnerable are actually medical machines—MRI [magnetic resonance imaging] machines, X-ray machines and other medical instruments that are now on hospital networks,” noted John Pescatore, director of emerging security trends at the SANS Institute, a private company based in Bethesda, Md., that specializes in information security and cybersecurity training. “Those tend to run embedded Windows software (including no longer supported versions such as Windows XP) that is hard to patch and often goes unpatched for years. Those are also attractive targets because they can often be used to directly capture electronic health records. The press attention tends to focus on implanted devices or personal fitness devices but those are actually more difficult to attack since they use a variety of operating system software and they don’t store any valuable data to an attacker. They are vulnerable mainly because most of them have been built without considering security.”
The 2013 DHS alert advised healthcare facilities to examine their systems for problems and implement controls to better protect them from unauthorized users. Sage advice, certainly, but even stronger buffers remain vulnerable to attack.
Essentia Health’s surgical robots, for instance, hid behind firewalls but were still accessible through an off-the-shelf vulnerability scanner.
Security Out the Window
Most networked medical devices contain software and firmware that evolved similarly to other technologies—as an uneven and inconsistent mix of different versions, standards and implementation approaches, cybersecurity experts claim. The development of these digital programs was driven by patients’ needs and manufacturer preferences rather than an overarching set of security standards or best practices.
Consequently, connected devices employ various operating environments, architectures, communications methods and networking back ends. They are an amalgamation of technologies, using bits and pieces of the systems best suited for individual device size and type.
Large devices typically are more standardized, with off-the-shelf hardware and software components akin to those in a doctor’s office, medical and security specialists noted in a recent report. “An MRI, for instance, might run a UNIX subsystem on the device, with a Windows front-end for controlling and viewing images,” states the analysis prepared by the Cyber Statecraft Initiative at the Brent Scowcroft Center on International Security at the Washington, D.C.-based Atlantic Council. “Smaller devices tend to be more specialized. Since a pacemaker needs an extremely long battery life and a low-consumption processor, it would more likely use a custom operating environment.”
A device’s communication technology system is likely to be more standardized than its other components, the Atlantic Council report said. A bedside infusion pump needs long-range connectivity capability, as it potentially must link with a hospital’s Wi-Fi, nurses’ station and electronic medical records system. A pacemaker, on the other hand, operates quite effectively with Bluetooth technology, the networking vehicle that connects mobile phones to wireless earpieces and tablets to wireless keyboards.
Complicating this grab-bag of connectivity technology are older medical devices, which often use outdated or obsolete operating system software. Such archaic programs can remain in use for years, or even decades, exposing both hospitals and manufacturers to old security risks. Case in point: Conficker, an unusually virulent computer worm that has sickened Microsoft Windows operating systems for more than half a decade, infected 104 devices at the James A. Haley Veterans Hospital in Tampa, Fla., in February 2012. The infected systems included components of a GE Precision MPI X-ray machine, a Hologic Inc. mammography viewing device and a Siemens e.cam gamma camera for nuclear medicine studies.
Hospital officials never determined the cause of the Conficker outbreak, although such infections are often introduced by vendors via infected thumb drives during software updates. All affected devices were repaired without delays to patient care.
Some hospitals aren’t so lucky, though. A 2010 Conficker infection at a U.S. Department of Veterans Affairs facility in New Jersey briefly closed a catheterization lab there, forcing the institution to undergo a $40,000 software reformatting fix. The remedy involved purging each computer’s memory of all code the Conficker worm had downloaded from the Internet, a cleanup that simple virus scans cannot perform.
Malware infections have become particularly troublesome for the nation’s VA hospitals due to their older equipment. VA records show that malware has infected at least 327 devices at the department’s hospitals since 2009, contaminating products like X-ray machines, imaging equipment, eye-exam scanners, electrocardiograph stress analyzers and lab equipment.
“Some hospital systems have malware that we haven’t seen on our computers in a long time simply because they don’t have adequate protection built in,” said Melissa Masters, director of electrical, software and systems engineering at Battelle, a global nonprofit research and development organization headquartered in Columbus, Ohio. “Many of the devices in hospitals are older. They are connected to networks, but these devices might be so old that they don’t have the proper protection and that makes them vulnerable to very old (and new) malware. Medical devices have a very long use life—some might be in use for five, 10, 15, sometimes 20 years. I’ve seen devices that have been in use for 25 years. Obviously, when those products were designed, cybersecurity was not really a concern like it is today. They wouldn’t have the proper architecture and requirements needed for security to be baked in at the point of design, so they are vulnerable.”
The most endangered medical devices are those running variants of Windows, a common hacker target. The Beth Israel Deaconess Medical Center in Boston, Mass., (another Conficker victim) uses 664 such devices, according to National Institute of Standards and Technology data from October 2012. Of that total, 90 percent (600) still run the original Windows XP program from 2001, which no longer receives technical support from Microsoft. Seventeen machines have no security support at all, including an MRI machine still running Windows 95 (support for that system expired Dec. 31, 2001).
A software update would be the most logical solution here, but the hospital—certainly not the only institution still dependent on old operating systems—has frequently quibbled with manufacturers over the proper regulatory channels for modifications. U.S. Food and Drug Administration (FDA) regulations, however, place the onus on vendors to ensure their systems are secured with encryption and authentication before selling them to customers and to fix older programs already in the field. The guidelines, finalized last fall, also include a cybersecurity clause that allows post-market devices to be patched without requiring FDA recertification.
“... the FDA recommends that medical device manufacturers and healthcare facilities take steps to assure that appropriate safeguards are in place to reduce the risk of failure due to cybersecurity threats, which could be caused by the introduction of malware into the medical equipment or unauthorized access to configuration settings in medical devices and hospital networks,” the agency’s guidelines state. “Manufacturers are responsible for remaining vigilant about identifying risks and hazards associated with their medical devices, including risks related to cybersecurity, and are responsible for putting appropriate mitigations in place to address patient safety and assure proper device performance.”
Hence, manufacturers must act as both security watchdogs and repair technicians, building resistance to potential malware infections and/or hacks in new devices while also patching the holes in their legacy products. It’s a daunting task fraught with challenges.
Perhaps the most taxing issue to address in device cybersecurity is the lack of universal solutions. The FDA requires companies to incorporate cybersecurity functionality into new product designs and even recommends the kind of capabilities that should be addressed, including layered authentication levels and timed usage sessions (the latter ensures the device is connected to the network only as long as necessary). But the agency gives firms the freedom to devise their own security strategies based in part on the device, its intended use, overall vulnerability concerns, and patient risk.
Those strategies, accordingly, are likely to be as diverse as the products they protect: A defibrillator, for example, will require more stringent security than a blood pressure monitoring device.
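The FDA-suggested capabilities, layered authentication levels and timed usage sessions, can be sketched together. This is a minimal illustration under assumed names: `Session`, `SESSION_TTL`, and the role and action strings are all hypothetical, not drawn from any agency guidance or real device API.

```python
import time

# Hypothetical session lifetime: the device stays on the network only
# for a bounded window, then must be re-authenticated.
SESSION_TTL = 300  # seconds

class Session:
    def __init__(self, user, role):
        self.user = user
        self.role = role  # layered access: e.g. "clinician" vs. "technician"
        self.expires = time.time() + SESSION_TTL

    def is_valid(self):
        # Timed usage: the session silently dies after SESSION_TTL seconds.
        return time.time() < self.expires

    def authorize(self, action):
        # Layered authentication: privileged actions require the
        # technician role in addition to a live session.
        if not self.is_valid():
            return False
        if action == "modify_firmware":
            return self.role == "technician"
        return True
```

The design point is that expiry and role checks sit in one choke point, so a stolen clinician credential cannot rewrite firmware and an idle connection cannot be hijacked indefinitely.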
Manufacturers also must balance security needs with emergency access and credential-management controls. It is important for life-saving medical devices to be both tamper-resistant and accessible in critical situations, security gurus note. Such an ideal is difficult to achieve, since strict safeguards potentially can block emergency medical treatment, while hard-coded passwords (typically listed in the device user manual) are easily hackable.
Device security is exceptionally challenging for older models. A quick fix like anti-virus software can help mitigate certain risks but also can introduce its own dangers. One-third of Rhode Island’s 19 hospitals cancelled elective surgeries and stopped treating minor injuries in emergency rooms after an automatic anti-virus software update in 2010 accidentally misclassified a critical Windows DLL (dynamic link library) as malicious. The problem with anti-virus software is that by definition, it is a post-market afterthought to rectify design flaws.
“Medical device manufacturers have to start thinking like IT organizations and adopt a cradle to grave mentality. They need to say, ‘I own this device from cradle to grave and I have to make sure it is secure and stays up to date for its entire life cycle,’ “ HITRUST’s Penn said. “All those things that have become part of IT development and application development now have to be built into medical device design. Manufacturers need to take ownership of this and not leave it up to hospitals to figure it out on their own. These [devices], for all intents and purposes, are computers and should be treated that way.”
A Collaborative Effort
If only the “Homeland” writers had consulted Barnaby Jack.
The late cybersecurity expert/computer programmer, who died of a drug overdose in July 2013, had only one objection to the show’s pacemaker hack storyline: the device serial number.
“I watched the TV show ‘Homeland’ for the first time a few months ago,” Jack wrote in a February 2013 blog. “This particular episode had a plot twist that involved a terrorist remotely hacking into the pacemaker of the vice president of the United States ... My first thought after watching this episode was ‘TV is so ridiculous! You don’t need a serial number!’ “
Jack proved as much through his research into radio-frequency medical implants. During his tenure at computer security firm IOActive, Jack helped create software for research purposes that can wirelessly scan for new model implantable cardioverter defibrillators (ICDs) and pacemakers without the need for a serial or model number. The software allows a programmer to rewrite the device firmware, modify settings and parameters, and in the case of ICDs, deliver high-voltage shocks remotely.
Regardless of its intended purpose, however, the software created by Jack and other “white hat” hackers concerns U.S. regulators for the security vulnerabilities it exposes in medical devices as well as the harm it potentially can inflict upon patients. The medtech industry has mostly avoided criminal hackers’ crosshairs thus far—”Homeland”-style attacks remain blessedly exiled to Hollywood—but regulators fear the sector is living on borrowed time.
Considering the recent spate of cyberattacks on healthcare systems, time may indeed be running out. Over the past year, hackers targeted Boston Children’s Hospital with a distributed-denial-of-service attack and stole non-medical patient data from 4.5 million Community Health Systems records. And the DHS is now investigating roughly two dozen cases of reported cybersecurity flaws in medical devices made by Hospira Inc., Medtronic plc and St. Jude Medical Inc.
“I view it as [us being] in an entire village of houses with no locked doors,” Kevin Fu, a computer scientist focused on medical devices and cybersecurity at the University of Michigan, told Scientific American after the FDA released its draft guidance in 2013. “It doesn’t take a rocket scientist to think we should have some risk mitigation strategies in place, because usually the bad guys are a couple steps ahead of the good guys.”
Regulators, however, are attempting to reverse the order by encouraging manufacturers to consider cybersecurity risks as part of the design and development process. Security is difficult to add on retroactively, experts contend, and is most effective when designed in.
It is virtually impossible for medical devices to be truly threat-free in the ever-expanding world of mobile health. But their vulnerabilities can be managed and significantly reduced through better collaboration among industry, regulators, IT professionals and medical practitioners, a change in the regulatory approval paradigm, and more feedback from patients, security experts contend.
“No one person or organization maintains sole responsibility for securing medical devices,” said Scott Erven, who spearheaded the Essentia Health security study and now works as an associate director of IT consulting for Protiviti, a global business consulting and internal audit firm. “It takes collaboration between the various stakeholders. Those include providers, device manufacturers, and regulatory agencies such as the FDA. Each stakeholder has to continue to mature its security practices in order to ensure patient safety remains a top priority. We don’t have all the answers for every challenge facing the industry. But what we can learn from other industries is not to go on doing what we have already determined has failed. By avoiding known failures, we can accelerate progress on embedding security in medical devices.”
Or risk the consequences.