Christopher Gates, Director of Product Security, Velentium
03.10.23
I’m proud to kick off this first column of a regular series devoted to the topic of medical device cybersecurity. In this space, I will address a wide variety of topics related to cybersecurity, including technical details, regulations, business needs, secure development, and hacking tools and techniques.
Before that, however, I will attempt to allay any questions about why you may want to consider what I have to say on the topic. I am a medical device developer and have been involved in areas related to this topic for 51 years (I was a teenager when I started). I’ve worked on low-risk medical devices through to high-risk external medtech and even implantables. My roles have included software/firmware design and implementation (my formal training and first love), hardware design (learned at my daddy’s knee; he was an electrical engineer), systems engineering, regulatory affairs, and management of development teams, but I have always been a hacker.
I started my hacking journey at the tender age of three, tearing apart mechanical devices around my home. These early attempts at reverse engineering were fantastic for a young, curious mind…but not well received by my parents. Naturally, lockpicking came next; as a child in elementary school, I carried lockpicks. (How does a child come by these “tools of the trade”? I discovered that a welding rod, pounded flat with a hammer and then filed and bent, would make excellent hook picks, rakes, and turning keys. Never discount a precocious child with too much time on his hands.) I would routinely open the bicycle locks at the schoolyard bike racks. This was how I learned of school administrators and the fact that some people didn’t share my vision and love for defeating security mitigations.
Fast forward to high school and college, the first generation of computing, with no access security at all. Even the early ARPANET (precursor to the internet) allowed me to roam throughout the various connected government mainframes, where security was poor to nonexistent. (No, it wasn’t as exciting or as dangerous as the 1983 movie “War Games”; well, most of the time it wasn’t.)
As an engineer creating medical devices, I frequently found myself crafting mitigations to protect intellectual property or to prevent cloning of the consumable accessories for products I had developed.
This finally reached a tipping point when I was at a medical device manufacturer, assisting with the low-power wireless protocols used in its medical devices, which were then very publicly hacked. This propelled me into official roles protecting medical devices, crafting approaches to incorporate security into the development environment, and finally working with regulatory and standards bodies to socialize what I learned. Most recently, Axel Wirth, Jason Smith, and I, along with several other industry leaders, authored the book “Medical Device Cybersecurity for Engineers and Manufacturers”—the first and best how-to guide available on the topic (although I may be biased about that).
Today, I’m working daily with all aspects of medical device cybersecurity, assisting clients and internal development teams with trying to move our industry into a more secure posture. As this is the first installment of this column, I will be a little more flippant than usual and relate to you, the reader, some of the more ridiculous myths plaguing our medical device industry as it relates to cybersecurity.
“It’s a Shared Responsibility”
If you have read anything about medical device cybersecurity, you have probably seen this chestnut. Nothing could be further from the truth. This phrase is meant to convey that both the medical device manufacturers (MDMs) and the healthcare delivery organizations (HDOs) share responsibility for securing medical devices. Let's examine this falsehood: HDOs are in the business of saving and improving the quality of patients’ lives. When they look at the latest device introduced into their environment, they have close to no knowledge of what is in it, how it performs, or how “brittle” it is. Conversely, MDMs know all of these aspects and can determine when mitigating controls need to be put in place to ensure the safe operation of the device by the HDO. It is 99% the responsibility of the MDM to secure their products.

Likelihood
The concept of likelihood is an excellent consideration when discussing naturally occurring events. This is why likelihood crops up so much in the quality assurance (QA) world; QA can bring statistics to bear on the likelihood of a specific event occurring, such as rust oxidizing a bridge. This can be mathematically quantified as the likelihood of the bridge failing in a given period of time.

What if, however, the rust becomes sentient and decides to focus its combined oxidation on one specific girder or bolt on the bridge? What happens to the mathematical models of failure? Does likelihood still work in this case? The answer is “no.” Where there is malicious intent, you cannot assign a meaningful likelihood value. It literally has no relevance to this domain, yet it is often the go-to topic for a device developer trying to avoid correcting a security weakness.
I have heard people claim likelihood is the odds of an attacker finding a particular vulnerability. Please explain how guessing the unknown hacker’s odds helps us assess a vulnerability or mitigate it.
Further, likelihood provides a convenient place to “game” the assessment of vulnerabilities—another downside. If your vulnerability scoring system includes likelihood and you want to ignore certain vulnerabilities in your device design (because of cost, schedule, or lack of understanding), you simply indicate likelihood as “rare” to bias the assessment to the point where it can be ignored.
I once had a client’s otherwise intelligent QA representative tell me the likelihood was “rare” because the device had never before been attacked in its 15 years of existence. As a result, the overall vulnerability assessment score for an attack that was, at that very moment, actually underway came out low and could be ignored.
Attacker Skill Level
If you were designing an embedded system to be secure against one particular person—my imaginary brother Steve, for example—his skill as a hacker could be rightly questioned. Steve isn’t a very bright guy and doesn’t really understand computers, but he is nice enough in his own way. It would be easy to implement security mitigations to prevent Steve from attacking the embedded system.

Once we consider that we want this product to go out to the entire world (or even just one country), however, the hacking skill level of Steve doesn’t mean anything. In fact, it is the combined maximum skill level of everyone in the world (or country) we now have to guard against. In addition, since we live in a world connected via the internet, today’s nation-state-level advanced attack will be downloadable by everyone in only a couple of days. Hacks are constantly being democratized: they are posted in easily accessible places (think: GitHub) and included in pen-testing frameworks (think: Metasploit) so anyone can easily replicate them. Ars Technica ran an article recently on how current-gen AI chatbots are enabling “script kiddies” to auto-generate functioning malware, thanks in part to this process.1
So, as far as the security of your embedded system is concerned, every attacker might as well be maximally skilled. If it can be hacked, it will be hacked.
Attacker Motivation
Money…Challenge…Fame…Activism…Revenge…Murder…Fun…Competitor…but in the end, do you care? Not really. The only (very slight) consideration in this area is that any vulnerability that can be easily monetized will be the first vulnerability in the system to be attacked. My favorite victim’s lament is “Why are they attacking us? We are good people.” The answer is the attacker doesn’t care about you, isn’t thinking about you, and will never care about you, so you should not care about them and their motives.

Legacy Devices
I can’t turn a Kia into a Formula 1 race car. Likewise, you cannot somehow magically bolt on a security solution to existing legacy products to make them secure. It simply can’t be accomplished. That said, there are organizations that will sell you all kinds of tools to supposedly accomplish this.

The only answer with legacy products is to dispose of them properly and purchase new, securely designed and implemented products. This policy has the added benefit of encouraging manufacturers to create secure products.
The Next Release
This one really isn’t unique to the security domain; it is more a function of Corporate America and what some middle manager thinks they can get away with. Specifically, some features won’t be implemented until the “next release” because the project schedule is more important than the feature.

In five decades of developing new products, I have never seen the “feature” ever get added into the product post-release, nor in the “next release,” nor any subsequent release. Not once.
When we really are talking about just a feature, that may not be a problem (the product may not sell as well as it could have), but when the “feature” is a security mitigation that doesn’t get implemented, it can be a disaster.
“It’s in Binary!”
I have lost track of how many firmware developers have told me (as if it were obvious) that once their C code is compiled to binary, it can’t be reverse-engineered. There are many tools available, commercially and as freeware, that can disassemble a binary back into assembly code and even decompile it back into decent C code in seconds. There are some tools available to help obfuscate the code, but they are just speed bumps that, at best, slow attackers down; they certainly don’t stop them.
Conclusion

Look for this column in future issues. I'll highlight current events unfolding within and impacting the industry, from pending legislation to newly published vulnerabilities, as well as share steadfast principles of the secure development lifecycle. This year promises to be very busy for medical device cybersecurity.

Reference
Christopher Gates is the director of Product Security at Velentium. He has more than 50 years of experience developing and securing medical devices and works with numerous industry-leading device manufacturers. He frequently collaborates with regulatory and standards bodies, including the CSIA, Health Sector Coordinating Council, H-ISAC, Bluetooth SIG, and FDA, to present, define, and codify tools, techniques, and processes that enable the creation of secure medical devices. Gates promotes the use of a “secure development lifecycle,” the industry-leading approach that ultimately eases the burden on developers and ensures high-quality products that work as intended to save and improve lives.