Andrew (A.J.) Tibbetts, Shareholder, Intellectual Property & Technology Practice Group, Greenberg Traurig LLP | 07.20.23
Artificial intelligence (AI) is making headlines for its burgeoning functionality, its rapidly expanding user base, and regulators’ hurry to catch up. AI has long influenced human life, but it is poised to rapidly expand its role, and the medical technology industry will not be left behind. Many companies are already identifying ways in which new “generative AI” technology can be used in their products. Prudent medtech firms are evaluating where AI might be effectively leveraged, where it might be less relevant, and what risks each tool carries.
A Long-Term Role
AI has been intertwined with healthcare for decades. Medical applications were quickly envisioned after AI premiered in the 1950s, with the National Institutes of Health funding an AI lab in the early 1970s. The U.S. Food and Drug Administration first approved an AI algorithm in 1995; more than 500 had been cleared by 2023. These medical applications often detected patterns and deviations in medical images, cardiac signals, and other signals. Recent AI interest focuses on “generative” techniques creating text, images, audio, and more from input data.

Beware Maslow’s Hammer
For every great healthcare application of generative AI, there are others that may add little value. Novel image generation, for example, may be less impactful given that radiology focuses more on accurately capturing anatomy than on striking graphics, though there may be generative applications in image improvement. The same may hold true for audio. Text generation, on the other hand, has long been in use for patient encounter notes, radiology reports, and other medical records. Order processing (lab work or prescriptions) has also leveraged AI, creating orders from natural language. These applications seem ready for new AI techniques and may be particularly advantageous in relieving the documentation burden that often contributes to burnout. Reducing clinician gruntwork can yield more patient time, potentially expanding access while reducing stress.

Risk/Benefit Analysis
The opportunity AI can provide should first be weighed against risks, which may vary between tools and uses. Companies must carefully review service terms and privacy policies for each tool and compare them to intended uses. As these policies can differ widely even between tool versions (e.g., between web and programmatic interfaces, or free and paid versions), tool users should confirm they are reviewing the correct terms.

An immediate risk for regulated medical products is how the incorporation of generative AI will impact regulatory requirements. Many AI tools have been approved, but questions remain as regulators evaluate new technology using decades-old regulatory frameworks. A detailed breakdown is beyond the scope of this article, but those unfamiliar with software regulation should note that approvals may need updating as an AI-powered product improves or adds use cases.
Data Considerations
Whether or not it’s for AI, transmitting patient information to a vendor can trigger data protection requirements, including agreements to execute (e.g., a HIPAA business associate agreement), analyses to perform (e.g., a data protection impact assessment), or consents to obtain. Since regulators are sensitive not only to breaches of law but also to announced privacy policies, companies must consider their own policies (provided to users/customers/partners) for data use.

For each tool, organizations should review the data sent, whether the data is secured during transmission and use, and how the tool will use the data to generate an output. Different data (e.g., non-health data, protected health information, genetic data, de-identified data) may face different requirements for AI usage, so it is important to understand each type of data shared, precisely how the AI tool or its provider will use it (now or in the future), and how the data is treated across jurisdictions. Considerations should include whether the tool or its owner/operator can retain any data for future use; the kind of data that could be retained; whether the data is being filtered/de-identified or retained as-is; necessary permissions for using received data in the future (and what consents are needed to grant those permissions); whether and how input data may be used for future training; the potential risk that training data could be output by the tool in response to a future input; and the ways that information could be output. Careful scrutiny of these questions may be needed for each tool and for each jurisdiction in which a company operates, as some of these practices may be prohibited for certain data, or require opt-in/opt-out for certain data.
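As one illustration of the filtering/de-identification question above, the sketch below redacts a few identifier patterns from text before it would be sent to an external AI tool. The field names and regular expressions here are hypothetical assumptions for illustration only; a real deployment would rely on a vetted de-identification process and legal review, not ad-hoc patterns.

```python
import re

# Hypothetical identifier patterns; a production system would use a vetted
# de-identification library and a reviewed data-handling policy instead.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized identifiers with placeholder tokens before the
    text is transmitted to an external AI tool."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

note = "Patient MRN: 12345678 seen on 3/14/2023; SSN 123-45-6789 on file."
print(redact(note))
```

Even a sketch like this highlights the policy questions in the text: which data types are filtered, which pass through as-is, and whether the redaction is sufficient for the jurisdictions involved.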
Even if protected health information is not input, similar questions apply to any proprietary information provided to an AI tool. Recently, stories circulated of a major company’s employees inputting trade secret information to a chatbot, raising questions of whether that jeopardized trade secret status and whether the chatbot would provide information to future users. True or not, it’s a reminder to carefully consider how to use these tools.
Tool outputs present similar questions. The output data may be sensitive health information carrying its own protection requirements. Medtech firms should be aware of any ownership or license the AI tool may claim in output data, or in IP or products (like their own) that integrate or use the output, as well as potential restrictions on their ability to use and retain outputs. Organizations should also be aware of any ongoing obligations attached to use of tool outputs (e.g., possible feedback of outputs into the tool’s training).
Liabilities surrounding output should also be well understood. For many tools, a leading concern (and the subject of ongoing lawsuits) is whether, when tool output replicates a third party’s data used in training, the output or the training itself violates the third party’s copyright. Liability concerns extend beyond copyright, however. Performance metrics such as a tool’s accuracy and error rate should be understood to gauge the dangers of a given use. Some tools report a confidence level with each answer, which companies and products could use to restrict reliance on low-confidence results, as these carry a higher error risk. Since AI tools can also generate high-confidence inaccurate results, it is important to understand how a tool was trained and how it operates, including the risk of bias in the training data. With health equity currently a paramount concern, understanding the ways in which a tool might produce biased outputs is key to understanding liabilities. Companies should also understand what indemnification the vendor offers.
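The confidence-gating idea above can be sketched as a simple routing policy. The `ToolResult` shape and the threshold value below are assumptions for illustration; real tool APIs differ, many expose no confidence score at all, and the right threshold depends on the tool’s measured accuracy for the specific use case.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    """Hypothetical shape of an AI tool's output (real APIs vary)."""
    text: str
    confidence: float  # 0.0 - 1.0, as reported by the tool

# Assumed policy threshold, to be calibrated against the tool's error rate.
REVIEW_THRESHOLD = 0.85

def route(result: ToolResult) -> str:
    """Gate low-confidence output to human review instead of automatic use.
    Note: tools can be confidently wrong, so this reduces but does not
    eliminate error risk."""
    if result.confidence >= REVIEW_THRESHOLD:
        return "auto-accept"
    return "human-review"

print(route(ToolResult("Finding: no acute abnormality.", 0.97)))  # auto-accept
print(route(ToolResult("Finding: possible nodule.", 0.42)))       # human-review
```

Because high-confidence results can still be wrong, a gate like this complements, rather than replaces, the human-in-the-loop practices discussed later in this article.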
Understanding IP Protection
Intellectual property (IP) concerns can arise for these tools. A product can be protected via patents, copyrights, or potentially trade secrets, regardless of whether it includes a company’s own generative AI models or a third party’s. With the coming AI wave, medtech firms should accompany AI investments with IP investments to protect their products, leverage their own IP in potential legal challenges, or demonstrate competitive barriers to investors. I’ve obtained many patents for healthcare AI, including where a vendor supplied the AI model. In fact, I often counsel against thinking of an AI model itself as the sole patentable invention.

Instead, consider whether the model is a component of a larger product to be patented. Product patents may offer improved detectability and longevity, because the IP might still protect a product’s core value even if the AI changes.
Regarding patent ownership, AI tools themselves cannot be inventors. Even where use of a tool does not result in the tool vendor’s developers being named inventors of your work, companies must understand the tool’s role in the creative process and whether its terms of service impose obligations on a product’s patent filings. Those could include obligations not to file for IP protection at all, or obligations concerning IP licensing or ownership.
Before using AI tools to generate content intended for exclusive use, organizations should consider what ownership or license rights they need in the tool’s output and whether those rights permit the intended use. Also keep in mind that AI models cannot be “authors” for copyright purposes, so some output may not be copyrightable. Because others may be free to duplicate uncopyrighted content, AI tools may not always be the best option.
Human-in-the-Loop
In general, each employee should understand what these tools are and how they operate. It’s not magic: the tool uses statistics to guess what the output should be for a given input, and guesses can be wrong. It is best practice to ensure a human can fully understand the output and how the tool arrived at it, particularly if the output will directly influence patient care or otherwise carry significant consequences. Just as importantly, humans should feel entitled to overrule the AI tool’s output. Human-in-the-loop review can mitigate many risks, whether arising from incorrect analysis of an input, errors or bias in training, or other factors. For health data or some applications, regulation may require human involvement.

Conclusion
Risks are evolving as quickly as the AI tools themselves, and the law is rushing to keep up. Pending lawsuits will influence how the law views these tools and the risks they present. It will be fascinating to watch the technology and the law develop, but it will be even more interesting to watch healthcare advance and see patients, providers, and payers benefit from the AI revolution.

Andrew (A.J.) Tibbetts is a shareholder in the Intellectual Property & Technology practice group in Greenberg Traurig’s Boston office. He leverages prior experience as a software engineer to provide practical IP strategy counseling for computer- and electronics-implemented tech, including in healthtech, life sciences AI, computational biology, medical records analysis/coding, medical devices, and more. He serves on the board of MassMEDIC, volunteers with digital health incubators, and guides digital health efforts for MassBio.