Erin Wright, Validation Product Manager, MasterControl | 04.26.18
Software validation—the mere mention of it is enough to give a quality, IT, or validation professional a sinking feeling, if not a headache. Why? Because as most who have done it know, it can be one of the most resource-intensive, time-consuming, and costly compliance activities for regulated companies.
Under 21 CFR Part 11, organizations that operate under the auspices of the U.S. Food and Drug Administration (FDA) are required to validate electronic systems that create, modify, maintain, archive, retrieve, or transmit electronic records required by predicate regulations. Medical device firms must also comply with 21 CFR 820, a predicate rule that similarly requires software validation.
The perennial challenge is that while the FDA requires validation, it does not specify how companies should conduct it. What the FDA wants is evidence that a company has documented how it intends to conduct validation and proof that it has done so as described. The objective of validation is to ensure the software works as expected and specified. So the question remains, “How do you conduct validation properly and successfully?”
In its guidance on “Part 11, Electronic Records; Electronic Signatures—Scope and Application,” the FDA states, “We suggest that your decision to validate computerized systems, and the extent of the validation, take into account the impact the systems have on your ability to meet predicate rule requirements. You should also consider the impact those systems might have on the accuracy, reliability, integrity, availability, and authenticity of required records and signatures. Even if there is no predicate rule requirement to validate a system, in some instances it may still be important to validate the system.” In addition, the agency recommends that companies base their approach on a justified and documented risk assessment.
This doesn’t give much in the way of specific direction. As a result, most companies err on the side of caution and validate far more than is necessary. The outcome is a validation process that takes months, which often delays going live with new software or upgrading existing software. In the meantime, the company works with out-of-date software that can hinder its ability to optimize processes and stay current on security updates and other features.
Best Approach to Software Validation
In the Part 11 guidance, the FDA suggests that a risk-based approach cuts down the time it takes to conduct validation. Risk assessment is the key. Validation is a process that helps ensure life science companies are doing what they need to do to remove or mitigate risk. In fact, the argument can be made that most regulations are in place to remove risk along the product lifecycle, from design and development through use by consumers. A current trend among software suppliers, particularly those that create cloud-based software, is to provide their users with “canned” validation methodologies or scripts intended to save the user time. It is a step in the right direction, but it is not enough and can even put users at greater risk.
A simplified example: a supplier provides a risk assessment for the software stating that “exporting a document is low risk.” Again, this is not enough and may introduce risk for the company, because everything depends on how the software is actually used, not on how the supplier intended it to be used.
For instance, exporting documents may be low risk as a software feature, but if those documents are destined for regulatory submission, the risk to the company increases because of how it uses that feature. Another company may export documents to a desktop simply to email them to employees. It’s the same software functionality, but the impact of failure is significantly different for each company. This is why regulated companies can’t rely on supplier software risk assessments alone: suppliers can’t account for a company’s business practices.
It’s important to leverage the supplier validation documentation, but it can’t be the only step of the risk assessment. Risk must be assessed by functionality and usage, not just one or the other. A properly executed risk assessment will focus on the user’s critical business processes (CBPs), not just on the software.
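As a rough sketch of that idea, a combined assessment might take the riskier of the supplier’s feature-level rating and the impact a failure would have on the company’s own critical business process. All names, levels, and the max-based combining rule below are hypothetical illustrations, not taken from any real validation tool:

```python
# Hypothetical sketch: risk assessed by functionality AND usage.
# A supplier's feature-level rating is combined with the impact a
# failure would have on a critical business process (CBP); the
# riskier of the two ratings drives the validation effort.

LEVELS = ["low", "medium", "high"]  # ordered least to most risky

def assess_risk(feature_risk: str, cbp_impact: str) -> str:
    """Return the overall risk level for one feature/CBP pairing."""
    return max(feature_risk, cbp_impact, key=LEVELS.index)

# Same "document export" feature, two different usages:
print(assess_risk("low", "high"))  # feeds a regulatory submission -> high
print(assess_risk("low", "low"))   # just emails files to staff    -> low
```

Taking the maximum of the two ratings is only one simple policy; a real assessment might use a full risk matrix, but the point stands: the supplier’s rating alone never determines the outcome.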
Once risk assessments for individual CBPs have been completed, a validation approach for each area can be selected. The following best-practice approach outlines three levels of validation that can be used in a risk-based process.
- High: Complete/comprehensive testing required. All usage scenarios must be thoroughly tested. This is similar to the “traditional” approach to validation.
- Medium: Testing of the functional requirements tied to the CBPs is required, with sufficient assurance that each item has been properly characterized.
- Low: No formal testing needed.
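The tiers above translate mechanically into a per-CBP test plan. The sketch below is illustrative only; the process names and wording of each approach are hypothetical:

```python
# Hypothetical sketch: map each critical business process (CBP),
# already rated high/medium/low, to a testing approach per the
# three-tier scheme described above.

def plan_validation(cbp_risks: dict[str, str]) -> dict[str, str]:
    """Return a testing approach for each CBP based on its risk tier."""
    approach = {
        "high": "comprehensive testing of all usage scenarios",
        "medium": "targeted testing of functional requirements",
        "low": "no formal testing; rely on supplier documentation",
    }
    return {cbp: approach[risk] for cbp, risk in cbp_risks.items()}

# Example ratings for three hypothetical CBPs:
plan = plan_validation({
    "regulatory document submission": "high",
    "internal document routing": "medium",
    "ad hoc desktop export": "low",
})
for cbp, action in plan.items():
    print(f"{cbp}: {action}")
```

Because only the high tier demands exhaustive scenario testing, a plan like this is what lets validation shrink from months to days for the lower-risk processes.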
If a company understands its true risk of software adoption based on its CBPs, validation no longer needs to be an all-or-nothing activity. It can be done with surgical precision, cutting down the validation time from months to days or even hours. Companies can leverage their suppliers’ documentation, but they must properly assess their own software usage in relation to their regulatory requirements.
Erin Wright, MasterControl’s validation product manager, spearheads the efforts pertaining to the development of the company’s Validation Excellence Tool (VxT), which streamlines the risk-assessment process and greatly reduces validation time. She joined MasterControl in 2013 as a professional services consultant and worked closely with hundreds of regulated companies, including the FDA’s Center for Drug Evaluation and Research (CDER), Ancestry.com, Abbott Point of Care, Institute for Transfusion Medicine (ITxM), and the University of Utah, in conducting custom validation implementations. Her extensive experience in quality, validation, and regulatory compliance includes working for an automated-testing software company and several clinical-trial software providers. She graduated summa cum laude from West Chester University with a degree in psychology.