Synopsis of the Project
Computerized Maintenance Management Systems (CMMS) have been around for decades. Because they maintain predicate rule data in regulated environments, these systems are validated, and many sites use a hybrid of paper and computer records to manage maintenance. To make matters even more complex, these same systems are often used for calibration management, or a separate calibration system is cobbled together with the CMMS into a hybrid, multi-system, partially paper-based Frankenstein's monster. Our client maintains dozens of manufacturing facilities across multiple continents. While some of the facilities have been part of client operations for years, others are more recent acquisitions. Consequently, each site had a different CMMS and differing maintenance and calibration processes. No site was actually paperless, and no two sites used the exact same systems in the exact same way, even though many sites produced very similar products.
Tremendous savings could be achieved if maintenance and calibration were standardized across sites. Specifically:
• Spare parts could be purchased in larger lots to achieve economies of scale.
• Equipment could be transferred between facilities to reduce maintenance costs.
• Job plans, safety plans, and work practices, when standardized across all plants, could reduce the cost of re-creating the same procedures at different sites.
• Maintenance data from multiple plants could be analyzed to better understand process improvements.
• Software licensing and overall software maintenance costs could be minimized.
• The CMMS helpdesk could be consolidated.
• Successful maintenance improvements at one facility could be easily rolled out at all others.
Advantages of Using Cloud Services
To implement a new enterprise-wide CMMS that also managed calibration data and was completely paperless, our client chose a popular CMMS package delivered via Platform as a Service (PaaS) from a cloud service provider (CSP), which ensured:
• A truly distributed database implementation, so that each site could see its own equipment while purchasing personnel could see inventories across all facilities.
• Multi-level, continuously monitored identity and access management, tied to plant Active Directory tools, and including a VPN communication layer.
• Unlimited data storage.
• Elastic load balancing.
This way, the CSP managed runtime, middleware, operating systems, virtualization, servers, storage, and networking. The client managed the application and its associated data and database. 21 CFR Part 11/Annex 11 aspects, such as audit trails, application security, electronic records and signatures, and application validation, were all handled through the application and managed by the client.
From a regulatory perspective, changes made by the CSP that affected the application and database layers could indirectly impact data integrity, so the client and the CSP established communication channels to mitigate any impact. Additionally, the client created multiple environments for the application (Development, Sandbox, Training, Validation, Production) so that changes could be tested under controlled circumstances before being rolled out. For example, the client controlled the time window in which operating system updates were applied, so that updates could be fully vetted before reaching the production environment.
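The environment gating described above can be sketched as a simple promotion check. This is a minimal illustration, assuming a linear promotion order from Development through Production; the actual gating rules and environment sequence used by the client are not specified in the text.

```python
# Illustrative promotion gate: a change may only reach an environment
# after it has been verified in every earlier environment.
# The environment names come from the text; the linear ordering
# and verification model are assumptions for this sketch.

PROMOTION_ORDER = ["Development", "Sandbox", "Training", "Validation", "Production"]

def can_promote(verified_in, target):
    """Return True if a change verified in the environments `verified_in`
    is eligible to be promoted to `target`."""
    required = PROMOTION_ORDER[:PROMOTION_ORDER.index(target)]
    return all(env in verified_in for env in required)

print(can_promote({"Development", "Sandbox"}, "Training"))  # → True
print(can_promote({"Development"}, "Production"))           # → False
```

A check like this is what lets operating system updates sit in lower environments until fully vetted, rather than landing in Production on the CSP's schedule.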
While the program was to be rolled out to dozens of sites, the client initially assembled a team of representatives from five sites to define data formats, data movements, and the changes to standardized workflows needed to accommodate a regulated environment. Some customization of the CMMS was inevitable due to the diverse nature of the client's regulated manufacturing facilities. Most of this customization was related to workflow configuration and application security profiles. Good project management was crucial for success, as there were a large number of decisions to be made.
Of the initial sites, the one with the most efficient maintenance operations was selected as the pilot. We extracted data from the existing CMMS at the pilot site and reformatted it for the new CMMS, revealing a host of issues. The old data was not hierarchically organized; the new data had to be. The old data did not account for portable equipment and location changes; the new data needed different location codes. The old data did not include job plans, safety plans, work instructions, or spare parts, so much of this data had to be created for the new system. Additionally, once the old data could be examined holistically, data discrepancies were everywhere, usually in the form of similar equipment with dissimilar PMs, job plans, calibration data points, and so forth. All of these problems needed correction.
Data Migration and Validation
We created a Data Migration Plan that detailed the process steps, the data to be migrated or created, and how that data would be verified. Much of the maintenance history already existed as paper printouts, and because the old CMMS would not be retired for many years, electronic historical maintenance records did not need to be transferred to the new system.
Planning was crucial, as the entire data creation and migration process had to occur in a very short time window, because the data in the old system had to be frozen: no new assets, no new spare parts, and so on. Realistically, it is impossible to continue to maintain a facility while its data is frozen, so we created a separate process to track all new data changes, which were added after the new CMMS went live.
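A freeze-period change log of this kind can be sketched as a small data structure. This is an illustrative sketch only; the field names and the replay mechanism are assumptions, not the client's actual process.

```python
# Hypothetical sketch of a freeze-period change log: changes made to the
# frozen legacy data are recorded here and replayed into the new CMMS
# after go-live, in their original order.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FreezeDeltaLog:
    entries: list = field(default_factory=list)

    def record(self, table, record_id, action, details=""):
        """Log one change made during the freeze (action: add/update/retire)."""
        self.entries.append({
            "timestamp": datetime.now().isoformat(),
            "table": table,
            "record_id": record_id,
            "action": action,
            "details": details,
        })

    def replay_order(self):
        # Replay chronologically so dependent changes (e.g. an asset
        # added and then updated) stay consistent in the new system.
        return list(self.entries)

log = FreezeDeltaLog()
log.record("assets", "PUMP-003", "add", "new pump installed during freeze")
log.record("spare_parts", "SP-881", "update", "minimum stock level changed")
print(len(log.replay_order()))  # → 2
```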
For this relatively small facility, there were over 50,000 records to be created and/or migrated. We used large spreadsheets that aligned with the table structures of the new CMMS to organize the exported data. Changes and additions were highlighted. Corrections were made programmatically via filters or formulae to maintain consistency. Dry runs of data imports helped catch parent-child errors (where child data is loaded before its parent data in the same datasheet) and other errors. Once the data was correct and ready for loading into the new system, we printed the spreadsheets and reviewed them with the Quality Assurance group.
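The parent-child ordering rule that the dry runs enforced can also be checked before an import is attempted. The sketch below is a minimal illustration of that check, assuming each row carries an asset ID and an optional parent ID; the column names are assumptions, not the actual CMMS schema.

```python
# Hypothetical pre-load check for parent-child ordering: a child row
# fails to import if its parent has not yet been loaded from the
# same datasheet.

def find_orphan_rows(rows):
    """Return rows whose parent appears later in the sheet, or not at all.

    rows: list of dicts with an 'asset_id' key and an optional
    'parent_id' key, in the order they would be loaded.
    """
    seen = set()
    orphans = []
    for row in rows:
        parent = row.get("parent_id")
        if parent and parent not in seen:
            orphans.append(row)  # would fail: parent not yet loaded
        seen.add(row["asset_id"])
    return orphans

sheet = [
    {"asset_id": "PUMP-001"},
    {"asset_id": "MOTOR-001", "parent_id": "PUMP-001"},  # OK: parent above
    {"asset_id": "SEAL-001", "parent_id": "PUMP-002"},   # error: parent below
    {"asset_id": "PUMP-002"},
]
print([r["asset_id"] for r in find_orphan_rows(sheet)])  # → ['SEAL-001']
```

Catching these ordering errors in the spreadsheet is far cheaper than discovering them as rejected rows partway through a live import.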
After approval by QA and the data owners, the spreadsheets became the document of record. We then loaded the data into the new CMMS and generated data extractions from the database. We electronically compared the spreadsheets to the data extractions. Of course, there were still errors, but ultimately we resolved them all and loaded all data successfully.
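The electronic comparison amounts to reconciling the approved spreadsheet against the post-load extraction record by record. The sketch below shows one way such a reconciliation could work, assuming both sides are keyed on an asset ID; the key and field names are illustrative, not the actual CMMS schema.

```python
# Illustrative reconciliation of the approved spreadsheet (document of
# record) against a database extraction taken after loading.

def reconcile(approved, extracted, key="asset_id"):
    """Return (missing, unexpected, mismatched) record keys:
    missing    - approved records absent from the extraction
    unexpected - extracted records not in the approved sheet
    mismatched - records present in both but with differing fields
    """
    a = {r[key]: r for r in approved}
    e = {r[key]: r for r in extracted}
    missing = sorted(set(a) - set(e))
    unexpected = sorted(set(e) - set(a))
    mismatched = sorted(k for k in set(a) & set(e) if a[k] != e[k])
    return missing, unexpected, mismatched

approved = [
    {"asset_id": "PUMP-001", "description": "Transfer pump"},
    {"asset_id": "MOTOR-001", "description": "Drive motor"},
]
extracted = [
    {"asset_id": "PUMP-001", "description": "Transfer pump"},
    {"asset_id": "MOTOR-001", "description": "Drive Motor"},  # case drift
    {"asset_id": "TEMP-001", "description": "Stray test record"},
]
missing, unexpected, mismatched = reconcile(approved, extracted)
print(missing, unexpected, mismatched)  # → [] ['TEMP-001'] ['MOTOR-001']
```

Each discrepancy surfaced this way points either to a load error in the CMMS or to a record that was changed outside the approved spreadsheets, both of which must be resolved before the data can be accepted.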
Bringing New Sites Online
Validation used an ASTM E2500 risk-based approach that focused testing on the elements that had been configured or changed. Out-of-the-box elements were indirectly tested as part of workflow functionality. Approximately 120 use cases were directly challenged, out of about 600 use cases for the entire system. Most testing occurred during factory or site acceptance testing (FAT or SAT) and was then leveraged in the operational qualification (OQ). The performance qualification (PQ) challenged high-level workflow processes as defined in the operating procedures.
As new sites prepared to come online, they too had to go through the same structured data migration process, but the validation was much simpler and limited to a small IQ (for environment creation and networking) and the PQ. Functional testing was not repeated.
Additionally, new sites were brought online one environment at a time, so it was possible to work through the go-live checklists without being in the production environment. This allowed us to perform other CSP benchmark tests, such as latency checking, security overrides, and so forth.