The Impossible Dream: Universal Predictive Maintenance Models

Written by Brian Turnquist, Boon Logic

Posted on: March 3, 2023

Continuous advances in research and development surrounding artificial intelligence and machine learning have truly made the impossible possible. From self-driving cars to automatic facial recognition, what was once a far-fetched sci-fi plot has become normal in our everyday lives. The same change can be seen in reliability programs. What was previously seen as an industry only for hard hats and overalls has become a practice driven by data and algorithms. Yet with all the incredible advancements in both machine learning and reliability practices, one thing still remains an impossible dream: the universal model in predictive maintenance. In this blog, I will cover the concept of a universal predictive maintenance model and its inevitable shortcomings, and introduce you to Amber, the best-in-class predictive maintenance software designed to let reliability engineers create a unique machine learning model for each of their assets.

What is the dream of a universal model in predictive maintenance?

The concept of a universal model is seen across a variety of data-driven predictive maintenance techniques. The approach sounds simple: take the learnings from one asset and apply them to all similar assets. Whether that learning is a simple threshold on a temperature sensor or a complex neural network trained on a range of failure modes, the basic principle remains the same: take what was learned from asset A and apply it to assets B through Z.
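To make the idea concrete, here is a minimal sketch of the simplest universal approach mentioned above: learn a mean-plus-three-sigma temperature threshold from one asset's "normal" data and apply it unchanged to its siblings. The function names and readings are hypothetical, chosen purely for illustration.

```python
import statistics

def learn_threshold(temps, k=3.0):
    """Learn an alarm threshold from one asset's 'normal' temperature
    readings: mean + k standard deviations, a common rule of thumb."""
    mu = statistics.mean(temps)
    sigma = statistics.stdev(temps)
    return mu + k * sigma

def check(reading, threshold):
    """Apply the same fixed threshold to any asset's reading."""
    return "ALARM" if reading > threshold else "ok"

# "Universal" deployment: threshold learned on asset A...
asset_a_temps = [70.1, 70.4, 69.8, 70.2, 70.0]  # hypothetical readings
threshold = learn_threshold(asset_a_temps)

# ...then applied unchanged to assets B-Z, regardless of their own
# environments, ages, duty cycles, or manufacturing variance.
print(check(75.0, threshold))  # ALARM
print(check(70.0, threshold))  # ok
```

The weakness this sketch exposes is exactly the one the next section discusses: asset B may run a few degrees hotter than asset A under perfectly healthy conditions, so a threshold calibrated on A will either flood B with false alarms or, if widened to accommodate B, miss real faults on A.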

How “Universal” can a Universal Model Be?

The major downfall of any universal modeling approach to predictive maintenance is that it ignores the individuality of each asset. Whether these individualities are caused by internal or external factors, every reliability engineer knows that, fundamentally, every asset is different. Inevitably, these differences cause subtle variance in sensor measurements, resulting in high false alarm rates, decreased accuracy, and, eventually, the loss of the reliability engineer's trust in the overall solution.

Below I have highlighted several reasons an asset's behavior may differ and inevitably cause a universal approach to fail.

  • Environmental diversity: It is becoming ever more common for companies to have equipment located all around the world, often in diverse operating environments. Let’s take two motors, one operating in a harsh desert environment and the other here in Minnesota. Although performing the same task, each motor will inevitably perform differently and cause significant variations in the sensor telemetry being collected.
  • Age diversity: As assets age, their sensor telemetry is assured to shift. Every reliability engineer knows that these shifts are to be expected, and although an asset may still be performing acceptably 5 years after installation, its telemetry is undoubtedly going to differ from that of day 1.
  • Usage diversity: Complex assets typically operate in many different modes with varying loads and operating speeds. These different production modes produce significant variation in sensor telemetry. What is normal sensor telemetry at 10% operating speed is very different from what is normal at 90% and all the speeds in between.
  • Uniqueness: The simple but often forgotten reality that every asset is different. Two motors with identical model numbers and manufactured using the same process will have slight variances undetectable to the human eye: differences in motor windings, lubrication, and internal physical interaction of components.

So, with the above factors contributing to the inevitable failure of a universal model for predictive maintenance, you may find yourself asking why the universal approach continues to live on. The answer is simple: what's the alternative?

With reliability teams often monitoring hundreds to thousands of assets and a single data-driven solution taking weeks or possibly months to build, the idea of individualizing the predictive maintenance strategy for each asset has traditionally been a non-starter.

What if I suggested another "impossible dream": a solution where every asset has its own unique machine learning model, where model training takes seconds, and where the machine learning is placed directly in the hands of reliability engineers? You'd probably call it crazy. Here at Boon Logic, we call it Amber.


Amber is a self-configuring, AI-based predictive maintenance tool powered by the Boon Nano, Boon Logic's next-generation clustering algorithm. Amber builds a reliable "normal operation" model for each asset using the Boon Nano's proprietary unsupervised machine learning approach. In addition to building unsupervised machine learning models faster, this approach produces fewer false alarms and automatically generates insights into what is causing a failure condition.

Amber’s individualized approach is based on the theory that every asset is different, and therefore, each asset must be monitored with its own unique machine learning model. With models taking seconds to train, Amber is the first individualized data-driven approach to predictive maintenance that is both scalable and highly accurate.

Use Case

With the unparalleled clustering capabilities of the Boon Nano, Amber has the unique ability to create high-dimensional machine learning models for every asset in a matter of seconds.

Amber trains by consuming “normal”, compliant sensor telemetry from the asset and dynamically creates clusters as new data variations (operating modes) are seen.
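The Boon Nano itself is proprietary, but the training behavior described above can be illustrated with a generic toy: an online clustering model that grows a new cluster whenever training data presents an unseen variation (a new operating mode), then flags any later sample that lands far from every learned cluster. All names and the fixed-radius rule here are assumptions for the sketch, not the actual algorithm.

```python
import math

class NormalModel:
    """Toy per-asset 'normal operation' model (illustrative only;
    the Boon Nano algorithm is proprietary and differs from this)."""

    def __init__(self, radius):
        self.radius = radius   # max distance for a sample to join a cluster
        self.centroids = []    # one centroid per learned operating mode

    def _nearest_distance(self, sample):
        # Distance from the sample to the closest learned cluster.
        if not self.centroids:
            return float("inf")
        return min(math.dist(sample, c) for c in self.centroids)

    def train(self, sample):
        # During training, an unseen data variation spawns a new cluster.
        if self._nearest_distance(sample) > self.radius:
            self.centroids.append(list(sample))

    def is_anomaly(self, sample):
        # After training, a sample far from every cluster is anomalous.
        return self._nearest_distance(sample) > self.radius

# Train one model per asset on that asset's own compliant telemetry,
# e.g. (speed-normalized vibration, temperature) pairs for two modes:
model = NormalModel(radius=1.0)
for sample in [(10.0, 0.2), (10.1, 0.25), (90.0, 1.8)]:
    model.train(sample)

print(model.is_anomaly((10.05, 0.22)))  # False: near a known mode
print(model.is_anomaly((50.0, 5.0)))    # True: unlike anything seen
```

Because each asset trains its own instance on its own telemetry, the environmental, age, usage, and uniqueness diversities listed earlier are absorbed into that asset's clusters rather than triggering false alarms.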

Let’s demonstrate our unique approach in a use case. A reliability team has been tasked with monitoring two identical pumps, fitted with matching sensors, on the same dredging vessel. The traditional universal approach to data-driven monitoring would be to study one pump’s telemetry, build a model from it, and apply that model to the second pump, taking weeks to create and possibly months to deploy. With Amber, however, the reliability team can create an individualized model for each pump in a matter of seconds, and that’s exactly what happened when we partnered with Great Lakes Dredge and Dock (GLDD) to monitor a range of equipment throughout their fleet of dredging ships. Using Amber’s self-tuning, individualized predictive maintenance models, the GLDD reliability teams were able to deploy Amber on 27 unique assets across 4 vessels.

Dr. Brian Turnquist is the CTO of Boon Logic. Brian has worked in academia and industry for the past 25 years, applying both traditional analytic techniques and machine learning. His academic research focuses on biosignals in neuroscience, where he has 15 publications and collaborations with major universities in the US, Europe, and Asia. In 2016, Turnquist came to Boon Logic to apply these same techniques to industrial applications, especially those focused on anomaly detection in asset telemetry signals and video streams.
