Case Study

72% Test Reduction Using Trustworthy AI Models | Kistler & Monolith AI

The Challenge:

Automobiles spend weeks, even months, on the test track at great expense to vehicle OEMs. Over that time, gigabytes of data are collected from hundreds of sensors. Through data analysis, the team decides on optimizations that, once implemented, must be tested. However, due to the increasing amount of data, this process becomes more unwieldy as each year passes.

The Solution:

Rather than picking through masses of data traces, Monolith uses self-learning models to streamline data analysis, enabling engineers to quickly understand and instantly predict the performance of complex systems and enhancing their existing workflows to solve the world’s most intractable physics problems. Along the way, the solution not only predicted the outcome of vehicle optimizations but also detected and highlighted inconsistencies in data labeling, significantly reducing the time needed on the test track.


The Company:

Kistler Group is the market leader in dynamic pressure, force, torque, and acceleration measurement technology. Their offering encompasses people, products, and know-how for partnership in complex industrial and scientific research projects.

Data...Lots of Data

Businesses today live or die on the quality of their data, and none more so than the automotive industry. With a vehicle platform developed and functional, it’s time to take it to the test track to put it through its paces. Kitted out with hundreds of sensors measuring steering moment, speed, and torque, drivers undertake seemingly endless laps of the course while automotive engineers collect gigabytes of data. Such vehicle dynamics and durability testing is a specialty of Kistler, whose range of high-precision optical and mechanical sensors and data loggers records every turn.

With a drive complete, it is time to rapidly analyze the data of the test run, looking at the roll, pitch, and yaw in the turns before choosing changes that will – hopefully – improve the ride. With the modifications made, the entire process is repeated for weeks, sometimes months, until the optimal vehicle dynamics setup is found.

From Swamps of Data to Structured Data Lakes

Of course, automotive developers don’t rely upon a single supplier for all their sensing needs. Some parameters demand unique sensing capabilities coupled with the supplier’s data logger to gather each environmental parameter, motion, and bump. This only adds to the challenge, as now the team is working with multiple sets of data, collected at different rates, and stored in varying file formats.
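The article doesn’t describe Kistler’s actual ingestion pipeline, but the alignment problem it mentions – streams logged at different rates – can be sketched with pandas. The sensor names and sample values below are invented purely for illustration:

```python
import pandas as pd

# Two hypothetical sensor streams logged at different rates:
# steering angle at 100 Hz, vertical wheel force at 250 Hz.
steer = pd.DataFrame({
    "t": pd.to_timedelta([0, 10, 20, 30], unit="ms"),
    "steer_deg": [0.0, 1.2, 2.5, 3.1],
})
force = pd.DataFrame({
    "t": pd.to_timedelta([0, 4, 8, 12, 16, 20, 24, 28], unit="ms"),
    "fz_n": [4000, 4010, 4025, 4030, 4040, 4055, 4060, 4070],
})

# Align each steering sample with the most recent force sample,
# producing one merged table on a common timeline.
merged = pd.merge_asof(steer, force, on="t", direction="backward")
print(merged)
```

In practice each supplier’s file format would first be parsed into such a frame; the as-of merge then avoids resampling every channel to a single common rate.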


However, even with the rigorousness applied to data preparation, occasionally a data source will be incorrectly labeled. Furthermore, sensors may be inadvertently mounted backward, resulting in the data being inverted compared to other sensor results. All of this can cause delays as the mistakes are uncovered before analysis can begin.

No Time for Python Programming

Despite the odd mislabeled sensor, this well-proven process remains transparent and understandable – just how engineers like it. However, the number of sensors grows with each passing year, as does the number of data points. Today, it’s not unusual for a vehicle dynamics setup to encompass more than 300 sensors. As a result, measurement engineers and their testing teams are increasingly struggling to cope with the flood of data resulting from each test run.

Considering the quantity of data and the complexity of analyzing it, it is natural to wonder whether self-learning models have something to offer. This is the point where Monolith came in. Our team demonstrated the power of self-learning models that become smarter as they analyze the data provided. Ultimately, this enables engineers to do less testing and more learning, resulting in better quality products that are developed significantly faster. Naturally, amongst the engineering team, there was skepticism regarding this approach. After all, they had spent decades refining their techniques to understand the nuances of each sensor. Furthermore, already being under time pressure left them with little interest in teaching data scientists and Python programmers the intricacies of vehicle dynamics, only to find out that AI would be of little help to them.


Engineers Don’t Believe in Black Box AI

Healthy skepticism is a positive engineering trait, and one the Monolith team expects. AI often looks like a black box coupled with vague processes and optimizations that, seemingly magically, lead to the desired result.

However, the solution provided by Monolith is different. Developed by engineers for engineers, our AI software is designed for ease of use and rapid adoption, allowing it to add value to development processes quickly. Thanks to its self-learning capability and having analyzed pertinent data, it can instantly predict the performance of your system. This is the case both for data already collected and for tests that have yet to be executed as virtual test campaigns within the Monolith platform. The beauty of this is that teams can dismiss inappropriate design changes much earlier, reducing unnecessary physical testing by up to 80%.

Dashboard example to visualise track dynamics data and analyse it inside of Monolith

An excellent example of the power of Monolith’s self-learning models came from Kistler data collected during test runs to evaluate vehicle dynamics during steering. The relevant data, including the angle of the steering wheel, acceleration, GPS traces, and forces in the tires, were used to train a self-learning model.


First Steps in Predicting Using AI Model

In the first step, the data is imported into the Monolith platform. The data manipulation, visualization, and validation tools allow the team to search for obvious sensor orientation and labeling mistakes. For example, steering wheel angle data should concur with the lateral (Y-axis) accelerometer inputs. In addition, the X-axis (forward acceleration) should remain close to zero across the test run, while the Z-axis should reflect the forces of the vehicle through the tires to the road.
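As an illustration only – this is not Monolith’s implementation – an orientation check of the kind described can be based on the sign of the correlation between steering angle and lateral acceleration. The sign convention, threshold, and data below are all assumptions:

```python
import numpy as np

def check_orientation(steer_deg, accel_y, threshold=0.5):
    """Flag a likely inverted lateral accelerometer: with the sign
    convention assumed here, steering angle and lateral acceleration
    should correlate positively during cornering."""
    r = np.corrcoef(steer_deg, accel_y)[0, 1]
    if r < -threshold:
        return "inverted?"     # strong negative correlation
    if abs(r) < threshold:
        return "check signal"  # weak relationship; mislabeled channel?
    return "ok"

# Synthetic example: a sensor mounted backwards shows up as a signal
# with the opposite sign to the steering input.
t = np.linspace(0, 10, 500)
steer = 30 * np.sin(0.5 * t)
accel_ok = 0.2 * steer + np.random.default_rng(0).normal(0, 0.5, t.size)
accel_flipped = -accel_ok

print(check_orientation(steer, accel_ok))       # → "ok"
print(check_orientation(steer, accel_flipped))  # → "inverted?"
```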


Self-learning model sanity check - the lower-left 2D plot shows the incorrectly installed sensor

The data is used as an input for self-learning models that, backed by cloud-based GPUs, interpret the relationship between the inputs and outputs of the system. From here, the model is used for sanity checking, comparing the values acquired by the sensors with a prediction of the values that specific sensors should deliver. At this point, the model can use the patterns it has learned from the real world to highlight its prediction accuracy, marking values that do not fit. This highlights incorrectly or poorly mounted sensors, erroneously labeled sensors, and even marks sensors that suffered a failure during data collection.
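The learned model itself is proprietary, but the sanity-checking idea – compare measured values against a prediction and flag large residuals – can be sketched with a simple linear surrogate. Everything below, including the fault injected at one sample, is synthetic:

```python
import numpy as np

def flag_outliers(x, y, k=3.0):
    """Fit a simple linear surrogate y ≈ a*x + b and flag samples whose
    residual exceeds k standard deviations - a stand-in for the richer
    learned model described in the text."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return np.abs(resid) > k * resid.std()

rng = np.random.default_rng(1)
steer = rng.uniform(-30, 30, 1000)
accel = 0.2 * steer + rng.normal(0, 0.3, steer.size)
accel[500] += 10.0           # inject one faulty reading

mask = flag_outliers(steer, accel)
print(np.flatnonzero(mask))  # the injected fault stands out
```

A failed or mislabeled sensor produces exactly this kind of persistent deviation between the measured trace and what the model predicts from the other channels.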


Predicting Outcomes for Tests Not Yet Undertaken

With the data imported, the model can be validated further. For example, the steering input could be trialed with a vigorous step-change input from the steering wheel to predict the forces through the tires (Z-axis). The prediction of the AI, along with its uncertainty, is then overlaid with the data acquired from an actual test run. Validation of this nature with Kistler datasets demonstrated that the AI model had developed a very accurate understanding of the relationship between steering wheel input and Z-axis force.
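Monolith’s uncertainty method is not published; one common way to produce a prediction with an uncertainty band, shown here purely as a sketch on synthetic steering/force data, is a bootstrap ensemble of simple models:

```python
import numpy as np

def ensemble_predict(x_train, y_train, x_new, n_models=20, seed=0):
    """Fit an ensemble of cubic models on bootstrap resamples and
    report the mean prediction and its spread - an illustrative
    uncertainty estimate, not Monolith's actual method."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x_train), len(x_train))
        coeffs = np.polyfit(x_train[idx], y_train[idx], 3)
        preds.append(np.polyval(coeffs, x_new))
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Synthetic steering-angle → vertical-force relationship
rng = np.random.default_rng(2)
steer = np.linspace(-30, 30, 200)
fz = 50 * np.tanh(steer / 10) + rng.normal(0, 2, steer.size)

mean, std = ensemble_predict(steer, fz, np.array([0.0, 25.0]))
print(mean, std)
```

Overlaying `mean ± std` on the measured trace gives the kind of validation plot the text describes: where the band is narrow and tracks the data, the model’s understanding is trustworthy.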


Monolith dashboard – Quickly understand and instantly predict complex systems under unseen test conditions

From here, the Monolith platform can be used to predict the outcome of test cases for which no data exists. For example, the steering input could be changed to trial a more vigorous step-change test, or used to examine steering wheel sine oscillation tests over a more extended period than is possible on the test track. Vehicle loading is another consideration for automotive design teams, with huge variations between North American, European, and Asian customers. Such deviations can be applied to the model to determine vehicle dynamics without running each test on the track.


Engineers question whether such self-learning AI models can be afforded the level of trust given to a physical model. After all, physical models are carefully crafted, and their inner workings can be examined. However, the harsh reality is that only a limited number of people understand these models, so it remains possible that they, too, do not accurately reflect reality under all use cases. By comparison, the self-learning AI model developed with Kistler is based upon the physical forces of an actual vehicle under real test conditions.


Through validation, a Monolith model can be shown to solve intractable problems in the physical world, empowering engineers to easily identify vehicle dynamics without creating models from scratch. Monolith also displays the uncertainty of the results from the self-learning AI model. Thus, when tests are trialed that push the boundaries of what the model “knows,” the engineering team is immediately made aware of how meaningful the results are.


Exploring More Conditions With Less Track Testing

Track testing to understand vehicle dynamics will remain an essential element in automobile development. However, the quantity of data being collected is growing exponentially year on year. OEM partners such as Kistler have a unique data collection and sorting capability, but the analysis step has long been a big-data problem. Self-learning models created in Monolith allow measurement engineers to validate data more efficiently than ever before, answering “What if?” scenarios that would push physical models to their limits.


Thanks to the speed and accuracy of this approach, positive iterative improvements are made to the vehicle more often in less time, resulting in a reduction in track testing that saves OEMs time, resources, and significant financial outlay.

