As illustrated across our website, time series are everywhere. Each industry vertical, each domain, each company comes into contact with time series in one way or another. In the use case library below, you can explore the extensive applications of time series, or look for applications in your specific field of interest.
Michal Bezak - Tangent Works
Concrete is one of the most important building materials used in construction. It is a composite of fine and coarse aggregates bonded together with a fluid cement that hardens over time to form a solid mass.
Ensuring the strength, quality and durability of concrete is critical for the stability of buildings. Because of this, concrete must pass quality control checks on various parameters.
Compressive strength is one of the parameters used to evaluate quality. In short, compressive strength is the capacity to withstand loads. It is measured with a universal testing machine, which requires several physical samples.
With the capability to calculate compressive strength by Machine Learning, the testing process can be optimized.
TIM can build ML models automatically from historical data. The same principle that lets a model calculate future values (a forecast) can also be used to evaluate qualitative parameters such as compressive strength.
Explanatory variables should include the volumes of inputs such as cement, ash, water, and fine/coarse aggregate.
TIM’s output consists of the compressive strength in MPa calculated for a given mix.
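The idea can be sketched in a few lines: fit a least-squares model that maps one mix parameter to strength in MPa. The water/cement ratio is used here as an illustrative stand-in for the full set of explanatory variables, and all data points are hypothetical; TIM would build a far richer model from all mix inputs automatically.

```python
# Illustrative sketch: estimating concrete compressive strength (MPa) from the
# water/cement ratio with a simple least-squares fit. Hypothetical data only.

def fit_ols(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

# Hypothetical samples: a lower water/cement ratio gives higher strength.
ratios =    [0.40, 0.45, 0.50, 0.55, 0.60]
strengths = [48.0, 44.5, 40.0, 36.5, 32.0]   # MPa

a, b = fit_ols(ratios, strengths)
predicted = a + b * 0.48   # strength estimate for a new mix
```

The fitted slope is negative, matching the physical expectation that more water per unit of cement weakens the hardened concrete.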
Michal Bezak - Tangent Works
One of the factors that impact battery health and capacity is temperature. As temperature rises, more capacity becomes available for discharge, and vice versa. Temperature is also an important factor during battery charging: to maximize the lifespan of Li-ion batteries, they should not be charged below 0°C.
Nowadays, advanced battery systems rely on cooling and heating mechanisms that help batteries operate efficiently (and keep them healthy) even in extreme conditions.
Knowing when to take action to prevent overheating starts with an accurate forecast. This can apply to a single battery or to multiple batteries installed on a grid.
With various deployment options, including on the edge, TIM can be used for a wide range of industry-specific devices (e.g., in an EV).
TIM can build ML models from time-series data and predict temperature in tens of seconds or minutes. Data from device sensors is often sampled at second or even millisecond intervals; TIM can work with any sampling rate, down to milliseconds.
Regularly building models for each individual battery can be incredibly beneficial, especially considering the factors specific to each battery. Batteries degrade with each charge/discharge cycle, so a model built for a new battery may no longer be relevant for an older one. Moreover, the conditions in which batteries operate (e.g. ambient temperature) also differ, and the discharge profile reflecting usage is another dynamic factor.
Explanatory variables should include measurements from relevant sensors, such as voltage, current, temperature, and external conditions.
TIM’s output consists of forecasted temperature values.
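As a minimal illustration of forecasting temperature from its own history, the sketch below fits a one-lag autoregression by least squares and produces a one-step-ahead forecast. The readings and the single-lag model are hypothetical stand-ins; TIM builds such models automatically, with many more lags and predictors.

```python
# Sketch: one-step-ahead battery-temperature forecast from a single lag,
# the kind of autoregressive relationship a forecasting engine would learn.

def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1]."""
    xs = series[:-1]
    ys = series[1:]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

# Hypothetical temperature readings (°C) sampled once per second.
temps = [25.0, 25.4, 25.9, 26.3, 26.8, 27.2, 27.7]
a, b = fit_ar1(temps)
next_temp = a + b * temps[-1]  # one-step-ahead forecast
```

On this rising series the fitted model extrapolates the warming trend, which is the signal a cooling controller would act on.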
Michal Bezak - Tangent Works
The accelerating adoption of electric vehicles (EVs) is driving improvements in battery technology at unprecedented speed. Bigger capacities, faster charging, and longer battery lifespans are in focus.
Despite this progress, battery capacity still constrains how we use batteries, and until there is substantial further progress, information about how much time is left until complete discharge is particularly important.
Knowing how much time is left helps us plan subsequent actions, such as an optimal route, when to charge, and how much additional load can be carried.
TIM allows for various deployment methods (from edge to cloud). TIM can be deployed inside the device (e.g. inside an electric vehicle), or in the cloud to which the battery grid is connected.
TIM can build ML models from time-series data and predict the remaining discharge time in tens of seconds or minutes. Data from device sensors is often sampled at second or even millisecond intervals; TIM can work with any sampling rate, down to milliseconds.
Regularly building models for each individual battery can be incredibly beneficial, especially considering the factors specific to each battery. Batteries degrade with each charge/discharge cycle, so a model built for a new battery may no longer be relevant for an older one. Moreover, the conditions in which batteries operate (e.g. ambient temperature) also differ, and the discharge profile reflecting usage is another dynamic factor.
Explanatory variables should include measurements from relevant sensors, such as voltage, current, temperature, and external conditions.
TIM’s output consists of the forecasted time remaining until complete discharge.
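The simplest baseline for time-to-discharge, shown below, divides the remaining charge by a smoothed recent current draw. An ML model refines this with temperature, voltage and usage-profile predictors; all figures here are hypothetical.

```python
# Hedged sketch of a naive time-to-discharge estimate: remaining charge
# divided by the average of recent current-draw samples.

def time_remaining_hours(capacity_ah, state_of_charge, recent_currents_a):
    """Estimate hours until discharge from remaining charge and average draw."""
    remaining_ah = capacity_ah * state_of_charge
    avg_draw = sum(recent_currents_a) / len(recent_currents_a)
    return remaining_ah / avg_draw

# Hypothetical 60 Ah pack at 80% charge, with recent draw samples around 12 A.
hours = time_remaining_hours(60.0, 0.80, [11.5, 12.0, 12.5])
```

This baseline breaks down when the draw profile changes, which is exactly where a learned model using richer predictors earns its keep.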
Michal Bezak - Tangent Works
Companies across a variety of industries rely on machines: pumps, engines, elevators, turbines, etc. Some are more complex than others, but they all have one thing in common: material degradation. With each cycle of operation, components lose their original physical parameters. Regular checks, diagnostics, and maintenance, or even replacement, are an important part of machine operations.
The ideal scenario is to avoid failure of a given machine altogether, so being proactive rather than reactive is for many businesses the only option. Acting at the right time also has real financial implications. Imagine two extreme situations: maintaining far too early wastes remaining useful life and inflates costs, while maintaining too late risks unplanned failure and downtime.
Predictive maintenance solutions can provide the optimal time for maintenance. Thanks to the data coming from sensors and AI/ML, it is possible to get advice, almost in real-time, on what is the best time to take action.
TIM can build automated ML models from time series data and predict the time remaining (Remaining Useful Life, RUL) or classify whether the device is already in a window (zone) of possible failure within a certain period of time (cycles).
Data from machine sensors is often sampled in seconds, or even milliseconds. TIM can work with any sampling rate, down to milliseconds.
Also, effort and time required to set up TIM for production use is reduced to a fraction of what would be typically required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution.
Input: explanatory variables should include measurements from relevant sensors, values of key settings, information about failures, cycle numbers and/or others.
Output: TIM’s output consists of a forecasted RUL value or a binary classification (1 or 0), depending on the given scenario.
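One common way to frame RUL, sketched below under the simplifying assumption of a linearly degrading health indicator: extrapolate the indicator's fitted trend forward to a failure threshold. The indicator values and threshold are hypothetical; TIM would learn the mapping from many sensors rather than from a single linear trend.

```python
# Sketch: estimating Remaining Useful Life (RUL) by extrapolating a degrading
# health indicator to its failure threshold. Hypothetical data only.

def estimate_rul(health, threshold):
    """Cycles until a linearly degrading health indicator crosses `threshold`."""
    n = len(health)
    mx = (n - 1) / 2            # mean cycle index
    my = sum(health) / n
    # least-squares slope of health vs. cycle index
    slope = sum((i - mx) * (h - my) for i, h in enumerate(health)) / sum(
        (i - mx) ** 2 for i in range(n)
    )
    if slope >= 0:
        return float("inf")     # no degradation trend detected
    return (threshold - health[-1]) / slope

# Health indicator falling ~0.02 per cycle; failure declared below 0.60.
health = [1.00, 0.98, 0.96, 0.94, 0.92, 0.90]
cycles_left = estimate_rul(health, threshold=0.60)
```

The binary-classification variant mentioned above simply compares such an estimate against a chosen failure window (e.g. flag 1 when fewer than 30 cycles remain).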
Philippe Thys - Tangent Works
Companies that monitor tactical and strategic change in real time to identify gaps and discover market opportunities will maintain or increase their competitive edge. Operations managers and senior executives use control towers to gain visibility into supply chain operations. By collecting and combining data from a growing number of new information sources, such as IoT, GPS, and electronic logging devices, companies gain an additional layer of intelligence across their operations and across enterprises. Information on production processes, stock levels, shipments and orders can now be tracked at a new level of detail, enabling supply chains to optimize contingency plans by monitoring disruptions, evaluating the impact on the plan, and acting in real time.
Incorporating these new data streams into traditional track-and-trace, S&OP, or supply chain monitoring activities is not straightforward; organizations are looking to data science to open up new options for producing meaningful outputs, and for using that output in improved risk-mitigation strategies and operational processes that boost performance when reacting to unexpected disruptions or market opportunities. 99% of the data used in control towers to monitor supply chain information consists of time series data streams. This information can be used in AI/ML to evaluate future behavior, calculate the impact on performance, and act accordingly.
Evaluating the potential performance improvement from introducing AI/ML to adapt your operational scripts is not an easy task. Defining the right AI/ML strategy, choosing the ML approach, training and configuring your models, and then deciding how to deploy them tends to be a time-consuming and costly project. And then there is the enormous number of potential uses, scenarios, and configurations of supply chain networks to take into account.
Once you choose, configure and deploy your models, you need to continuously monitor your setup for performance (accuracy) deterioration due to changes in data sources, in supply chain and logistics networks, in business models, and, last but not least, the dynamics of your business and industry. Typically, this is covered by a department of specialists who maintain and optimize these configurations and deployments.
With TIM’s Real Time Instant Machine Learning (RTInstantML) forecasting, organizations can skip the configuration process and immediately deploy and execute ML models that adapt to the input data streams without the need for human intervention, in near real time. This allows companies to embed AI/ML into their control towers, gain better insights into future events and their impact, and hence react faster and smarter. And all this at a fraction of the cost and time required by traditional ML approaches.
Typical inputs of this use case include data from supply chain operations (IoT, schedules, planning, throughput, etc.), logistics (ELD, GPS, IoT, stock levels, order status, etc.), sales and marketing (campaigns, new orders, etc.) and environmental data (infrastructure, weather, etc.). Typical outputs consist of time series forecasts on various reporting aspects in the control towers (performance, ETAs, etc.). This data can be compared to the to-be situation to calculate predicted performance, potential deviations from the plan, etc., as input to your contingency actions.
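The comparison of forecast against plan can be sketched as a simple tolerance check that flags periods where the predicted shortfall exceeds a limit. The plan, forecast and 5% tolerance below are hypothetical:

```python
# Sketch: line up a throughput forecast against the plan ("to-be" situation)
# and flag periods where the predicted shortfall exceeds a tolerance.

def plan_deviations(plan, forecast, tolerance=0.05):
    """Return indices where the forecast falls short of plan by more than tolerance."""
    return [
        i for i, (p, f) in enumerate(zip(plan, forecast))
        if (p - f) / p > tolerance
    ]

plan     = [100, 100, 120, 120, 150]   # hypothetical planned daily throughput
forecast = [ 98, 101, 110, 118, 130]   # hypothetical ML forecast
alerts = plan_deviations(plan, forecast)
```

Each flagged index would feed a contingency action in the control tower, e.g. rescheduling shipments for the affected days.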
Carl Fransman - Tangent Works
Complex and distributed assets (i.e. differently configured pumps or compressors installed across the globe) fail for many reasons; some failures are purely due to the design of the asset and represent normal wear and tear, while others are due to local operating conditions and/or the specific configuration of the asset. Gathering data through IIoT platforms and performing anomaly detection not only makes it possible to foresee such failures; when the anomaly detection produces explainable forecasts, engineers can also perform root cause analysis. This leads to faster resolution of issues and allows R&D to analyse failures and design more robust and reliable equipment, which is even more important under servicisation-type contracts where the manufacturer bears (some of) the cost of maintaining the equipment and guaranteeing uptime.
TIM’s forecasting and anomaly detection capabilities not only produce accurate results, but these results are also fully explainable; TIM’s value therefore extends beyond avoiding failures and supporting predictive maintenance. TIM’s output can be analysed by technical maintenance teams to pinpoint the culprit rapidly, limiting downtime and saving precious production time. It can also be analysed by R&D teams to determine structural improvements to the equipment.
Typical datasets used in this use case consist of CMMS data combined with IIoT data and potentially external elements, such as operating conditions (weather, vibrations, speed, etc.).
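A minimal sketch of the underlying idea: compare each reading with a model's expectation and flag large standardized deviations. Here a naive rolling mean stands in for the forecast, and the readings are hypothetical; TIM would supply a real, explainable model.

```python
# Sketch: forecast-residual anomaly detection. Each point is compared with a
# rolling-mean "expectation"; large standardized residuals are flagged.

def detect_anomalies(values, window=3, z_threshold=3.0):
    """Flag indices whose deviation from the rolling mean exceeds the threshold."""
    anomalies = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mean = sum(history) / window
        var = sum((v - mean) ** 2 for v in history) / window
        std = var ** 0.5 or 1e-9  # avoid division by zero on flat history
        if abs(values[i] - mean) / std > z_threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical vibration readings with one clear spike.
readings = [1.0, 1.1, 0.9, 1.0, 1.1, 5.0, 1.0, 0.9]
flagged = detect_anomalies(readings)
```

Because each flag traces back to a specific residual, this style of detection stays inspectable, which is the property that enables the root cause analysis described above.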
Philippe Thys - Tangent Works
Supply chains are under continuous pressure to maintain or improve their market position. The digital revolution led to a surge of digital transformation initiatives, as well as the emergence of new players who leverage technological innovations to create new business models, triggering a tidal wave of disruptive contenders in an already highly competitive world. The speed of innovation leads to unprecedented dynamics; only the most agile supply chains are able to (re)act and adapt.
As a result of these new dynamics, traditional mid- to long-term strategies must be reviewed and adapted at a higher frequency. Evaluating the impact of market disruption, on both the demand and the supply side, requires advanced intelligence and analytics that can be set up and reconfigured rapidly to evaluate risk and discover opportunity. Traditional AI, and even automated machine learning approaches, are expensive, slow and difficult to adapt to support the agility and velocity required to keep a business on track in the short and the long term.
By combining business data and market prognosis scenarios with real time instant machine learning, organisations can create new what-if scenarios and simulations for strategic planning and business transformation, improve existing ones, and evaluate more of them. Some examples of business strategy planning processes that benefit from InstantML forecasting are strategic budgeting exercises, business transformation and design initiatives, strategic product lifecycle planning and optimisation, and the product and product maintenance design process.
TIM (Real Time) Instant Machine Learning can be used to complement what-if and simulation scenarios for budget exercises, to adapt maintenance and product support strategies, to run forecasting and anomaly detection on digital twins in product design, to perform risk assessments in business transformation processes, etc. With TIM, users can shorten the time needed to run and compare scenarios, include different future market projections as predictor candidates, and easily interface with simulation tools.
Typical inputs for this use case include historical demand, supply, production, price, cost and strategic performance data, complemented with external data concerning weather, sales periods, global and regional disruptive events, sales campaigns, product introduction information, etc. In return, TIM’s output consists of mid- to long-term time series forecasts on budget, sales, etc.
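The what-if idea can be sketched by projecting the same baseline under different assumed market-growth rates and comparing outcomes. The scenario names, growth rates and base figure below are hypothetical:

```python
# Sketch: compare what-if scenarios by projecting one baseline value forward
# under different assumed growth rates.

def project(base, growth_rate, periods):
    """Project a value forward under a constant growth-rate assumption."""
    series = [base]
    for _ in range(periods):
        series.append(series[-1] * (1 + growth_rate))
    return series

scenarios = {"pessimistic": -0.02, "baseline": 0.01, "optimistic": 0.04}
projections = {
    name: project(100.0, rate, periods=4)[-1] for name, rate in scenarios.items()
}
```

In practice each scenario would swap in a different set of market-projection predictors rather than a flat growth rate; the point is that running and comparing many such variants needs to be cheap.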
Philippe Thys - Tangent Works
Getting the most out of your production assets, especially under constraints, is the foundation of increasing the flow of profits through your production lines. The proliferation of time-stamped data follows naturally from the digitisation of industry, and the ongoing deployment of billions of connected sensors will only accelerate the trend. As a consequence, many decision-making processes that used to be fairly static (based on stable information) are becoming dynamic (based on streaming data). Today, machines are connected through communication that is initiated and deployed within local gateways or virtual machines. We see a fragmented base of protocols and IT systems running machines globally, and many different configurations of the same machines, even within the same plant.
Increase your return on assets with TIM’s anomaly detection capabilities by reducing unplanned maintenance and increasing equipment uptime. Indirectly, this also improves on-time delivery performance and customer retention. The ease of use, the speed of setting up and generating trained models, and a very fast AI engine enable companies to implement near real-time anomaly detection at an unprecedented scale. Users can now create and deploy models at all levels of manufacturing control operations: the field level (sensors), the direct control level, the plant supervisory level, and the production control level, up to the production scheduling level. Furthermore, with TIM it is easy to generate a collection of time series forecasting models that map to different types of failures (electrical, mechanical, integrity, structural, etc.) at different levels in the equipment’s (maintenance) bill of materials. Users will be able to create and maintain Machine Learning capabilities that keep up with the dynamics of their enterprise.
Typical input data in this use case consists of raw time series data from PLCs, SCADA, sensor data, data from the maintenance scheduling system and data from the condition monitoring process. Concrete examples include data on vibration, temperature, revolutions, pressure, quality, etc., as well as past error codes, future condition monitoring alerts, past and future maintenance schedules, past maintenance information and past and future equipment operations schedules. After processing this data, TIM returns equipment failure predictions as output.
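The model-collection idea can be illustrated with a toy lookup of one detector per failure type, each watching its own sensor with its own threshold. The failure types, sensor names and limits below are hypothetical stand-ins for trained models:

```python
# Sketch: a "model collection" mapping failure types to per-sensor checks.
# Each entry stands in for a trained model for that failure mode.

FAILURE_MODELS = {
    "electrical": ("current_a", 35.0),       # flag current above 35 A
    "mechanical": ("vibration_mm_s", 8.0),   # flag vibration above 8 mm/s
    "structural": ("strain_ustrain", 450.0), # flag strain above 450 µstrain
}

def predict_failures(reading):
    """Return the failure types whose sensor reading exceeds its limit."""
    return [
        failure for failure, (sensor, limit) in FAILURE_MODELS.items()
        if reading.get(sensor, 0.0) > limit
    ]

# One hypothetical snapshot of sensor readings.
reading = {"current_a": 32.0, "vibration_mm_s": 9.5, "strain_ustrain": 300.0}
alerts = predict_failures(reading)
```

Keeping one model per failure mode, rather than one monolithic model, is what lets alerts map directly onto the maintenance bill of materials.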
Philippe Thys - Tangent Works
Highlighting abnormal patterns directly from multivariate sensor readings, through anomaly detection in time series data (generated on top of failure codes returned from PLC and SCADA systems), helps with inspection and diagnosis and alerts operators to potential equipment failures during production runs. These signals can then be analysed and used as indicators of potential performance degradation or equipment failure. Time series machine learning differs significantly from standard machine learning practice, and many current machine learning solutions applied to time series underperform and are not agile enough to react to the dynamics of new data inflows.
Increase your return on assets with TIM’s anomaly detection capabilities by reducing unplanned maintenance and increasing equipment uptime. The ease of use, the speed of setting up and generating trained models, and a very fast AI engine enable companies to implement near real-time anomaly detection at an unprecedented scale. Users can now create and deploy models at the field level (sensors) as well as the direct control and plant supervisory levels, and can create and maintain Machine Learning capabilities that keep up with the dynamics of their enterprise.
Typical input data in this use case consists of raw time series data from PLCs, SCADA and sensors, such as vibration, temperature, revolutions, pressure, quality, etc. TIM then returns the detected anomalies to the user, at component, subcomponent, machine, and/or production line level.
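The multi-level output can be illustrated by rolling component-level anomalies up to the machines and production lines they belong to. The hierarchy below is hypothetical:

```python
# Sketch: roll component-level anomaly flags up to machine and line level.
# The component -> (machine, line) hierarchy is a hypothetical example.

HIERARCHY = {
    "bearing_1": ("pump_A", "line_1"),
    "seal_2":    ("pump_A", "line_1"),
    "motor_3":   ("press_B", "line_2"),
}

def roll_up(component_anomalies):
    """Map component-level anomalies to the affected machines and lines."""
    machines, lines = set(), set()
    for comp in component_anomalies:
        machine, line = HIERARCHY[comp]
        machines.add(machine)
        lines.add(line)
    return machines, lines

machines, lines = roll_up(["bearing_1"])
```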
The recent evolution of Internet of Things (IoT) technologies has resulted in the deployment of massive numbers of sensors in various fields, including manufacturing, energy and utilities, and logistics. These sensors produce huge amounts of time series data, but understanding the data generated and finding meaningful patterns remains an obstacle to successful IoT implementations.
A common problem that can be solved with IoT data is anomaly detection, where temporal patterns in the data are used to identify mechanical or electronic anomalies in a piece of equipment, prior to the occurrence of a failure. This approach can help to minimize downtime for manufacturing pipelines or other IoT networks, thus preventing potential blocks on revenue streams. It can also enable cost savings by allowing maintenance interventions to be scheduled only when necessary.
Machine learning techniques provide an ideal solution for solving anomaly detection problems. However, they are typically time-consuming and costly to implement. TIM provides a revolutionary solution to this problem by allowing the development of rigorous anomaly detection models with minimal lead time. This is due to its highly automated and exceptionally fast modeling algorithm.
Due to its speed, TIM’s anomaly detection can easily be applied at scale, to huge numbers of IoT instruments. In addition, the TIM algorithm is extremely lightweight, and can thus be run directly on edge devices, reducing the need for costly network communication.
Finally, TIM’s API-first infrastructure makes it simple to integrate models into a production workflow.
Anomaly detection can be performed on a single instrument output data field, or it can combine information from multiple fields. For example, the information from a number of manufacturing instruments might be used to predict a quality metric for a material being produced. Or multiple data points from a single instrument might be used to predict when failure is likely to occur.
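As a sketch of the multi-field case, the snippet below combines several instrument readings into one predicted quality metric with a weighted linear model. The field names, weights and intercept are hypothetical; in practice they would be learned from historical data.

```python
# Sketch: predict a quality metric from multiple instrument fields with a
# weighted linear model. Weights, intercept and readings are hypothetical.

WEIGHTS = {"temperature_c": -0.05, "pressure_bar": 0.8, "speed_rpm": 0.001}
INTERCEPT = 90.0

def quality_score(readings):
    """Linear combination of sensor fields giving a predicted quality metric."""
    return INTERCEPT + sum(WEIGHTS[k] * v for k, v in readings.items())

score = quality_score(
    {"temperature_c": 180.0, "pressure_bar": 2.5, "speed_rpm": 1200.0}
)
```

Monitoring the residual between this prediction and the measured quality is one simple way to turn the multi-field setup into an anomaly detector.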