As illustrated across our website, time series are everywhere. Each industry vertical, each domain, each company comes into contact with time series in one way or another. In the use case library below, you can explore the extensive applications of time series, or look for applications in your specific field of interest.
Michal Bezak - Tangent Works
Nowadays, trading is mostly automated; when an order is placed, it is most likely a robot that pulled the trigger. AI (robots) took over this space years ago.
To build a profitable and sustainable trading system, many elements are needed: risk management, collection of the right data, back-testing, etc. There is a plethora of areas that can be addressed with AI/ML tools, and they can be framed as forecasting, classification, or anomaly detection problems. All of these are problems that TIM can solve.
TIM is robust and fast. It can work with data sampled at any rate from milliseconds to years, data that contains gaps, and irregularly sampled data (just like tick data). It can also build a new ML model in a very short time, even with each forecast (or classification).
The effort and time required to set up a pipeline with TIM is reduced to a fraction of what would typically be required. By design, TIM automates most of the steps required for setup and operations, and offers a robust ML solution capable of adapting quickly to structural changes.
Depending on the problem being solved, prediction horizon, and market, different data may be required. Knowing which data to use is typically part of well protected intellectual property.
Taking short-term forecasting as an example, market (bar) data combined with technical indicators and correlated or cointegrated assets would be a good start. In a game played with leveraged positions chasing the tiniest deltas (movements), high-quality data makes the difference.
TIM’s output consists of forecasted value per step (sample) over desired forecasting horizon.
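As an illustrative sketch of the kind of input panel described above, bar data can be enriched with simple technical indicators before being handed to a forecasting engine. The column names, window lengths and data below are invented for illustration:

```python
import pandas as pd
import numpy as np

# Illustrative only: build a feature panel of the kind described above
# (bar data plus simple technical indicators) for short-term forecasting.
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=200, freq="1min")
close = pd.Series(100 + rng.normal(0, 0.1, 200).cumsum(), index=idx, name="close")

panel = pd.DataFrame({"close": close})
panel["sma_20"] = close.rolling(20).mean()           # simple moving average
panel["ret_1"] = close.pct_change()                  # 1-step return
panel["volatility_20"] = panel["ret_1"].rolling(20).std()

# Rows with incomplete indicator history are dropped before modelling.
panel = panel.dropna()
print(panel.tail(3))
```

Correlated or cointegrated assets would be added as further columns in the same way, aligned on the timestamp index.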
James Smith - Demand Data
How to prepare your retail and online production, inventory and distribution for changing COVID-19 measures?
External factors can have a huge impact on your demand forecast for specific products in specific locations and channels. The constantly changing government guidance during the COVID pandemic can also cause huge swings in demand, especially when wide-reaching regulation – such as decisions to close restaurants or limit movement – can come and go within hours for entire towns, cities or countries. To help businesses dynamically allocate their scarce resources of staff and inventory, TIM and real-time instant ML can create forecasts that are as dynamic as the events that influence them. This enables fast business decisions to take advantage of opportunities and limit the costs of reacting to current events.
Talk to our specialist about adaptive retail sales forecasting
The TIM Engine is perfectly suited to this application because of its speed, resilience and ease of deployment. Other forecasting methods, such as statistical forecasting, are far too slow to react to the modern business climate. Univariate models miss the complex interactions between seasonal changes, externally driven changes in demand and externally driven changes in mobility. The speed of forecasting is also incredibly important in determining which factors cause permanent changes to demand and customer behavior and which changes will be temporary.
Using the TIM Engine, an analyst can quickly iterate on models using dozens or even hundreds of features to get predicted impacts on demand in near real-time. This is a must-have tool for anyone in an organization who is responsible for planning of inventory and/or staffing levels.
As an example, using the TIM product we can immediately predict the impact a new restriction in a specific town would have on online ordering in specific postcodes. This can be used to ensure the correct allocation of capital equipment (trucks), staffing (drivers) and product (warehouses) even in advance of the restriction being implemented. With the TIM Engine, this forecast can be available in minutes to respond to significant events that may be happening within only a few days. We can see that a new announcement of a restriction to restaurants causes an immediate surge in online grocery demand that lasts for several days before subsiding to pre-restriction levels. This analysis can be extended to review the impact on specific products at a SKU level – including the ability to run independent demand forecasts for individual SKUs at individual stores in seconds.
Demand Data has real-time access to external data of significance, such as weather data (forecast and history) for any geographic point, as well as real-time COVID cases, mobility and restrictions for each postcode in most major countries. This data can instantly be prepared as inputs to be combined with sales data for specific products, channels or store locations. We have templates available which can plug into sales data at a store level and create the base models instantly (including COVID, weather and human mobility). From there, your analysts can iterate using other data or assumptions they have and get feedback in seconds on which assumptions or data are good predictors and which ones are not.
Watch a video taking you through this use case below:
Philippe Thys - Tangent Works
Pharmaceuticals & Life Sciences
Manufacturing & Natural Resources
Companies that monitor (in real time) tactical and strategic change to identify gaps and discover market opportunities will maintain or increase their competitive edge. Operations managers and senior executives use control towers to gain visibility into supply chain operations. By collecting and combining data from a growing number of new information sources like IoT, GPS and electronic logging devices, companies gain an additional layer of intelligence across their operations and across enterprises. Information on production processes, stock levels, shipments and orders can now be tracked at a new level of detail, enabling supply chains to optimize contingency plans by monitoring disruptions, evaluating the impact on the plan, and acting in real time.
Incorporating these new data streams into traditional track and trace, S&OP, or supply chain monitoring activities is not straightforward; organizations are looking to data science to open up new options to produce meaningful outputs and to use that output for improved risk-mitigating strategies and operational processes, boosting their performance when reacting to unexpected disruptions or market opportunities. 99% of the data used in control towers to monitor supply chain information consists of time series data streams. This information can be used in AI/ML to evaluate future behavior, calculate the impact on performance, and act accordingly.
Evaluating the potential performance improvement from the introduction of AI/ML with the goal of adapting your operational scripts is not an easy task. Defining the right AI/ML strategy, deciding which ML approach to use, how to train and configure your models and how to deploy them tends to be a lengthy and costly project. And then there is also the enormous number of potential uses, scenarios, and configurations of supply chain networks to take into account.
Once you choose, configure and deploy your models, you will need to continuously monitor your setup for performance (accuracy) deterioration due to changes in the data sources, changes in supply chain and logistics networks, changes of business models, and last, but not least, the dynamics of your business and industry. Typically, this is covered by having a department of specialists who maintain and optimize these configurations and deployments.
With TIM’s Real Time Instant Machine Learning (RTInstantML) forecasting, organizations can skip the configuration process and immediately deploy and execute ML models that adapt to the input data streams without the need for human intervention – and all of this in near real time. This allows companies to embed AI/ML into their control towers, gaining better insight into future events and their impact, and hence to react faster and smarter. And all this at a fraction of the cost and time required by traditional ML approaches.
Typical inputs of this use case include data from the supply chain operations (IoT, schedules, planning, throughput, etc.), logistics (ELD, GPS, IoT, stock levels, order status, etc.), sales and marketing (campaigns, new orders, etc.) and environmental data (infrastructure, weather, etc.) Typical outputs of this use case consist of time series forecasts on various reporting aspects in the control towers (performance, ETAs, etc.). This data can be compared to the to-be situation to calculate predicted performance, potential deviations from the plan, etc. as input to your contingency actions.
Henk De Metsenaere - Tangent Works
Consumption of medical supplies and the need for certain raw materials or other potentially scarce resources need to be forecasted by governments and medical institutions. Normal fluctuations in consumption patterns can be complemented by sudden structural changes due to extreme events, climate change influences and epidemiological changes. This requires adaptive forecasting models that capture new dynamics fast and give insights into the underlying demand influencers.
TIM RTInstantML allows end users and operational experts to automatically generate predictive models. The Augmented Machine Learning capabilities of TIM give insight into the dynamics that underpin the forecasted values. TIM allows for fast recalibration or recalculation in minutes, so models remain accurate and update as new data flows in.
Typical data sources include weather information, calendar information, demographics, major event planning and epidemiological indicators.
Henk De Metsenaere - Tangent Works
Admission rates of patients in hospitals affect both the Supply Chain and Human Resources planning of a hospital. Admission rates fluctuate based on human factors linked to weather, calendar and time of day information. Disease spread and epidemiological evolutions introduce potential structural changes. The differences between day and night, correlations with weather, public holidays, events and medical parameters further define admission rates. Hospitals need to organise and optimise their supply chains and staffing accordingly.
TIM’s RTInstantML technology gives business users the capability to generate predictive models in an automated and fast way. This allows for fast results and what-if analysis. TIM’s Augmented Machine Learning capabilities give users insights into the underlying influencing parameters, enabling them to understand and analyse forecasted results. The adaptability TIM brings allows fast adjustment of forecasts to structural changes in the data, so that forecasting models can adapt to new situations and events, such as pandemic information.
Typical data for admission rate forecasting includes weather information, calendar information, time of day insights, epidemiological data, etc.
Philip Wauters - Tangent Works
Wind turbines have become progressively more important as their share of energy production and the penetration of wind energy into power systems steadily increase. With this, the need for reliability in the production capacity of wind turbines has increased as well. The turbines must operate as smoothly as possible, since unscheduled stoppages can lead to significant production losses. This use case highlights the importance of operations and predictive maintenance, and especially the role of health monitoring. Continuous monitoring of wind turbine health using anomaly detection improves turbine reliability and efficiency, thus reducing maintenance and wind power costs. Finally, it allows for optimal timing of turbine maintenance and repairs, reducing the impact on overall energy production and avoiding catastrophic failure of the turbines.
Due to the highly automated, exceptionally fast and reliable modeling algorithm, TIM can build multiple anomaly detection models in a limited amount of time. It is especially useful in this case, since wind turbines often operate in wind farms where multiple turbines need to be monitored simultaneously. The speed and frequency of model building that TIM is capable of also allows for real time notifications of suspicious behavior in any turbine.
Building a model for the detection of anomalous behavior in wind turbines requires a set of training data with several variables. The power output of a wind turbine is dependent on the efficiency of the blades, gear assembly, alternator/dynamo, as well as wind speed, wind direction and wind consistency. Also, the taller the wind turbine, the greater the energy produced, since wind speeds are greater at higher altitudes. With these variables set up in a time series format, TIM can use its anomaly detection capabilities to determine whether or not a power output observation is abnormal.
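As an illustrative stand-in for this idea (not TIM’s actual algorithm), a "normal behaviour" model of power output versus wind speed can be fitted, with observations that deviate strongly from it flagged as anomalous. All data and thresholds below are invented for illustration:

```python
import numpy as np

# Illustrative stand-in (not TIM's algorithm): flag anomalous power output
# by modelling expected power as a function of wind speed and flagging
# observations whose residual exceeds 3 standard deviations.
rng = np.random.default_rng(1)
wind_speed = rng.uniform(3, 15, 500)                   # m/s
power = 0.5 * wind_speed**3 + rng.normal(0, 30, 500)   # idealised cubic power law
power[100] -= 800                                      # inject a fault

# Fit a cubic polynomial as the "normal behaviour" model.
coeffs = np.polyfit(wind_speed, power, 3)
residuals = power - np.polyval(coeffs, wind_speed)
threshold = 3 * residuals.std()
anomalies = np.flatnonzero(np.abs(residuals) > threshold)
print(anomalies)
```

The same pattern scales across a wind farm by fitting one such model per turbine.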
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution template on this use case! You can find it under Wind Turbine.
Lyubomira Buresch - Polygon Research
The velocity of change in the mortgage industry is outpacing the abilities and reach of existing prepayment and profitability models, especially in the tail-risk coronavirus world. With over $16T total outstanding mortgages, the US mortgage market requires a scalable, accurate forecasting and modeling solution.
TIM enables users to iteratively forecast mortgage prepayment, delinquency, and default. This can help investors, GSEs, servicers, lenders, and other stakeholders to evaluate and quantify the valuation and credit risk of their mortgage assets.
The US mortgage industry generates a vast quantity of data highly relevant to profitability and risk analysis. Some of this data is in users’ possession, some of it is publicly available, and some of it can be acquired. InstantML allows you to understand, differentiate, and quantify the relevance and impact of each data source to your forecasts.
Carl Fransman - Tangent Works
Manufacturing & Natural Resources
Complex and distributed assets (e.g. differently configured pumps or compressors installed across the globe) fail for many reasons; some failures are purely due to the conception of the asset and represent normal wear and tear. Other failures, though, are due to local operating conditions and/or the specific configuration of the asset. Gathering data through IIoT platforms and performing anomaly detection not only allows such failures to be foreseen; when the anomaly detection leads to explainable forecasts, engineers can also perform root cause analysis. This leads to faster resolution of the issue and allows R&D to analyse failures and come up with more robust and reliable equipment, which is even more important under servicisation-type contracts where the manufacturer bears (some of) the cost of maintaining the equipment and guaranteeing uptime.
TIM’s forecasting and anomaly detection capabilities not only produce accurate results, but these results are fully explainable; therefore TIM’s value extends beyond avoiding the failure and supporting predictive maintenance. TIM’s information can be analysed by technical maintenance teams in order to pinpoint the culprit rapidly and thus save precious production time by limiting downtime. TIM’s information can also be analysed by R&D teams to determine structural improvements to the equipment.
Typical datasets used in this use case consist of CMMS data combined with IIoT data and potentially external elements, such as operating conditions (weather, vibrations, speed, etc.).
Philippe Thys - Tangent Works
Manufacturing & Natural Resources
Supply chains are under continuous pressure to maintain or improve their market position. The digital revolution led to a surge of digital transformation initiatives, as well as the emergence of new players who leverage technological innovations to create new business models, triggering a tidal wave of disruptive contenders in an already highly competitive world. The speed of innovation leads to unprecedented dynamics; only the most agile supply chains are able to (re-)act and adapt.
As a result of these new dynamics, traditional mid- to long-term strategies must be reviewed and adapted at a higher frequency. Evaluating the impact of market disruption, on both the demand and the supply side, requires advanced intelligence and analytics that can be set up and reconfigured rapidly to evaluate risk and discover opportunity. Traditional AI, and even automated machine learning approaches, are expensive, slow and difficult to adapt to support the agility and velocity required to keep your business on track in the short and the long term.
By combining business data and market prognosis scenarios with real time instant machine learning, organisations can create new, improve existing, and evaluate more what-if scenarios and simulations for strategic planning and business transformation. Some examples of business strategy planning processes that benefit from InstantML forecasting are strategic budgeting exercises, business transformation and design initiatives, strategic product lifecycle planning and optimisation, and the product and product maintenance design process.
TIM (Real Time) Instant Machine Learning can be used to complement what-if and simulation scenarios for budget exercises, adapting maintenance for product support strategies, run forecasting and anomaly detection on digital twins in product design, do risk assessments in your business transformation process, etc. With TIM users can shorten the time to run and compare scenarios, include different future market projections as predictor candidates and easily interface with simulation tools.
Typical inputs for this use case include historical demand, supply, production, prices, costs and strategic performance data, complemented with external data concerning weather, sales periods, global and regional disruptive events, sales campaigns, product introduction information, etc. In return, TIM’s output consists of middle to long term time series forecasts on budget, sales, etc.
Philippe Thys - Tangent Works
Getting the most out of your production assets, especially with constraints, is the foundation of increasing the flow of profits through your production lines. The proliferation of time-stamped data follows naturally from the digitisation of industry. The ongoing deployment of billions of connected sensors will only accelerate the trend. As a consequence, lots of decision-making processes that used to be fairly static (based on stable information) are becoming dynamic (based on streaming data). Today, machines are connected through communication that is initiated and deployed within local gateways or virtual machines. We see a fragmented base of protocols and IT-systems running the machines globally and many different configurations of the same machines even within the same plant.
Increase your return on assets with TIM’s anomaly detection capabilities by reducing unplanned maintenance and increasing equipment uptime. Indirectly, this will improve on-time delivery performance and customer retention. The ease of use and speed of setting up and generating trained models, together with a very fast AI engine, enable companies to implement near real time anomaly detection at an unprecedented scale. Users can now create and deploy models at all levels of manufacturing control operations: at field level (sensors), direct control level, plant supervisory level, production control level, up to the production scheduling level. Furthermore, with TIM it is easy to generate a collection of time series forecasting models that map to different types of failures (electrical, mechanical, integrity, structural…) at different levels in the equipment’s (maintenance) bill of material. Users will be able to create and maintain Machine Learning capabilities that keep up with the dynamics of their enterprise.
Typical input data in this use case consists of raw time series data from PLCs, SCADA, sensor data, data from the maintenance scheduling system and data from the condition monitoring process. Concrete examples include data on vibration, temperature, revolutions, pressure, quality, etc., as well as past error codes, future condition monitoring alerts, past and future maintenance schedules, past maintenance information and past and future equipment operations schedules. After processing this data, TIM returns equipment failure predictions as output.
Philippe Thys - Tangent Works
Highlighting abnormal patterns directly from multivariate sensor readings helps with inspection and diagnosis: anomaly detection in time series data (generated on top of failure codes returned from PLC and SCADA systems) can alert for potential equipment failures during production runs. These signals can then be analysed and used as indicators of potential performance degradation or equipment failure. Time series machine learning differs significantly from standard machine learning practice, and many current machine learning solutions applied to time series underperform and are not agile enough to react to the dynamics of new data inflows.
Increase your return on assets with TIM’s anomaly detection capabilities by reducing unplanned maintenance and increasing equipment uptime. The ease of use, speed of setting up and generating trained models, together with a very fast AI engine enables companies to implement near real time anomaly detection at an unprecedented scale. Users can now create and deploy models at field level (sensors) as well as direct control and plant supervisory levels. They will be able to create and maintain Machine Learning capabilities that keep up with the dynamics of their enterprise.
Typical input data in this use case consists of raw time series data from PLCs, SCADA and sensors, such as vibration, temperature, revolutions, pressure, quality, etc. TIM then returns the detected anomalies to the user, at component, subcomponent, machine, and/or production line level.
Carl Fransman - Tangent Works
Track-operated transportation systems (metropolitan, or passenger and freight rail) can fail in ways that are very expensive; from merely causing a delay (often blocking a track for follow-on traffic) to derailments. En-route failures need to be avoided at all costs for both safety and economic reasons. Predicting failures is complex though, because of a high degree of customisation among rolling stock and because the system is impacted by varying factors such as load and weather.
An all-in predictive maintenance roll-out requires a huge upfront investment in systems and change management. TIM’s extremely fast approach to generating predictions permits rail and track operators to roll out predictive maintenance one use case at a time, which reduces organisational stress due to change management (in fact, once initial cases have proven value, teams typically demand to be next!) and also leads to very rapid ROI. This means projects can be kickstarted top-down as well as bottom-up. The low initial investment needed to prove the value of AI/ML through TIM allows users to put together a business case based on the actual impact on the business.
TIM typically runs on top of a data and/or IoT platform and connects through an API for automated data ingestion. This can include schedules, sensor data, load data (passengers or cargo load), weather data, etc. Forecasted failures are typically fed to a service planning system or CMMS for planning preventive maintenance.
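A minimal sketch of such an automated ingestion step is shown below. The payload shape, field names and horizon parameter are assumptions for illustration; the platform’s actual API may differ:

```python
import json
from datetime import datetime, timedelta

# Hypothetical sketch: the field names and payload shape below are
# assumptions for illustration, not an actual platform API.
def build_ingestion_payload(sensor_rows, horizon_hours=24):
    """Package sensor/load/weather observations for a forecast request."""
    return {
        "data": [
            {"timestamp": ts.isoformat(), "values": values}
            for ts, values in sensor_rows
        ],
        "forecast_horizon_hours": horizon_hours,
    }

start = datetime(2024, 1, 1)
rows = [(start + timedelta(hours=h), {"axle_temp": 40 + h, "load_t": 12.5})
        for h in range(3)]
payload = build_ingestion_payload(rows)
print(json.dumps(payload, indent=2))
# This payload would then be POSTed to the data platform's ingestion
# endpoint, with forecasted failures routed onward to the CMMS.
```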
Carl Fransman - Tangent Works
Data centers are critical infrastructure for countless operations. HVAC (Heating, Ventilation and Air Conditioning) failures can lead to a partial or full shutdown of the data center infrastructure in order to avoid critical equipment destruction. These shutdowns can cost hundreds of thousands of dollars in service, repair and SLA fines. The ability to forecast HVAC malfunctions in time allows for predictive maintenance intervention, which can be planned during off-peak hours and allows for better system balancing during the intervention.
TIM’s approach to anomaly detection leads to highly accurate results, because TIM deploys the optimal model for each situation; i.e. a different model may be required at 2 AM compared to 2 PM. TIM not only detects an anomaly, but also explains what leads to this result; feeding this information back to the technical teams empowers them to rapidly pinpoint what will cause a failure and take appropriate preventive action.
Typical data for this use case relates to power consumption, sensor data (e.g. DeltaP) from the HVAC and filter age, among others. Data from external sources, such as weather data, is also often included.
Daniel Parton - Bardess
The recent evolution of Internet of Things (IoT) technologies has resulted in the deployment of massive numbers of sensors in various fields, including manufacturing, energy and utilities, and logistics. These sensors produce huge amounts of time series data, but understanding the data generated and finding meaningful patterns remains an obstacle to successful IoT implementations.
A common problem that can be solved with IoT data is anomaly detection, where temporal patterns in the data are used to identify mechanical or electronic anomalies in a piece of equipment, prior to the occurrence of a failure. This approach can help to minimize downtime for manufacturing pipelines or other IoT networks, thus preventing potential blocks on revenue streams. It can also enable cost savings by allowing maintenance interventions to be scheduled only when necessary.
Machine learning techniques provide an ideal solution for solving anomaly detection problems. However, they are typically time-consuming and costly to implement. TIM provides a revolutionary solution to this problem by allowing the development of rigorous anomaly detection models with minimal lead time. This is due to its highly automated and exceptionally fast modeling algorithm.
Due to its speed, TIM’s anomaly detection can easily be applied at scale, to huge numbers of IoT instruments. In addition, the TIM algorithm is extremely lightweight, and can thus be run directly on edge devices, reducing the need for costly network communication.
Finally, TIM’s API-first infrastructure makes it simple to integrate models into a production workflow.
Anomaly detection can be performed on a single instrument output data field, or it can combine information from multiple fields. For example, the information from a number of manufacturing instruments might be used to predict a quality metric for a material being produced. Or multiple data points from a single instrument might be used to predict when failure is likely to occur.
Daniel Parton - Bardess
All efforts in marketing and advertising today rely on a wide array of data sources, usually including both internal and external datasets. While this allows for deep insights and highly efficient marketing campaigns, it can also cause problems.
Imagine you are analyzing an ad campaign, when you realize the number of impressions being delivered per day dropped dramatically at a certain date, two weeks ago. A frantic investigation reveals that something has changed in the external data source being used to target potential customers, but the vendor had not alerted you to this. This could equally affect a customer’s insights or segmentation project.
Using machine learning techniques for anomaly detection, you could have detected this ahead of time, instead of discovering the problem weeks or months down the line. However, implementing such a system from scratch requires much time and specialized expertise.
TIM provides a much-needed new approach to this problem, by making it possible to implement robust anomaly detection routines with minimal lead time. Firstly, it is highly automated, meaning no data science experience is required to build effective models. Secondly, the model training process is stunningly fast, taking only a few seconds for a typical dataset – this makes it very easy to build effective models and also allows for huge scaling possibilities. Finally, the API-first infrastructure makes it simple to integrate models into a production workflow.
TIM’s anomaly detection capabilities rest upon first defining “normal behavior” for a given variable or data field (achieved using the TIM forecasting model) and then extending that with an “anomalous behavior” learning algorithm.
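A minimal sketch of this two-step idea, with a naive weekly-seasonal baseline standing in for the forecasting model that defines "normal behavior" (all data and thresholds are invented for illustration):

```python
import numpy as np

# Sketch of forecast-based anomaly detection: define "normal behaviour"
# with a forecasting model (here a naive weekly-seasonal baseline stands
# in for a real one) and flag days that deviate too far from it.
rng = np.random.default_rng(2)
days = 90
impressions = 10_000 + 1_500 * np.sin(2 * np.pi * np.arange(days) / 7)
impressions = impressions + rng.normal(0, 200, days)
impressions[75:] *= 0.5                      # the external feed breaks: volume halves

# Naive weekly-seasonal forecast: predict the value seen 7 days earlier.
forecast = np.roll(impressions, 7)
residuals = (impressions - forecast)[7:]     # first week has no forecast
threshold = 4 * np.median(np.abs(residuals)) # robust, MAD-style threshold
flags = np.flatnonzero(np.abs(residuals) > threshold) + 7
print(flags)
```

The break on day 75 is flagged the day it happens, rather than weeks later.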
Ultimately, an anomaly detection platform with TIM at the center can provide organizations with much-needed confidence in the data that is fundamental to their operation.
TIM’s anomaly detection capabilities can be exploited using a single input variable – for example, if you want to detect anomalies for 1,000 fields from an external data source, you could build one model for each field. It can also be achieved when using multiple input variables. For example, you might want to detect anomalies in conversions, using inputs such as numbers of impressions across multiple marketing channels, economic metrics and more. TIM can handle both types of anomaly detection problems smoothly.
Elke Van Santvliet - Tangent Works
Industry, companies, cities, households… all consume energy. Whether opting for electricity, gas or thermal power – or, more likely, a combination of them – the need for energy is all around us. Both consumers and producers can benefit greatly from accurate estimates of future consumption, not least because the extreme volatility of wholesale prices forces market parties to hedge against volume risk and price risk. Acting on incorrect volume estimates is often expensive, but accurate estimates tend to require the work of data scientists. This leads to the next challenge, since data scientists are hard to find and hard to keep. The ability to accurately forecast future energy consumption is a determining factor in the financial performance of market players. Therefore, these forecasts are also a key input to the decision-making process.
The value of Machine Learning in this use case is clear, but has to be weighed against the costs and effort it introduces. To achieve accurate forecasts, relevant predictors should be used. TIM automates the generation of accurate forecasting models and tells you which input variables have the highest relevance in calculating the forecasts. Unlike data scientists, TIM creates these models in seconds rather than days or even weeks. The scalability of TIM’s model generation process allows hundreds of models to be generated at the same time. This lets valuable data scientists focus on the areas where their expertise matters most.
Let’s put this in numbers. As a rough estimate, a 1% reduction in the MAPE (Mean Absolute Percentage Error) of the load forecast for 1 GigaWatt of peak load can save a market player about:
And these numbers don’t even take into account the savings on data scientist capacity.
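For reference, the MAPE metric used in this estimate can be computed as follows (the data is illustrative):

```python
import numpy as np

# Mean Absolute Percentage Error: the average absolute forecast error,
# expressed as a percentage of the actual value.
def mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# Example: a load forecast that is off by 2% at every step.
actual = np.array([100.0, 200.0, 400.0])
forecast = np.array([98.0, 204.0, 392.0])
print(round(mape(actual, forecast), 2))  # → 2.0
```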
Explanatory variables in energy consumption use cases include historical load data at different levels of aggregation, as well as real-time measurements. These variables are supplemented by weather data, calendar information, day/night differences and production data.
TIM’s output in turn consists of the desired consumption forecast, at the same level of aggregation as the input target data, over short-term, medium-term and long-term horizons.
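As a sketch of how the calendar and day/night inputs mentioned above can be derived from a timestamp index (column names, night-hour cutoffs and data are illustrative):

```python
import pandas as pd
import numpy as np

# Illustrative: derive calendar and day/night features from a timestamp
# index, alongside the historical load itself.
idx = pd.date_range("2024-01-01", periods=48, freq="h")
rng = np.random.default_rng(3)
load = pd.Series(
    500 + 120 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 10, 48),
    index=idx, name="load_mw",
)

features = pd.DataFrame({"load_mw": load})
features["hour"] = idx.hour
features["weekday"] = idx.weekday                          # 0 = Monday
features["is_weekend"] = (idx.weekday >= 5).astype(int)
features["is_night"] = ((idx.hour < 6) | (idx.hour >= 22)).astype(int)
print(features.head())
```

Weather data and production data would be joined onto the same timestamp index as additional columns.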
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution template on this use case! You can find it under Electricity Load.