As illustrated across our website, time series are everywhere. Each industry vertical, each domain, each company comes into contact with time series in one way or another. In the use case library below, you can explore the extensive applications of time series, or look for applications in your specific field of interest.
Michal Bezak - Tangent Works
Contact centers typically operate with a pool of resources. Predicting the volume of incoming requests at specific times is critical for proper resource scheduling. In such cases, forecasts are expected for very short- and short-term horizons (e.g. a week ahead).
A high-quality short-term forecast brings confidence that the FTEs (full-time equivalents) planned for the next week are just right for delivering on SLAs, not to mention other benefits, such as higher confidence when planning absences, or improved morale among employees who no longer face overload from “sudden” volume peaks.
Predicting the volume of requests over mid-term horizons, e.g. three months ahead for weekly data, is an important input to resource management. It takes time (weeks, if not longer) to move people around, hire, upskill, or downsize the resource pool. Because of this, forecasts for longer horizons, from one month upwards, are needed.
The picture below depicts how contact centers are typically linked to WFM (workforce management), internal departments and other factors. This provides intuition about which factors should be included in the data used for building models and forecasting.
Big contact centers support not one but multiple regions, cultures, and languages; forecasting by language or country would very likely be beneficial.
Having a forecast for just one perspective, or for one or two prediction horizons, is not sufficient; the dynamics of an ever-changing business also mean that using a model built one month ago is suboptimal. The capability to build models and make new predictions instantly is therefore necessary for successful resource management.
TIM can forecast for any prediction horizon, from intra-day to days or weeks ahead, and can thus be used for short-term, mid-term as well as long-term forecasting. It builds new models in a fraction of the time, using the latest data, which helps to achieve better accuracy.
TIM is automated, so your analysts and data scientists gain free capacity to focus on other work. You would gain a new capability: more frequent forecasting, or forecasting across various perspectives, becomes possible with minimal additional effort.
The effort and time required to set up forecasting are reduced to a fraction of what would typically be required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution capable of quickly adapting to structural changes.
High-quality forecasts deliver the benefits outlined above: staffing that is right-sized against SLAs, more confident absence planning and better employee morale.
Explanatory variables should include historical actual volume values, weather predictors, holiday information, marketing activity (campaign) information, factors describing the customer base, planned outages, and/or other relevant data, with as low latency as possible.
TIM’s output consists of the forecasted volume of requests per hour/day/week over the selected prediction horizon.
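To make the expected data shape concrete, the sketch below assembles such an input dataset with pandas. It is a minimal illustration only; the file names and columns are hypothetical, and the merging logic stands in for whatever data pipeline feeds your forecasting setup.

```python
import pandas as pd

# Hypothetical sketch: assembling an hourly contact-center dataset in the
# shape a time-series engine typically ingests -- one timestamp column,
# the target (request volume) and explanatory variables. File names and
# columns are illustrative assumptions.
calls = pd.read_csv("call_volume.csv", parse_dates=["timestamp"])         # timestamp, volume
weather = pd.read_csv("weather_forecast.csv", parse_dates=["timestamp"])  # timestamp, temperature, ...
campaigns = pd.read_csv("campaigns.csv", parse_dates=["timestamp"])       # timestamp, campaign_active

df = (calls.merge(weather, on="timestamp", how="left")
           .merge(campaigns, on="timestamp", how="left"))

# Holiday flag from an illustrative calendar.
holidays = {"2021-12-24", "2021-12-25", "2021-12-26"}
df["is_holiday"] = df["timestamp"].dt.strftime("%Y-%m-%d").isin(holidays).astype(int)

# Rows where the target is missing but predictors are known define the
# forecast horizon (e.g. the week ahead).
df.to_csv("forecasting_input.csv", index=False)
```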
Michal Bezak - Tangent Works
Concrete is one of the most important building materials used in construction. It is a composite of fine and coarse aggregates bonded together with a fluid cement that hardens over time to form a solid mass.
Ensuring the strength, quality and durability of concrete is critical for the stability of buildings. Because of this, concrete must pass quality control checks on various parameters.
Compressive strength is one of the parameters used to evaluate the quality. In short, compressive strength is the capacity to withstand loads. It is measured on a universal load testing machine that requires several samples.
With the capability to estimate compressive strength through machine learning, the testing process can be optimized.
TIM can build ML models automatically using historical data. The same principle, a model used for calculating future values (a forecast), can also be applied to evaluate quality parameters.
Explanatory variables should include volumes of inputs such as cement, ash, water, and fine/coarse aggregate.
TIM’s output consists of a value in MPa calculated for a given input.
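As a rough, generic illustration of this principle (and explicitly not TIM's own algorithm), the sketch below fits a regression model that maps mixture composition to strength in MPa, assuming a hypothetical CSV with the columns listed above.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical sketch (not TIM's algorithm): learn compressive strength in
# MPa from mixture composition. File name and columns are assumptions.
df = pd.read_csv("concrete_mixtures.csv")
X = df[["cement", "ash", "water", "fine_aggregate", "coarse_aggregate"]]
y = df["compressive_strength_mpa"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

print(f"R^2 on held-out samples: {model.score(X_test, y_test):.2f}")
print("Predicted strength (MPa):", model.predict(X_test.iloc[:1]))
```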
Michal Bezak - Tangent Works
One of the factors that have an impact on battery health and capacity is temperature. With rising temperature, there is more capacity available for discharge, and vice versa. Temperature is also an important factor during battery charging: to maximize the lifespan of Li-ion batteries, they should not be charged below 0°C.
Nowadays, advanced battery systems rely on cooling and heating mechanisms that help batteries operate efficiently (and keep them healthy) even in extreme conditions.
Knowing when to take action to prevent overheating starts with an accurate forecast, whether for a single battery or for multiple batteries installed on a grid.
With various deployment options, including on the edge, TIM can be used in various industry-specific devices (e.g. in EVs).
TIM can build ML models from time-series data and predict temperature tens of seconds or minutes ahead. Data from device sensors are often sampled on a seconds or even milliseconds basis; TIM can work with data sampled at any rate, starting from milliseconds.
Rebuilding models for each battery regularly can be incredibly beneficial, especially when you consider the factors specific to each battery. Batteries are known to degrade with each charge/discharge cycle; thus a model built for a new battery may not be relevant for an older one. Moreover, the conditions in which batteries are operated (e.g. ambient temperatures) also differ. The discharge profile, reflecting usage, is another dynamic factor.
Explanatory variables should include measurements from relevant sensors, such as voltage, current, temperature, external conditions and others.
TIM’s output consists of forecasted temperature values.
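Since sensor logs often arrive at millisecond resolution, a common preprocessing step is resampling them onto a uniform grid before modeling. A minimal sketch with pandas, assuming a hypothetical sensor log file:

```python
import pandas as pd

# Hypothetical sketch: resample millisecond-resolution battery sensor logs
# (voltage, current, temperature, ambient temperature) to a uniform
# 1-second grid before modeling. File name and columns are assumptions.
raw = pd.read_csv("battery_sensors.csv", parse_dates=["timestamp"])
raw = raw.set_index("timestamp").sort_index()

uniform = raw.resample("1s").mean()      # average readings per second
uniform = uniform.interpolate(limit=5)   # bridge short sensor dropouts
uniform.to_csv("battery_input_1s.csv")
```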
Michal Bezak - Tangent Works
The accelerating adoption of electric vehicles (EVs) is driving improvements in battery technology at unprecedented speed. Bigger capacities, faster charging, and a longer battery lifespan are in focus.
Despite the progress, battery capacity still imposes constraints on how we use batteries, and until there is substantial progress, information about how much time is left until complete discharge is particularly important.
Knowing how much time is left helps us plan subsequent actions, such as an optimal route, when to charge, how much additional load can be drawn, etc.
TIM allows for various deployment methods (from edge to cloud). TIM can be deployed inside the device (e.g. inside an electric vehicle), or in the cloud to which the battery grid is connected.
TIM can build ML models from time-series data and predict the remaining time to discharge tens of seconds or minutes ahead. Data from device sensors are often sampled on a seconds or even milliseconds basis; TIM can work with data sampled at any rate, starting from milliseconds.
Rebuilding models for each battery regularly can be incredibly beneficial, especially when you consider the factors specific to each battery. Batteries are known to degrade with each charge/discharge cycle; thus a model built for a new battery may not be relevant for an older one. Moreover, the conditions in which batteries are operated (e.g. ambient temperatures) also differ. The discharge profile, reflecting usage, is another dynamic factor.
Explanatory variables should include measurements from relevant sensors, such as voltage, current, temperature, external conditions and others.
TIM’s output consists of the forecasted time remaining until complete discharge.
Michal Bezak - Tangent Works
Every year, billions of payment card transactions are made worldwide. Card companies spend vast amounts of resources on keeping card operations fast and secure. Fraudulent activity related to the misuse of cards affects both debit and credit cards, and the costs incurred due to card fraud run as high as tens of billions of dollars annually.
This is a broad topic: securing card operations does not stop at protecting data against breaches. Card issuers, banks and merchants need to take countermeasures to combat card payment fraud. Considering the vast volumes and velocity involved, this would not be possible without automation, and AI/ML is the natural choice.
TIM’s RTInstantML technology builds ML models in an automated fashion in a fraction of the time. Its capabilities cover use cases in time-series forecasting, classification and anomaly detection. Detecting fraudulent activity is a task for classification and/or anomaly detection.
Due to its hyper-automation and speed, (re)building a new ML model every hour, every couple of minutes, or on demand for specific transactions is fully possible.
From an operations perspective, TIM can be deployed rapidly and is easy to operate. It can run in the cloud or on the edge, scales automatically, is robust enough to withstand defects in data, and supports various sampling rates.
In classification cases for the detection of fraudulent activity, it is necessary to provide labelled data, i.e. to include a flag indicating which class a given activity belongs to (1 for fraudulent, 0 otherwise).
Explanatory variables would typically include: amount, geolocation information, time parameters, effective credit limit, descriptors of previous transactions, channel, etc. In general, banks and card companies use additional predictors that improve accuracy but are kept undisclosed so as not to give fraudsters any hints.
TIM’s output in classification tasks is a value between 0 and 1; the closer to 1, the higher the probability that the activity is fraudulent.
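A minimal sketch of what the surrounding pipeline might look like: labelled training data on one side, and thresholding of the 0-1 score into review alerts on the other. File names, columns and the threshold value are hypothetical.

```python
import pandas as pd

# Hypothetical sketch: labelled transactions for training (is_fraud: 1 for
# fraudulent, 0 otherwise) and thresholding of 0-1 scores into alerts.
# Assumes tx and scores are row-aligned; all names are illustrative.
tx = pd.read_csv("transactions.csv", parse_dates=["timestamp"])
# columns: timestamp, amount, country, channel, credit_limit, ..., is_fraud

scores = pd.read_csv("classification_scores.csv")  # one score in [0, 1] per row

ALERT_THRESHOLD = 0.8  # tuned to the desired precision/recall trade-off
alerts = tx.loc[scores["score"] >= ALERT_THRESHOLD]
print(f"{len(alerts)} transactions flagged for manual review")
```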
Michal Bezak - Tangent Works
Companies across a variety of industries rely on machines: pumps, engines, elevators, turbines, etc. Some are more complex than others, but they all have one thing in common: degradation of material. With each cycle (moment) of operation, components lose some of their original physical parameters. Regular checks, diagnostics and maintenance, or even replacement, are an important part of machine operations.
The ideal scenario is to avoid failure of a given machine, so being proactive rather than reactive is for many businesses the only option. Acting at the right time also has real financial implications. Imagine two extreme situations: maintaining too early wastes the remaining useful life of components and money, while maintaining too late risks failure and costly unplanned downtime.
Predictive maintenance solutions can determine the optimal time for maintenance. Thanks to data coming from sensors and AI/ML, it is possible to get advice, almost in real time, on the best time to take action.
TIM can automatically build ML models from time-series data and either predict the time remaining (Remaining Useful Life, RUL) or classify whether the device is already in a window (zone) of possible failure within a certain period of time (number of cycles).
Data from machine sensors are often sampled in seconds, or even milliseconds. TIM can work with data at any sampling rate, starting from milliseconds.
Also, the effort and time required to set up TIM for production use are reduced to a fraction of what would typically be required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution.
Input: explanatory variables should include measurements from relevant sensors, values of key settings, information about failures, cycle numbers and/or other data.
Output: TIM’s output consists of a forecasted RUL value or a binary classification (1 or 0), depending on the given scenario.
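One common way to produce such labels from run-to-failure logs (a generic labeling convention, not something specific to TIM) is to count backwards from each unit's last observed cycle. A sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical sketch: derive RUL labels from run-to-failure sensor logs.
# For each unit, RUL at a cycle = last observed cycle - current cycle.
logs = pd.read_csv("sensor_logs.csv")  # columns: unit_id, cycle, sensor_1, ...

last_cycle = logs.groupby("unit_id")["cycle"].transform("max")
logs["rul"] = last_cycle - logs["cycle"]

# Classification variant: 1 if the unit is within the failure window.
FAILURE_WINDOW = 30  # cycles; a domain-specific choice
logs["in_failure_zone"] = (logs["rul"] <= FAILURE_WINDOW).astype(int)
```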
Michal Bezak - Tangent Works
Nowadays, trading is mostly automated, and when an order is placed, you can bet it was most likely a robot that hit the trigger. AI (robots) took over; this trend started years ago.
To build a profitable and sustainable trading system, many elements are needed: risk management, collection of the right data, back-testing, etc. There is a plethora of areas that can be addressed with AI/ML tools, and they can be framed as problems of forecasting, classification or anomaly detection; all of them are problems that TIM can solve.
TIM is robust and fast. It can work with data sampled anywhere from milliseconds to years, data that contain gaps, and irregularly sampled data (just like tick data). It can also build a new ML model in a truly short time, even with each forecast (or classification).
The effort and time required to set up a pipeline with TIM are reduced to a fraction of what would typically be required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution capable of quickly adapting to structural changes.
Depending on the problem being solved, the prediction horizon, and the market, different data may be required. Knowing which data to use is typically part of well-protected intellectual property.
For short-term forecasting, market (bar) data combined with technical indicators and correlated or cointegrated assets would be a good start; in a game played with leveraged positions chasing the tiniest deltas (movements), high-quality data make the difference.
TIM’s output consists of a forecasted value per step (sample) over the desired forecasting horizon.
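As a small, hedged example of the data preparation mentioned above, the sketch below enriches OHLC bar data with a few standard technical indicators before modeling; the file name and indicator choices are illustrative.

```python
import pandas as pd

# Hypothetical sketch: enrich OHLC bars with simple technical indicators.
bars = pd.read_csv("bars.csv", parse_dates=["timestamp"]).set_index("timestamp")
# columns: open, high, low, close, volume

bars["return_1"] = bars["close"].pct_change()               # one-step return
bars["sma_20"] = bars["close"].rolling(20).mean()           # 20-bar moving average
bars["volatility_20"] = bars["return_1"].rolling(20).std()  # rolling volatility

bars.dropna().to_csv("bars_enriched.csv")
```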
Michal Bezak - Tangent Works
The metro is one of the most important means of public transport across the globe. It cuts travelling time for millions of people every day, so its availability is critical.
Metro operations require precise management and forecasting systems. Accurate forecasts of the volume of passengers travelling on specific lines on a given day (and time) support decisions about the timely and right-sized dispatch of resources: the right number of carriages prepared with the right number of personnel, etc.
TIM can forecast for practically any prediction horizon, spanning from intra-day to days or weeks ahead. The effort and time required to set up forecasting are reduced to a fraction of what would typically be required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution capable of quickly adapting to structural changes.
Useful data, besides historical actual values, should also include weather and holiday information. Adding (traffic) data about adjacent connection points could improve accuracy even further.
TIM’s output consists of forecasted volumes over the desired forecasting horizon, per hour, 15 minutes, 5 minutes, etc., depending on the sampling of your data.
Michal Bezak - Tangent Works
Smart traffic solutions are becoming increasingly important and play a vital role in making our cities (and infrastructure) smarter. They comprise multiple parts, spanning hardware, software, and in recent years also AI/ML.
With prediction of traffic (and potential congestion), it is possible to better optimize the routes taken and thus cut the time necessary to transport goods, people, etc. The value derived from such a capability can be measured with proxy indicators, such as the avoidance of time wasted in traffic jams.
TIM can forecast for practically any prediction horizon, from intra-day to days or weeks ahead. The effort and time required to set up forecasting are reduced to a fraction of what would typically be required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution capable of quickly adapting to structural changes.
Explanatory variables can include, besides historical actual values for traffic at a given point, weather and holiday information. Adding (traffic) data about adjacent connection points could improve accuracy even further.
TIM’s output consists of forecasted traffic over the desired forecasting horizon, per hour, 15 minutes, 5 minutes, etc., depending on the sampling of your data.
For physical reasons, the transport of electrical energy via the transmission grid leads to losses; less power can be withdrawn than is fed into the grid. Losses are generated, for example, through resistance in lines or transformers, and are released as heat. Factors such as load, outside temperature and switching states in the grid have an impact on power loss. Transmission system operators (TSOs) must compensate for losses and manage them thoroughly, because losses influence the balance of the grid.
TIM has proven to deliver highly accurate intra-day and day-ahead forecasts and is fully capable of forecasting for practically any prediction horizon. Moreover, the effort and time required to set up such a forecasting solution are reduced to a fraction of what would typically be required. TIM, by design, automates most of the steps required for set-up and operations, and offers a robust ML solution capable of quickly adapting to structural changes.
Explanatory variables can include, besides historical actual values for losses, technical information about relevant points on the power grid, load, and weather data.
TIM’s output consists of the forecasted active power loss in the same unit of measurement (typically kW or MW) and granularity as the input data, over the desired forecasting horizon.
How to prepare your retail and online production, inventory and distribution for changing COVID-19 measures?
External factors can have a huge impact on your demand forecast for specific products in specific locations and channels. The constantly changing government guidance during the current COVID pandemic can also cause huge swings in demand, especially when wide-reaching regulation – such as decisions to close restaurants or limit movement – can come and go within hours for entire towns, cities or countries. To help businesses dynamically allocate their scarce resources of staff and inventory, the use of TIM and real-time instant ML can create forecasts that are as dynamic as the events that influence them. This can enable fast business decisions to take advantage of opportunities and limit the costs of reacting to current events.
The TIM Engine is perfectly suited for this application because of its speed, resilience and ease of deployment. Other forecasting methods, such as statistical forecasting, are far too slow to react to the modern business climate. Univariate models will miss the complex interactions between seasonal changes, externally driven changes in demand and externally driven changes in mobility. The speed of forecasting is also incredibly important in determining which factors are causing permanent changes to demand and customer behavior and which changes will be temporary.
Using the TIM Engine, an analyst can quickly iterate on models using dozens or even hundreds of features to get predicted impacts on demand in near real-time. This is a must-have tool for anyone in an organization who is responsible for planning inventory and/or staffing levels.
As an example, using the TIM product we can immediately predict the impact a new restriction in a specific town would have on online ordering in specific postcodes. This can be used to ensure the correct allocation of capital equipment (trucks), staffing (drivers) and product (warehouses) even in advance of the restriction being implemented. With the TIM Engine, this forecast can be available in minutes to respond to significant events that may be happening within only a few days. We can see that a new announcement of a restriction to restaurants causes an immediate surge in online grocery demand that lasts for several days before subsiding to pre-restriction levels. This analysis can be extended to review the impact on specific products at a SKU level – including the ability to run independent demand forecasts for individual SKUs at individual stores in seconds.
Demand Data has real-time access to external data of significance, such as weather data (forecast and history) for any geographic point, as well as real-time COVID cases, mobility and restrictions for each postcode in most major countries. This data can instantly be prepared as inputs to be combined with sales data for specific products, channels or store locations. We have templates available which can plug into sales data at a store level and create the base models instantly (including COVID, weather and human mobility). From there, your analysts can iterate using other data or assumptions they have and get feedback in seconds on which assumptions or data are good predictors and which ones are not.
Anyone who has ever sat in an S&OP meeting or in budget review will have heard this before: “We would have hit our target if it wasn’t for the (rain, wind, snow, sun, cold, hot…).” Very rarely are these comments backed up with real facts – but you will get the occasional nod around the room. Ah yes… I do recall it rained on the second Saturday in July – that must be why we missed our target by 12%. Or the line in a seasonal review meeting that says, “Our winter boot sales are up this year over last year because of the early winter storm this year and the late Indian summer in 2017, remember???”
The problem with these statements is that, though they may be true, they are mostly easy excuses that cover real problems within the business. Maybe there are supply issues, or merchandising issues, or a general decline or growth of our brand. Instead of comparing actuals to budget, what if a third line could be added, called an “Event-adjusted budget”? This way, managers, planners and executives can understand the impact that events, such as weather, may have played on our performance; most importantly, this will help us to understand and think about what else might be happening in our business. Sounds interesting?
The good news is that with advances in Machine Learning and tools like the TIM engine, this is entirely possible to do. By comparing actual sales history at different levels of your organization and bringing in weather conditions on the day as features, a company can start to understand the importance of these features in driving actual sales. Using this data, we can start to see what categories are heavily impacted (ice cream, umbrellas, or the heating bill) and what locations get impacted (outdoor shopping malls, online sales, restaurant delivery). As the TIM engine begins to understand what event variables are driving volatility, we can create historical forecasts based on simulated conditions. (This is what we would have predicted this year based on the same conditions as last year as an example.) This means that the next time your sales rep argues they would have made their target if it wasn’t for the cold front, your business can support this with real evidence. Not only that, but you can start to use live weather forecasts or long-term weather trends to drive predictive interventions – like adding an extra employee at your beach café this weekend.
With cloud storage being so cheap, detailed datasets containing years of history for hundreds of thousands of weather data points are readily available. Not to mention other event data like football games, traffic patterns, holidays and much more. This data is relatively easy to find and there are many services that make this data available historically and as forward-looking projections. With this data and your historical sales and historical budgets, the TIM engine can train models to produce forecasts based on simulated conditions (or after removing them entirely).
Launching a new product can be a very stressful but extremely rewarding task if executed correctly. After weeks or months of planning, your new product or line extension is ready to go to market, and in the initial few days or weeks of the launch, speed and agility are critical. There are several complexities to consider with new product introductions (NPIs). First, your initial forecast is a “cold start”, meaning that you must make many assumptions and usually have little data on how the product will perform. This makes it extremely difficult to anticipate how your product is going to be adopted in the market and how much inventory to buy or produce. Second, capacity is usually finite for an item that has not been made before, and the inventory risk is very high; rarely will your CFO want to take large inventory positions on an unproven item.
TIM’s InstantML can be an excellent solution for managing a new product forecast. Unlike other demand modeling tools, which require history to build models and are then inflexible to change, TIM can completely rebuild models as soon as the first sales data comes trickling in. With TIM, you can rebuild your models daily, or even multiple times per day, in the first several days of trading. By storing the projections from each model, you can evaluate how quickly the models converge, and know immediately whether your original projections were too high or too low. This enables the planning and launch team to react immediately and swiftly to rapidly changing conditions. It has been proven in the fashion industry that the market signal in the first few days of a new item trading is the single largest predictor of the overall long-term margin for that product. Isn’t this something you should be watching constantly and adapting to during those critical first few days?
A new product introduction starts, of course, with your initial forecast. If you have launched similar products in the past, you can use common features of the product to build an expected demand profile; when launching a new clothing item, for example, you can use color, cut, fabric, price points and category to help build this. As you move into the launch phase, sales data is critical, and getting as much detail as fast as possible is best. Sales by transaction at each store or sales channel is ideal.
TIM’s output in this case is a timestamped forecast, so you can see how the forecast changes over the next several weeks, hour by hour, as new data comes in. Unique to the TIM product is that every forecast can be built on an entirely new model, and each forecast can be compared to previous forecasts. When the forecasts converge and are consistent, you can safely assume that demand has stabilized; if they fluctuate or trend heavily up or down, you can immediately adjust your production plan, distribution footprint or inventory policies.
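A small sketch of the convergence check described above, assuming each rebuilt model's projections are stored as a "vintage" (file name and columns are hypothetical): shrinking deltas between consecutive vintages suggest demand has stabilized, while large or trending deltas call for a plan adjustment.

```python
import pandas as pd

# Hypothetical sketch: compare successive forecast vintages for a new product.
vintages = pd.read_csv("forecast_vintages.csv",
                       parse_dates=["made_at", "target_date"])
# columns: made_at (when the model was rebuilt), target_date, forecast

pivot = vintages.pivot(index="target_date", columns="made_at", values="forecast")

# Mean absolute change between each rebuild and the previous one.
delta = pivot.diff(axis=1).abs().mean()
print(delta)  # values shrinking over time indicate converging forecasts
```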
Philippe Thys - Tangent Works
Companies that monitor tactical and strategic change in real time to identify gaps and discover market opportunities will maintain or increase their competitive edge. Operations managers and senior executives use control towers to gain visibility into supply chain operations. By collecting and combining data from a growing number of new information sources, such as IoT, GPS and electronic logging devices, companies gain an additional layer of intelligence across their operations and across enterprises. Information on production processes, stock levels, shipments and orders can now be tracked at a new level of detail, enabling supply chains to optimize contingency plans by monitoring disruptions, evaluating the impact on the plan, and acting in real time.
Incorporating these new data streams into traditional track and trace, S&OP, or supply chain monitoring activities is not straightforward; organizations are looking to data science to open up new options to produce meaningful outputs, and to use that output for improved risk-mitigating strategies and operational processes that boost their performance when reacting to unexpected disruptions or market opportunities. 99% of the data used in control towers to monitor supply chain information consists of time-series data streams. This information can be used in AI/ML to evaluate future behavior, calculate the impact on performance, and act accordingly.
Evaluating the potential performance improvement from introducing AI/ML with the goal of adapting your operational scripts is not an easy task. Defining the right AI/ML strategy, which ML approach to use, and how to train and configure your models, and then deciding how to deploy them, tends to be a time-consuming and costly project. And then there is the enormous number of potential uses, scenarios and configurations of supply chain networks to take into account. Once you choose, configure and deploy your models, you will need to continuously monitor your setup for performance (accuracy) deterioration due to changes in the data sources, changes in supply chain and logistics networks, changes in business models, and, last but not least, the dynamics of your business and industry. Typically, this is covered by a department of specialists who maintain and optimize these configurations and deployments.
With TIM’s Real-Time Instant Machine Learning (RTInstantML) forecasting, organizations can skip the configuration process and immediately deploy and execute ML models that adapt to the input data streams without the need for human intervention, in near real time. This allows companies to embed AI/ML into their control towers and benefit from better insight into future events and their impact, and hence react faster and smarter. And all this at a fraction of the cost and time required by traditional ML approaches.
Typical inputs for this use case include data from supply chain operations (IoT, schedules, planning, throughput, etc.), logistics (ELD, GPS, IoT, stock levels, order status, etc.), sales and marketing (campaigns, new orders, etc.) and environmental data (infrastructure, weather, etc.). Typical outputs consist of time-series forecasts of various reporting aspects in the control towers (performance, ETAs, etc.). This data can be compared to the to-be situation to calculate predicted performance, potential deviations from the plan, etc., as input to your contingency actions.
Henk De Metsenaere - Tangent Works
Governments and medical institutions need to forecast the consumption of medical supplies and the need for certain raw materials or other potentially scarce resources. Normal fluctuations in consumption patterns can be compounded by sudden structural changes due to extreme events, climate change influences and epidemiological changes. This requires adaptive forecasting models that capture new dynamics fast and give insight into the underlying demand influencers.
TIM’s RTInstantML models allow end users and operational experts to automatically generate predictive models. TIM’s Augmented Machine Learning capabilities provide insight into the dynamics that underpin the forecasted values. TIM allows for fast recalibration or recalculation in minutes, so models remain accurate and up to date as new data flows in.
Typical data sources include weather information, calendar information, demographics, major event planning and epidemiological indicators.
Henk De Metsenaere - Tangent Works
Admission rates of patients in hospitals affect both the Supply Chain and Human Resources planning of a hospital. Admission rates fluctuate based on human factors linked to weather, calendar and time of day information. Disease spread and epidemiological evolutions introduce potential structural changes. The differences between day and night, correlations with weather, public holidays, events and medical parameters further define admission rates. Hospitals need to organise and optimise their supply chains and staffing accordingly.
TIM’s RTInstantML technology gives business users the capability to generate predictive models in an automated and fast way, allowing for fast results and what-if analysis. TIM’s Augmented Machine Learning capabilities give users insight into the underlying influencing parameters, enabling them to understand and analyse forecasted results. The adaptability TIM brings allows forecasts to adjust quickly to structural changes in the data, so that forecasting models can adapt to new situations and events, such as pandemic information.
Typical data for admission rate forecasting includes weather information, calendar information, time of day insights, epidemiological data, etc.
Philip Wauters - Tangent Works
Wind turbines have become progressively more influential, as their share of energy production and the penetration of wind energy into power systems are steadily increasing. With this, the need for reliability in the production capacity of wind turbines has increased as well. The turbines must operate as smoothly as possible, since their unscheduled stoppage can lead to significant production losses. This use case highlights the importance of operations and predictive maintenance, and especially the role of health monitoring. Continuous monitoring of wind turbine health using anomaly detection improves turbine reliability and efficiency, thus reducing maintenance and wind power costs. Finally, it allows for the optimal timing of turbine maintenance and repairs, reducing the impact on overall energy production and avoiding catastrophic turbine failures.
Due to its highly automated, exceptionally fast and reliable modeling algorithm, TIM can build multiple anomaly detection models in a limited amount of time. This is especially useful here, since wind turbines often operate in wind farms where multiple turbines need to be monitored simultaneously. The speed and frequency of model building that TIM is capable of also allow for real-time notifications of suspicious behavior in any turbine.
Building a model for the detection of anomalous behavior in wind turbines requires a set of training data with several variables. The power output of a wind turbine depends on the efficiency of the blades, gear assembly and alternator/dynamo, as well as on wind speed, wind direction and wind consistency. Also, the taller the wind turbine, the greater the energy produced, since wind speeds are greater at higher altitudes. With these variables set up in a time-series format (as sketched below), TIM can use its anomaly detection capabilities to determine whether or not a power output observation is abnormal.
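A minimal sketch of that setup, assuming a hypothetical SCADA export for a whole farm: the data is split per turbine so that a separate anomaly detection model can be built for each one.

```python
import pandas as pd

# Hypothetical sketch: one time-series dataset per turbine in a wind farm,
# so a separate anomaly detection model can be built for each turbine.
scada = pd.read_csv("wind_farm_scada.csv", parse_dates=["timestamp"])
# columns: timestamp, turbine_id, power_output, wind_speed, wind_direction, ...

for turbine_id, group in scada.groupby("turbine_id"):
    ts = group.drop(columns="turbine_id").set_index("timestamp").sort_index()
    # target = power_output; the wind measurements act as predictors
    ts.to_csv(f"turbine_{turbine_id}_input.csv")
```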
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution template on this use case! You can find it under Wind Turbine.
The velocity of change in the mortgage industry is outpacing the abilities and reach of existing prepayment and profitability models, especially in the tail-risk coronavirus world. With over $16T total outstanding mortgages, the US mortgage market requires a scalable, accurate forecasting and modeling solution.
TIM enables users to iteratively forecast mortgage prepayment, delinquency, and default. This can help investors, GSEs, servicers, lenders, and other stakeholders evaluate and quantify the valuation and credit risk of their mortgage assets.
The US mortgage industry generates a vast quantity of data highly relevant to profitability and risk analysis. Some of this data is in users’ possession, some of it is publicly available, and some of it can be acquired. InstantML allows you to understand, differentiate, and quantify the relevance and impact of each data source to your forecasts.
Accurate forecasts of sales and demand in the retail industry can make a huge difference in allowing a company to adapt to changing times. Furthermore, sales planners frequently rely on forecast-driven tools to adjust levers such as product pricing and the timing of promotions. There is frequently also a need to apply these techniques dynamically across different levels of product hierarchy, geography, or other dimensions. However, many companies are still using relatively rudimentary forecasting techniques, which can affect the accuracy of these forecasts. Many enterprise-facing tools are also designed with inefficient workflows, reducing the ability to do effective analysis for end-users such as FP&A and sales planning teams.
Machine learning techniques provide the most up-to-date approach to accurate forecasting, but are time-consuming to implement from scratch. This is where TIM’s highly automated, lightning-fast, ML-driven capabilities can make the difference. Automation means that analysts can use the tool without needing experience in ML theory or in programming. Lightning-fast means that end users can easily apply forecasting to any data that requires it, quickly iterate on scenario planning, and apply TIM’s forecasting abilities at scale, including in interactive BI applications. For example, planning teams could use the same BI tools that they use every day, select a particular slice of data, adjust some inputs such as pricing and promotions, and use a TIM BI tool integration to get a forecast result, all without leaving the BI tool.
Typical inputs for a sales forecasting application might include historical sales (usually split across product hierarchy, geography etc.), regional store information, local demographics, level of competition, and indicators of consumer demand, industry performance, and economic performance. Sales planning applications might also include pricing and promotion start/end dates as adjustable inputs, allowing planners to see their effect on the sales forecast.
Carl Fransman - Tangent Works
Complex and distributed assets (e.g. differently configured pumps or compressors installed across the globe) fail for many reasons; some failures are purely due to the design of the asset and represent normal wear and tear, while others are due to local operating conditions and/or the specific configuration of the asset. Gathering data through IIoT platforms and performing anomaly detection not only allows such failures to be foreseen; when the anomaly detection leads to explainable forecasts, engineers can also perform root cause analysis. This leads to faster resolution of the issue and allows R&D to analyse failures and come up with more robust and reliable equipment, which is even more important under servicisation-type contracts where the manufacturer bears (some of) the cost of maintaining the equipment and guaranteeing uptime.
TIM’s forecasting and anomaly detection capabilities not only produce accurate results, but these results are fully explainable; therefore TIM’s value extends beyond avoiding the failure and supporting predictive maintenance. TIM’s information can be analysed by technical maintenance teams in order to pinpoint the culprit rapidly and thus save precious production time by limiting downtime. TIM’s information can also be analysed by R&D teams to determine structural improvements to the equipment.
Typical datasets in this use case consist of CMMS data combined with IIoT data and potentially external elements, such as operating conditions (weather, vibrations, speed, etc.).
Philippe Thys - Tangent Works
Supply chains are under continuous pressure to maintain or improve their market position. The digital revolution led to a surge of digital transformation initiatives, as well as the emergence of new players who leverage technological innovations to create new business models, triggering a tidal wave of disruptive contenders in an already highly competitive world. The speed of innovation leads to unprecedented dynamics; only the most agile supply chains are able to (re)act and adapt.
As a result of these new dynamics, traditional mid- to long-term strategies must be reviewed and adapted at a higher frequency. Evaluating the impact of market disruption, from both the demand and the supply side, requires advanced intelligence and analytics that can be set up and reconfigured rapidly to evaluate risk and discover opportunity. Traditional AI, and even automated machine learning approaches, are expensive, slow and difficult to adapt to the agility and velocity required to keep your business on track in the short and the long term. By combining business data and market prognosis scenarios with real-time instant machine learning, organisations can create new, improve existing, and evaluate more what-if scenarios and simulations for strategic planning and business transformation. Some examples of business strategy planning processes that benefit from InstantML forecasting are strategic budgeting exercises, business transformation and design initiatives, strategic product lifecycle planning and optimisation, and the product and product maintenance design process.
TIM’s (Real-Time) Instant Machine Learning can be used to complement what-if and simulation scenarios for budget exercises, adapt maintenance and product support strategies, run forecasting and anomaly detection on digital twins in product design, perform risk assessments in your business transformation process, etc. With TIM, users can shorten the time to run and compare scenarios, include different future market projections as predictor candidates, and easily interface with simulation tools.
Typical inputs for this use case include historical demand, supply, production, prices, costs and strategic performance data, complemented with external data concerning weather, sales periods, global and regional disruptive events, sales campaigns, product introduction information, etc. In return, TIM’s output consists of mid- to long-term time-series forecasts of budget, sales, etc.
Philippe Thys - Tangent Works
Getting the most out of your production assets, especially under constraints, is the foundation of increasing the flow of profits through your production lines. The proliferation of time-stamped data follows naturally from the digitisation of industry, and the ongoing deployment of billions of connected sensors will only accelerate the trend. As a consequence, many decision-making processes that used to be fairly static (based on stable information) are becoming dynamic (based on streaming data). Today, machines are connected through communication that is initiated and deployed within local gateways or virtual machines. We see a fragmented base of protocols and IT systems running machines globally, and many different configurations of the same machines, even within the same plant.
Increase your return on assets with TIM’s anomaly detection capabilities by reducing unplanned maintenance and increasing equipment uptime; indirectly, this also improves on-time delivery performance and customer retention. The ease of use and the speed of setting up and generating trained models, together with a very fast AI engine, enable companies to implement near-real-time anomaly detection at an unprecedented scale. Users can now create and deploy models at all levels of manufacturing control operations: at field level (sensors), direct control level, plant supervisory level, production control level, up to the production scheduling level. Furthermore, with TIM it is easy to generate a collection of time-series forecasting models that map to different types of failures (electrical, mechanical, integrity, structural, etc.) at different levels of the equipment’s (maintenance) bill of material. Users can thus create and maintain machine learning capabilities that keep up with the dynamics of their enterprise.
Typical input data in this use case consists of raw time series data from PLCs, SCADA, sensor data, data from the maintenance scheduling system and data from the condition monitoring process. Concrete examples include data on vibration, temperature, revolutions, pressure, quality, etc., as well as past error codes, future condition monitoring alerts, past and future maintenance schedules, past maintenance information and past and future equipment operations schedules. After processing this data, TIM returns equipment failure predictions as output.
Philippe Thys - Tangent Works
Getting the most out of your production assets, especially under constraints, is the foundation of increasing the flow of profits through your production lines. The proliferation of time-stamped data follows naturally from the digitisation of industry, and the ongoing deployment of billions of connected sensors will only accelerate the trend. As a consequence, many decision-making processes that used to be fairly static (based on stable information) are becoming dynamic (based on streaming data). Highlighting abnormal patterns directly from multivariate sensor readings, through anomaly detection in time-series data (generated on top of failure codes returned from PLC and SCADA systems), helps with inspection and diagnosis and alerts for potential equipment failures during production runs. These signals can then be analysed and used as indicators of potential performance degradation or equipment failure. Time-series machine learning differs significantly from standard machine learning practice, and many current machine learning solutions applied to time series underperform and are not agile enough to react to the dynamics of new data inflows.
Increase your return on assets with TIM’s anomaly detection capabilities by reducing unplanned maintenance and increasing equipment uptime. The ease of use and the speed of setting up and generating trained models, together with a very fast AI engine, enable companies to implement near-real-time anomaly detection at an unprecedented scale. Users can now create and deploy models at field level (sensors) as well as at direct control and plant supervisory levels, and can create and maintain machine learning capabilities that keep up with the dynamics of their enterprise.
Typical input data in this use case consists of raw time-series data from PLCs, SCADA and sensors, such as vibration, temperature, revolutions, pressure, quality, etc. TIM then returns the detected anomalies to the user, at component, subcomponent, machine and/or production line level.
Carl Fransman - Tangent Works
Failures in track-operated transportation systems (metropolitan, or passenger and freight rail) can be very expensive, ranging from mere delays (often blocking a track for follow-on traffic) to derailments. En-route failures need to be avoided at all costs, for both safety and economic reasons. Predicting failures is complex, though, because of the high degree of customisation among rolling stock and because the system is impacted by varying factors such as load and weather.
An all-in predictive maintenance roll-out requires a huge upfront investment in systems and change management. TIM’s extremely fast approach to generating predictions permits rail and track operators to roll out predictive maintenance one use case at a time, which reduces the organisational stress of change management (in fact, once initial cases have proven their value, teams typically demand to be next!) and also leads to very rapid ROI. This means projects can be kickstarted top-down as well as bottom-up. The low initial investment needed to prove the value of AI/ML through TIM allows users to put together a business case based on the actual impact on the business.
TIM typically runs on top of a data and/or IoT platform and connects through an API for automated data ingestion. This can include schedules, sensor data, load data (passengers or cargo load), weather data, etc. Forecasted failures are typically fed to a service planning system or CMMS for planning preventive maintenance.
Carl Fransman - Tangent Works
Back when buildings were pure power consumers and all electricity was provided by a single supplier controlling the entire production apparatus and distribution grid, life was easy: simply match production and demand, and everything is in balance. Now, however, production comes from multiple suppliers with very different characteristics: stable (nuclear, hydro), controllable variable (coal and gas) and variable (solar, wind). Power consumption is also variable, because nowadays many buildings produce electricity in addition to consuming it, and some even provide storage. Being able to forecast power requirements allows operators to ‘balance the system’ and allows producers to optimise profitability (and even plan maintenance events at moments of otherwise lower return). Forecasting supports the decision between drawing power, producing and uploading energy, producing and consuming, or producing and storing.
TIM’s straightforward configuration empowers users to build accurate forecasting models which can be dynamically augmented as new data streams become available. TIM builds a Model Zoo to improve accuracy in reaction to different patterns (both in consumption and in production) at different times of day. Industrial players can run their own TIM-powered forecasting to shave off peak consumption and lower their energy costs, whereas service providers can build TIM into the solutions they offer their clients, providing accurate and reactive forecasting capabilities.
Typical data inputs are historic power consumption, smart meter readings, battery settings, weather data, etc.
Carl Fransman - Tangent Works
Data centers are critical infrastructure for countless operations. HVAC (heating, ventilation and air conditioning) failures can lead to a partial or full shutdown of the data center infrastructure in order to avoid the destruction of critical equipment. These shutdowns can cost hundreds of thousands of dollars in service and repair and in fines for missed SLAs. The ability to forecast HVAC malfunctions in time allows for predictive maintenance interventions, which can be planned during off-peak hours and allow for better system balancing during the intervention.
TIM’s approach to anomaly detection leads to highly accurate results, because TIM deploys the optimal model for each situation; a different model may be required at 2 AM than at 2 PM. TIM not only detects anomalies, but also explains what leads to each result; feeding this information back to the technical teams empowers them to rapidly pinpoint what will cause a failure and take appropriate evasive action.
Typical data for this use case relates to power consumption, sensor data from the HVAC (e.g. DeltaP) and filter age, among others. Data from external sources, such as weather data, is also often included.
Carl Fransman - Tangent Works
Retail forecast errors average more than 30%, with forecast accuracy impaired by ever-changing conditions. This makes for a challenging use case, but one with much potential for improvement.
Datasets in this use case vary depending on product, sector or even geographic location, resulting in a cumbersome and complex model building process. TIM not only avoids this pitfall through automated selection of the right input variables, but even explains the impact of each predictor, allowing for further refinement or data sourcing.
Furthermore, TIM brings responsiveness through automated model tuning in reaction to internal and external changes. The increased responsiveness leads to higher accuracy. This results in less waste due to inventory scrapping (especially for perishable goods) as well as fewer lost sales due to inventory shortages.
Typical data in this use case includes past sales volumes, supplemented with data regarding commercial actions and external factors impacting sales volumes.
The recent evolution of Internet of Things (IoT) technologies has resulted in the deployment of massive numbers of sensors in various fields, including manufacturing, energy and utilities, and logistics. These sensors produce huge amounts of time series data, but understanding the data generated and finding meaningful patterns remains an obstacle to successful IoT implementations.
A common problem that can be solved with IoT data is anomaly detection, where temporal patterns in the data are used to identify mechanical or electronic anomalies in a piece of equipment, prior to the occurrence of a failure. This approach can help to minimize downtime for manufacturing pipelines or other IoT networks, thus preventing potential blocks on revenue streams. It can also enable cost savings by allowing maintenance interventions to be scheduled only when necessary.
Machine learning techniques provide an ideal solution for solving anomaly detection problems. However, they are typically time-consuming and costly to implement. TIM provides a revolutionary solution to this problem by allowing the development of rigorous anomaly detection models with minimal lead time. This is due to its highly automated and exceptionally fast modeling algorithm.
Due to its speed, TIM’s anomaly detection can easily be applied at scale, to huge numbers of IoT instruments. In addition, the TIM algorithm is extremely lightweight, and can thus be run directly on edge devices, reducing the need for costly network communication.
Finally, TIM’s API-first infrastructure makes it simple to integrate models into a production workflow.
Anomaly detection can be performed on a single instrument output data field, or it can combine information from multiple fields. For example, the information from a number of manufacturing instruments might be used to predict a quality metric for a material being produced. Or multiple data points from a single instrument might be used to predict when failure is likely to occur.
All efforts in marketing and advertising today rely on a wide array of data sources, usually including both internal and external datasets. While this allows for deep insights and highly efficient marketing campaigns, it can also cause problems.
Imagine you are analyzing an ad campaign when you realize that the number of impressions delivered per day dropped dramatically on a certain date, two weeks ago. A frantic investigation reveals that something changed in the external data source being used to target potential customers, but the vendor never alerted you to this. The same could equally affect a customer insights or segmentation project.
Using machine learning techniques for anomaly detection, you could have detected this ahead of time, instead of discovering the problem weeks or months down the line. However, implementing such a system from scratch requires much time and specialized expertise.
TIM provides a much-needed new approach to this problem, by making it possible to implement robust anomaly detection routines with minimal lead time. Firstly, it is highly automated, meaning no data science experience is required to build effective models. Secondly, the model training process is stunningly fast, taking only a few seconds for a typical dataset – this makes it very easy to build effective models and also allows for huge scaling possibilities. Finally, the API-first infrastructure makes it simple to integrate models into a production workflow.
TIM’s anomaly detection capabilities rest upon first defining “normal behavior” for a given variable or data field (achieved using the TIM forecasting model) and then extending that with an “anomalous behavior” learning algorithm.
Ultimately, an anomaly detection platform with TIM at the center can provide organizations with much-needed confidence in the data that is fundamental to their operation.
TIM’s anomaly detection capabilities can be exploited using a single input variable – for example, if you want to detect anomalies for 1,000 fields from an external data source, you could build one model for each field. It can also be achieved when using multiple input variables. For example, you might want to detect anomalies in conversions, using inputs such as numbers of impressions across multiple marketing channels, economic metrics and more. TIM can handle both types of anomaly detection problems smoothly.
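The sketch below illustrates the one-detector-per-field pattern in generic terms (a stand-in baseline, not TIM's algorithm): each field gets its own "normal behavior" estimate, and observations deviating strongly from it are reported. File name, columns and the deviation rule are illustrative assumptions.

```python
import pandas as pd

# Hypothetical sketch (not TIM's algorithm): one simple detector per field.
# "Normal behavior" is approximated by a rolling median; observations that
# deviate far from it are flagged.
feeds = pd.read_csv("external_feeds.csv", parse_dates=["date"]).set_index("date")

for field in feeds.columns:
    series = feeds[field]
    expected = series.rolling(28, min_periods=7).median()  # stand-in "normal" model
    deviation = (series - expected).abs()
    threshold = 4 * deviation.rolling(28, min_periods=7).median()
    anomalies = series[deviation > threshold]
    if not anomalies.empty:
        print(f"{field}: {len(anomalies)} suspicious observations, "
              f"first on {anomalies.index[0]:%Y-%m-%d}")
```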
Many cities around the world have adopted a bicycle sharing system. These systems allow people to cycle around the city, picking up a bicycle at one docking station and dropping it off at another. Not all docking stations experience the same use, however. Some stations might be popular starting places, leaving them empty at the end of the day, whereas others see many people arrive throughout the day and end up completely occupied. Understanding and predicting these patterns goes a long way in planning the redistribution of bicycles across docking stations that often happens at night. Yet the situation is even more complex, since people’s behaviour is influenced by many other factors, such as the weather, the time of day and the calendar. Forecasting the usage of bicycles also gives insight into demand, allowing cities to anticipate higher demand by expanding the bicycle network, for example.
TIM can help in these scenarios by taking into account many different variables, transforming and weighing them in order to produce accurate and understandable forecasts. Moreover, TIM allows for frequent model rebuilding, quickly adapting to meet challenges posed by sudden and unexpected changes in people’s behaviour. In case of any issues with data collection, TIM is also able to account for changes in data availability. Thanks to the explainability of the models, it is possible to develop an understanding of what constitutes an accurate forecast, and of the influence changes in predictors may have. This can serve as a starting point for answering questions like “How would the necessary redistribution change if it were colder than expected tomorrow?”
Apart from historical values of the number of bicycles in use, this use case takes calendar data, weather data (temperature, wind speed, wind chill, humidity…) and the time of day as input. The generated models then produce forecasted values of the number of bicycles in use, in the same granularity as the target input data.
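As a rough illustration of how such an input dataset might be laid out (column names and frequency are assumptions, not a prescribed TIM schema):

```python
import numpy as np
import pandas as pd

# Sketch of the input dataset. Target: number of bicycles in use per hour.
idx = pd.date_range("2024-05-01", periods=24 * 7, freq="h")
data = pd.DataFrame(index=idx)
data["bicycles_in_use"] = np.nan   # historical target values go here
data["temperature"] = np.nan       # weather predictors: temperature,
data["wind_speed"] = np.nan        # wind speed, wind chill, humidity...
data["humidity"] = np.nan

# Calendar and time-of-day features derived from the timestamps.
data["hour_of_day"] = idx.hour
data["day_of_week"] = idx.dayofweek
```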
Elke Van Santvliet - Tangent Works
This use case looks at heat consumption, more specifically through water heating. Typical domestic uses of hot water include cooking, bathing and space heating. This heat transfer process is associated with significant costs, so ensuring energy efficiency is important. This illustrates the need for continuous monitoring of heating system health by closely watching whether the measured heat consumption is appropriate under the given circumstances. Anomalous values might indicate underlying issues, such as a ruptured pipe, loss of system pressure, water being stolen or issues with a radiator or boiler. Accurate detection of these issues allows for well-aimed, timely inspections.
TIM’s ability to generate explainable models proves its value in this use case, as it enables users to understand which factors influence the target variable, heat consumption. Understanding what should be happening is a first step towards figuring out why this might sometimes not be the case. Accurate models can help to detect anomalies early on, which in turn can be crucial in preventing damage and costs. For example, the ability to detect and fix a leaking pipe in time might help prevent a ruptured pipe. Although some anomalies might be fairly obvious to the trained eye (e.g. a sudden dropout of (part of) the consumption might indicate a broken meter), others might be more subtle (e.g. someone stealing part of the supply by draining some of the water from a pipe). TIM manages to detect both observations that are anomalous in relation to historical values and observations that are anomalous in relation to current circumstances (predictors).
Creating a model that can detect anomalous heat consumption requires a set of training data. This training data typically consists of past values of the heat consumption, as well as other available variables that play a role in heat consumption. Such variables can be found in meteorological data (outside temperature, wind speed, wind direction…) as well as metered system data (incoming and outgoing water flow).
TIM then uses this data to determine each observation’s anomaly indicator, a score expressing how anomalous that observation is. When this indicator crosses a set threshold, the observation is considered anomalous.
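A minimal sketch of that thresholding step, assuming an indicator series is already available; the threshold value of 3.0 and the indicator values are purely illustrative:

```python
import pandas as pd

def flag_anomalies(indicator: pd.Series, threshold: float = 3.0) -> pd.Series:
    """Mark observations whose anomaly indicator crosses the threshold."""
    return indicator > threshold

# Made-up hourly indicator values for a heat-consumption series:
idx = pd.date_range("2024-01-01", periods=6, freq="h")
indicator = pd.Series([0.4, 0.9, 1.2, 4.7, 0.8, 0.5], index=idx)
print(indicator[flag_anomalies(indicator)])  # -> only 03:00 (4.7) remains
```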
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution template on this use case! You can find it under Heat Consumption.
Elke Van Santvliet - Tangent Works
Industry, companies, cities, households… all consume energy. Whether opting for electricity, gas or thermal power – or, more likely, a combination of them – the need for energy is all around us. Both consumers and producers can benefit greatly from accurate estimates of future consumption, not least because the extreme volatility of wholesale prices forces market parties to hedge against volume risk and price risk. Acting on incorrect volume estimates is often expensive, but accurate estimates tend to require the work of data scientists. This leads to the next challenge, since data scientists are hard to find and hard to keep. The ability to accurately forecast future energy consumption is a determining factor in the financial performance of market players. Therefore, these forecasts are also a key input to the decision-making process.
The value of Machine Learning in this use case is clear, but it has to be weighed against the costs and efforts it introduces. To achieve accurate forecasts, relevant predictors should be used. TIM automates the generation of accurate forecasting models, and tells you which input variables have the highest relevance in calculating the forecasts. Unlike data scientists, TIM creates these models in seconds rather than days, or even weeks. The scalability of TIM’s model generation process allows for hundreds of models to be generated at the same time. This allows valuable data scientists to focus on the areas where their expertise matters most.
Let’s put this in numbers. A rough estimate shows that a 1% reduction in the MAPE (Mean Absolute Percentage Error) of the load forecast, for 1 gigawatt of peak load, can save a market player about:
And these numbers don’t even take into account the savings on data scientist capacity.
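For reference, MAPE expresses the average absolute deviation as a percentage of the actual load. A minimal implementation, with made-up load figures:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

load = np.array([820.0, 940.0, 1000.0, 870.0])  # actual load in MW (made up)
print(mape(load, load * 1.03))                  # a flat 3% error -> 3.0
```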
Explanatory variables in energy consumption use cases include historical load data, at different levels of aggregation, as well as real-time measurements. These variables are supplemented by weather data, calendar information, day/night differences and production data.
TIM’s output in turn consists of the desired consumption forecast, at the same level of aggregation as the input target data, on short-term, medium-term and long-term horizons.
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution template on this use case! You can find it under Electricity Load.
Elke Van Santvliet - Tangent Works
Although ecological and quite popular, wind production is a volatile source of energy. Besides great opportunities for balancing the grid and forecasting production, this use case also involves a lot of predictive maintenance. Wind production use cases rarely centre around a single windmill or even a single wind farm, instead often involving a large portfolio of wind assets. The larger the portfolio, the more difficult it becomes to manage and to obtain optimal dispatch and exposure to the electricity market.
It is worth mentioning that mixed portfolios of solar and wind assets are common; don’t hesitate to take a look at this solar production use case.
TIM can contribute to this use case by automating and managing complex wind & solar modelling pipelines. Moreover, TIM allows for blended forecasts that unify high-quality intraday modelling and day(s) ahead modelling into a single API call. These forecasts are fully explainable and can take into account many additional variables, such as weather data, on top of historical values of the wind production. TIM accomplishes this in a scalable and accurate way, taking care to incorporate either current or expected data availability into the models it builds.
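The blending idea can be illustrated with a simple rule that routes each lead time to one of two models; note that TIM performs this behind a single API call, and the 6-hour cutoff below is an assumption chosen only for illustration:

```python
from datetime import datetime, timedelta

# Conceptual sketch of "blending": near lead times are served by an
# intraday model, longer ones by a day-ahead model.
INTRADAY_CUTOFF = timedelta(hours=6)  # illustrative cutoff, not TIM's

def blended_forecast(now: datetime, target_time: datetime,
                     intraday_model, day_ahead_model) -> float:
    lead = target_time - now
    model = intraday_model if lead <= INTRADAY_CUTOFF else day_ahead_model
    return model(target_time)

# Any callables mapping a timestamp to MWh work as stand-in models:
print(blended_forecast(datetime(2024, 6, 1, 12), datetime(2024, 6, 1, 15),
                       intraday_model=lambda t: 42.0,
                       day_ahead_model=lambda t: 40.0))  # -> 42.0
```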
In this use case, explanatory variables can include weather-related data, wind speed in particular, complemented by more technical information such as the wind turbine type(s). TIM’s output consists of the forecasted wind production in the same unit of measurement (typically kWh or MWh) and granularity as the input data, over the desired forecasting horizon.
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution templates on this use case! You can find them under Single Asset Wind Production and Portfolio Wind Production.
Elke Van Santvliet - Tangent Works
Many different parties are impacted by the production of photovoltaic plants, from owners to electricity traders to system regulators. This production has an impact on multiple domains, such as maintenance, trading and regulation strategies. However, the high short-term volatility in solar production makes balancing the grid a difficult task. Moreover, a single impacted party often has interests in a large portfolio of solar assets, which might consist of different sizes of plants at different locations. Inaccurate forecasts can result in significant financial penalties, whereas improvement of forecasting accuracy can lead to significant financial gains. Large portfolios with significant impacts require consistent and scalable forecasts.
Many parties are interested in mixed portfolios of solar and wind assets; if interested, take a look at this wind production use case.
TIM empowers users to intuitively execute and even automate this forecasting task by managing complex modelling pipelines and allowing for blended forecasts that unify high-quality intraday modelling and day(s) ahead modelling into a single API call. In addition, TIM works with fully explainable models, so users can easily understand which decisions are made and why.
Achieving high accuracy isn’t the only challenge in these situations, though. These large portfolios of volatile assets might not always offer the same expected data availability. TIM can handle different data availability situations, either by allowing the user to account for the situation in the relevant Model Building Definition or by building and deploying models ad hoc, taking into account the current data availability situation.
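To make this concrete, the snippet below sketches what declaring per-predictor availability could look like; this is a hypothetical structure, not actual TIM Model Building Definition syntax:

```python
# Hypothetical illustration only: each input declares how its values
# are expected to be available relative to forecast time.
model_building_definition = {
    "target": "solar_production",
    "predictors": [
        {"name": "ghi_forecast", "availability": "day-ahead"},    # known in advance
        {"name": "measured_ghi", "availability": "lags 1 hour"},  # near real time
        {"name": "plant_metadata", "availability": "static"},
    ],
    "forecast_horizon": "36 hours",
}
```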
Several different variables can be explanatory in this use case and should therefore ideally be included as inputs into the model building scenarios. These variables include weather-related data such as the global horizontal irradiation (GHI) and the global tilted irradiation (GTI). Other factors include the position of the sun, the GPS location of the PV plant(s) and the direct normal irradiance (DNI). Extensive domain knowledge can help identify possible explanatory variables that can be added to the input dataset.
The output values, i.e. the forecast, will contain the solar production in the same unit and intervals as the input data on the target variable, over the requested forecasting horizon. If desired, these output values can even be used as input for further risk analysis and optimisation.
Are you interested in a walk-through scenario of this type of use case? Then take a look at our solution templates on this use case! You can find them under Single Asset Solar Production and Portfolio Solar Production.