SIBYL: A machine learning-based framework for forecasting dynamic workloads

Published


This paper was presented at the ACM SIGMOD/Principles of Database Systems Conference (SIGMOD/PODS 2024), the premier forum on large-scale data management and databases.


In today’s fast-paced digital landscape, data analysts are increasingly dependent on analytics dashboards to monitor customer engagement and app performance. However, as data volumes increase, these dashboards can slow down, leading to delays and inefficiencies. One solution is to use software designed to optimize how data is physically stored and retrieved, but the challenge remains in anticipating the specific queries analysts will run, a task complicated by the dynamic nature of modern workloads.

In our paper, “SIBYL: Forecasting Time-Evolving Query Workloads,” presented at SIGMOD/PODS 2024, we introduce a machine learning-based framework designed to accurately predict queries in dynamic environments. This innovation allows traditional optimization tools, typically meant for static settings, to seamlessly adapt to changing workloads, ensuring consistent high performance as query demands evolve.


SIBYL’s design and features

SIBYL’s framework is informed by studies of real-world workloads, which show that most are dynamic but follow predictable patterns. We identified the following recurring patterns in how query parameters change over time:

  • Trending: Queries that increase, decrease, or remain steady over time.
  • Periodic: Queries that occur at regular intervals, such as hourly or daily.
  • Combination: A mix of trending and periodic patterns.
  • Random: Queries with unpredictable patterns.

These insights, illustrated in Figure 1, form the basis of SIBYL’s ability to forecast query workloads, enabling databases to maintain peak efficiency even as usage patterns shift.

How a query parameter changes with arrival time, showing four common patterns: (a) trending (increasing, decreasing, or steady), (b) periodic (recurring at fixed intervals such as hourly, daily, or weekly), (c) a combination of trending and periodic, and (d) random, with no regular or predictable pattern.
Figure 1. We studied the changing patterns and predictability of database queries by analyzing two weeks’ worth of anonymized data from Microsoft’s telemetry system, which guides decision-making for Microsoft products and services.
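To make these categories concrete, the short sketch below generates toy examples of each pattern as synthetic series of parameter values over hourly intervals; the shapes and constants are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
hours = np.arange(24 * 14)  # two weeks of hourly observations

# (a) Trending: the parameter drifts steadily upward (or downward) over time.
trending = 100 + 0.5 * hours + rng.normal(0, 2, hours.size)

# (b) Periodic: the same values recur at a fixed interval (a daily cycle here).
periodic = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# (c) Combination: a daily cycle superimposed on an upward trend.
combination = trending + 20 * np.sin(2 * np.pi * hours / 24)

# (d) Random: no structure a forecaster can exploit.
random_pattern = rng.uniform(0, 200, size=hours.size)
```

The first three series are exactly the kind of structure a learned forecaster can exploit; the fourth sets a ceiling on what any predictor can do.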

SIBYL uses machine learning to analyze historical data and parameters to predict queries and arrival times. SIBYL’s architecture, illustrated in Figure 2, operates in three phases:

  • Training: It uses historical query logs and arrival times to build machine learning models.
  • Forecasting: It employs pretrained models to predict future queries and their timing.
  • Incremental fine-tuning: It continuously adapts to new workload patterns through an efficient feedback loop.
SIBYL’s three phases: a training phase that featurizes past queries and their arrival times and trains ML models from scratch; a forecasting phase that continuously receives recent queries and uses the pretrained models to predict the queries, and their expected arrival times, in the next time interval; and an incremental fine-tuning phase that monitors model accuracy, detects workload shifts (for example, new query types appearing) via a feedback loop, and fine-tunes the models on the shifted workload rather than retraining from scratch.
Figure 2. An overview of SIBYL’s architecture.
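As a rough illustration of the training phase’s first step, the sketch below turns one query observation (its arrival time plus a short history of a parameter’s values) into a feature vector. The specific features and the window size are our own assumptions for illustration, not the features SIBYL actually uses.

```python
from datetime import datetime

def featurize(arrival_time: datetime, past_values: list[float], window: int = 6) -> list[float]:
    """Toy featurization of one query observation: encode when the query arrived
    and what its parameter looked like recently (illustrative features only)."""
    recent = past_values[-window:]
    recent = [0.0] * (window - len(recent)) + recent  # left-pad short histories
    return [
        arrival_time.hour / 23.0,      # time of day, normalized
        arrival_time.weekday() / 6.0,  # day of week, normalized
        *recent,                       # the last few parameter values
    ]

# Example: a query that arrived Monday, May 13 at 9:00 with four past parameter values.
features = featurize(datetime(2024, 5, 13, 9, 0), [100.0, 102.0, 101.0, 105.0])
```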

Challenges and innovations in designing a forecasting framework

Designing an effective forecasting framework is challenging, particularly in managing the varying number of queries and the complexity of creating separate models for each type of query. SIBYL addresses these by grouping high-volume queries and clustering low-volume ones, supporting scalability and efficiency. As demonstrated in Figure 3, SIBYL consistently outperforms other forecasting models, maintaining accuracy over different time intervals and proving its effectiveness in dynamic workloads.
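The sketch below is one way to approximate that idea: templates seen often enough get their own forecasting model, while rarer templates are clustered (here on a single frequency feature) so that each cluster can share one model. The cutoff, feature choice, and cluster count are illustrative assumptions, not SIBYL’s actual settings.

```python
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

# Toy query log: the template id of each executed query.
query_log = ["T1"] * 500 + ["T2"] * 300 + ["T3"] * 4 + ["T4"] * 3 + ["T5"] * 2
counts = Counter(query_log)

HIGH_VOLUME = 100  # illustrative cutoff
high_volume = [t for t, c in counts.items() if c >= HIGH_VOLUME]  # one model per template
low_volume = [t for t, c in counts.items() if c < HIGH_VOLUME]    # share models per cluster

# Cluster the low-volume templates on a simple feature (their frequency) so each
# cluster of similar templates can be served by a single forecasting model.
frequencies = np.array([[counts[t]] for t in low_volume], dtype=float)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(frequencies)
cluster_of = {t: int(label) for t, label in zip(low_volume, labels)}
```

Keeping the number of models bounded this way is what lets the approach scale to workloads with thousands of distinct query templates.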

Recall, precision, and F-1 score of four forecasting models (History-Based, Random Forest, Vanilla LSTM, and Sibyl-LSTMs) on four workloads (Telemetry, SCOPE, BusTracker, and Sales), evaluated over forecast intervals of 1 hour, 6 hours, 12 hours, and 1 day.

Sibyl-LSTMs surpasses the other forecasting models and maintains stable accuracy across time-interval settings. Vanilla LSTM and Random Forest perform poorly on the Sales workload, which has more outliers and less stable patterns. For the Telemetry workload, the history-based method performs well at the 12-hour interval because the workload’s recurrent queries keep the same parameter values within a day (between the past 12-hour window and the future 12-hour window), but it is ineffective at the one-day interval, where many query parameter values change when crossing the day boundary. The history-based method also yields unsatisfactory results for the other three workloads, which evolve more rapidly and intricately and involve time-related parameters operating at a finer time scale. An ML-based forecasting model is therefore essential for handling evolving workloads.
Figure 3. Sibyl-LSTMs’ accuracy compared with that of other models in forecasting queries for the next time interval.
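For readers who want a sense of what a minimal learned forecaster looks like, here is a small PyTorch sketch that predicts a parameter’s value in the next interval from a window of past values. It is a generic single-layer LSTM regressor trained on a synthetic series, not Sibyl-LSTMs itself, and every name and constant in it is our own.

```python
import torch
import torch.nn as nn

class NextValueLSTM(nn.Module):
    """Toy LSTM regressor: given a window of past parameter values,
    predict the value in the next time interval."""

    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, window):                # window: (batch, steps, 1)
        output, _ = self.lstm(window)
        return self.head(output[:, -1, :])    # regress from the last hidden state

# Tiny training loop on a synthetic periodic series (illustrative only).
series = torch.sin(torch.arange(0, 200, dtype=torch.float32) * 0.3)
windows = torch.stack([series[i:i + 24] for i in range(len(series) - 25)]).unsqueeze(-1)
targets = torch.stack([series[i + 24] for i in range(len(series) - 25)]).unsqueeze(-1)

model = NextValueLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(windows), targets)
    loss.backward()
    optimizer.step()
```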

SIBYL adapts to changes in workload patterns by continuously learning, retaining high accuracy with minimal adjustments. As shown in Figure 4, the model reaches 95% accuracy after fine-tuning in just 6.4 seconds, nearly matching its initial accuracy of 95.4%.

(a) A pattern change in a Telemetry workload parameter starting on May 13, which SIBYL detects by observing a decline in accuracy: accuracy on the shifted pattern falls to 51.9%, below the threshold α = 75%, triggering fine-tuning. (b) SIBYL fine-tunes the Sibyl-LSTMs incrementally on the newly observed data rather than retraining from scratch; the model converges in just two epochs, adding 6.4 seconds of overhead, and recovers to 95.0% accuracy, close to the pretrained 95.4%.
Figure 4. Fine-tuning results on telemetry workload changes.
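A minimal sketch of that trigger, assuming a PyTorch model such as the toy LSTM above: measure accuracy on recently observed queries and, when it falls below the threshold (α = 75% in the example above), run a couple of extra epochs on the shifted window starting from the current weights rather than retraining from scratch.

```python
import torch

def maybe_finetune(model: torch.nn.Module,
                   optimizer: torch.optim.Optimizer,
                   loss_fn,
                   recent_windows: torch.Tensor,
                   recent_targets: torch.Tensor,
                   recent_accuracy: float,
                   threshold: float = 0.75,
                   epochs: int = 2) -> bool:
    """If accuracy on recent queries drops below the threshold (a workload shift),
    fine-tune incrementally on the recent window; otherwise leave the model alone.
    The threshold and epoch count mirror the Telemetry example in the text."""
    if recent_accuracy >= threshold:
        return False  # the model still tracks the workload
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(recent_windows), recent_targets)
        loss.backward()
        optimizer.step()
    return True
```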

To address slow dashboard performance, we tested SIBYL by using its forecasts to build materialized views—precomputed data structures that make queries run faster. Forecasting the upcoming workload makes it possible to identify common computations and store their results in advance, expediting future queries.

We trained SIBYL on 2,237 queries from anonymized Microsoft sales data spanning 20 days and used its forecasts to create materialized views for the following day. Views built from historical data alone improved query performance by a factor of 1.06, while views built from SIBYL’s predictions achieved a factor of 1.83. This demonstrates that forecasting future workloads can significantly improve database performance.
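As a back-of-the-envelope illustration of how forecasts can drive view selection, the sketch below counts which (hypothetical) table-and-grouping-column pairs the predicted queries touch and materializes the most frequent ones within a budget. In the actual experiment, SIBYL’s forecasts feed an existing view-recommendation tool; the data and budget here are made up.

```python
from collections import Counter

# Hypothetical forecast for tomorrow: each predicted query with the
# (table, grouping column) pair it aggregates over.
forecasted_queries = [
    ("sales", "region"), ("sales", "region"), ("sales", "product"),
    ("sales", "region"), ("telemetry", "device"), ("sales", "product"),
]

TOP_K = 2  # storage budget: how many views we can afford to materialize
candidates = Counter(forecasted_queries)
views_to_materialize = [view for view, _ in candidates.most_common(TOP_K)]
# -> [('sales', 'region'), ('sales', 'product')]
```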

Implications and looking ahead

SIBYL’s ability to predict dynamic workloads has numerous applications beyond improving materialized views. It can help organizations scale resources efficiently, reducing costs, and it can improve query performance by automatically organizing data so that the most frequently accessed data is always readily available. Moving forward, we plan to integrate additional machine learning techniques to make SIBYL more efficient and easier to set up, further improving how databases handle dynamic workloads and making them faster and more reliable.

Acknowledgments

We would like to thank our paper co-authors for their valuable contributions and efforts: Jyoti Leeka, Alekh Jindal, and Jishen Zhao.
