
Analyzing Bursty Traffic in Real-Time Analytics


This is the third post in a series by Rockset’s Chief Technology Officer. We’ll be publishing more posts in the series soon, so stay tuned.


Developers, data engineers, and site reliability engineers may disagree about plenty, but one thing they can all agree on is that bursty data traffic is nearly unavoidable.

During Black Friday, online retailers see a huge surge in traffic as shoppers flock to their websites and mobile apps in search of deals and discounts. Breaking news can send visitors swarming to media sites without warning. Halloween sparks a deluge of costume-themed posts on social media platforms. Big swings in global markets can ignite a burst of online trading activity. A meme can suddenly go viral among teenagers.

In the past, batch analytics made influxes of traffic data easy to handle. Executives didn’t expect reports more than once a week, and no one demanded real-time dashboards with up-to-the-minute information. Though some sources, such as event streams, delivered data continuously, neither the data nor the queries had to be time-sensitive. Databases could simply buffer, ingest, and process the data on a regular daily schedule.

Nor were analytics and data pipelines mission-critical back then. Analytics wasn’t embedded in business strategy or used to run daily operations the way it is today. So you could always plan ahead for bursty traffic and proactively scale up your database clusters and data pipelines accordingly. It was expensive, but it was safe.

Why Bursty Data Traffic Is a Problem Today

Today, the situation is completely flipped. Companies are rapidly transforming themselves into digital enterprises in order to emulate the success of pioneers such as Uber, Airbnb, and Meta. Real-time analytics now run their operations and bottom line, whether through a customer recommendation engine, an automated personalization system, or an internal business observability platform. There’s no time to buffer data for leisurely ingestion. And given how large data volumes have grown, overprovisioning poses a serious financial threat to companies.

Many databases claim to scale on demand, letting organizations bypass expensive overprovisioning while keeping data-driven operations running smoothly. Look closely, though, and you’ll find that such databases usually rely on one of two suboptimal workarounds:

  • Some systems require system administrators to manually deploy new configuration files in order to scale up the database. Scale-up cannot be triggered automatically through a predefined rule or an API call (see the sketch after this list). That inevitably creates bottlenecks and delays, making the approach unsuitable for real-time operations.
  • Other databases claim that their designs make them resilient against sudden spikes in query traffic. Key-value and document databases are two examples. Both are extremely fast at the simple tasks they are designed for, fetching individual values or whole documents, and that speed is largely unaffected by bursts of data. However, these databases tend to sacrifice support for complex SQL queries at any scale. Instead, their makers have offloaded complex analytics onto application developers, who may lack the skills or the time to constantly update queries as data changes. That is the opposite of most SQL databases, which optimize query execution automatically.
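
To make the difference concrete, here is a minimal Python sketch of the rule-driven scale-up that real-time operations call for. The ScalingClient class, the metric names, and the thresholds are hypothetical stand-ins for whatever autoscaling API a given database might expose; the point is that resizing is triggered by a rule, not by an administrator redeploying configuration files.

```python
"""Sketch of rule-driven autoscaling for a real-time database.

Everything here (the ScalingClient class, metric names, thresholds)
is a hypothetical stand-in for whatever autoscaling API a database
might expose; a real client would call HTTP endpoints instead of
returning simulated readings.
"""
import random
import time


class ScalingClient:
    """Simulated autoscaling API."""

    def get_metric(self, name: str) -> float:
        # Stand-in for a metrics endpoint: return a random load reading.
        if "latency" in name:
            return random.uniform(0.0, 400.0)  # milliseconds
        return random.uniform(0.0, 10.0)       # seconds of ingest lag

    def set_compute_units(self, pool: str, units: int) -> None:
        print(f"scaling {pool} pool to {units} compute units")


def autoscale(client: ScalingClient, cycles: int = 5) -> None:
    """Resize query and ingest compute independently, rule by rule."""
    query_units, ingest_units = 4, 4
    for _ in range(cycles):
        # A query burst resizes only the query pool...
        if client.get_metric("p95_query_latency_ms") > 200 and query_units < 64:
            query_units *= 2
            client.set_compute_units("query", query_units)
        # ...and an ingest burst resizes only the ingest pool.
        if client.get_metric("ingest_lag_seconds") > 5 and ingest_units < 64:
            ingest_units *= 2
            client.set_compute_units("ingest", ingest_units)
        time.sleep(1)


if __name__ == "__main__":
    autoscale(ScalingClient())
```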

Moreover, many databases were designed with compute and storage tightly coupled, and they struggle to separate the work of ingestion from the work of queries, which ultimately hurts users. Because ingest and queries are not isolated from each other, they immediately affect each other: write a burst of data and your reads slow down; run a burst of queries and your writes slow down.

This drawback, the tendency for ingest and query compute to contend and slow each other down, is common to many popular architectures. You can scale your way around the contention by overprovisioning, but that is a pricey approach.

Database makers have redesigned their architectures to handle sudden surges in data traffic without sacrificing speed, features, or affordability. Some of those designs are inexpensive and perform well; others are costly and ineffective.

Lambda Architecture: Too Many Compromises

Years ago, a multitiered database architecture called Lambda began to gain traction. Lambda systems try to serve the needs of both big-data analysts and stream-processing developers by splitting data ingestion into separate tiers. The batch layer processes large volumes of historical data; Hadoop was used for it initially, though other technologies have since taken its place.

A speed layer, often built around a stream-processing technology such as Apache Spark, delivers immediate views of the most recent data. And a serving layer, frequently based on technologies such as Elasticsearch or Cassandra, delivers the aggregated results to dashboards and ad hoc query users.

Lambda systems, however, are built on compromise, and their limitations show. Maintaining two separate data paths forces developers to build and maintain two copies of the same code, adding complexity and inviting data errors. And developers and data scientists have little control over how the streaming and batch pipelines process their data.
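The duplication is easy to see in code. Below is a schematic Python sketch, with invented function and field names, of the same per-user event count implemented twice, once for the batch layer and once for the speed layer. Any change to the metric has to be made in both places, which is exactly where Lambda’s data errors tend to creep in.

```python
"""Schematic illustration of Lambda's duplicated code paths.

The function and field names are invented for illustration; real
systems would run Hadoop and Spark jobs, but the duplication problem
is the same.
"""
from collections import Counter
from typing import Dict, Iterable, Iterator


def batch_user_counts(events: Iterable[dict]) -> Dict[str, int]:
    """Batch layer: recompute per-user event counts over all history."""
    counts: Counter = Counter()
    for event in events:
        counts[event["user_id"]] += 1  # metric definition, copy #1
    return dict(counts)


def streaming_user_counts(stream: Iterator[dict]) -> Iterator[Dict[str, int]]:
    """Speed layer: maintain the same counts incrementally over a stream."""
    counts: Counter = Counter()
    for event in stream:
        counts[event["user_id"]] += 1  # metric definition, copy #2
        yield dict(counts)             # emit an up-to-the-moment view


if __name__ == "__main__":
    # If the metric ever changes (say, count only "click" events), both
    # copies must be edited in lockstep or the layers silently disagree.
    history = [{"user_id": "a"}, {"user_id": "b"}, {"user_id": "a"}]
    print(batch_user_counts(history))
    print(list(streaming_user_counts(iter(history)))[-1])
```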

Most data processing in Lambda happens as new data is written to the system. The serving layer handles only simple, fast key-value or document lookups; it takes on no complex transformations or queries. Instead, data application developers must shoulder all the work of applying new transformations and modifying queries themselves. Not very agile. It is little wonder that so many companies find Lambda systems hard to sustain year after year.

ALT: An Architecture Built for Bursty Traffic

Fortunately, there is an elegant solution to the problem of bursty data traffic.

To handle bursts of data traffic efficiently, a database should decouple the compute used for data ingestion from the compute used for queries. That design lets ingest and queries scale independently as demand shifts. It also eliminates the compute contention described above: a surge of queries cannot slow down data writes, and a burst of writes cannot slow down queries. Finally, the database should be cloud native, so that all scaling is automatic and hidden from developers and users alike. There is no need to overprovision in advance.

That design is the serverless Aggregator-Leaf-Tailer (ALT) architecture, which cleanly separates the duties of fetching data, indexing and storing it, and processing queries, with each component playing its designated role.

With an ALT architecture, ingestion continues at full speed even when query volumes suddenly surge or dip, and queries stay fast even when writes spike. Governed by application policies, ingest and query compute scale up and down automatically, with no manual server reconfiguration required. You never have to worry about contention-caused slowdowns or about overprovisioning your system prematurely. For real-time analytics, the ALT architecture delivers performance and efficiency that the alternatives cannot match.
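As a rough mental model, here is a toy Python sketch of that separation of duties. The classes and the in-memory index are invented, single-process stand-ins for what are distributed fleets in a real ALT system; the point is that tailers, leaves, and aggregators are separate components that can each scale on their own.

```python
"""Toy sketch of the Aggregator-Leaf-Tailer (ALT) separation of duties.

These classes are invented, single-process stand-ins for what are
distributed fleets in a real ALT system; the point is that each role
is a separate component that can scale on its own.
"""
from collections import defaultdict
from typing import Dict, Iterable, List


class Tailer:
    """Fetches new data from sources such as streams or OLTP databases."""

    def __init__(self, source: Iterable[dict]):
        self.source = source

    def poll(self) -> List[dict]:
        return list(self.source)  # a real tailer would pull incrementally


class Leaf:
    """Indexes and stores incoming documents for fast retrieval."""

    def __init__(self) -> None:
        self.index: Dict[str, List[dict]] = defaultdict(list)

    def ingest(self, docs: List[dict]) -> None:
        for doc in docs:
            self.index[doc["type"]].append(doc)


class Aggregator:
    """Serves queries from the leaves, well away from the write path."""

    def __init__(self, leaves: List[Leaf]):
        self.leaves = leaves

    def count_by_type(self, doc_type: str) -> int:
        return sum(len(leaf.index[doc_type]) for leaf in self.leaves)


if __name__ == "__main__":
    leaf = Leaf()
    leaf.ingest(Tailer([{"type": "click"}, {"type": "view"}]).poll())
    print(Aggregator([leaf]).count_by_type("click"))  # -> 1
```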

I saw the power of ALT firsthand as a member of the team that built Facebook’s News Feed (since renamed Feed). Moving it from hourly updates to real time let users learn about their friends’ activity with unprecedented speed. When LinkedIn moved its real-time FollowFeed to a more efficient architecture, it cut the number of servers required in half. Google and other web-scale companies use ALT as well. For anyone who needs to extract real-time insights from fast-changing data, it is a proven design.

Companies don’t need an army of data engineers to deploy ALT, either. Rockset provides a real-time analytics database in the cloud, built on the ALT architecture. Our database lets companies easily handle bursty data traffic for their real-time analytics, and it also handles other key real-time challenges such as mutable and out-of-order data, low-latency queries, flexible schemas, and more.

When choosing a system to deliver real-time data for your applications, evaluate whether it is built on an ALT architecture that can handle bursty traffic, wherever it comes from.


He serves as Chief Technology Officer (CTO) and co-founder of Rockset, overseeing the company’s technical direction. As a founding engineer on the database team at Facebook, he played a pivotal role in building the company’s data infrastructure. Earlier, he was a founding engineer at Yahoo and a contributor to open-source projects.


Rockset is a real-time analytics database built from the ground up for the cloud, delivering lightning-fast analytics on real-time data with surprising efficiency.
