
What is Batch Processing?

Batch processing provides efficient and scalable ways to process large volumes of data in predefined batches or groups. At its core, batch processing refers to the execution of batch jobs, where data is collected, stored, and processed in batches, often at scheduled intervals. This approach supports use cases across many industries, such as financial transaction processing, data analytics, and report generation.

By combining fault-tolerant, scalable batch processing and stream processing in a single platform, Confluent can process data streams in real time with infinite storage. Get started with a complete suite of stream processing tools on any cloud.

Batch Processing Overview

Traditional thinking views batch processing as fundamentally different from stream processing, since it handles data in discrete chunks rather than real-time streams. To implement batch processing effectively, organizations rely on dedicated software and systems that streamline data ingestion, processing, and output generation. Examples of batch processing include ETL (Extract, Transform and Load) processes, daily backups, and large-scale data transformations.
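
For example, a classic ETL batch job can be sketched in a few lines of plain Python. The file names, schema, and tax rate below are illustrative assumptions, not a real pipeline:

```python
# A minimal batch ETL sketch: extract rows from a CSV file, transform
# them, and load the results into a SQLite table in one bounded run.
import csv
import sqlite3

def run_batch_etl(csv_path: str = "orders.csv", db_path: str = "warehouse.db") -> None:
    # Extract: read the whole input file as one bounded batch.
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: normalize fields and compute a derived column.
    transformed = [
        (row["order_id"],
         row["customer"].strip().lower(),
         float(row["amount"]) * 1.1)  # hypothetical 10% tax
        for row in rows
    ]

    # Load: write the whole batch into the target table in one transaction.
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, customer TEXT, total REAL)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", transformed)

if __name__ == "__main__":
    run_batch_etl()
```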

However, batch processing can also be thought of as a special case of stream processing. It can be argued that all data processing is stream processing, and that we started with batch processing only because of technical limitations. Since most, if not all, data can be reduced to streaming data, all data processing, even batch processing, can be viewed as stream processing.
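
A minimal sketch of that idea, using an illustrative record shape: the same per-record function serves both modes, and a batch is just a stream that has been collected into a bounded list:

```python
# Batch as a special case of stream: one per-record function, two modes.
from typing import Iterable

def process(record: dict) -> dict:
    # One record in, one record out -- identical logic for both modes.
    return {**record, "amount_cents": int(record["amount"] * 100)}

def process_stream(records: Iterable[dict]):
    # Streaming: handle each record as it arrives.
    for record in records:
        yield process(record)

def process_batch(records: list[dict]) -> list[dict]:
    # Batch: a bounded collection is just a stream that happens to end.
    return list(process_stream(records))

batch = [{"amount": 1.25}, {"amount": 2.50}]
print(process_batch(batch))  # [{'amount': 1.25, 'amount_cents': 125}, ...]
```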

From Batch to Streams

Batch processing can also be thought of as a natural stepping stone to stream processing. From the early days of computing, data has been stored and processed in batches even when it was generated as a stream, largely due to technical limitations in data collection, storage, and processing. Over the decades, those limitations lessened and the cost of storage, compute, and networking fell by orders of magnitude. This allowed for the rise of low-cost distributed systems such as Hadoop, dramatically increasing the size of a batch and shortening the time it takes to process one. That decrease in latency blurred the line between batch processing and stream processing.

Batch vs Stream Processing

Viewed this way, batch processing introduces an arbitrary distinction from stream processing: it merely requires that the data be bounded into discrete chunks rather than flowing as real-time streams.

Benefits of Real-Time vs Batch Processing

Since batch processing can be thought of as a special case of stream processing, it’s not quite accurate to compare the two. All things being equal, real-time processing is always better than batch processing, since it removes the need to divide data into batches before processing it. Traditionally, though, real-time processing was expensive: it demanded a high level of computing resources and could not take advantage of the low-cost storage and compute that batch processing relied on. It was therefore seen as practical only for high-value applications that require immediate feedback or responses, such as fraud detection, anomaly detection, and real-time analytics.

Feature | Batch Processing | Stream Processing
--- | --- | ---
Data processing | Data is processed in discrete batches | Data is processed as it is received
Data volume | Large amounts of data per run | Small amounts of data at a time
Data latency | High latency | Low latency
Cost | Low cost | High cost
Use cases | Data consolidation, data analysis, data mining, data backup and recovery | Real-time analytics, fraud detection, anomaly detection

When to Use Batch Processing

Historically, batch processing has been a good choice for applications that do not require immediate feedback or response, such as data consolidation, data analysis, data mining, and data backup and recovery. Batch processing has been less expensive than real-time processing and has required fewer computing resources.

Examples of When Batch Processing is the Best Choice

  • Data consolidation: Batch processing can consolidate data from multiple sources into a single data warehouse or data lake. This can help businesses to improve their data quality and make it easier to analyze data.
  • Data analysis: Batch processing can be used to analyze large amounts of data to identify trends and patterns. This can help businesses to make better decisions about their products, services, and marketing campaigns (see the sketch after this list).
  • Data mining: Batch processing can be used to mine data for hidden patterns and insights. This can help businesses to identify new opportunities and improve their efficiency.
  • Data backup and recovery: Batch processing can be used to back up data regularly. This can help businesses to protect their data from loss or corruption.
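
To make the data-analysis bullet above concrete, here is a small sketch that aggregates one bounded batch of sales records per category to surface a trend. The records and field names are illustrative, not real data:

```python
# Batch-style analysis: the whole data set is available up front,
# so the job can aggregate over it in a single pass.
from collections import defaultdict

sales_batch = [
    {"category": "books", "amount": 12.0},
    {"category": "games", "amount": 60.0},
    {"category": "books", "amount": 8.5},
]

totals = defaultdict(float)
for sale in sales_batch:          # iterate over the bounded batch
    totals[sale["category"]] += sale["amount"]

# Report categories by total spend, highest first.
for category, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {total:.2f}")
```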

Considerations When Choosing Between Batch Processing and Stream Processing

  • Latency: Batch processing has higher latency than real-time processing. Batch jobs are typically pre-deployed and run on a schedule with a looser SLA.
  • Cost: Batch processing has typically been less expensive than real-time processing, largely because data processing was cost-constrained. Since latency mattered less, a long processing time or a wait between scheduled runs was tolerable, even if never ideal.
  • Scalability: Batch processing is easier to scale than real-time processing, since storage and computation can scale separately. Some techniques, such as shared-nothing distributed systems, allow storage and compute to scale together economically, dramatically increasing the size of batches, reducing processing time, and blurring the distinction between a batch and a stream.
  • Use cases: Traditionally, batch processing was well suited to data consolidation, data analysis, data mining, and data backup and recovery, while real-time processing was well suited to fraud detection, anomaly detection, and real-time analytics. But these use cases are often incomplete without each other. For example, real-time fraud detection often requires analysis of a consolidated data set, traditionally processed in batch, so the job can determine how anomalous a transaction is compared to a historical pattern; real-time analytics likewise becomes more useful with the historical context that batch processing provides (see the sketch after this list).
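
The sketch below illustrates that interplay, with illustrative record shapes and an assumed threshold: a batch step computes per-customer historical baselines, and a streaming step flags incoming transactions that deviate from them:

```python
# Batch and stream complementing each other in fraud detection.
import statistics
from typing import Iterable

def build_baselines(history: list[dict]) -> dict[str, float]:
    # Batch step: average historical spend per customer.
    per_customer: dict[str, list[float]] = {}
    for txn in history:
        per_customer.setdefault(txn["customer"], []).append(txn["amount"])
    return {c: statistics.mean(v) for c, v in per_customer.items()}

def flag_anomalies(stream: Iterable[dict], baselines: dict[str, float],
                   factor: float = 10.0):
    # Streaming step: compare each incoming transaction to its baseline.
    # The 10x threshold is a hypothetical choice for the example.
    for txn in stream:
        baseline = baselines.get(txn["customer"], 0.0)
        if baseline and txn["amount"] > factor * baseline:
            yield txn

history = [{"customer": "alice", "amount": 20.0},
           {"customer": "alice", "amount": 30.0}]
live = [{"customer": "alice", "amount": 400.0}]
print(list(flag_anomalies(live, build_baselines(history))))  # flags the 400.0 txn
```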

Challenges with Batch Processing

The utility of batch processing has always been limited by time. Specifically,

  • The time it takes to process a batch
  • The interval between scheduled batch runs

Both of these challenges have technical solutions, such as using a linearly scalable, distributed, shared-nothing architecture and adding more nodes, but as the processing time and the interval between scheduled batch runs both decrease, batch processing becomes more like stream processing.
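
A micro-batching sketch illustrates this convergence. The queue-backed source below is an illustrative stand-in for any ingest buffer:

```python
# Micro-batching: as the scheduled interval shrinks toward zero, each
# "batch" holds only a few events and the job is effectively processing
# the stream record by record.
import queue
import time

events: "queue.Queue[dict]" = queue.Queue()

def run_micro_batches(interval_seconds: float, stop_after: int) -> None:
    for _ in range(stop_after):
        time.sleep(interval_seconds)   # the interval between scheduled runs
        batch = []
        while not events.empty():      # drain everything that arrived
            batch.append(events.get())
        if batch:
            print(f"processing batch of {len(batch)} events")

events.put({"id": 1})
events.put({"id": 2})
run_micro_batches(interval_seconds=0.1, stop_after=3)
```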

Why Fully Managed Data Streaming with Confluent

Event streaming with Confluent unifies batch processing and stream processing, allowing you to accomplish both on the same platform. Confluent lets you process streams in real time and persist those streams with infinite storage, and it offers a complete suite of stream processing technologies, including Kafka Streams, KSQL, and our fully managed Apache Flink service in Confluent Cloud.
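
As a minimal sketch of working with such a persisted stream, here is how producing and consuming a topic looks with the confluent-kafka Python client. The broker address and topic name are assumptions for a local setup; Confluent Cloud would additionally require API credentials:

```python
# Produce a few events to a Kafka topic, then consume them as they
# arrive. Because the topic persists the stream, a later batch job
# could read exactly the same records.
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "localhost:9092"}  # hypothetical local broker

producer = Producer(conf)
for i in range(3):
    producer.produce("payments", value=f'{{"txn": {i}}}')
producer.flush()  # block until all messages are delivered

consumer = Consumer({**conf, "group.id": "demo",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["payments"])
try:
    for _ in range(3):
        msg = consumer.poll(5.0)       # wait up to 5s for the next record
        if msg is None or msg.error():
            continue
        print(msg.value().decode())
finally:
    consumer.close()
```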