
T+1 and Beyond: Transforming Trade Settlement for Modern Markets With Confluent


In the capital markets, the adage “time is money” applies even after the trade is made. Investors who want to gain quicker access to their funds are confined by a post-trade settlement process that still relies on batch processing. This blog takes a closer look at the T+1 industry regulation seeking to change this and how Confluent’s data streaming platform can enable this transformation today and beyond as securities transactions edge closer toward a potential future of near real-time settlement.

What are settlement challenges?

The life cycle of settlements in today's complex financial landscape presents a multitude of challenges revolving around the integration and synchronization of disparate applications and systems. From trade capture and confirmation to clearing, reconciliation, and final settlement, data and events must move across each of these often siloed processes while being shaped into a form that each processing party along the way can consume and act on.

In addition to the hurdle of achieving seamless integration, the web of interconnected technology demands data consistency. All the involved applications rely on receiving the same data. Without this, discrepancies can arise, leading to operational inefficiencies and potential financial risks. Even once a trade is executed and settled, additional systems, like regulatory reporting, need access to the data in a timely manner. There are mandated deadlines for reporting trades, so publishing trades as transactions to the regulation engines in real time, rather than processing them overnight in batch, helps to meet these timelines.

The settlement process, therefore, hinges on the delicate balance of achieving cohesion among diverse technologies while ensuring the data remains accurate and up to date, making it a critical concern in the realm of modern finance. 

Why change?

Trade volume and volatility have increased in recent years, driven in part by meme stocks, “Robinhood traders,” and a general rise in retail trading. That volatility has seen average daily volume of roughly 100 million transactions peak at more than 360 million in 2021. With volatility comes risk.

Regulators are responding to the increase in trade volume, volatility, and variability. The industry-standard settlement cycle of T+2 (trade date plus two business days) is evolving as securities markets build out regulations to shorten the trade life cycle by one full business day (T+1), reducing risk and improving operational efficiency during the settlement process. India transitioned to this accelerated T+1 cycle in 2022 and, at the time of this writing, the SEC deadline of May 2024 still stands for U.S. markets to make the switch (with Canadian regulators aligned on a similar timeline). Because the U.S. is the world’s largest securities market, settling its trades 24 hours faster will have a significant impact on international investors and other global markets that remain on the two-business-day cycle. The net effect of T+1 for financial institutions worldwide is an increased need for real-time systems, more efficient trade operations, and improved risk management.

At a data architecture level, settlement systems must evolve to meet the increasingly complex needs of a multi-step post-trade process that requires highly accurate data orchestration across multiple parties and geographies, at varied times. This means adapting how you move data around the business: migrating off old monolithic technologies, like mainframes, to new technologies, like Confluent, that support building out a distributed, asynchronous ecosystem. Modernization allows organizations to build and deploy business applications with greater flexibility, support larger scale, and respond faster to changing regulatory requirements.

How can Confluent help?

With the increasing pressure to settle trades faster, batch processing may not be able to meet future requirements as the settlement window continues to shrink, dropping from hours today to potentially near real time. Confluent’s data streaming platform enables financial institutions to create more effective settlement pipelines in preparation for this change. Securities and counterparty information can be collected from its sources and stored as an immutable log in Confluent, replayable to any previous state. Multiple systems can concurrently consume the same enriched data product, meaning the settlement system can respond to changes in reference data in real time while also working in parallel with the trading system.

Take, for example, the internal integrity between custodian and counterparty, which must be proven frequently to mitigate risk. When reconciliation is required, Confluent helps by providing accurate, enriched data points along with a replayable log from which point-in-time views can be recaptured, resulting in a safer trading environment.
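
For illustration, the hedged sketch below uses the standard Kafka consumer API to rewind a hypothetical `settlement-events` topic to a chosen timestamp and replay everything recorded since then, which is the basic building block for recapturing a point-in-time reconciliation view. Broker address, topic name, and timestamp are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class SettlementReplay {
    public static void main(String[] args) {
        // Hypothetical broker address and topic name, for illustration only.
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "reconciliation-replay");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Replay every settlement event recorded since this moment.
        long fromTimestamp = Instant.parse("2024-05-28T16:00:00Z").toEpochMilli();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = consumer.partitionsFor("settlement-events").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());
            consumer.assign(partitions);

            // Translate the timestamp into a starting offset for each partition.
            Map<TopicPartition, Long> query = new HashMap<>();
            partitions.forEach(tp -> query.put(tp, fromTimestamp));
            Map<TopicPartition, OffsetAndTimestamp> startOffsets = consumer.offsetsForTimes(query);

            startOffsets.forEach((tp, offsetAndTs) -> {
                if (offsetAndTs != null) {
                    consumer.seek(tp, offsetAndTs.offset());   // rewind to the chosen point in time
                } else {
                    consumer.seekToEnd(List.of(tp));           // nothing recorded after the timestamp
                }
            });

            // Read forward to rebuild the reconciliation view from the immutable log.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.printf("%s @ offset %d: %s%n", r.key(), r.offset(), r.value()));
        }
    }
}
```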

Exploring the stream

The reference architecture for this implementation places Confluent at the center as the stream processing engine for settlements.

Let’s take a closer look at some of the pivotal Confluent features considered for this particular implementation.

Accelerating onboarding of external data

To help connect the disparate systems, Confluent offers over 120 pre-built connectors available in the Confluent connector hub.

Connectors prove particularly useful when there is demand for moving processes to the cloud. Instead of having to lift and shift an entire pipeline, fully managed connectors can quickly hook queues like AMPS or systems of record like Oracle into Confluent, and the changes from these systems can then be distributed to services both on premises and in the cloud. This significantly reduces the overhead of joining systems: integrations that have historically taken three to six months to build can instead be handled in a matter of hours with Confluent connectors.
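
As a minimal sketch of what that looks like in practice, the example below registers a source connector against a self-managed Kafka Connect worker over its REST API; the JDBC source connector stands in here for the Oracle connector mentioned above, and the worker URL, database coordinates, credentials, and table names are all hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterSettlementConnector {
    public static void main(String[] args) throws Exception {
        // Hypothetical connector configuration: stream trade and counterparty tables
        // into "settlement."-prefixed topics as rows change.
        String connectorJson = """
            {
              "name": "oracle-settlements-source",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:oracle:thin:@//oracle.internal:1521/SETTLEMENTS",
                "connection.user": "connect_user",
                "connection.password": "********",
                "mode": "timestamp+incrementing",
                "timestamp.column.name": "LAST_UPDATED",
                "incrementing.column.name": "TRADE_ID",
                "table.whitelist": "TRADES,COUNTERPARTIES",
                "topic.prefix": "settlement.",
                "poll.interval.ms": "1000"
              }
            }
            """;

        // Submit the connector to a (hypothetical) Connect worker's REST endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect.internal:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

With Confluent's fully managed connectors, the equivalent configuration is supplied through the Confluent Cloud console or CLI rather than a self-managed Connect worker, but the shape of the integration is the same.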

Scaling for market volatility

The SEC's recent rules around the settlement cycle are aimed at making “market plumbing more resilient, timely, orderly, and efficient.” These wider systemic changes require technical underpinnings as well, and Confluent’s Apache Kafka service powered by Kora, an engine built around the Kafka protocol, was built to achieve similar goals. It delivers 30x faster elasticity, 10x higher reliability compared to the fault rate of self-managed deployments, and a 99.99% uptime SLA.

The outstanding reliability and uptime SLA are critical to mitigating risk. Any downtime could increase transaction costs, add operational overhead, and create a chokepoint of inefficiency through its impact on downstream systems.

The 30x faster elasticity delivered by the Kora engine is also essential. In one customer’s case, their instances process the majority of their volume in a four-hour window and are relatively quiet at other times. Confluent Cloud allows Kafka clusters to dynamically expand and shrink as needed, and its self-balancing clusters optimize data placement to ensure that newly provisioned capacity is used immediately and that skewed load profiles are smoothly leveled.

Delivering reliable information to siloed processes

Getting the most current and reliable information to potentially siloed parties across the front, middle, and back office, as well as other supporting departments, is a critical challenge.

There is never just one system handling the settlement process, so the next challenge addressed by pipeline modernization is delivering that current, reliable information to the silos across the business. Trading systems driven by batch processes will struggle with the complexities of an accelerated settlement cycle and the new pressure it puts on their integrations. One of the biggest issues with batch processing is the snowball effect: each batch job adds downstream latency, leaving consumers working with increasingly stale data.

When up-to-date static data (currency information, counterparty information, etc.) is a dependency, the traditional model of each department obtaining that information independently and without coordination creates extra risk and effort for little reward.

Building richer real-time data products on top of the fundamental principles of streaming with Confluent provides a solution to this challenge. Securities and counterparty information can be collected from its sources and every change stored immutably in Confluent, replayable to any previous state.
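
As a minimal sketch of that pattern, the hypothetical Java producer below appends counterparty reference-data changes to a topic keyed by counterparty ID (the broker address, topic name, and payload are placeholders). Downstream services can materialize the latest state from the topic or replay the full change history, depending on its retention and compaction settings.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class CounterpartyReferencePublisher {
    public static void main(String[] args) {
        // Hypothetical broker address and topic; keys are counterparty IDs so consumers
        // can rebuild the latest state per counterparty by replaying the log.
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // favor durability for reference data

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each change to a counterparty is appended as a new, immutable event keyed by its ID.
            producer.send(new ProducerRecord<>(
                    "counterparty-reference",
                    "CPTY-001",
                    "{\"legalName\":\"Example Broker Ltd\",\"settlementAccount\":\"GB00EXMP1234\",\"status\":\"ACTIVE\"}"));
            producer.flush();
        }
    }
}
```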

More importantly, automated trade enrichment enables parties in the middle and back office to consume the same improved data product, meaning the settlement system both responds to changing reference data in real time and accelerates the path to trade agreement. To see this in action, take a look at the opportunities that were opened up for Affin Hwang: they have been able to explore product offerings that were considered too volatile with their existing batch architecture.

To build these data products, teams turn to Confluent’s stream processing capabilities, allowing them to serve a normalized, enriched product to consumers elsewhere in the business.
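
A common way to build such an enrichment step is a Kafka Streams join between a stream of raw trades and a table of reference data. The sketch below assumes hypothetical topic names (`trades-raw`, `counterparty-reference`, `trades-enriched`), trades keyed by counterparty ID, and plain string values for brevity.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import java.util.Properties;

public class TradeEnrichmentApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "trade-enrichment");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Latest counterparty reference data, materialized as a continuously updated table.
        KTable<String, String> counterparties = builder.table("counterparty-reference");

        // Raw trades, assumed to be keyed by counterparty ID, enriched with the current reference record.
        KStream<String, String> trades = builder.stream("trades-raw");
        KStream<String, String> enriched = trades.join(
                counterparties,
                (trade, counterparty) -> trade + " | " + counterparty); // simple string merge for illustration

        // The enriched data product is published for settlement, reporting, and other consumers.
        enriched.to("trades-enriched");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```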

For instance, stream processing can simplify server-side payload filtering, where messages are delivered to a consumer based on their content. Stream processing creates branches from an original topic based on filtering criteria, and consumers then subscribe to the resulting subtopics to receive only the subsets of the original topic they care about. (Alternatively, filtering can be done on the consumer side, with unwanted records discarded as they are consumed.)
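
As an example of that branching approach, the sketch below uses the Kafka Streams `split()`/`branch()` API to route a hypothetical enriched trades topic into per-asset-class subtopics; the topic names and the simple JSON string matching are illustrative only.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Branched;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Named;

import java.util.Properties;

public class SettlementTopicBranching {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "settlement-branching");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> trades = builder.stream("trades-enriched");

        // Route each trade to a narrower topic based on its payload, so downstream consumers
        // subscribe only to the subset they care about.
        trades.split(Named.as("trades-"))
              .branch((key, value) -> value.contains("\"assetClass\":\"EQUITY\""),
                      Branched.withConsumer(ks -> ks.to("trades-equity")))
              .branch((key, value) -> value.contains("\"assetClass\":\"FIXED_INCOME\""),
                      Branched.withConsumer(ks -> ks.to("trades-fixed-income")))
              .defaultBranch(Branched.withConsumer(ks -> ks.to("trades-other")));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```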

Stream processing can also help with a variety of other use cases: 

  • Windowed sorting can be leveraged to help with out-of-sequence message processing (see the sketch after this list) 

  • KTables can be used for data quality and data validation 

  • Point-in-time joins can help determine FX pricing 

  • Kafka Streams’ broadcast pattern can help ensure all consumers receive “Done for Day” messaging
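
As an illustration of the first item, the hedged sketch below buffers events per trade within a short time window, waits for the window (plus a grace period) to close, and re-emits them in order. The topic names, window sizes, and the assumption that events sort correctly by a sequence prefix in their payload are all hypothetical.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

import java.time.Duration;
import java.util.Properties;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WindowedReordering {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-reordering");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        builder.<String, String>stream("settlement-instructions")
               .groupByKey()
               // Collect everything that arrives for a trade within a one-minute window,
               // allowing 30 seconds of grace for late arrivals...
               .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(1), Duration.ofSeconds(30)))
               .aggregate(
                       () -> "",
                       (tradeId, event, buffer) -> buffer.isEmpty() ? event : buffer + "\n" + event)
               // ...hold the buffered events until the window has fully closed...
               .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
               .toStream()
               // ...then re-emit them sorted, assuming each payload carries a sortable sequence prefix.
               .mapValues(buffer -> Stream.of(buffer.split("\n")).sorted().collect(Collectors.joining("\n")))
               .map((windowedKey, ordered) -> KeyValue.pair(windowedKey.key(), ordered))
               .to("settlement-instructions-ordered");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```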

Ultimately, adopting stream processing allows data to be elevated from simply being events and messages to curated data products.

Enforcing entitlements and standards throughout the settlement life cycle

As these dynamic automated systems are built out, controls spanning from the lowest level in a message to the highest levels in people and processes become more and more critical to protect customers and stay in line with regulations. As such, applications should only be able to publish and subscribe according to approved permissions.

To ensure this, role-based access control (RBAC) can be used to define granular topic-level access, metadata access, and field-level access through annotated data contracts. Strictly defined controls on who produces data to an enrichment source and who can access that data increase trust in the quality of the information received. Confluent also captures role creations and their access grants as part of its audit logging.
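
Confluent RBAC role bindings themselves are managed through Confluent's CLI, APIs, and Cloud console rather than in application code. For a flavor of what programmatic, topic-level access control looks like at the open-source Kafka layer, the sketch below grants a hypothetical settlement service read access to a single topic using the AdminClient ACL API; the principal, broker address, and topic name are placeholders.

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

import java.util.List;
import java.util.Properties;

public class GrantSettlementReadAccess {
    public static void main(String[] args) throws Exception {
        // Hypothetical cluster and principal; in Confluent Cloud the principal would
        // typically be a service account managed through RBAC role bindings instead.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Allow the settlement service to read only the enriched trades topic.
            AclBinding readTrades = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "trades-enriched", PatternType.LITERAL),
                    new AccessControlEntry("User:settlement-service", "*",
                            AclOperation.READ, AclPermissionType.ALLOW));

            admin.createAcls(List.of(readTrades)).all().get();
        }
    }
}
```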

Data contracts provide a critical step in ensuring the quality of the new data systems. Data governance programs should focus on the ownership, controls, and audit of the information that flows through the system, enabled by processes and technologies for standardization and collaboration.

At the level of individual fields, maintaining stream quality ensures that no malformed information moves through the system. Schema Registry enables universal and evolvable data standards to be defined, and the wider governance toolset makes it easier to monitor and discover the governed assets. By building and enforcing structural standards (such as SWIFT MT message types like MT518), downstream consumers like trade reporting can run more efficiently, developing their analytics against a defined framework while maintaining trust in the quality of the data they receive.
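
As a minimal sketch of schema-governed production, the example below registers and uses a deliberately simplified Avro schema (a stand-in for a full MT518-style confirmation, not the real message layout) through Schema Registry; the broker address, registry URL, topic, and field values are hypothetical.

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ConfirmationPublisher {
    // A deliberately simplified schema standing in for a full MT518-style confirmation.
    private static final String SCHEMA_JSON = """
        {
          "type": "record",
          "name": "TradeConfirmation",
          "namespace": "com.example.settlement",
          "fields": [
            {"name": "tradeId",      "type": "string"},
            {"name": "isin",         "type": "string"},
            {"name": "quantity",     "type": "long"},
            {"name": "price",        "type": "double"},
            {"name": "counterparty", "type": "string"}
          ]
        }
        """;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");            // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://schema-registry.internal:8081");        // hypothetical

        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        GenericRecord confirmation = new GenericData.Record(schema);
        confirmation.put("tradeId", "TRD-42");
        confirmation.put("isin", "US0378331005");
        confirmation.put("quantity", 1_000L);
        confirmation.put("price", 189.25);
        confirmation.put("counterparty", "CPTY-001");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            // The serializer registers the schema under the topic's value subject and
            // rejects payloads that do not conform to it.
            producer.send(new ProducerRecord<>("trade-confirmations",
                    confirmation.get("tradeId").toString(), confirmation));
            producer.flush();
        }
    }
}
```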

Conclusion: Taking the first steps into the future of trade settlement

Driven by these accelerating requirements, customers continue to seek ways to optimize their settlement pipelines and are turning to Confluent as the platform for their real-time data processing and integration demands.

Confluent enables a reliable, real-time settlement pipeline: connectors facilitate the push of trading events, Kora’s scalability cushions the volatility of modern trading volumes, stream processing normalizes and enriches data as it moves through the pipe, and strict controls on access and data quality top it all off.

Fundamentally, preparing for faster settlement workflows is becoming nothing short of a requirement. With two of the biggest capital markets in the world pushing T+1, it is only natural that the rest of the world will follow. Looking even further into the future, demand for full intraday settlement (T+0), perhaps leveraging options like distributed ledger technology for a truly 24/7 decentralized market, is a real possibility (and DLT plays well with Confluent too). By accelerating settlement processes, financial institutions can unlock significant benefits: reduced counterparty risk, improved liquidity, and lower capital requirements.

Whether the goal is to achieve core functionality in stream processing or architect around regulatory and data privacy requirements, working with Confluent allows organizations to set their data in motion in the trade settlement world.

  • Alex Stuart is a Senior Solutions Engineer at Confluent, guiding digital-native businesses across Europe on their path to adopting data in motion. His passion for fintech and analytics comes from previous roles at Experian and Splunk. He’s “in motion” outside of work too: as a running community leader and a keen globetrotter at 52 countries and counting.

  • Phoebe Bakanas is a Staff Solutions Engineer within the Strategic Financial Services team at Confluent where she helps companies build enterprise event streaming platforms. She supports a variety of companies in the finserv space, including large banks.

