While banking M&A activity has slowed somewhat from its 2021 high point, drivers for M&A remain strong. Rising funding costs that exert pressure on smaller institutions, the expectation of increased regulation, a desire to divest small or high-overhead units, and a need to offer greater product coverage are all likely to incentivize new M&A activity.

By this point, the industry is well aware that much of the true work of a merger happens in the years following the signing of a deal. But recent events have shown that the scale and duration of post-merger data integration often remain overlooked, creating problems and increasing the organization’s inherent risk over the long term.

Spectacular near-term data integration problems have attracted media and industry attention. In one case, in 2022, a failure to prepare in advance left the merged bank exposed to data breaches, account data integration woes, and severe operational and reputational costs.

But in many ways, it’s the longer-term challenges of fully integrating two banks after a merger that are the most significant to the future of the merged entity. These problems don’t (usually) attract headlines, but they are well known to regulators and can undermine a banking unit’s compliance and profitability for years.

The Two Fundamental Challenges of Bank Data Integration

For banks planning or undergoing a merger or acquisition, data integration takes two major forms:

  • Physical integration of data infrastructure: This is driven by the need to provide unified reporting, to reduce the cost and complexity of data operations, and to create a data infrastructure that can support digital offerings that span the entire merged entity.
  • Normalization of data meaning and data usage: This is driven by the need to create consistent, accurate KPIs and KRIs, to manage data quality and data supply across the enterprise, and to correctly calculate firmwide metrics such as counterparty credit risk.

Both these efforts are complex and difficult even within a single organization. And in fact, both are areas where midsize and large banks have sometimes struggled to achieve the desired levels of compliance and risk control. In large banks especially, the problems with achieving an integrated infrastructure and an integrated information space can be traced back to previous mergers—mergers that have often left a legacy of overlapping systems, diverse information, and duplication across the enterprise.

The Basel Committee’s 2023 Progress Report Highlights Gaps

A decade ago, the Basel Committee on Banking Supervision released its “BCBS 239” principles for effective risk data aggregation—a set of principles that has largely defined what “good” looks like in a bank’s data estate. In November 2023, the BCBS released a progress report reviewing the successes and challenges experienced by 32 U.S. and international banks in achieving compliance with the principles.

This progress report is largely a map of the elements of a bank’s data estate that are most difficult to integrate—which makes it useful intelligence for merger planning. For example, the BCBS progress report calls out:

  • Complex data architecture
  • Lack of a unified, firmwide taxonomy and end-to-end lineage
  • Diverse IT infrastructure

Given that merging two organizations impacts these three specific factors so strongly, the message is clear: From a BCBS 239 (and therefore from a Basel III/IV) point of view, merger activity is a major source of risk and potential compliance exposure.

The BCBS 239 progress report focuses on firmwide risk aggregation—i.e., on large-scale architecture, infrastructure, and governance problems. Our experience at Treliant is that while these are important, the small-scale integration of individual terms and data sets is also a very material challenge.

In the rest of this article, we’ll examine both the large-scale and small-scale challenges in more detail and suggest mitigations for organizations that are merging or about to merge.

M&A Challenges: Data Infrastructure and Governance

At the highest level, the BCBS 239 progress report confirms what everyone knew: A highly compliant, efficient data pipeline has certain basic dependencies, and these dependencies are stressed when banks take on complexity or increase transaction volume. Allowing for this stress is key to reducing the long-term cost of data integration.

  • Governance: Banks have made significant progress in data governance over the last decade, especially at larger firms. Ten years ago, “data governance” was not a common term; today the BCBS 239 report highlights governance as an area in which banks have done well. Control frameworks, independent validation and data quality functions, and central authorities for cataloging and tracking data assets are now common. In a merger scenario, governance functions need to be further standardized and empowered to extend consistent control and oversight across the merged entity as quickly as possible. Establishing standards and interfaces that business units can work to may prevent inconsistent governance later on.
  • Data estate rationalization: Rationalizing the physical data estate is as much a technology problem as a data problem. Some banks find themselves with a multitude of data lakes even in the absence of mergers, the legacy of enthusiastic adoption of new data technologies at team or business unit level. Post-merger, deduplicating the data estate begins with defining an approach to identifying systems that are redundant or can be merged. Strong asset and data catalogs are important tools at this stage. If the merged entity spans more time zones, jurisdictions, or product lines, of course, some increase in complexity may be inevitable. Some recent larger mergers have found that modern strategic data platforms can host new use cases relatively easily, minimizing the proliferation of platforms.
  • Scaling and capacity: If the target systems are cloud-based, scaling to accommodate new use cases may be painless; if they are not, work is required on capacity planning and service levels. This is especially true of vendor-supplied systems, which may have idiosyncratic infrastructure and licensing models. The benefits of building in scalability ahead of a merger, however, are substantial. Some banks have largely achieved the use of transaction-level data throughout the pipeline, eliminating many complex aggregation and data explainability challenges and greatly increasing the data estate’s agility in the event of a merger.
  • Taxonomy and lineage: A complete data lineage at field level is a daunting undertaking for any large organization, yet the BCBS rightly regards lineage as a core capability for achieving compliance. The solution is to right-size the investment of time and effort, focusing on creating a meaningful end-to-end description of data flows even if the level of detail varies initially. The same is true of taxonomy: unifying the terms that appear in key regulatory reports, or that are used to join and relate data from the different business entities, is more important than creating a glossary that covers everything. (A minimal sketch of such an end-to-end lineage record follows this list.)
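
As a simple illustration of what a right-sized, end-to-end lineage description can look like, the sketch below records field-level lineage as a set of hops between systems. It is a minimal sketch only; the field, system, and transformation names are hypothetical, not a prescribed model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LineageHop:
    """One hop in an end-to-end data flow: a field moving (and possibly being
    transformed) between two systems. The level of detail can vary by field."""
    field: str           # business term, ideally drawn from the firmwide taxonomy
    source_system: str
    target_system: str
    transformation: str  # free text initially; can be tightened over time

# Hypothetical lineage for a single regulatory-report field
LINEAGE = [
    LineageHop("counterparty_exposure", "loans_core_bank_a", "group_risk_warehouse",
               "mapped to group counterparty hierarchy"),
    LineageHop("counterparty_exposure", "group_risk_warehouse", "regulatory_reporting_mart",
               "aggregated to counterparty parent level"),
]

def trace(field: str, hops: list[LineageHop]) -> list[str]:
    """Return a readable end-to-end path for one field."""
    return [f"{h.source_system} -> {h.target_system} ({h.transformation})"
            for h in hops if h.field == field]

if __name__ == "__main__":
    for step in trace("counterparty_exposure", LINEAGE):
        print(step)
```

Even a record this coarse gives a merged entity something to reconcile against: the two banks’ flows for the same business term can be compared hop by hop before any physical consolidation begins.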

M&A Challenges: Specific Information Integration Challenges

Even when the physical data estate is rationalized, overarching governance is in place, and the physical delivery and transformation of data is a solved problem, normalizing and integrating the business information used by two different banks is complex.

  • Data consistency is a BCBS 239 principle that’s important in stress testing and in producing any firmwide metric. But data consistency will be challenged by a merger in ways that might not be immediately visible from the top of the organization. For example, the number of different market data snapshots and reference data sources in use will increase, lowering consistency and increasing both the difficulty of managing the business and the likelihood of inaccurate regulatory reporting.
  • Labeling and derivation of data present another challenge. Market data and reference data will not only be gathered with different parameters and timestamps; they may also be labeled differently and used under different assumptions. For example, both banks will have a set of yield curves for valuing future cash flows, and these curves will already be labeled with some combination of country, currency, and rating. But the processes and definitions underlying those curves—for example, what rating is considered correct, or what methodology is used to derive a curve—can vary enough from bank to bank to compromise reported figures. (The first sketch after this list illustrates normalizing such curve labels.)
  • Unification of business hierarchies, such as counterparty, book, or business unit, is a well-known problem that comes to the foreground when integrating information across entities. If the combined data pipeline is scalable enough to deal in transaction-level data, the problem is mitigated, because transactions can be aggregated as needed into the correct hierarchies; the second sketch after this list shows this roll-up. But integration can be very difficult if pre-aggregated data, already rolled up into one or the other entity’s hierarchies, is widely used.
  • Standardization of business terms—especially poorly defined but common terms, such as “distressed”—across the entities is important both to ensure consistent and correct reports and to identify data assets that can be successfully merged or retired in the wake of the merger. A global taxonomy is a powerful accelerator here, but much of the work is at the level of individual data assets and reports, mapping key data entities to standard definitions.
  • Identification or creation of join keys that can be used to relate data across the different entities’ marts is important but difficult, especially if data is held at an aggregated level. Such keys are a prerequisite for scalable, versatile data warehousing and a strong self-service business intelligence function; they appear as the crosswalk in the second sketch below. The expansion of data warehouses across multiple data domains and use cases is something the BCBS identified as a positive and encouraging trend in overall data health.
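
To make the labeling point above more concrete, the first sketch below normalizes yield-curve labels from two merging entities into a single convention before any comparison of the underlying derivation methodologies. It is a minimal sketch under assumed conventions: the entity names, label formats, and canonical fields are all illustrative.

```python
# Hypothetical curve-label normalization across two merging entities.
# Bank A labels a curve "USD-GOVT-AA"; Bank B calls the same curve "US Treasury AA".
# Both label formats are assumptions made up for this illustration.

CANONICAL = {
    ("bank_a", "USD-GOVT-AA"): ("US", "USD", "AA", "government"),
    ("bank_b", "US Treasury AA"): ("US", "USD", "AA", "government"),
}

def normalize_curve_label(entity: str, label: str) -> tuple[str, str, str, str]:
    """Map an entity-specific curve label to (country, currency, rating, curve_type)."""
    try:
        return CANONICAL[(entity, label)]
    except KeyError:
        # Unmapped labels are surfaced rather than silently passed through,
        # so the gap is visible to the data quality function.
        raise ValueError(f"No canonical mapping for {entity!r} label {label!r}")

print(normalize_curve_label("bank_a", "USD-GOVT-AA"))    # ('US', 'USD', 'AA', 'government')
print(normalize_curve_label("bank_b", "US Treasury AA"))  # same canonical tuple
```

Only once the labels agree does it make sense to compare how each bank derives the curve itself: which rating convention it applies, or which bootstrapping methodology it uses.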

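The hierarchy and join-key points above can be illustrated together: if transaction-level data is available and a crosswalk exists between the two entities’ counterparty identifiers, exposures from either side can be rolled up into whichever hierarchy the merged firm adopts. This second sketch is minimal; the identifiers, crosswalk, and parent-group hierarchy are hypothetical.

```python
from collections import defaultdict

# Hypothetical crosswalk (join keys): Bank B counterparty IDs -> target (Bank A) IDs
CROSSWALK = {"B-CPTY-17": "A-CPTY-0042", "B-CPTY-99": "A-CPTY-0042"}

# Hypothetical target hierarchy: counterparty -> parent group
PARENT_GROUP = {"A-CPTY-0042": "GROUP-ACME", "A-CPTY-0007": "GROUP-GLOBEX"}

# Transaction-level exposures from both entities: (entity, counterparty_id, exposure)
TRANSACTIONS = [
    ("bank_a", "A-CPTY-0042", 1_000_000.0),
    ("bank_a", "A-CPTY-0007", 250_000.0),
    ("bank_b", "B-CPTY-17", 400_000.0),
    ("bank_b", "B-CPTY-99", 150_000.0),
]

def aggregate_by_group(txns):
    """Roll transaction-level exposure up into the target hierarchy."""
    totals = defaultdict(float)
    for entity, cpty, exposure in txns:
        target_cpty = cpty if entity == "bank_a" else CROSSWALK[cpty]
        totals[PARENT_GROUP[target_cpty]] += exposure
    return dict(totals)

print(aggregate_by_group(TRANSACTIONS))
# {'GROUP-ACME': 1550000.0, 'GROUP-GLOBEX': 250000.0}
```

If only pre-aggregated figures were available, already rolled up into each entity’s own hierarchy, this re-aggregation would not be possible without going back to source systems.
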
To improve risk data aggregation capabilities, several banks are enhancing internal controls and increasing automation in the data aggregation process, including by expanding the scope of entities and transactions subject to automatic data linkage.

It’s important to note that, like any period of stress and business change, a merger can accelerate the proliferation of end-user computing (EUC) solutions and of aggregation-layer fixes and adjustments. The result is a more complicated information architecture, with poorly defined data marts and new terms introduced in the form of adjustments that may have no firmwide validity. Fortunately, this is one area where newly emerging tooling can help: identifying, cataloging, deduplicating, and productionizing EUCs in an automated way is becoming more feasible.
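
As a simple illustration of what automated EUC discovery can look like, the sketch below walks a shared drive, fingerprints candidate spreadsheet files by content hash, and groups byte-identical copies as deduplication candidates. It is only a sketch under assumed conventions: the share path and file extensions are hypothetical, and real tooling would go much further (parsing formulas, linking EUCs to reports, and so on).

```python
import hashlib
from collections import defaultdict
from pathlib import Path

# Hypothetical shared-drive root and spreadsheet extensions to scan.
SHARE_ROOT = Path("/mnt/finance_share")
EUC_EXTENSIONS = {".xlsx", ".xlsm", ".csv"}

def fingerprint(path: Path) -> str:
    """Content hash, so byte-identical copies of an EUC are grouped together."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def discover_eucs(root: Path) -> dict[str, list[Path]]:
    """Group candidate EUC files by content fingerprint."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in EUC_EXTENSIONS:
            groups[fingerprint(path)].append(path)
    return groups

if __name__ == "__main__":
    for digest, paths in discover_eucs(SHARE_ROOT).items():
        if len(paths) > 1:  # duplicates are candidates for retirement or productionization
            print(digest[:12], [str(p) for p in paths])
```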

Finally, both physical integration and normalization can be made more difficult by the loss of skilled resources and the reorganization of responsibilities that often accompany even low-friction mergers. The merged bank may well need extra resourcing in the short term to take on urgent integration activities while simultaneously managing staff movement.

Forward Planning

In our opinion, the predictors of success in post-merger data integration are present early in the process—ideally before the merger takes place. The characteristics that make banks likely to prosper during post-merger integration are broadly those that make them more compliant with BCBS 239 and better able to de-risk their data processes in general. Banks are well positioned to merge their data estates if they have, or can create, levers such as:

  • A strong governance function that can create rules and standards for business units.
  • A well-defined set of data use cases, and a mapping of repositories and feeds to those use cases.
  • Performant, well-designed data warehouses that can take on multiple reporting use cases and provide clean, conformed data.
  • A strong understanding of their existing information architecture, expressed via taxonomy, lineage, and data productization.
  • A realistic understanding of the product- and process-level detail that underlies many key financial data sets.

Read More of Our Recent Insights on M&A:

Banking M&A Landscape and Post-Merger Integration Considerations – Treliant

Practical Considerations for the Merging of Two Banks – Treliant

Author

Ben Peterson

Ben Peterson, Treliant’s Data Lead for EMEA, is a technology leader with more than 20 years’ experience in financial services and fintech. He understands the role that strong data management plays in increasing revenue and reducing risk, and believes that data management can have a compelling ROI at both program…