Core Banking Modernisation
First Live Service 8 Months Ahead of the Original Big-Bang Plan
Banking & Financial Services | Technical Deep Dive
Incremental Delivery | Strangler Fig Pattern | Parallel-Run Strategy
Go | PostgreSQL | API Gateway
A Case Study in Delivery Transformation
Table of Contents
The Hidden Cost of Waiting Five Years for a Bank to Change
Background: The Legacy Core Banking Landscape
Big-Bang Failure Patterns in Core Banking
Scope Creep and the Requirements Chimera
Integration Complexity and the Testing Wall
Organisational Risk Aversion
The Strangler Fig Pattern: Theory and Practice
Why Strangler Fig Fits Banking Modernisation
Parallel-Run Strategy: New Accounts on New Systems
Operational Risk Management During Parallel Run
Incremental Migration Phases: Month by Month
Month 4: The First Production Service
Month 5-6: Extending to Transactions and Payments
API Gateway Design: The Routing Facade
Technology Choices: Go and PostgreSQL
Regulatory Compliance in an Incremental Migration
Outcomes and Measurable Impact
Counterarguments and Limitations
Operational Complexity of Running Two Systems
Data Consistency Challenges
Not All Programmes Can Be Incrementally Restructured
Conclusion and Broader Implications
References
The Hidden Cost of Waiting Five Years for a Bank to Change
In 2022, a mid-tier regional bank embarked on an ambitious five-year programme to replace its monolithic core banking platform. Eighteen months and tens of millions of pounds into the effort, the delivery board confronted an uncomfortable truth: nothing had reached production. Not a single customer-facing service, not a single migrated account, not a single line of new code processing a live transaction. The original big-bang cutover plan, which envisioned a single weekend switch-over from the legacy system to the new platform, had stalled under the weight of its own scope.
This is not an unusual story. Industry research consistently shows that between 60 and 80 per cent of core banking replacement programmes fail to deliver on time, on budget, or at all. The consulting firm Kearney has documented how the allure of a complete, big-bang replacement "promises the quickest route to modernisation but carries catastrophic operational risk." When these programmes fail, the consequences cascade: customer-facing outages, regulatory scrutiny, data reconciliation backlogs that persist for months, and eroded board confidence that can take years to rebuild.
What made this particular programme different was what happened next. Rather than cancelling the initiative or doubling down on the original plan, leadership made a radical decision: they would restructure the entire programme around incremental delivery. The old system would remain operational as a safety net while new services were shipped one by one. The first production service went live in month four of the revised programme, eight months ahead of the original plan's earliest projected delivery date. Within twelve months, the bank had migrated current account openings, transfers, and direct debit processing to the new platform, with the old system serving exclusively as a fallback.
Background: The Legacy Core Banking Landscape
Core banking systems are the backbone of financial institutions. They manage customer accounts, process transactions, maintain ledgers, enforce compliance rules, and interface with payment networks, clearing houses, and regulatory reporting systems. Many of the world's banks still run on platforms that were originally designed in the 1980s and 1990s, built on mainframe architectures with tightly coupled components, proprietary databases, and batch processing models that date back to an era when real-time banking was not a consumer expectation.
The pressure to modernise these systems has intensified over the past decade. Neobanks and fintech challengers have demonstrated that customers expect instant account opening, real-time transaction notifications, and seamless mobile experiences. Regulatory frameworks such as PSD2 in Europe and the Consumer Duty in the UK have added compliance requirements that legacy systems were never designed to handle. Meanwhile, the pool of engineers with expertise in COBOL, mainframe assembly, and proprietary scripting languages continues to shrink as those professionals retire.
Yet modernisation remains extraordinarily difficult. Core banking systems are not merely software applications; they are the authoritative record of every customer's financial life. They encode decades of business logic, regulatory interpretations, product variations, and exception-handling rules that are often poorly documented and embedded in the minds of long-serving operations staff. Replacing such a system is less like swapping out a software module and more like performing a heart transplant on a marathon runner who refuses to stop running.
Big-Bang Failure Patterns in Core Banking
The big-bang approach to core banking replacement follows a seductive logic: design the entire new system, build it, test it comprehensively, and then cut over in a single, carefully planned event. In theory, this minimises the complexity of running two systems simultaneously. In practice, it creates a set of predictable and well-documented failure modes.
Scope Creep and the Requirements Chimera
Big-bang programmes typically begin with a comprehensive requirements-gathering phase that attempts to capture every capability of the existing system before any new code is written. This phase alone can consume twelve to eighteen months. During this period, the business continues to evolve: new regulatory requirements emerge, product lines change, and customer expectations shift. By the time the requirements are finalised, they are already outdated. The programme described in this case study spent eighteen months in requirements gathering before leadership acknowledged that the scope had expanded to the point where no credible delivery date could be established.
Integration Complexity and the Testing Wall
Core banking systems do not exist in isolation. They integrate with dozens of downstream and upstream systems: payment processing networks (SWIFT, Faster Payments, SEPA), card management platforms, anti-money laundering engines, credit scoring services, regulatory reporting portals, and customer-facing digital channels. In a big-bang approach, all of these integrations must be built and tested simultaneously. The combinatorial explosion of integration testing scenarios means that comprehensive end-to-end testing of a full cutover is practically impossible. As one banking technology executive observed, "You cannot test your way out of a fundamentally flawed architectural approach."
Organisational Risk Aversion
As a big-bang programme approaches its planned cutover date, organisational risk aversion intensifies. Every stakeholder who has the power to delay the go-live has an incentive to do so, because the consequences of a failed cutover are catastrophic and personally career-threatening. This dynamic creates a "death spiral" where additional testing requirements are imposed, delivery dates are pushed back, the programme team loses momentum, and key engineers depart for less uncertain projects. The programme in this case study had already begun to exhibit these symptoms before leadership intervened with the restructuring decision.
The Strangler Fig Pattern: Theory and Practice
The Strangler Fig pattern, first described by Martin Fowler and named after the strangler fig trees that gradually envelop and replace their host trees in tropical forests, provides an architectural framework for incrementally replacing a legacy system. The pattern begins by introducing a facade, or proxy layer, between client applications and the legacy system. New functionality is built on the modern platform and routed through the facade, while existing functionality continues to be served by the legacy system. Over time, as more functionality is migrated to the new platform, the legacy system's role diminishes until it can be safely decommissioned.
Microsoft's Azure Architecture Center describes the pattern as follows: "The Strangler Fig pattern begins by introducing a facade between the client app, the legacy system, and the new system. As features are incrementally replaced, the legacy system is gradually 'strangled' until it can be retired." The key insight is that the facade provides a stable contract that both old and new system components adhere to, allowing the migration to proceed without requiring client applications to be aware of which backend is actually processing their requests.
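To make the idea of a stable contract concrete, the sketch below defines a hypothetical Go interface that both a legacy adapter and a new-platform service could implement; the type and method names are illustrative, not taken from the programme's actual codebase.

```go
package facade

import "context"

// AccountService is the stable contract exposed by the facade. Client
// applications depend only on this interface, never on which backend
// actually fulfils the call.
type AccountService interface {
	OpenAccount(ctx context.Context, req OpenAccountRequest) (AccountRef, error)
	GetBalance(ctx context.Context, ref AccountRef) (int64, error)
}

// OpenAccountRequest and AccountRef are simplified, illustrative types.
type OpenAccountRequest struct {
	CustomerID string
	Product    string
}

type AccountRef struct {
	ID     string
	System string // "legacy" or "new": recorded for routing, invisible to clients
}

// A legacy adapter wrapping the old core and a new-platform service can both
// satisfy AccountService, so the facade can swap backends per request.
```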
Why Strangler Fig Fits Banking Modernisation
The banking domain is particularly well-suited to the Strangler Fig pattern for several reasons. First, banking products are naturally modular: current accounts, savings accounts, loans, credit cards, and payment services can be migrated independently. Second, banking products have clear customer lifecycle boundaries that provide natural migration points. Third, regulatory requirements mandate data consistency and auditability, which align well with the pattern's emphasis on maintaining a consistent interface during migration. Fourth, the parallel-run strategy inherent in the Strangler Fig pattern provides a built-in fallback mechanism that satisfies both operational risk management and regulatory expectations.
Thoughtworks, in their analysis of the pattern for legacy modernisation, emphasise that the Strangler Fig approach "allows for the gradual replacement of the old system, reducing risk and making the process more manageable." This risk reduction is not merely theoretical. In the case study described here, the decision to adopt a strangler-fig-inspired approach was directly responsible for transforming a programme that had delivered nothing in eighteen months into one that was processing live customer transactions within four months of the restructuring decision.
Parallel-Run Strategy: New Accounts on New Systems
The parallel-run strategy adopted by the programme team was a practical application of the Strangler Fig pattern, adapted to the specific constraints of a regulated banking environment. The core principle was straightforward: new accounts would be created on the new system, while existing accounts remained on the legacy system. An API gateway served as the routing facade, directing requests to the appropriate backend based on the type of operation and the location of the customer's data.
This approach provided several critical advantages. First, it eliminated the need for bulk data migration of existing accounts, which is one of the highest-risk activities in any core banking replacement. Bulk migration requires reconciling years of transaction history, product configurations, and customer-specific exceptions, and any discrepancies can result in regulatory violations or customer harm. By keeping existing accounts on the legacy system and only routing new accounts to the new platform, the team avoided this risk entirely.
Second, the parallel-run strategy provided a natural rollback mechanism. If a defect was discovered in the new system, affected accounts could be rapidly migrated back to the legacy platform, or the routing rules could be adjusted to redirect traffic to the old system while the issue was resolved. The legacy system served as a "warm standby" throughout the programme, providing both operational resilience and psychological safety for the delivery team.
Third, the strategy enabled progressive learning. Each new capability that was deployed to production generated real operational data about system behaviour under load, integration edge cases, and customer experience impacts. This feedback loop allowed the team to refine their deployment processes, enhance their monitoring and alerting, and build confidence among stakeholders through demonstrated progress rather than projected timelines.
Operational Risk Management During Parallel Run
Operating two core banking systems simultaneously introduces its own set of risks. The most significant is the risk of data inconsistency between the old and new platforms. If a customer who has an existing account on the legacy system opens a new product on the new platform, the two systems must maintain a coherent view of that customer's total relationship. The programme addressed this through a shared customer reference data store in PostgreSQL that served as the authoritative source of customer identity and product holdings, with both systems subscribing to change events from this store.
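A minimal sketch of how such a change-event feed might look, assuming an outbox-style table in the shared PostgreSQL store and a simple polling consumer; the table name, columns, and query are hypothetical rather than the programme's actual schema.

```go
package refdata

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// Hypothetical change-event table in the shared customer reference store:
//
//   CREATE TABLE customer_change_events (
//       id          BIGSERIAL PRIMARY KEY,
//       customer_id TEXT        NOT NULL,
//       change_type TEXT        NOT NULL, -- e.g. "PRODUCT_ADDED"
//       payload     JSONB       NOT NULL,
//       created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
//   );
//
// Both the legacy bridge and the new platform poll this table and apply any
// events beyond the last id they have processed.

type ChangeEvent struct {
	ID         int64
	CustomerID string
	ChangeType string
	Payload    []byte
}

// PollEvents fetches events newer than lastID so each subscriber can keep its
// own view of customer identity and product holdings up to date.
func PollEvents(ctx context.Context, db *sql.DB, lastID int64) ([]ChangeEvent, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT id, customer_id, change_type, payload
		   FROM customer_change_events
		  WHERE id > $1
		  ORDER BY id
		  LIMIT 100`, lastID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var events []ChangeEvent
	for rows.Next() {
		var e ChangeEvent
		if err := rows.Scan(&e.ID, &e.CustomerID, &e.ChangeType, &e.Payload); err != nil {
			return nil, err
		}
		events = append(events, e)
	}
	return events, rows.Err()
}
```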
Transaction integrity was managed through a combination of database-level consistency guarantees and an application-level reconciliation engine that ran daily. Every transaction processed on the new system was mirrored to a reconciliation log, and automated comparison jobs verified that the new system's ledger positions matched expected values derived from the legacy system's reporting. Discrepancies triggered alerts and were investigated within defined SLAs, typically within four business hours.
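The daily comparison can be pictured as in the hedged sketch below, which assumes caller-supplied summary queries that return per-account ledger positions from each system and reports any mismatch for investigation.

```go
package recon

import (
	"context"
	"database/sql"
	"fmt"
)

// ledgerPositions returns a map of account ID to balance in minor units,
// using a caller-supplied summary query so the same helper works against both
// the new platform and the legacy reporting extract.
func ledgerPositions(ctx context.Context, db *sql.DB, query string) (map[string]int64, error) {
	rows, err := db.QueryContext(ctx, query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	positions := make(map[string]int64)
	for rows.Next() {
		var account string
		var balance int64
		if err := rows.Scan(&account, &balance); err != nil {
			return nil, err
		}
		positions[account] = balance
	}
	return positions, rows.Err()
}

// Reconcile compares the two sets of positions and returns human-readable
// discrepancies; in the real engine these would raise alerts handled within
// the four-business-hour SLA described above.
func Reconcile(newPos, legacyPos map[string]int64) []string {
	var diffs []string
	for account, nb := range newPos {
		if lb, ok := legacyPos[account]; ok && lb != nb {
			diffs = append(diffs, fmt.Sprintf("account %s: new=%d legacy=%d", account, nb, lb))
		}
	}
	return diffs
}
```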
Incremental Migration Phases: Month by Month
The restructured programme was organised around monthly delivery cycles, each targeting a specific product capability. This rhythm replaced the annual Gantt report with monthly demonstrations of working software, fundamentally changing the relationship between the delivery team and the bank's stakeholders. The following table summarises the key milestones:
Table 1: Migration Timeline and Key Milestones
| Month | Capability | Status | Key Outcome |
|---|---|---|---|
| 1-3 | Foundation | Complete | Infrastructure setup |
| 4 | Account opening | Live in production | First customer-facing service on the new platform |
| 5 | Transfers | Live in production | Faster Payments integration |
| 6 | Direct debits | Live in production | Mandate management and payment failure handling |
| 7-9 | Standing orders | Live in production | — |
| 10-12 | Statements & reporting | Live in production | Legacy system serving as fallback only |
Month 4: The First Production Service
The decision to prioritise current account opening as the first production service was deliberate. Account opening is a customer-facing, high-frequency transaction with relatively simple integration requirements. It does not depend on complex downstream payment processing or historical transaction data. Most importantly, it provides immediate, visible evidence that the new system is processing real customer requests, which is invaluable for building confidence among sceptical stakeholders.
The account opening service was implemented as a Go microservice backed by PostgreSQL, exposed through the API gateway. The service handled KYC (Know Your Customer) verification, account creation, initial deposit processing, and the generation of account confirmation communications. From a standing start in the restructured programme, the team designed, built, tested, and deployed this service within twelve weeks, a timeline that would have been inconceivable under the original programme structure.
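A stripped-down sketch of what the account opening endpoint might look like as a Go net/http handler; the request fields, KYC check, and persistence call are illustrative stand-ins for the real service's dependencies.

```go
package accountopening

import (
	"encoding/json"
	"net/http"
)

type OpenAccountRequest struct {
	CustomerID     string `json:"customer_id"`
	Product        string `json:"product"`
	InitialDeposit int64  `json:"initial_deposit_pence"`
}

type OpenAccountResponse struct {
	AccountID string `json:"account_id"`
	Status    string `json:"status"`
}

// Handler wires hypothetical KYC and persistence dependencies into an HTTP
// endpoint exposed behind the API gateway.
type Handler struct {
	VerifyKYC  func(customerID string) (bool, error)
	CreateAcct func(req OpenAccountRequest) (string, error)
}

func (h *Handler) OpenAccount(w http.ResponseWriter, r *http.Request) {
	var req OpenAccountRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}

	ok, err := h.VerifyKYC(req.CustomerID)
	if err != nil || !ok {
		http.Error(w, "KYC verification failed", http.StatusForbidden)
		return
	}

	accountID, err := h.CreateAcct(req)
	if err != nil {
		http.Error(w, "account creation failed", http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(OpenAccountResponse{AccountID: accountID, Status: "OPEN"})
}
```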
Month 5-6: Extending to Transactions and Payments
Transfers and direct debits represented a significant step up in complexity. Transfers required integration with the Faster Payments network and the bank's internal settlement engine. Direct debits involved additional complexity around mandate management, guarantee processing, and the handling of payment failures and indemnity claims. The team approached each capability as a self-contained delivery, with its own acceptance criteria, integration tests, and rollback plan.
The cumulative effect of these monthly deliveries was transformative. Where the original programme had generated eighteen months of requirements documentation and architectural designs with nothing to show in production, the restructured programme delivered three production services in three consecutive monthly cycles. The bank's executive committee, which had been growing increasingly frustrated with the lack of tangible progress, began to receive monthly demonstrations of working software processing real customer data.
API Gateway Design: The Routing Facade
The API gateway was the critical architectural component that enabled the parallel-run strategy. Implemented as a lightweight, high-performance reverse proxy written in Go, the gateway served as the single entry point for all API requests from the bank's digital channels, branch systems, and partner integrations. Its primary responsibilities were request routing, authentication and authorisation, rate limiting, request transformation, and telemetry collection.
The routing logic in the gateway was configured through a set of rules that determined which backend system should handle each request. Initially, the rules were simple: requests related to new current accounts were routed to the new platform, while all other requests were forwarded to the legacy system. As the programme progressed and more capabilities were migrated, the routing rules were updated to direct an increasing proportion of traffic to the new platform. This gradual shift was transparent to client applications, which continued to interact with a single, stable API contract.
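A simplified sketch of rule-based routing using Go's standard-library reverse proxy; the path prefixes and backend URLs are assumptions, but the pattern of matching a request against routing rules and forwarding it to either backend is the one described above.

```go
package gateway

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// Rule maps a request path prefix to the backend that should serve it.
type Rule struct {
	PathPrefix string
	Backend    *url.URL
}

// NewRouter returns a handler that forwards each request to the first
// matching rule, or to the legacy core when nothing matches. Adding a rule is
// how traffic is gradually shifted to the new platform.
func NewRouter(rules []Rule, legacy *url.URL) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		target := legacy
		for _, rule := range rules {
			if strings.HasPrefix(r.URL.Path, rule.PathPrefix) {
				target = rule.Backend
				break
			}
		}
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
}

// Illustrative configuration: new current account openings go to the new
// platform; everything else falls through to the legacy system.
//
//   newPlatform, _ := url.Parse("http://accounts.new.internal")
//   legacyCore, _ := url.Parse("http://core.legacy.internal")
//   handler := NewRouter([]Rule{{PathPrefix: "/v1/accounts", Backend: newPlatform}}, legacyCore)
```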
The gateway's design followed several key principles. It was stateless, ensuring that it could be horizontally scaled to handle traffic growth without requiring session affinity. It implemented circuit breaker patterns for each downstream service, automatically failing over to the legacy system if the new platform became unavailable. It collected detailed telemetry on request latency, error rates, and routing distributions, providing the programme team with real-time visibility into system behaviour.
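The failover behaviour can be sketched as a small circuit breaker wrapped around the new platform's handler, with hypothetical thresholds; while the breaker is open, requests are served by the legacy backend instead.

```go
package gateway

import (
	"net/http"
	"sync"
	"time"
)

// breaker is a deliberately simple circuit breaker: after maxFailures
// consecutive failures it opens for the cooldown period, during which all
// traffic is served by the legacy fallback.
type breaker struct {
	mu          sync.Mutex
	failures    int
	openedAt    time.Time
	maxFailures int
	cooldown    time.Duration
}

func (b *breaker) open() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown
}

func (b *breaker) record(ok bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if ok {
		b.failures = 0
		return
	}
	b.failures++
	if b.failures >= b.maxFailures {
		b.openedAt = time.Now() // re-open on each failure at or above the threshold
	}
}

// statusRecorder captures the downstream status so the breaker can count
// 5xx responses as failures.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (s *statusRecorder) WriteHeader(code int) {
	s.status = code
	s.ResponseWriter.WriteHeader(code)
}

// WithFailover routes to newPlatform until the breaker opens, then to legacy.
func WithFailover(newPlatform, legacy http.Handler) http.Handler {
	b := &breaker{maxFailures: 5, cooldown: 30 * time.Second} // assumed thresholds
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if b.open() {
			legacy.ServeHTTP(w, r)
			return
		}
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		newPlatform.ServeHTTP(rec, r)
		b.record(rec.status < http.StatusInternalServerError)
	})
}
```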
From a security perspective, the gateway enforced OAuth 2.0 token validation, performed request payload validation against JSON schemas, and implemented rate limiting per client and per endpoint. These cross-cutting concerns were centralised in the gateway rather than duplicated across individual microservices, reducing the attack surface and simplifying compliance audits. The gateway also served as the enforcement point for regulatory requirements around data residency and access logging, which were particularly important given the programme's visibility to the bank's regulators.
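Per-client rate limiting at the gateway might be sketched as below, using the golang.org/x/time/rate package and a hypothetical API-key header to identify clients; the limits and header name are assumptions, and in practice the client identity would come from the validated OAuth 2.0 token.

```go
package gateway

import (
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// clientLimiters hands out one token-bucket limiter per client identifier.
type clientLimiters struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
	rps      rate.Limit
	burst    int
}

func (c *clientLimiters) get(client string) *rate.Limiter {
	c.mu.Lock()
	defer c.mu.Unlock()
	if l, ok := c.limiters[client]; ok {
		return l
	}
	l := rate.NewLimiter(c.rps, c.burst)
	c.limiters[client] = l
	return l
}

// RateLimit rejects requests that exceed the per-client allowance before they
// ever reach a backend.
func RateLimit(next http.Handler) http.Handler {
	limiters := &clientLimiters{
		limiters: make(map[string]*rate.Limiter),
		rps:      50,  // assumed requests per second per client
		burst:    100, // assumed burst allowance
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		client := r.Header.Get("X-Api-Key") // illustrative client identifier
		if !limiters.get(client).Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```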
Technology Choices: Go and PostgreSQL
The selection of Go as the primary implementation language for the new platform's services was driven by several factors. Go's compiled binaries, fast startup times, and low memory footprint made it well-suited to the high-throughput, low-latency requirements of core banking services. Its built-in concurrency primitives, particularly goroutines and channels, simplified the implementation of parallel processing patterns that are common in transaction-heavy banking workloads. The language's strong standard library and emphasis on simplicity reduced the team's dependency on third-party frameworks, which is an important consideration for long-lived financial systems where framework maintainability is a strategic risk.
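As a small illustration of the concurrency point, the sketch below fans a batch of transactions out to a fixed pool of goroutines over channels; the Transaction type and the process function are placeholders rather than the platform's real types.

```go
package processing

import "sync"

// Transaction is a placeholder for a unit of work such as a posting to apply.
type Transaction struct {
	ID     string
	Amount int64
}

// ProcessBatch applies process to each transaction using a fixed number of
// worker goroutines, a pattern suited to transaction-heavy workloads where
// items are independent of one another.
func ProcessBatch(txns []Transaction, workers int, process func(Transaction) error) []error {
	jobs := make(chan Transaction)
	errs := make(chan error, len(txns))

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range jobs {
				if err := process(t); err != nil {
					errs <- err
				}
			}
		}()
	}

	for _, t := range txns {
		jobs <- t
	}
	close(jobs)
	wg.Wait()
	close(errs)

	var failures []error
	for err := range errs {
		failures = append(failures, err)
	}
	return failures
}
```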
PostgreSQL was chosen as the primary data store for the new platform. Its ACID compliance, rich data type support, and mature extension ecosystem (particularly for JSONB, full-text search, and partitioning) made it suitable for the complex data modelling requirements of banking products. PostgreSQL's streaming replication capabilities provided the high availability guarantees that the programme required, and its mature tooling for schema migrations, backup and recovery, and performance monitoring reduced operational risk.
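A hedged example of the kind of schema those features enable, written here as a Go migration constant: a ledger table range-partitioned by posting date with a JSONB column for product-specific attributes. The table, columns, and partition scheme are illustrative, not the platform's actual data model.

```go
package migrations

// LedgerSchema is an illustrative migration showing range partitioning and a
// JSONB attributes column; the real platform's schema is more involved.
const LedgerSchema = `
CREATE TABLE ledger_entries (
    entry_id      BIGSERIAL,
    account_id    TEXT        NOT NULL,
    amount_pence  BIGINT      NOT NULL,
    posted_at     TIMESTAMPTZ NOT NULL,
    attributes    JSONB       NOT NULL DEFAULT '{}'::jsonb,
    PRIMARY KEY (entry_id, posted_at)
) PARTITION BY RANGE (posted_at);

-- Monthly partitions keep hot data small and make archival straightforward.
CREATE TABLE ledger_entries_2024_01 PARTITION OF ledger_entries
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- A GIN index supports queries on product-specific JSONB attributes.
CREATE INDEX idx_ledger_attributes ON ledger_entries USING GIN (attributes);
`
```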
Regulatory Compliance in an Incremental Migration
One of the most significant concerns raised when the restructuring was proposed was the reaction of the bank's regulators. Financial services regulators expect comprehensive documentation, extensive testing, and clear rollback plans before approving significant technology changes. An incremental migration approach, with its continuously evolving system boundary and shifting data ownership, presents a more complex regulatory narrative than a single, well-defined cutover event.
The programme team addressed this concern through proactive and transparent engagement with the regulator. Rather than presenting the migration as a single event requiring regulatory approval, the team established an ongoing dialogue in which each monthly delivery was accompanied by a comprehensive impact assessment, testing evidence pack, and rollback plan. This approach transformed the regulatory relationship from a periodic, high-stakes approval process into a continuous, lower-stakes assurance conversation.
The result was a measurable improvement in the bank's regulatory relationship. The regulator expressed confidence in the programme's governance, praised the transparency of the incremental approach, and noted that the parallel-run strategy provided stronger safeguards for customer outcomes than a single cutover event. This outcome is significant because regulatory approval is often cited as one of the biggest bottlenecks in core banking modernisation programmes. By embedding regulatory engagement into the monthly delivery cycle, the team turned what is typically a programme risk into a programme asset.
The compliance architecture also benefited from the API gateway pattern. Centralised logging at the gateway level provided a complete audit trail of all API requests, regardless of which backend system processed them. This simplified the bank's ability to respond to regulatory inquiries, conduct internal audits, and generate the transaction reports required by financial regulators. The gateway's request transformation capabilities also allowed the team to maintain API compatibility with the legacy system's data formats while gradually transitioning to more modern, standards-compliant representations on the new platform.
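A minimal sketch of what centralised audit logging at the gateway could look like, assuming structured logs via the standard library's log/slog package; the field names and client-identifier header are illustrative.

```go
package gateway

import (
	"log/slog"
	"net/http"
	"time"
)

// AuditLog records every request that passes through the gateway, regardless
// of which backend ultimately serves it, giving a single audit trail.
func AuditLog(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		slog.Info("api_request",
			"method", r.Method,
			"path", r.URL.Path,
			"client", r.Header.Get("X-Api-Key"), // illustrative client identifier
			"duration_ms", time.Since(start).Milliseconds(),
		)
	})
}
```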
Outcomes and Measurable Impact
Table 2: Programme Outcomes Comparison
| Metric | Original Plan | Revised Programme |
|---|---|---|
| First live service | Year 5 (projected) | Month 4 |
| Time to first production | 60+ months | 4 months |
| Programme cost trajectory | Full budget allocation | 31 per cent lower, funded phase by phase |
| Regulator relationship | Periodic high-stakes review | Continuous assurance dialogue |
| Board confidence | Declining (nothing in production) | Rebuilt through monthly demonstrations |
| Delivery cadence | Annual Gantt reports | Monthly demonstrations of working software |
The most striking outcome is the delivery speed differential. The original programme projected its first production service in year five, after four years of requirements gathering, design, build, and testing. The revised programme delivered its first production service in month four. Even accounting for the eighteen months of work that had already been completed under the original programme (which provided valuable domain knowledge and some reusable architectural components), the speed improvement was extraordinary.
The 31 per cent cost reduction is equally significant. The original programme's cost estimate was based on the full scope of a comprehensive core banking replacement, including data migration of all existing accounts, reimplementation of all product types, and integration with all downstream systems. The incremental approach deferred much of this cost to future phases, where it could be funded from the operational savings generated by the already-migrated services. This pay-as-you-go funding model made the programme financially sustainable in a way that the original big-bang approach was not.
The improvement in board confidence is harder to quantify but equally important. The shift from annual Gantt reports to monthly demonstrations of working software changed the nature of the executive conversation from "when will this be done?" to "what should we prioritise next?" This is a fundamentally healthier dynamic for a technology programme, because it places the focus on value delivery rather than timeline adherence.
Counterarguments and Limitations
The incremental approach described in this case study is not without its challenges and trade-offs, and it is important to acknowledge these to provide a balanced assessment.
Operational Complexity of Running Two Systems
Running two core banking systems in parallel requires significantly more operational effort than running one. The bank must maintain expertise in both the legacy and modern technology stacks, manage two sets of monitoring and alerting infrastructure, and operate two deployment pipelines. This operational overhead represents a real cost that must be factored into the programme's business case. In this case study, the cost of running both systems was offset by the savings from not having to maintain a large, idle development team waiting for a big-bang cutover that kept receding into the future, but the trade-off is real and must be carefully managed.
Data Consistency Challenges
Maintaining data consistency between two systems is an ongoing architectural challenge. The reconciliation engine described in this case study adds complexity and operational burden. There is also a risk that the team becomes dependent on the legacy system as the authoritative data source for certain types of information, creating a "soft dependency" that is harder to break than a hard integration. The programme must maintain discipline about migrating data ownership, not just data access, to the new platform over time.
Not All Programmes Can Be Incrementally Restructured
The success of the restructuring described here depended on several enabling conditions that may not exist in every organisation. The bank's leadership was willing to acknowledge failure and change direction, which requires a level of executive courage that is not universal. The programme had already accumulated significant domain knowledge and some reusable architectural components during its eighteen months of work under the original plan. The bank's technology team had the skills to implement microservices in Go and operate PostgreSQL clusters at scale. Organisations that lack these enabling conditions may find it more difficult to replicate this approach.
Conclusion and Broader Implications
This case study demonstrates that the primary barrier to successful core banking modernisation is not technical complexity but delivery model design. The original programme failed not because the technology was wrong, but because the delivery approach was fundamentally misaligned with the nature of the problem. A five-year big-bang plan creates an environment where risk accumulates, feedback is delayed, and the cost of changing direction increases exponentially over time.
The incremental, Strangler Fig-inspired approach succeeded because it inverted these dynamics. By shipping real services to production every month, the team generated immediate feedback, limited the accumulation of risk, and maintained optionality. The parallel-run strategy ensured that customer outcomes were never dependent on the success of a single cutover event. The API gateway provided a clean architectural boundary that allowed the system to evolve without disrupting client applications.
The implications extend beyond core banking. Any large-scale legacy system modernisation programme, whether in financial services, telecommunications, healthcare, or government, can benefit from the principles demonstrated here: prioritise production delivery over comprehensive requirements; use a routing facade to enable incremental migration; maintain the legacy system as a safety net rather than a deadline; and replace annual progress reports with monthly demonstrations of working software.
The lesson is not that incremental delivery is easy or risk-free. It is that the risks of incremental delivery are bounded, visible, and manageable, while the risks of big-bang replacement are catastrophic, hidden, and compounding. For any organisation contemplating a major technology transformation, this case study offers a clear message: ship early, ship often, and let the act of shipping teach you what you actually need to build.
References
[1] Microsoft Azure Architecture Center. "Strangler Fig Pattern." Available at: https://learn.microsoft.com/en-us/azure/architecture/patterns/strangler-fig
[2] Thoughtworks. "Embracing the Strangler Fig Pattern for Legacy Modernization." Available at: https://www.thoughtworks.com/en-in/insights/articles/embracing-strangler-fig-pattern-legacy-modernization-part-one
[3] AltexSoft. "Strangler Fig Pattern and Legacy System Migration Methods." Available at: https://www.altexsoft.com/blog/strangler-fig-legacy-system-migration
[4] Kearney. "Leapfrogging Legacy." Available at: https://www.kearney.com/industry/financial-services/article/leapfrogging-legacy
[5] Ververica. "Core Banking Modernization." Available at: https://www.ververica.com/banking/core-modernization
[6] Gradion. "Core Banking Modernisation & Legacy Migration." Available at: https://gradion.com/en/industries/financial-services/core-banking
[7] CoreSystemPartners. "Parallel Conversion: A Safety Net for Core Banking Transformations." Available at: https://coresystempartners.com/parallel-conversion-a-safety-net-for-core-banking-transformations
[8] The Wealth Mosaic. "Core Bank Migration: Why 80 Per Cent of All Projects Fail." Available at: https://www.thewealthmosaic.com/vendors/objectway/blogs/core-bank-migration-why-80-percent-of-all-projects
[9] Plumery. "Strangler Fig Approach to Progressive Modernisation of Digital Banking." Available at: https://plumery.com/strangler-fig-approach-to-progressive-modernisation-of-digital-banking
[10] Microservices.io. "Pattern: API Gateway / Backends for Frontends." Available at: https://microservices.io/patterns/apigateway.html