- £15M revenue uplift from improved marketing effectiveness
- Individual-level targeting replaced segment-level rules
- Unified decisioning across inbound and outbound channels
- Continuous champion-challenger testing embedded in production
A Next Best Action engine replaced a UK insurer's static rules and generic scripts with uplift-model-driven, individually optimised recommendations, delivering an estimated £15M revenue uplift. The system combines per-customer, per-action uplift scoring with a constraints engine enforcing eligibility, budgets, and contact limits, and operates across both inbound agent calls and outbound campaigns with continuous champion-challenger testing.
The Problem
A large UK insurance provider ran its customer journeys on a combination of business rules and generic scripts. When a customer phoned in to renew a policy or ask about cover options, the agent followed a predetermined path. When the marketing team ran outbound campaigns, they selected audiences using broad segmentation and static offer tables. Every customer in a segment received the same offer, regardless of individual circumstances or propensity to respond. Inbound and outbound channels operated independently, so a customer calling the retention line might be offered a discount that contradicted the cross-sell message they received by email the day before.
The Solution
A Next Best Action (NBA) engine was built to replace static rules with a data-driven decisioning layer. The system combines uplift modelling with a constraints engine and delivers ranked, optimised recommendations to agents in real time and to outbound campaign systems.
Unlike traditional propensity models, which predict how likely a customer is to buy and therefore conflate customers persuaded by an action with those who would have bought anyway, uplift models answer a sharper question: what is the incremental effect of showing this specific action to this specific customer? Separate models were trained for each action in the offer catalogue (retention discount, cross-sell product, upsell tier, service intervention), producing a per-customer, per-action uplift score.
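The two-model idea behind per-action uplift scoring can be sketched as follows. This is a minimal, numpy-only illustration on synthetic data; the insurer's actual model family, features, and training pipeline are not described in the source, so simple per-bin response rates stand in for the real models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                # one customer feature, for illustration
treated = rng.integers(0, 2, size=n)  # 1 = customer was shown the action
# Synthetic outcome: the action only helps customers with x > 0.5
p = 0.10 + 0.05 * (x > 0) + 0.15 * treated * (x > 0.5)
y = (rng.random(n) < p).astype(int)

# Two-model ("T-learner") estimator, with histogram bins standing in
# for the real response models in each arm
bins = np.digitize(x, [-1.0, 0.0, 0.5, 1.0])

def response_rate(mask):
    """Per-bin response rate within one arm (treated or control)."""
    out = np.zeros(5)
    for b in range(5):
        sel = mask & (bins == b)
        out[b] = y[sel].mean() if sel.any() else 0.0
    return out

rate_t = response_rate(treated == 1)  # treated-arm response model
rate_c = response_rate(treated == 0)  # control-arm response model
uplift = rate_t[bins] - rate_c[bins]  # per-customer uplift score for this action
```

Repeating this per action in the catalogue yields the per-customer, per-action uplift matrix the optimiser consumes. Note how customers with x < 0 score near zero uplift even though some of them respond: they would have bought anyway.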
Raw uplift rankings alone do not produce a viable plan. A constraints engine encodes eligibility rules, regulatory contact limits, channel capacity, budget caps, and vulnerability restrictions, filtering the action space before optimisation begins. The optimiser then solves for the best allocation of actions to customers across the full population: for example, if budget allows only 10,000 retention offers this month, it selects the 10,000 customers where incremental retention uplift is highest, not the 10,000 with the highest base churn probability. For inbound contacts, the recommendation appears on the agent desktop with a brief, human-readable reason. For outbound campaigns, the optimised action list feeds directly into campaign execution.
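The retention-offer example above can be made concrete with a small sketch: filter the action space by constraints first, then allocate the budget to the highest-uplift customers rather than the highest-risk ones. The numbers and the single eligibility flag are invented for illustration; the real engine enforces many more constraints.

```python
import numpy as np

rng = np.random.default_rng(1)
n, budget = 50_000, 10_000
churn_risk = rng.random(n)                # base churn probability
uplift = rng.normal(0.02, 0.05, size=n)  # modelled incremental retention effect
eligible = rng.random(n) > 0.2           # stand-in for eligibility / contact-limit
                                         # / vulnerability filters

# Constraints filter the action space before optimisation begins
score = np.where(eligible, uplift, -np.inf)
by_uplift = np.argsort(score)[-budget:]   # top-K by incremental uplift

# Naive alternative for comparison: target the riskiest eligible customers
by_churn = np.argsort(np.where(eligible, churn_risk, -np.inf))[-budget:]

retained_uplift = uplift[by_uplift].sum()  # expected incremental retentions
retained_churn = uplift[by_churn].sum()
```

Because churn risk and incremental uplift are not the same quantity, the uplift-ranked allocation yields strictly more expected incremental retentions for the same 10,000-offer budget. A production optimiser would solve this as a full assignment problem across all actions and constraints rather than a per-action top-K.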
The NBA engine runs in a continuous champion-challenger framework, with a proportion of customers always served by an alternative model or strategy. Automated monitoring tracks model drift and offer fatigue, triggering recalibration when response rates decline. This closed loop was essential to sustaining performance beyond initial deployment.
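The champion-challenger split and the recalibration trigger might look like the following sketch. The helper names, the 10% challenger share, and the 0.8 tolerance are illustrative assumptions, not values from the source.

```python
import hashlib

CHALLENGER_SHARE = 0.10  # proportion of customers always served by the challenger

def assign_arm(customer_id: str) -> str:
    """Deterministic hash split so each customer stays in the same arm across contacts."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return "challenger" if bucket < CHALLENGER_SHARE * 100 else "champion"

def needs_recalibration(recent_rate: float, baseline_rate: float,
                        tolerance: float = 0.8) -> bool:
    """Flag an action for recalibration when its response rate decays past tolerance."""
    return recent_rate < tolerance * baseline_rate
```

Hashing the customer ID rather than randomising per contact keeps arm assignment stable, which matters when measuring strategy-level effects over repeated interactions.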
Results and Impact
| Metric | Value |
|---|---|
| Revenue uplift | Estimated £15 million increase from improved marketing effectiveness |
| Offer relevance | Shift from segment-level to individual-level targeting across inbound and outbound |
| Channel coordination | Unified decisioning replaced siloed inbound and outbound processes |
| Agent adoption | Recommendations accepted by front-line staff after explainability features were added |
| Budget efficiency | Constrained optimisation ensured spend was allocated to highest-uplift customers |
| Testing coverage | Champion-challenger running continuously across the full customer base |
Key Takeaways
- Explainability drove adoption. Agents ignored early versions of the recommendations because they could not see why an action was suggested. Adding brief, human-readable reasons tied to specific customer attributes transformed acceptance rates.
- Optimisation mattered as much as modelling. The uplift models were necessary but not sufficient. Without the constraints engine and population-level optimiser, recommendations violated eligibility rules, exceeded budgets, and over-contacted customers. The real value came from solving the allocation problem properly.
- Offer fatigue appeared faster than expected. Within weeks of go-live, response rates on the most frequently recommended actions began to drop. Automated fatigue monitoring and action down-weighting became a permanent part of the platform.
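One simple way to implement the down-weighting described here is exponential decay of an action's score by its recent exposure count. This is a hypothetical mechanism with an invented half-life parameter; the source does not specify how the platform damps fatigued actions.

```python
def fatigued_score(uplift: float, recent_exposures: int,
                   half_life: float = 3.0) -> float:
    """Halve the action's effective score for every `half_life` recent
    exposures, so over-recommended actions fall down the ranking."""
    return uplift * 0.5 ** (recent_exposures / half_life)
```

Under this scheme an action recommended three times recently competes at half its modelled uplift, letting fresher actions surface without hard-excluding anything.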