- ~60% reduction in expensive channel spend
- Multi-dimensional engagement score replaced binary classification
- Campaign effectiveness improved with each successive wave
- High-potential dormant members identified via lookalike modelling
Data-driven targeting and structured test-and-learn experimentation cut contact costs for an African grocery retailer relaunching its loyalty programme. By replacing binary engagement classification with a five-dimensional score, applying lookalike modelling, and systematically testing offers, messages, and channels, the retailer reduced expensive channel spend by approximately 60% while programme engagement rose.
## The Problem
A major African grocery retailer was relaunching its loyalty programme. Millions of customers had enrolled, but far fewer were meaningfully engaged: downloading the app, completing their profiles, scanning at the till, or redeeming rewards. The programme had reach without depth.
## The Solution
We built a targeting and experimentation framework that replaced the broadcast-and-hope approach with precision channel selection and iterative learning.
First, we constructed a multi-dimensional engagement score that replaced the binary active/inactive flag. The score drew on five behavioural dimensions: app adoption, profile investment, redemption behaviour, transaction linkage, and channel responsiveness. Each dimension was scored independently and stored in a Spark-based feature store, so campaigns could target specific engagement gaps (for example, high app usage but zero redemptions) rather than treating engagement as a single number.
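The dimension-by-dimension scoring can be sketched as below. This is a minimal illustration, not the production Spark pipeline: the column names, member data, and min-max scaling are all assumptions, and the real feature store would compute these at scale.

```python
import pandas as pd

# Hypothetical member-level behavioural features (names and values are illustrative).
members = pd.DataFrame({
    "member_id": [1, 2, 3],
    "app_sessions_90d": [40, 0, 12],
    "profile_fields_complete": [0.9, 0.2, 1.0],
    "redemptions_90d": [0, 1, 5],
    "linked_transactions_90d": [20, 3, 15],
    "channel_response_rate": [0.30, 0.05, 0.20],
})

def minmax(s: pd.Series) -> pd.Series:
    """Scale a feature to [0, 1]; a constant column scores 0."""
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng else s * 0.0

# Score each of the five dimensions independently rather than
# collapsing engagement into a single active/inactive flag.
dimensions = {
    "app_adoption": "app_sessions_90d",
    "profile_investment": "profile_fields_complete",
    "redemption": "redemptions_90d",
    "transaction_linkage": "linked_transactions_90d",
    "channel_responsiveness": "channel_response_rate",
}
scores = pd.DataFrame({dim: minmax(members[col]) for dim, col in dimensions.items()})
scores["member_id"] = members["member_id"]

# Target a specific engagement gap: high app usage but no redemptions.
gap = scores[(scores["app_adoption"] > 0.7) & (scores["redemption"] < 0.1)]
```

Because each dimension is stored separately, a campaign can filter on any combination of gaps, which is exactly what a single blended number cannot support.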
Second, we used lookalike modelling to extend targeting beyond known segments. Models trained on the behavioural profiles of the most engaged members scored the remainder of the base on similarity, producing a ranked list of high-potential dormant members. The binary classification had written off a significant portion of these as "inactive"; lookalike scoring revealed that many shared the profile of highly engaged customers and simply had not been reached with the right message through the right channel.
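The idea can be illustrated with a deliberately simplified stand-in: similarity to the centroid of the engaged seed set. The production models were trained classifiers; the synthetic data and cosine-similarity scoring here are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic behavioural features for the full base (columns are illustrative,
# e.g. visit frequency, basket size, tenure, category breadth).
n = 1000
X = rng.normal(size=(n, 4))
engaged = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 1.5)

# Lookalike scoring, sketched as cosine similarity to the engaged-member
# centroid; any classifier trained on the seed set could take its place.
centroid = X[engaged].mean(axis=0)
norms = np.linalg.norm(X, axis=1) * np.linalg.norm(centroid)
similarity = X @ centroid / np.where(norms == 0, 1.0, norms)

# Rank the dormant ("inactive") portion of the base by similarity,
# producing the list of high-potential reactivation targets.
dormant_idx = np.flatnonzero(~engaged)
ranked = dormant_idx[np.argsort(-similarity[dormant_idx])]
```

Members at the top of `ranked` look behaviourally like the most engaged customers despite carrying an "inactive" flag, which is precisely the population the binary classification had written off.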
Third, we designed a structured test-and-learn programme varying offers, messages, and channels in controlled experiments with proper randomisation and holdout groups. Each campaign wave was a multi-cell experiment; results fed directly into the design of the next wave. The channel dimension produced the largest cost savings. Controlled testing revealed that many customers who responded to SMS would also have responded to a push notification at a fraction of the cost. SMS was incrementally valuable only for members who did not use the app regularly; outbound call was justified only for a narrow, high-value reactivation cohort. This allowed the business to restructure its channel mix: push and in-app messaging became the default for app-active members, while expensive channels were reserved for segments where no cheaper alternative worked.
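One common way to implement multi-cell assignment with a holdout is deterministic hashing, so every wave is reproducible and a member's cell can be recomputed at any time. The cell names, weights, and hashing scheme below are illustrative assumptions, not the retailer's actual design.

```python
import hashlib

# Example cells for one wave: two push offers, one SMS offer, and a
# holdout reserved to measure incrementality.
CELLS = ["push_offer_a", "push_offer_b", "sms_offer_a", "holdout"]
WEIGHTS = [0.3, 0.3, 0.3, 0.1]

def assign_cell(member_id: str, wave: str) -> str:
    """Deterministic randomisation: hash member + wave to a uniform draw."""
    digest = hashlib.sha256(f"{wave}:{member_id}".encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    cumulative = 0.0
    for cell, weight in zip(CELLS, WEIGHTS):
        cumulative += weight
        if u < cumulative:
            return cell
    return CELLS[-1]

assignments = {m: assign_cell(m, "wave_3") for m in ("M001", "M002", "M003")}
```

Keying the hash on the wave name reshuffles members between waves, while keeping any single wave's assignment stable across reruns.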
## Results and Impact
| Metric | Value |
|---|---|
| Reduction in expensive channel spend | ~60% through reallocation from SMS/call to push and in-app |
| Programme engagement | Increased across activation and reactivation cohorts |
| Campaign effectiveness | Improved in each successive wave as learnings compounded |
| Engagement scoring | Five-dimensional score replaced binary flag across the full loyalty base |
| Lookalike targeting | Identified high-potential dormant members previously classified as inactive |
| Channel intelligence | Incremental value of each channel quantified by engagement segment |
The 60% reduction in expensive channel spend did not come from cutting contact volume. It came from routing contacts through the right channel for each member's profile. The total number of members contacted remained similar; the cost of reaching them dropped substantially.
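The restructured channel mix amounts to a routing rule per member. A minimal sketch, with hypothetical member attributes and the channel hierarchy described above:

```python
def route_channel(member: dict) -> str:
    """Pick the cheapest channel expected to reach this member (rules illustrative)."""
    if member["app_active"]:
        return "push"           # push / in-app is the default for app-active members
    if member["high_value_reactivation"]:
        return "outbound_call"  # reserved for the narrow, high-value cohort
    return "sms"                # members who do not use the app regularly

mix = [route_channel(m) for m in (
    {"app_active": True,  "high_value_reactivation": False},
    {"app_active": False, "high_value_reactivation": True},
    {"app_active": False, "high_value_reactivation": False},
)]
```

Contact volume is unchanged; only the cost of each contact falls, which is how the spend reduction arrives without reducing reach.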
## Key Takeaways
- Engagement is not a binary state. Replacing the active/inactive flag with a multi-dimensional score was the single biggest analytical improvement. Decomposing engagement into its component dimensions made targeting specific behavioural gaps possible for the first time.
- Expensive channels are not inherently more effective. SMS and outbound call had the highest raw response rates, which created a self-reinforcing belief that they were the best channels. Controlled testing revealed that much of that response was not incremental: cheaper channels would have achieved the same result for a large portion of the audience.
- Compounding learning is the real return on experimentation. The value was not any single experiment but the accumulation of findings across waves. Each wave narrowed the targeting, refined the channel mix, and improved the offer. By the later waves, campaigns were cheaper and more effective than anything the business had run before.
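The incrementality point above is the arithmetic of comparing each channel cell against its holdout. All numbers below are invented for illustration; only the calculation pattern is the point.

```python
def incremental_lift(treated_resp: int, n_treated: int,
                     holdout_resp: int, n_holdout: int) -> float:
    """Response-rate lift of a channel cell over its matched holdout."""
    return treated_resp / n_treated - holdout_resp / n_holdout

# Hypothetical wave results: SMS wins on raw response rate ...
sms_raw, push_raw = 120 / 1000, 90 / 1000
# ... but against holdouts, much of the SMS response is not incremental.
sms_lift = incremental_lift(120, 1000, 80, 1000)    # ~4 points incremental
push_lift = incremental_lift(90, 1000, 55, 1000)    # ~3.5 points incremental

# Per unit of spend, the cheaper channel dominates (unit costs hypothetical).
cost_per_contact = {"sms": 0.05, "push": 0.002}
sms_lift_per_unit = sms_lift / cost_per_contact["sms"]
push_lift_per_unit = push_lift / cost_per_contact["push"]
```

A channel that looks best on raw response can lose decisively once holdout-adjusted lift is divided by cost, which is the comparison that justified the channel-mix restructuring.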