Incentives

This article focuses on the service industry, organizations, and NGOs working with a rural African user base to provide services such as education, insurance, health messaging, agronomy advice, and financial inclusion.

Incentives are an important part of product design and implementation to drive uptake. Having worked with many user groups, I intend to share insights into each group and how it reacts to incentives. This article will also look at the important drivers in making sure the incentive given is both adequate to drive the needed behavior change and not so expensive that it blows the budget or skews the data.

There is a delicate balance between these two variables: giving the user an incentive large enough to drive the action and results the organization seeks, while staying within budget and promoting consistency.

One of the pitfalls is failing to strike this rather elusive balance. Most organizations set their incentive amount very low, leading users not to take action, which in turn causes them to miss their targets or fail to drive the required behavior change. Sometimes they go too high, which may cause false positives and run the organization off-budget at scale.

Incentives for End Users 

Incentives for end users come in different forms. An incentive is any reward, monetary or otherwise, that keeps the user engaged with the technology or product offered by the organization. The offering can be information- or service-based, delivered through channels such as SMS, USSD, mobile applications, IVR (which uses voice to deliver information), or direct calls to the users.

Organizations need to spend a considerable amount of time testing the flow of information or service to figure out how to keep end users interested and engaged enough to enable behavior change.

For example, I worked for an organization that sent information to mothers at every developmental stage of their growing children. In this case, the information we sent them was incentive enough to keep the mothers interested in learning more about bringing up their children.

The key driver in our messages was customization and personalization. We created a system that would keep the moms glued to their phones. We were able to engage with them at an emotional level.

One of the ways we achieved this was by registering the name of the child and their date of birth. Using this information, we created an algorithm to help us track the development of the child over a period of time. Working with doctors and nurses to develop content, we were able to send out messages based on the data provided. 

The level of customization was very deep; the moms thought they were chatting with a real human. The messages “knew” the parents and the child. For example: “Good morning Jane, if by now Kipkemei is not able to creep or crawl, he has a minor delay in his development. Stimulate him by placing toys just out of his reach.” This type of message was sent to a mother with a 9-month-old child. First, we addressed the mother by her name; second, we addressed the child by his name; third, we knew exactly what the child was supposed to do at this point of his development (crawl); fourth, we mentioned that it is normal for a child to be slightly delayed in their development; and fifth, we informed the mum what to do to help the child activate this stage of his growth.
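
To make the mechanics concrete, here is a minimal sketch in Python of how such a message could be assembled from the registered name and date of birth. The template text mirrors the example above, but the function names and the age lookup are illustrative assumptions, not the actual system we built.

```python
from datetime import date

# Illustrative milestone templates keyed by the child's age in months.
# The real content was developed with doctors and nurses; this single
# entry only demonstrates the personalization mechanics.
MILESTONE_TEMPLATES = {
    9: ("Good morning {mother}, if by now {child} is not able to creep or crawl, "
        "he has a minor delay in his development. Stimulate him by placing toys "
        "just out of his reach."),
}

def age_in_months(date_of_birth: date, today: date) -> int:
    """Whole months elapsed since the child's date of birth."""
    months = (today.year - date_of_birth.year) * 12 + (today.month - date_of_birth.month)
    if today.day < date_of_birth.day:
        months -= 1
    return months

def build_message(mother: str, child: str, date_of_birth: date, today: date):
    """Return the personalized SMS for the child's current stage, if one exists."""
    template = MILESTONE_TEMPLATES.get(age_in_months(date_of_birth, today))
    return template.format(mother=mother, child=child) if template else None

# A 9-month-old child produces the message quoted above.
print(build_message("Jane", "Kipkemei", date(2015, 1, 10), date(2015, 10, 15)))
```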

We also sent out a second message to gather information on whether the child had achieved the milestone or not. Using a two-way SMS system, a mother could respond with a YES or NO. This enabled us to track each child at an individual level.
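
A similarly small sketch shows how the YES/NO replies from the two-way SMS could be recorded and used to flag children needing follow-up. The identifiers and the follow-up rule are hypothetical, included only to illustrate the tracking idea.

```python
# Hypothetical in-memory log: (child_id, age_in_months) -> milestone achieved?
milestone_log = {}

def record_reply(child_id: str, age_months: int, reply_text: str) -> None:
    """Store whether the mother reported the milestone as achieved."""
    answer = reply_text.strip().upper()
    if answer in ("YES", "NO"):
        milestone_log[(child_id, age_months)] = (answer == "YES")

def needs_follow_up(child_id: str, age_months: int) -> bool:
    """Flag a child whose mother replied NO, so extra content or a clinic visit can follow."""
    return milestone_log.get((child_id, age_months)) is False

record_reply("child-001", 9, "no")
print(needs_follow_up("child-001", 9))  # True -> surface to clinic staff
```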

Getting very close to the users by knowing them and engaging them at a very personal level made most of the mums trust us. Since we worked with both government and private clinics in the community, we observed that some mothers would take these messages and show them to the nurses or doctors, saying, “The message said this, but my child has not been able to do so.” This also helped the doctors know exactly where to start, as it provided an essential data point on the development of the child.

We were able to achieve changes in user behavior in the following ways: 

The response rates improved; with better customization and personalization, most moms started responding to our messages. 

The general health of the children in our program improved compared to those not in the program, according to the maternal clinics we worked with.

Constant feedback enabled us to detect the children’s developmental challenges before they could get worse. The Yes or No responses based on the developmental stages of the children enabled us to identify developmental challenges earlier thus impacting lives directly. 

In this example, we used an emotional incentive as opposed to a monetary incentive. The ability to connect with the mums at an emotional level made our users believe that our messages, addressing one issue after another, were being sent by a human. It takes time to build this kind of complexity and automation into the process flow. However, the outcomes and impact created are totally worth it!

Another end-user group is farmers. Most farmers I’ve interacted with are interested in what they understand. Some of them keep doing what they have always done, never changing their methods since these are tried and tested. This makes it difficult to convert a smallholder farmer from buying product X to product Y, or to change their methods. Most farm input organizations create ‘demo plots’ within the communities. A demo plot forms an important part of the marketing department. Farmers within the community are organized to come for “demo parties” at different growth stages of the crop. The most important meeting is the harvest demo, where the farmers ‘witness’ the harvest. Why is the harvest important? Because it shows the yields, and yields are what matter most to the farmers.

Unlike the mothers in the previous example, the farmers would not be very responsive whenever there was no disaster like floods, droughts, or pests and diseases. They would only be responsive or contact the organization whenever they had a problem.

How then does an organization working with smallholder farmers create a space where they can be incentivized to take on the products and services, use them, and eventually get better outcomes? 

In our user experiments, we tested content with these farmers to try to figure out what would make them act.

One of the ways we tested this was by showing our users how our products and services benefited other users like them. Bringing the outcomes and impacts closer to the users made them a reality. One of the things an organization needed to create with these users was trust. Smallholder farmers are among the people who have been lied to, stolen from, and taken for a ride by different individuals, governments, and organizations. Because of these “historical injustices”, these user groups needed more than one training or one conversation to trust again. Thus, an organization working with them should take the time to create a relationship and engage. Creating a relationship built on trust is better than trying to “buy” them.

One of the ways we did this in the microinsurance sector was by holding pay-out events at the community level. When users saw that the organization kept its promises, giving seeds to farmers affected by drought so they could replant, registration for the insurance product improved in that specific area without much push. These farmers also talk among themselves, and a bad name would push them away from an organization. It is a very tricky balance.

I also observed that airtime incentives work very well with this user group. Again, the key is making a promise and being able to keep it.

At some point, we were running A/B tests on different incentive amounts in Zambia, Malawi, and Kenya, asking smallholder farmers to refer their friends to join our program and buy seed from a company we were partnering with. During the test, we offered USD 0.02 for farmers to refer five of their friends to us. We used the same amount across the three countries. We also tried different messaging content at the same time: we would send three message variants to three user groups with the same characteristics and the same incentive amounts.
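
For readers curious about the setup, the assignment could look something like the sketch below: the same USD 0.02 referral incentive, with farmers of similar characteristics split randomly across three message variants. The wording of the variants here is invented for illustration; only the design (equal groups, fixed incentive, varying content) reflects what we actually did.

```python
import random

INCENTIVE_USD = 0.02  # the same referral incentive across all variants and countries

# Hypothetical wording; in the real test only the message content varied.
MESSAGE_VARIANTS = {
    "A": "Refer 5 friends to our program and receive USD 0.02 in airtime.",
    "B": "Help your neighbours plant certified seed: refer 5 friends and get USD 0.02 airtime.",
    "C": "Farmers like you harvested more this season. Refer 5 friends and earn USD 0.02.",
}

def assign_variants(farmer_ids, seed=42):
    """Randomly split farmers with the same characteristics into equal variant groups."""
    rng = random.Random(seed)
    shuffled = list(farmer_ids)
    rng.shuffle(shuffled)
    variants = sorted(MESSAGE_VARIANTS)
    return {fid: variants[i % len(variants)] for i, fid in enumerate(shuffled)}

groups = assign_variants(f"farmer-{n}" for n in range(9))
print(groups)  # each farmer mapped to variant A, B, or C
```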

The experiment worked very well in Malawi and Kenya. In these countries, we used only English. We did the same in Zambia, but few farmers bothered to respond by dialing the USSD code to share the information requested.

In trying to figure out how to initiate action for our Zambian users, we changed the language from English to local languages. The challenge was that there are three main languages in Zambia: Tonga, Nyanja, and Bemba. Classifying users according to their regions and sending the messages in a language they would understand was a big challenge. Even if you could classify users and use one language per district, it was impossible to know the language used by a specific farmer. This was a big barrier in the Zambian market. It took a number of alterations in message content and nudge words to get these smallholder farmers to respond. Compared to Kenya and Malawi, the incentive amount was not directly correlated with the responses or the desired behavior changes.
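
The region-based language classification we attempted can be summarized in a few lines. The mapping below is illustrative only; as noted above, the region alone does not tell you which language an individual farmer actually uses, which is exactly why this remained a barrier.

```python
# Illustrative region-to-language mapping for Zambia; real assignments were
# far less clear-cut, since a region does not determine a farmer's language.
REGION_LANGUAGE = {
    "Southern": "Tonga",
    "Eastern": "Nyanja",
    "Northern": "Bemba",
}

def pick_language(region: str, default: str = "English") -> str:
    """Choose a message language from the farmer's region, falling back to English."""
    return REGION_LANGUAGE.get(region, default)

print(pick_language("Eastern"))  # Nyanja
print(pick_language("Lusaka"))   # English fallback when the region is unmapped
```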

One of the mistakes many organizations make is assuming that what works in one country will work in another. Without taking into account the variables and dynamics specific to each country, success in one country may turn into failure in another, even if the user group hasn’t changed.

In conclusion, getting the correct incentive for a user group drives both the desired actions and the user behavior changes. Organizations looking to work, or already working, with rural African communities should invest in research and user experience experiments to find the exact incentive that will push their users to act. I strongly advise investing in other forms of incentives, as in my examples: personalization, emotions, trust, hope, and so on, rather than monetary incentives for end users. Again, it really depends on what the organization is looking to achieve, and incentives such as rewards, airtime, certificates, and points can also work. Budgets should also be considered here, bearing in mind that the amounts may change at scale. Setting a specific higher amount may drive action initially, but it will skew the data and give false impressions. It may be workable for hundreds of users, yet too expensive to sustain at a larger scale.

It is one thing to invest in research and user experiments to get the right incentives, and it is another to implement them. In my next article, I will look into the implementation of incentives for end users, examining the pitfalls that should be avoided to make it a success.