Coming up with an incentive that is both attractive to users and within budget (and within technological constraints) requires time, research, and extensive user testing.
As stated in my previous articles, an incentive can be anything that encourages users to use a specific technology. This can be achieved by customizing and personalizing content to address the specific issues a user is facing, creating trust and building an emotional connection with them.
After settling on the incentives, the second step is implementation. Implementation comes with its own challenges, many of them stemming from the technology being used.
SMS, for example, may have a low delivery rate. We observed this in Malawi and Zambia, which achieved delivery rates of only about 49% and 50% respectively. Even when an SMS was delivered, it was difficult to know whether the user would take the desired action; in most cases, the messages prompted them to change their behavior. USSD, on the other hand, was a great platform for implementing the project and directing the incentives, because the organization could watch user interactions live and offer better options than one-way SMS.
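To illustrate why USSD supports this kind of live, two-way interaction, here is a minimal sketch of a session handler in Python. It assumes a generic gateway that POSTs the session state to a webhook and uses the "CON"/"END" reply convention some providers follow; the endpoint, field names, and menu text are all hypothetical, not our actual service.

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical webhook: the USSD gateway POSTs the text the user has
# entered so far and expects a reply that either continues ("CON") or
# ends ("END") the session. Every keypress reaches the server live,
# so it can be logged and acted on immediately, unlike one-way SMS.
@app.route("/ussd", methods=["POST"])
def ussd_callback():
    text = request.form.get("text", "")

    if text == "":
        # First screen of the session.
        return "CON Welcome! Reply 1 for planting tips, 2 to refer a friend"
    elif text == "1":
        return "END Tip: plant within 3 days of the first rains."
    elif text == "2":
        return "CON Enter your friend's phone number:"
    else:
        # Anything else is treated as a referred phone number.
        return "END Thank you! We will reach out to your friend."

if __name__ == "__main__":
    app.run(port=5000)
```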
Another common incentive among users in rural Africa is airtime. You would be surprised by the actions and changes that as little as 10 US cents can bring about. When we implemented airtime incentives in Kenya and Malawi, one thing was clear: the users knew exactly what was being promised and whether it was delivered. One example was when we needed users to send us the phone numbers of friends who were also smallholder farmers so that we could extend our services to them. The content of the messages was tested a number of times to make sure it resonated with the user base. In the countries where this was successful, we received feedback immediately after the messages went out: users acted and sent us the referral information requested in the messages.
One of the insights generated here was that the amount offered to the users did not really play a major part. The messages were tested with two variables: the content and the amount. Over time, we were able to narrow down to a winning message plus an average amount of airtime that was within the budget. If an organization does not have content that resonates with or connects to its users, setting high budgets and offering large amounts of money will not generate the desired results. The users will be there for the money, will not be willing to change their behavior, and will provide random information just to earn the reward; in short, they will game the system.
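As a rough sketch of how such a two-variable test might be run, the snippet below deterministically assigns each user to one (content, amount) cell, so the same user always sees the same variant across sends. The candidate messages, amounts, and the phone number are placeholders for illustration, not the actual values we used.

```python
import hashlib
import itertools

# Hypothetical test cells: every combination of message content and
# airtime amount, so the two variables can be evaluated independently.
CONTENTS = ["msg_a", "msg_b", "msg_c"]  # candidate message texts
AMOUNTS_KES = [5, 10, 20]               # candidate airtime amounts
CELLS = list(itertools.product(CONTENTS, AMOUNTS_KES))

def assign_cell(phone_number: str) -> tuple[str, int]:
    """Assign a user to one (content, amount) cell.

    Hashing the phone number keeps the assignment stable across
    sends, so results per cell can be compared fairly.
    """
    digest = hashlib.sha256(phone_number.encode()).hexdigest()
    return CELLS[int(digest, 16) % len(CELLS)]

# Example: look up which variant a given user receives.
content, amount = assign_cell("+254700000001")
print(f"send {content} promising {amount} KES")
```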
Most organizations use a semi-manual system to provide airtime to users; automating this yields even better results. There is nothing wrong with using semi-manual platforms like mteja to distribute airtime. However, this requires additional resources, as it is a multi-step process: checking the amount of incentives due per user, generating a list, and loading that list into a payment platform.
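The batch step might look something like the sketch below, which counts referrals per user and writes a payout file for manual upload to the payment platform. The file names, the "referrer" column, and the payout rate (the 10 shillings per 5 referrals mentioned later) are assumptions for illustration.

```python
import csv
from collections import Counter

RATE_KES_PER_5_REFERRALS = 10  # assumed payout rate

def build_payout_file(referrals_path: str, payout_path: str) -> None:
    # Count verified referrals per user.
    counts = Counter()
    with open(referrals_path, newline="") as f:
        for row in csv.DictReader(f):  # expects a "referrer" column
            counts[row["referrer"]] += 1

    # Write the list to be loaded into the payment platform by hand.
    with open(payout_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["phone_number", "airtime_kes"])
        for phone, n in counts.items():
            earned = (n // 5) * RATE_KES_PER_5_REFERRALS
            if earned > 0:
                writer.writerow([phone, earned])

if __name__ == "__main__":
    build_payout_file("referrals.csv", "payouts.csv")
```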
The semi-manual approach works perfectly for testing, but at scale the process needs to be automated so that a user can refer several new users and unlock airtime without waiting for a manual payment run.
There was a time when I was running user acceptance testing on the automated airtime top-up feature. The platform issued airtime to users immediately, based on their number of referrals. The system worked great: it paid out airtime each time one person referred 5 unique users. This came to 10 shillings per 5 referrals, about USD 0.02 per referral.
We had been using the semi-manual system to send out the same amounts of airtime, but it took 2-3 days to compile, process, and pay users. Since the number of referrals had dropped to fewer than 100 per day, we decided to leave the automated platform running after completing our user acceptance testing. We did not think the balance we had loaded on the platform could be used up, and certainly not overnight.
The next morning, we got the shock of our lives: USD 600 had run out! How did we know? The platform started emailing us that we had insufficient funds. We had received 30,000 new unique referrals, and it could have been even more if we had loaded more funds. Remember, this platform is based on USSD: a user enters a code, then enters the phone numbers of neighbors who are also farmers, and the system checks each entered number for uniqueness and accepts or rejects it accordingly.
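The core of that flow can be sketched as below. This is not our production code: the in-memory collections stand in for a database, and send_airtime is a placeholder for the airtime provider's API.

```python
seen_numbers: set[str] = set()        # every number ever referred
referral_counts: dict[str, int] = {}  # unique referrals per referrer

REWARD_EVERY = 5   # unique referrals needed per payout
REWARD_KES = 10    # airtime paid per 5 referrals

def send_airtime(phone: str, amount_kes: int) -> None:
    # Placeholder: in production this calls the airtime provider.
    print(f"top up {phone} with {amount_kes} KES")

def handle_referral(referrer: str, referred: str) -> str:
    # Reject any number that has been referred before.
    if referred in seen_numbers:
        return "END Sorry, that number has already been referred."
    seen_numbers.add(referred)
    referral_counts[referrer] = referral_counts.get(referrer, 0) + 1

    # Pay instantly on every 5th unique referral, no manual batch.
    if referral_counts[referrer] % REWARD_EVERY == 0:
        send_airtime(referrer, REWARD_KES)
        return "END Thank you! Airtime is on its way."
    return "CON Accepted. Enter another number:"
```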
I cannot confirm whether all of the numbers these users provided belonged to our target group, but every one of them was unique! This was a big win: driving action that generated more than 30 times the usual daily referrals was a tipping point in our understanding of user behavior.
We had not planned to burn through the balance on this test, but it yielded some key learnings about our user base. They were keen, they appreciated a system that rewarded them instantly, and they would engage longer if the incentives were right. The lesson was clear: if an organization wants to drive action and effect change among end-users, it must make the process transparent, simple, and fast.