How we measure success

How we define and measure impact on farmers’ productivity and incomes.

Impact is our North Star. In everything that we do, we seek to generate positive impact in the lives of farmers. Therefore it's critically important that we measure the impact we have on farmers’ productivity and incomes. This allows us to prioritize the highest impact products, phase out low-impact offerings, and constantly improve the impact of the products and services we offer. Below is a detailed look at our approach to measuring our impact.

Defining Impact

For every programming area (each country, each unique program), we attempt to measure the following:

Total Impact = (Number of farmers) × (Impact per farmer)

For every single program, we seek to understand how much Total Impact we generate for farmers. This enables us to allocate resources to programs with the highest impact, and to define how to improve that impact going forward (for more detail, see this Stanford Social Innovation Review paper). For most of our programs, we define impact as $USD of new profit generated for farmers, as this is a highly comparable metric and is centrally important to farmers, who take out credit to pay for our services. However, we are increasingly looking at quality-of-life metrics such as hunger and school attainment.
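The formula above can be sketched as a small calculation. The figures below are purely hypothetical and chosen for illustration; they are not actual program numbers.

```python
# Hypothetical illustration of: Total Impact = (Number of farmers) x (Impact per farmer)
# The numbers are made up for this example.
def total_impact(num_farmers: int, impact_per_farmer_usd: float) -> float:
    """Return total program impact in USD of new farm profit."""
    return num_farmers * impact_per_farmer_usd

# e.g. a program serving 50,000 farmers at $100 of new profit each:
print(total_impact(50_000, 100.0))  # 5000000.0
```

Because both inputs are measured per program, the same calculation can be repeated for each country and each unique program, making results directly comparable when allocating resources.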

The Four Reasons We Measure

We measure impact for four reasons:

  1. Prove. We have an obligation to farmers and to our donors to prove our impact. We also use impact data to make resource allocation decisions.
  2. Learn. We’re constantly learning and evaluating so we can improve each individual programming unit.
  3. Improve. Impact data helps us develop new life-improving products.
  4. Maintain. We use our impact data to maintain operational consistency across all geographies.

1) Prove – to ourselves and to our donors

Some of our measurement is intended to prove our basic impact on farmers’ lives, mainly because we have an obligation to know that the loans they take out are providing healthy returns. In addition, donors are increasingly savvy and demand solid data upon which they can make informed decisions about how to allocate their dollars. Lastly, understanding the overall impact of each of our individual programs helps us to allocate resources between different programs.

For a detailed look at our methodology, please see our impact studies. The basic principles are:

  • Data quality: We have invested heavily in data quality. For example, every year we physically weigh the harvests of thousands of farmers. Around 10 percent of farmers receive “back checks” from a second survey agent to ensure data integrity. All data collected on paper is data-entered twice to eliminate errors. We also have an analysis and methods expert on staff who focuses on ensuring rigor and quality in our measurement.
  • Credible comparison group selection: We are increasingly addressing “selection bias” through several methods: comparing “treated” clients to newly enrolled clients (highly similar people); comparing treated clients to “likely to enrolls” (who are checked for comparability using high-quality techniques); difference-in-differences comparisons of newly treated clients year-over-year, controlling for time trends; and, when possible, randomized controlled trials (RCTs).
  • Coverage: We shoot for maximum sampling coverage across all of our operational units so that we can learn about each one. It would be possible to conduct only several hundred harvest surveys as an organization and still generate quality impact data. But we are interested in results for each country, and each region within each country. We apply rigorous methods across as much coverage area as possible, which entails sampling thousands of farmers.
  • Plus rigor: We also occasionally conduct more rigorous and expensive evaluations of small areas, in order to fine-tune and verify the accuracy of our wider-coverage methods.

Farmers attend a farmer training in Burundi.
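The difference-in-differences approach mentioned above nets out the background time trend by subtracting the comparison group's change from the treated group's change. Here is a minimal sketch with invented numbers; the figures are illustrative only, not actual study data.

```python
# Sketch of a difference-in-differences estimate on hypothetical data.
# All numbers are invented for illustration.
def diff_in_diff(treated_before: float, treated_after: float,
                 control_before: float, control_after: float) -> float:
    """Impact estimate = (treated change) - (control change).

    Subtracting the control group's change removes the shared time trend
    (e.g. a good rainfall year) from the treatment effect.
    """
    return (treated_after - treated_before) - (control_after - control_before)

# Mean harvest value (USD) per farmer, before vs. after one season:
estimate = diff_in_diff(treated_before=220.0, treated_after=310.0,
                        control_before=215.0, control_after=255.0)
print(estimate)  # 50.0
```

In this toy example, treated farmers gained $90 while comparison farmers gained $40 from the general trend, so the method attributes $50 to the program.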

2) Learn – learning so we can improve each individual programming unit

When we “prove” our impact, we also gather a wealth of information that can be used to improve each of our programming units. This is best illustrated through the following example.

Burundi case study: Improving through M&E
The first step in improving is to discover the problem. In our first season working in Burundi, our M&E department uncovered that farmer compliance with our planting methods was very low. Burundi fields were extremely “messy,” and farmers were largely ignoring our farmer training. This led to poor harvests, and our program produced near-zero improvement in farm profit.

Since then, we have placed heavy focus on planting compliance. We required every farmer in our program to plant at least one “model garden” (100 square meters) using the ideal planting methods. Over time, farmers saw strong harvest results and began planting more and more of their land using “model” techniques, eventually covering most of their fields. Throughout, our M&E helped to guide us. It showed us how much better the “model gardens” were performing, and how much coverage we were attaining of “model” planting techniques in each operating area. Today, Burundi has dramatically improved, providing significant increases in farm profit per client.

3) Improve – developing new life-improving products and services

Separate from our regular program M&E, we also have an entire product innovation department that constantly experiments with new products. We complete at least 40 studies every year, each with a minimum of 100 farmers and some with thousands of farmers.

Solar light case study: Improving through M&E

Currently, farmers use small, dim, smoky kerosene lamps to light their homes. These lamps are costly to operate each day and produce poor-quality light. In 2011, One Acre Fund began a series of studies (including two RCTs) to evaluate the household impact of selling solar lamps to farmers. In each study, farmers signed up to buy our lamps, and we randomly selected a control group of farmers whose lamp delivery was delayed. We then tracked daily household energy expenditures for both the test group and the comparison group. There were striking differences and clear energy cost savings for the solar light owners, as well as a significant increase in child study hours due to higher-quality light.
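The delayed-treatment design described above can be sketched as follows. The data here is simulated and the spending figures are invented; this only illustrates the structure of the comparison, not actual study results.

```python
import random
import statistics

# Sketch of a delayed-treatment comparison with simulated data.
# All spending figures are made up for illustration.
random.seed(0)
signups = list(range(200))        # farmers who signed up to buy a lamp
random.shuffle(signups)
treatment = set(signups[:100])    # receive the lamp now
control = set(signups[100:])      # receive it later (comparison group)

# Simulated daily household energy spend (USD): lamp owners buy less kerosene.
daily_spend = {f: (0.05 if f in treatment else 0.20) + random.uniform(0, 0.02)
               for f in signups}

treat_mean = statistics.mean(daily_spend[f] for f in treatment)
ctrl_mean = statistics.mean(daily_spend[f] for f in control)
print(round(ctrl_mean - treat_mean, 2))  # estimated daily savings, roughly 0.15
```

Because the control group also signed up to buy a lamp, the two groups are highly similar, and the difference in means can credibly be attributed to owning the light.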

One Acre Fund is now one of the largest retailers of solar lamps in Africa.

Our product studies include a wide variety of experimental methods, but in most cases we randomly assign the product to farmers to rigorously assess the true impact. We have kept daily logs of energy expenditure when measuring the impact of solar lamps, run A/B tests on thousands of individual farms comparing one planting method to another, and used participatory user testing and human-centered design at the early stages of product formation, among other methods.

A farmer holding three solar lights

4) Maintain – operational consistency across geographies

One Acre Fund has full-time field staff in over 4,000 rural locations, spread across nine East African countries. Making sure that we implement a high-quality program in every single geography we serve is a major challenge. To ensure that our program, which is tailored to suit local contexts, is implemented in a consistent way, we collect weekly key performance indicators (KPIs) that are scrutinized by team managers. We use these data (e.g. percent repayment, percent attending training) to see which areas are struggling so that we can take corrective action, and to identify areas working particularly well for program learning. KPIs serve as a quality control mechanism for our program.
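The weekly KPI check described above can be sketched as a simple threshold scan. The district names, thresholds, and figures below are all hypothetical, chosen only to show the mechanism of flagging struggling areas.

```python
# Illustrative weekly KPI check; thresholds and data are hypothetical.
KPI_THRESHOLDS = {"pct_repayment": 95.0, "pct_attending_training": 80.0}

weekly_kpis = {
    "District A": {"pct_repayment": 98.2, "pct_attending_training": 86.0},
    "District B": {"pct_repayment": 91.5, "pct_attending_training": 83.0},
    "District C": {"pct_repayment": 97.0, "pct_attending_training": 74.5},
}

def flag_struggling(kpis: dict, thresholds: dict) -> dict:
    """Return {area: [KPIs below threshold]} so managers can take corrective action."""
    flags = {}
    for area, values in kpis.items():
        low = [name for name, floor in thresholds.items() if values[name] < floor]
        if low:
            flags[area] = low
    return flags

print(flag_struggling(weekly_kpis, KPI_THRESHOLDS))
# {'District B': ['pct_repayment'], 'District C': ['pct_attending_training']}
```

The same scan works in reverse for program learning: areas that consistently clear every threshold can be studied to identify practices worth spreading.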

Not only do we collect reams of data from our program staff, we also collect crucial information directly from our farming clients. We directly observe thousands of farmers planting their crops to understand the degree to which our training is actually changing behavior. We also run customer service phone lines, giving farmers a direct line to share their complaints, compliments, and ideas with us.