Toolkit

Power your mission: How to measure success

Measuring technology implementation is not just about whether a product functions. It’s also important to measure whether you built the right thing, and whether that thing can achieve the desired outcomes.


These toolkits were developed with the WIC Technology Resource Group, a resource and learning partnership between the National WIC Association and Nava, with contributions from Code for America. Many thanks to Mandy Brown, Genevieve Gaudet, and Sasha Reid for their invaluable contributions.

Technology can be used to achieve meaningful program outcomes when you focus on what people need, approach projects one manageable part at a time, and measure outcomes.

This toolkit is the last of four that will walk you through human-centered and agile processes to help you achieve meaningful program outcomes with every new technology project. We recommend using them in this order: Setting goals, Building technology, Engaging users, and Measuring success.

This toolkit can help you:

  • Use technology to achieve meaningful program outcomes
  • Plan and implement measures of success throughout a project

Human-centered design is:

a methodology for designing and building unified, clear, and respectful products and services that incorporates feedback from the people you are designing for throughout the design process. Its core principles are to build an explicit understanding of users, involve them throughout the development process, test and iterate frequently, and address the whole user experience of a product or service.

Agile development is:

a collection of project management and software development practices that shorten feedback loops and rely on constant feedback, iteration, and collaboration to improve products and help teams quickly, safely, and cost-effectively build software that meets people’s needs.

Measuring success

Once you’ve launched a new technology product, you need to know whether it’s successful. But measuring technology implementation is not just about whether your product functions. It’s also important to measure whether you built the right thing and whether that thing can achieve the desired outcomes. As outlined in Setting goals, technology product success should be linked to program success, and your measurements should likewise be linked. To keep the two linked, there are a handful of measurement best practices you can implement during product development and implementation cycles.

Using a logic model

One visual representation of the connection between technology products and program outcomes is a logic model. Each government program makes logical assumptions about the inputs, activities, and outputs that will achieve desired outcomes. If you start with the right inputs, you can perform the right activities. If you perform the activities, you can produce the outputs. If you produce the outputs, you can achieve the desired short-term outcomes. If you hit short-term outcome targets, you can achieve the desired intermediate outcomes, and so on. The same assumptions should be made about a technology project.

Figure: A logic model flow chart showing how inputs lead to activities, outputs, short-term outcomes, intermediate outcomes, and long-term outcomes.

Technology product success is linked to program success: the product is a key output on the path toward desired short-, intermediate-, and long-term outcomes. It should therefore be developed, implemented, and measured in a way that improves the chances of achieving those program outcomes.
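
To make this chain of assumptions concrete, here is a minimal sketch of a logic model written as a simple Python data structure. Everything in it is illustrative: the stage names and example entries are hypothetical, not drawn from any particular WIC system.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Stages of a logic model for one technology project."""
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    short_term_outcomes: list[str] = field(default_factory=list)
    intermediate_outcomes: list[str] = field(default_factory=list)
    long_term_outcomes: list[str] = field(default_factory=list)

# Hypothetical example: an online appointment-scheduling product.
model = LogicModel(
    inputs=["funding", "staff time", "a development team"],
    activities=["user research", "iterative development", "usability testing"],
    outputs=["a launched online scheduling tool"],
    short_term_outcomes=["staff spend less time scheduling", "fewer no-shows"],
    intermediate_outcomes=["higher recertification rates"],
    long_term_outcomes=["improved participant health outcomes"],
)

# Reading the stages in order mirrors the if-then assumptions above:
# each stage is only achievable if the previous one holds.
for stage, items in vars(model).items():
    print(f"{stage}: {', '.join(items)}")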

Asking the right questions

While measuring technology implementation is not just about whether your product functions, that’s still an important thing to measure, alongside whether you built the right thing and whether that thing is achieving the desired outcomes.

Here are some broad questions you can ask to measure the success of your technology project as it relates to achieving program outcomes. Ask these questions not just at the final stage of implementation but throughout development and testing, so that you have the opportunity to correct course and make refinements as needed:

  • Function: Does the product work?

  • Experience: Is the product easy to use for your target users (e.g., participants, staff)?

  • Problem-solution fit:

    • Do users engage with the product as you expect them to?

    • Does the product achieve the outcomes you intended (e.g., reducing staff burden, reducing barriers to sharing documents)?

Defining indicators

To measure your answers to these questions, you can look at markers of progress and success called indicators. Here are a few examples of indicators for each question, followed by a short sketch of how a couple of them might be computed:

  • Function: Does the product work?

    • Indicator examples:

      • Number and type of error messages users receive

      • Unexpected service experience flows (e.g., users moving through the service in ways you didn’t design for)

  • Experience: Is the product easy to use for your target users (e.g., participants, staff)?

    • Indicator examples:

      • Themes in qualitative feedback (interviews, free-form responses in a survey)

      • Page analytics like average time to complete an application or make an appointment

  • Problem-solution fit: Do users engage with the product as you expect them to?

    • Indicator examples:

      • Qualitative feedback themes

      • Page analytics like the percentage of people who start an application and stop at document upload

  • Problem-solution fit: Does the product achieve the short-term operations and experience outcomes you intended?

    • Indicator examples:

      • Staff spend less time scheduling appointments

      • Call center volume decreases

      • Percentage of no-shows decreases

  • Problem-solution fit: Does the product achieve the intermediate-term and long-term goals you intended?

    • Indicator examples:

      • Enrollment

      • Recertification

      • Health outcomes
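
As a concrete illustration of the page-analytics indicators above, here is a minimal sketch in Python that computes two of them from a small, hypothetical event log: the average time to complete an application, and the percentage of people who start an application but stop at document upload. The event names and log format are assumptions made for this example, not a real analytics export.

from datetime import datetime

# Hypothetical event log of (user_id, event_name, timestamp) tuples.
events = [
    ("u1", "application_started", datetime(2021, 11, 1, 9, 0)),
    ("u1", "document_upload_reached", datetime(2021, 11, 1, 9, 12)),
    ("u1", "application_completed", datetime(2021, 11, 1, 9, 20)),
    ("u2", "application_started", datetime(2021, 11, 1, 10, 0)),
    ("u2", "document_upload_reached", datetime(2021, 11, 1, 10, 15)),
    # u2 reached document upload but never completed the application.
]

# Group each user's events by name for easy lookup.
by_user = {}
for user, name, timestamp in events:
    by_user.setdefault(user, {})[name] = timestamp

# Indicator: average minutes to complete an application (completers only).
durations = [
    (e["application_completed"] - e["application_started"]).total_seconds() / 60
    for e in by_user.values()
    if "application_completed" in e
]
print(f"Average minutes to complete: {sum(durations) / len(durations):.1f}")

# Indicator: percentage of starters who stopped at document upload
# (reached it, but never completed the application).
starters = [e for e in by_user.values() if "application_started" in e]
stopped = [
    e for e in starters
    if "document_upload_reached" in e and "application_completed" not in e
]
print(f"Stopped at document upload: {100 * len(stopped) / len(starters):.0f}%")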

Determining intermediate- and long-term success often requires a more sophisticated measurement plan and study design to attribute these kinds of outcomes to the technology project alone. But there’s still value in understanding how these indicators relate to your technology project and in tracking them to see whether the project might be contributing to them as part of your project’s larger narrative.

Try it: Define indicators for your project

Looking back at your technology product vision, the user needs you identified, and the problem you wanted to solve (learn more in Setting goals), what indicators will help you know:

  • Function: Does the product work?

  • Experience: Is the product easy to use?

  • Problem-solution fit: Do users engage with the product as you expect them to?

  • Problem-solution fit: Does the product achieve the outcomes you intended?

Once you’ve defined your indicators, you’ll be better prepared to track the metrics that show you whether or not your technology project is successful — and ideally, your program outcomes as well.
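
If it helps to capture your answers in one place, one lightweight, hypothetical way to record an indicator plan is sketched below in Python. The questions come from the list above; the example indicators are placeholders to replace with your own.

# A hypothetical indicator plan for one technology project.
indicator_plan = {
    "Function: Does the product work?": [
        "Number and type of error messages users receive",
    ],
    "Experience: Is the product easy to use?": [
        "Themes in qualitative interview feedback",
        "Average time to complete an application",
    ],
    "Problem-solution fit: Do users engage as you expect?": [
        "Percentage of starters who stop at document upload",
    ],
    "Problem-solution fit: Does the product achieve intended outcomes?": [
        "Call center volume",
        "Percentage of appointment no-shows",
    ],
}

for question, indicators in indicator_plan.items():
    print(question)
    for indicator in indicators:
        print(f"  - {indicator}")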

About the WIC Technology Resource Group

The WIC Technology Resource Group is a resource and learning partnership between the National WIC Association and Nava, a government technology consultancy and public benefit corporation, with contributions from Code for America, a nonprofit focused on digital tools for government.

Power your mission with human-centered, agile technology is a series of four toolkits that will walk you through human-centered and agile processes to help you achieve meaningful program outcomes with every new technology project. It includes: Setting goals, Building technology, Engaging users, and Measuring success.

For more resources, visit the WIC Hub.

Written by


Martelle Esposito

Partnerships and Evaluation Lead

Martelle Esposito is a partnerships and evaluation lead at Nava. Before joining Nava, Martelle managed a WIC services innovation lab at Johns Hopkins University and worked on public policy development and program implementation at non-profits.

Eleanor Davis

Associate Program Director of Insight & Impact, Code for America

Eleanor Davis is the Associate Program Director of Insight & Impact at Code for America. Before that, Eleanor worked as a program manager in the non-profit space.

Published November 5, 2021

