How to Use Analytics Data and Smart Experiments to Optimize Agency Services

Any government agency that has data-driven goals in place, as part of an analytics framework, is off to a good start. (You can learn more about setting those goals in this Nextgov article.)

Those searching for more ways to improve citizens' digital experiences should look toward setting up experiments and a testing culture to optimize results over time.

The Right Goals and Metrics Are Still Crucial

Trying to test and optimize without solid goals and metrics will be, well, not optimal. Define a high-level goal or purpose for a product or service before you collect any data.

A good example comes from the Government of the Netherlands. A senior researcher from the agency responsible noted that, “We publish information, and in our line of work the conversion is when someone reads the information and finds it helpful – when it answers their questions.” This purpose is in line with citizens' expectations. For instance, American citizens visit government websites for information (49%), documents (41%) and services (33%), according to data from Pew Research.

It's then easy to take such a purpose and create data-driven goals around a set of metrics. In this case, when we think about the time-efficiency and user-friendliness of a digital product, we'd suggest looking at the following categories and metrics (a short measurement sketch follows the list):

  • Navigation: referral pages, page click-through rates and bounce rates
  • Content: time spent, scroll depth and website search rate
  • Services: form conversion rates and funnel completion rates
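
To make a couple of these metrics concrete, here is a minimal sketch in Python. The event records and page paths are hypothetical, and no particular analytics platform's API is assumed; most tools report these numbers out of the box, but computing them by hand shows what they mean.

```python
from collections import defaultdict

# Hypothetical pageview events: (session_id, page), in visit order.
events = [
    ("s1", "/home"), ("s1", "/license/apply"),
    ("s2", "/home"),  # single-page session -> counts as a bounce
    ("s3", "/home"), ("s3", "/search"), ("s3", "/license/apply"),
]

pages_per_session = defaultdict(list)
for session_id, page in events:
    pages_per_session[session_id].append(page)

sessions = len(pages_per_session)

# Bounce rate: share of sessions that viewed exactly one page.
bounces = sum(1 for pages in pages_per_session.values() if len(pages) == 1)
print(f"Bounce rate: {bounces / sessions:.0%}")

# Site-search rate: share of sessions that used internal search.
searched = sum(1 for pages in pages_per_session.values() if "/search" in pages)
print(f"Site-search rate: {searched / sessions:.0%}")
```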

Testing Around User Paths and Drop-offs

Once you've set up the whole analytics framework, you'll have a good idea of where your main problems might be. It's tempting to start optimizing right away, but it's an even better idea to make sure your optimization plan is in line with your team's culture and expectations. The importance of this will become clear when you start building experiments. Even if you believe you can improve everything about the digital experience, it's important to establish a testing culture first.

To do this, discuss a test case for each critical touchpoint users have with your agency or its services. The cases should be simple and in line with the purpose and goals you set at the beginning.

It's also critical to compare the digital path you expect users to take with the path they actually take. Those paths may also involve offline transactions that are a natural part of the process. Either way, you need to narrow down where users get stuck along the way. The best approach is to measure conversion between steps: the drop-offs you find are the best places to test ideas for improving conversion rates.
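
For instance, a step-to-step conversion check over a funnel might look like the following minimal sketch; the step names and counts are invented for illustration.

```python
# Hypothetical funnel: how many users reached each step of a service.
funnel = [
    ("Landing page",      10_000),
    ("Form started",       4_200),
    ("Identity verified",  2_900),
    ("Form submitted",     2_750),
]

# Conversion between consecutive steps; the largest drop-off is the
# most promising place to run an experiment.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rate = n / prev_n
    print(f"{prev_name} -> {name}: {rate:.0%} converted, {1 - rate:.0%} dropped off")
```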

Let's take the example of a government agency that wants to increase the number of people who register for a driver's license online. Any friction in that flow should be quantifiable with the analytics data collected along the way, so the probable causes of drop-offs in registration are already on record. For example, we might see that the largest group of users abandons the process when clicking an outlink during registration.

Each identified drop-off, along with its possible causes, should come with a clear goal for the team: minimize it. In the example above, we might move the outlink to the end of the process or have it open in a background tab. You would then track the change in drop-off rates and overall registration rates against the benchmark of results from before the change.
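
One simple way to decide whether the new numbers genuinely beat the benchmark, rather than reflecting random variation, is a two-proportion z-test. The sketch below uses invented counts; in practice you would pull them from your analytics tool.

```python
from math import sqrt

# Hypothetical counts: users completing registration out of those who
# reached the step containing the outlink, before and after the change.
before_done, before_total = 2_750, 4_200
after_done, after_total = 3_050, 4_100

p_before = before_done / before_total
p_after = after_done / after_total

# Two-proportion z-test with a pooled rate; |z| > 1.96 corresponds to
# roughly 95% confidence that the difference is not random noise.
pooled = (before_done + after_done) / (before_total + after_total)
z = (p_after - p_before) / sqrt(
    pooled * (1 - pooled) * (1 / before_total + 1 / after_total)
)

print(f"Completion rate: {p_before:.1%} -> {p_after:.1%} (z = {z:.2f})")
```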

Prioritizing Tests

It will be tempting to generate ideas for new tests by comparing your results with those of other cities, countries and agencies, or by adopting tests used by other teams in the agency. These can be good additions as long as you prioritize the metrics you are trying to improve. To start, gather the following information for each test:

  • The time and resources you have to create the experiment.
  • The alignment with your team's goal.
  • The traffic to the particular service, and whether it's large enough relative to the total number of users to yield statistically meaningful results.

Create a scale of 1 through 10 for each of the three criteria and score every new test idea against it. Use the overall score to prioritize tests.
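
As a minimal sketch of that scoring, a simple sum is enough to produce a ranked backlog. The test ideas and scores below are invented, and one design choice is worth flagging: a high score on the first criterion means the test is expensive, so we invert it before summing.

```python
# Hypothetical test ideas scored 1-10 on the three criteria above.
ideas = [
    {"name": "Move outlink to end of form", "effort": 2, "goal_fit": 9, "traffic": 8},
    {"name": "Rewrite landing-page copy",   "effort": 5, "goal_fit": 6, "traffic": 9},
    {"name": "Add progress indicator",      "effort": 7, "goal_fit": 7, "traffic": 6},
]

for idea in ideas:
    # Invert effort (cheap tests score high), then sum the three criteria.
    idea["score"] = (10 - idea["effort"]) + idea["goal_fit"] + idea["traffic"]

# Highest score first: the order in which to run the tests.
for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{idea["score"]:>2}  {idea["name"]}')
```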

Keep in mind that optimization is a never-ending process. And that's a good thing. Each optimization you make based on testing could be further improved in the future. One of the worst things you can do to your testing setup is to tear the whole thing down and start from scratch (some use the euphemism “redesign”). This is rarely needed. The historical data from old tests is most often a solid source of information for future tests.

Piotr Korzeniowski is chief operating officer at Piwik PRO and Ben Rometsch is chief executive officer at Flagsmith.