Experiments at the Activation stage focus on the "aha moment": the point at which the user receives the benefit of the product or understands its future benefits. This moment can be compared to trying on shirts in a store.
The "Aha-moment" is crucial to your business - it lets you find the point after which people realize their unfair advantage that your product gives them and they stick to you. At the Activation stage, you also work with user onboarding, which is a vital part of the customer journey. People just usually can't understand the value if you don't walk them through every important feature of your product or don't show them any use cases. Without proper onboarding, many users might not be able to even get to the "aha-moment", which will strongly affect the Activation stage, as well as others down the funnel.
Tom knows that one path through the site leads to payment more often than any other. The purpose of experimenting with Activation is therefore twofold: on the one hand, to steer all users onto this path, and on the other, to optimize it constantly.
To build a funnel, Tom only needs to type the funnel's events and separate the stages with the word 'then'.
This way, for example, a three-step funnel ChooseCity -> TripCard -> Purchase has a Conversion Rate of 4.2%, while the path ChooseCity -> TripCard -> ReadReviews -> Purchase has a Conversion Rate of 6.9%.
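The logic behind such a funnel report can be sketched in a few lines. Below is a minimal, illustrative example (the user IDs and event logs are invented) that counts how many users fired every funnel step in order:

```python
# A sketch of computing a funnel's conversion rate from per-user event
# sequences. Event names mirror the article's example; the data is made up.

def funnel_conversion(user_events, funnel_steps):
    """Share of users who fired every funnel step in the given order."""
    converted = 0
    for events in user_events.values():
        pos = 0  # index of the next funnel step we are waiting for
        for e in events:
            if pos < len(funnel_steps) and e == funnel_steps[pos]:
                pos += 1
        if pos == len(funnel_steps):
            converted += 1
    return converted / len(user_events)

user_events = {
    "u1": ["ChooseCity", "TripCard", "Purchase"],
    "u2": ["ChooseCity", "TripCard"],
    "u3": ["ChooseCity", "TripCard", "ReadReviews", "Purchase"],
    "u4": ["ChooseCity"],
}

print(funnel_conversion(user_events, ["ChooseCity", "TripCard", "Purchase"]))  # 0.5
```

Note that a user who takes the longer ReadReviews path still counts toward the shorter funnel, since the shorter funnel's steps all occur in order.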
Tom decided to test the hypothesis that showing customer reviews in a prominent place on the trip card could increase CR. To make the experiment cheaper to run, Tom's team limited it to a single city, which greatly shortened the review-selection process.
The city was sent to Google Analytics along with the ChooseCity event in the EventLabel parameter. The funnel for Paris then looked like this:
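As a rough sketch of how the city could travel in the EventLabel parameter, here is a hit built for the legacy Universal Analytics Measurement Protocol. The tracking ID, client ID, and event category are placeholders, and the payload is only constructed, not sent:

```python
from urllib.parse import urlencode

# Builds a Universal Analytics Measurement Protocol event hit where the
# chosen city rides in the "el" (EventLabel) parameter. Tracking ID,
# client ID, and category are illustrative placeholders.

def choose_city_hit(tracking_id, client_id, city):
    payload = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # property ID, e.g. "UA-XXXXX-Y"
        "cid": client_id,    # anonymous client ID
        "t": "event",        # hit type
        "ec": "Funnel",      # event category (assumption)
        "ea": "ChooseCity",  # event action
        "el": city,          # event label carries the city
    }
    return urlencode(payload)

print(choose_city_hit("UA-XXXXX-Y", "555", "Paris"))
```

With the city in the label, Tom's analyst can filter the whole funnel down to "Paris" sessions only.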
After the test completed, Tom saw that the visible reviews had increased the conversion rate to 4.8%, which allowed him to extend the experiment to all cities.
Choosing a city for the trip was a big barrier for users and the reason many of them dropped off. An idea had long been sitting in the backlog: it would be nice to choose the city automatically. However, Tom's team remembered that user behavior differed between the desktop and mobile versions. On desktop, people plan the trip in advance, while mobile purchases were often spontaneous, because people want to start using the tour here and now. So the team decided to test the automatic city suggestion hypothesis on mobile traffic only.
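A platform-restricted experiment like this boils down to assigning variants only within the targeted segment. A minimal sketch, assuming a stable hash-based 50/50 split (the function and user IDs are hypothetical):

```python
import hashlib

# Rolls the auto-city hypothesis out to mobile traffic only: mobile users
# are split 50/50 into control and test by a stable hash of their ID,
# while desktop users always see the unchanged (control) experience.

def variant(user_id, platform):
    if platform != "mobile":
        return "control"
    # md5 gives a stable bucket, so a user keeps the same variant on revisits
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "auto_city" if bucket < 50 else "control"

print(variant("user-42", "desktop"))  # control
print(variant("user-42", "mobile"))
```

Hashing the user ID (rather than randomizing per session) keeps each user's experience consistent for the length of the experiment.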
After the experiment succeeded, the team decided to extend city autodetection to the whole website and test the hypothesis on desktop traffic as well. It failed, by the way.
Okay, but what if you have no clear conversion action, such as a purchase? For example, it may be important for your product simply to be used after the user completes your onboarding. In that case, you should look not at how many people reached a conversion action, but at how many users came back the next day or within a week.
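"Came back the next day" is simple to express in code. Here is a minimal sketch of next-day retention, with invented users and dates:

```python
from datetime import date

# Next-day retention: of the users who finished onboarding on a given day,
# what share came back on the following day? The data below is illustrative.

def next_day_retention(onboarded, visits):
    """onboarded: {user: onboarding date}; visits: {user: set of visit dates}."""
    returned = sum(
        1 for user, day in onboarded.items()
        if date.fromordinal(day.toordinal() + 1) in visits.get(user, set())
    )
    return returned / len(onboarded)

onboarded = {"u1": date(2023, 5, 1), "u2": date(2023, 5, 1)}
visits = {"u1": {date(2023, 5, 2)}, "u2": {date(2023, 5, 5)}}
print(next_day_retention(onboarded, visits))  # 0.5
```

The same function works for week-one retention if you widen the date check from one day to a seven-day window.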
After the purchase of a tour, Tom's team decided to give users a quest-like feature: the crew placed photos of the trip's most important spots in users' accounts. Inside the account, a few slides explained the meaning of the new feature, after which the user got to the feature itself.
To test the effectiveness of these slides, Tom’s analyst built three funnels.
The first consisted of only two steps: from the first slide to opening the quest window. The second contained three stages: the first slide, the second slide, and the quest. The third contained all three slides plus opening the quest window. Next-day retention differed significantly between the first and second funnels, but not at all between the second and third. From this we can conclude that the third onboarding slide isn't valuable enough. Its value should be tested by comparing retention with new content and without it.
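The comparison the analyst made can be mimicked with a few lines of arithmetic. The cohort sizes below are made up purely to mirror the article's finding (slide 2 adds retention, slide 3 does not):

```python
# Next-day retention per onboarding-depth cohort. Counts are illustrative,
# invented to reproduce the shape of the result described in the article.

def retention(returned, reached):
    return returned / reached

cohorts = {
    "slide1 -> quest": retention(120, 1000),
    "slide1-2 -> quest": retention(150, 900),
    "slide1-3 -> quest": retention(148, 880),
}
for name, r in cohorts.items():
    print(f"{name}: {r:.1%}")
```

When the last two numbers sit this close together, the extra slide is not pulling its weight, which is exactly the conclusion Tom's team drew.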
To display this funnel, Tom entered the following parameters:
Importantly, here we use a Retention metric to assess an Activation hypothesis. However, what matters is not which metric we use to measure the experiment, but at which stage we make the changes. In the last example, we made them at the Activation stage of a new feature.