Evaluating and Implementing a User Engagement Tool
By Daniel Ramalhosa, Lead Solutions Engineer @ Resilia
You have a great product and a vision for how it can provide maximum value, but how do you translate that to your users? When a user first lands on your page, how do you know whether they will understand what you do or get lost in it all? That is where the need for a user engagement tool came in. With the right tool, we can show users how impactful our product is, guide them through what it does, and let them show us what resonates with them and which features are falling flat. This article covers the criteria we defined, how we evaluated each contender, how we made the decision, and what our first use case is. Before we can dive into the tools, we need to know what is most important to us.
Criteria
It’s easy to say everything is equally important and the tool has to do everything perfectly, but in reality it is never that straightforward. First we need to know who the primary stakeholders are and what we are trying to solve. In our case, the stakeholders are our Product Managers (PMs) and Product Marketing Manager (PMM), which means we want them to be able to design and implement experiences for our users without needing assistance from other teams. As for what we are trying to solve, we want to learn what brought our users here, guide them through our platform, and get feedback on what is working well and what needs improvement. With that in mind, our criteria became the following:
Internal Team Needs
It is important that we can release experiences, such as a tour of a new feature or a follow-up survey, without friction. Having to go through Engineering to implement an experience and then work with Data to make sure we are tracking the relevant metrics can make releasing a single experience take up to a week. We need to be able to iterate fast, with as few roadblocks as possible.
First, we need a single space where all the experiences can be built and deployed. Our PMs and PMM should be able to log in, build out a new flow, test it on our staging environment, deploy it to our production environment (without any code changes on the platform), and see how users are interacting with it.
Second, it needs to learn who our users are by sourcing activity data from our platform. New users need to be created in the tool automatically, and it should receive events in near real time as users engage with features on the platform.
Third, it needs to be able to export data. Information collected in the tool will be important to other systems, like those driving our marketing campaigns.
We currently use Segment to send platform activity out, so an integration with it is a hard requirement. Not only would this populate our user table and give us the activity users are doing (second requirement), we would also be able to install and deploy experiences with zero code (first requirement) and send data back out through Segment (third requirement).
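To make this concrete, here is a minimal sketch of the kind of Segment calls our platform already makes, assuming analytics.js 2.0 (`@segment/analytics-next`); the user ID, traits, and event names are illustrative rather than our actual schema. Once the engagement tool is enabled as a Segment destination, these same calls are what create the user in the tool and stream their activity to it, with no extra code on our side.

```typescript
import { AnalyticsBrowser } from '@segment/analytics-next';

// Load the Segment client once for the whole app.
const analytics = AnalyticsBrowser.load({ writeKey: 'YOUR_SEGMENT_WRITE_KEY' });

// Identify the user at login so the engagement tool can auto-create and
// update their profile (traits shown here are illustrative).
analytics.identify('user_123', {
  email: 'jane@example.org',
  organization: 'Example Nonprofit',
  plan: 'standard',
});

// Track feature activity as it happens so the tool receives it in near
// real time and experiences can be targeted off of it.
analytics.track('Story Created', {
  feature: 'Storybuilder',
});
```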
User-Facing Capabilities
On the user-facing side, it is important to guide users with a friendly UI, receive timely feedback, and know how they are doing. Having a single source of truth for all user engagement is critical to getting constructive feedback on how we are doing and what we need to improve. We don’t want to bounce around systems to launch NPS surveys, questionnaires, or product updates.
First, it needs to streamline onboarding onto the platform. The tool should be able to deliver all of the following:
- Product tours to guide users on initial logins and features
- Questionnaires to guide the in-app experiences
- Product surveys upon completion of high value actions (HVA)
- NPS surveys
- Banners (for new feature announcements)
Second, it needs to trigger experiences based on user activity on the platform or on previous experiences they have completed. If we update our Storybuilder feature, which helps our users create visually appealing stories that capture their impact, we want the users who have used the feature before to know what’s new. If we send out a questionnaire asking which feature they are most excited about, we want to be able to guide them through how to use that feature as effectively as possible.
Third, it needs to be able to send out surveys in real time. If a user completes an HVA, we want to know how they felt about it while it is still fresh in their mind, not when they revisit at a later point and may have lost the relevant context. If they complete a story, we want to know right then how their experience was, not while they are scrolling through our academy.
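As a sketch of how the second and third requirements could be wired up on our side, assuming the same Segment setup as above: we fire an event at the moment a high value action completes, and configure the tool (in its own builder, no code) to react to that event. The function, event, and property names below are hypothetical.

```typescript
import { AnalyticsBrowser } from '@segment/analytics-next';

const analytics = AnalyticsBrowser.load({ writeKey: 'YOUR_SEGMENT_WRITE_KEY' });

// Stand-in for the platform's own publish logic.
async function saveAndPublish(storyId: string): Promise<void> {
  /* ... */
}

// Fire the tracking event the moment the HVA completes so the follow-up
// survey can be triggered while the experience is still fresh.
async function publishStory(storyId: string): Promise<void> {
  await saveAndPublish(storyId);

  // The engagement tool would be configured to show its "How was building
  // your story?" survey when it sees this event, and to show a "what's new
  // in Storybuilder" flow only to users who have sent it before.
  analytics.track('Story Published', {
    storyId,
    hva: true,
    feature: 'Storybuilder',
  });
}
```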
Now that we know what is important to us, we can begin the evaluations.
Evaluation
With the criteria in mind, we researched several different tools. There are plenty of articles claiming each tool has it all and explaining why it is better than the competition. It is easy to become overwhelmed by the volume, because what works best for some products might not be the best fit for your use case. That is where the criteria come into play, quickly eliminating some of the options.
After the quick eliminations, we had 15+ tools remaining. Going through their documentation, we were able to see what each did well and what was secondary to them, based on how heavily they highlighted certain features. We were also able to see who did and didn’t have a native Segment connector and how seamless that integration was. With that, we brought the number down to 6 and started doing demos and trials with those contenders.
Since we knew what we were looking for, we were able to tailor the demos with the remaining 6 contenders to our outstanding questions. They showed us what they do well, and thanks to the trials we got first-hand experience of the nuances of each, how we could accomplish what we were looking to do, and how seamless the overall experience was. In the end, we found 3 contenders that ticked all the boxes: Intercom, UserPilot, and Appcues. With 3 seemingly identical options, how could we decide between them?
Collaboration and Final Decision
From the very beginning, Intercom was the frontrunner. It has brand recognition, the product is robust, and it is very beginner-friendly. Its biggest feature, though, is its widget, and since we weren’t intending to use it, we would lose access to a chunk of the product. UserPilot checked most of the boxes and had more customizable options when it came to collecting data, but its UI wasn’t intuitive and had a higher learning curve. Appcues was the middle ground between UserPilot and Intercom: it didn’t have as much customization as UserPilot but gave us everything we needed, and while it wasn’t as beginner-friendly as Intercom, after a couple of clicks it was easy to see how everything came together. It was time to work with the stakeholders.
Across several working sessions with the PM and PMM, we went through the good, the bad, and the ugly for the top 3. We kept the group small because they would be the power users and the people most affected by the decision. They were able to log in and build experiences to get a feel for how it’s done. We tried to be as nitpicky as possible because, while we would get a great tool regardless of the outcome, we wanted to make sure it would work out best for us. Intercom’s emphasis on Customer Success tooling (specifically the chatbot), especially given that we already have our own suite, along with the cumbersome nature of building a series of flows, were dealbreakers for us. UserPilot’s dealbreaker was that its UX was not intuitive enough to navigate. Appcues, while not having an obvious way to do multi-select surveys, didn’t have a dealbreaker. With that, we made the decision to go with Appcues. The long journey was complete, and it was time to put Appcues to the test by building out the first user experience.
First Use Case
Before we can run, before we can walk, even before we can crawl, we need to know what brings our users to our platform. One of the reasons we started this whole process was to learn about our users. With that in mind, our first use case is just that: seeing what brings our users to the platform.
We broke our platform down into 5 key areas that we believe give users the most value and present them as choices in a welcome modal. With this modal, we will be able to see how each user intends to use our platform, where they go after making a selection, and whether that aligns with our hypotheses. After the selection, we surface a message thanking the user for giving us this additional insight. Previously we could see what activities were getting done; now, with this additional information, we can start to understand why. For example, when a user says they want to increase fundraising but then goes to our academy content instead of donations, it could tell us that they are looking for help getting started by learning more about fundraising first and are not yet ready to set up a donations portal on our platform.
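As a rough sketch of how that data might be used downstream, assuming the modal response comes back through Segment as an event (the event shape, goal names, and page paths below are hypothetical and only cover a few of the areas, not the real Appcues schema), we could compare a user’s stated goal with the first area they actually visit:

```typescript
// Hypothetical shape of the welcome-modal response once it flows back
// through Segment into our own systems.
interface GoalSelectedEvent {
  userId: string;
  goal: 'fundraising' | 'storytelling' | 'learning';
  selectedAt: string; // ISO timestamp
}

// Which stated goal we would expect each area of the platform to serve
// (paths are illustrative).
const expectedGoalByPage: Record<string, GoalSelectedEvent['goal']> = {
  '/donations': 'fundraising',
  '/storybuilder': 'storytelling',
  '/academy': 'learning',
};

// Compare what the user said they came for with where they actually went.
// A mismatch, such as "fundraising" followed by a visit to the academy, is
// the kind of signal that suggests they want to learn before they build.
function goalMatchesFirstVisit(event: GoalSelectedEvent, firstPage: string): boolean {
  return expectedGoalByPage[firstPage] === event.goal;
}
```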
Next Steps
This is just the beginning. We can use this data to build curated guides, provide more self-serve options, and give our users a better onboarding experience. We will be able to continually improve the flows we build to get more valuable feedback from our user base, which in turn will help us give them a better product and experience.