
Problem Space

Pivotal Tracker was experiencing customer attrition due to an outdated pricing model that was punitive to users who wanted to add more collaborators. The model was based upon large tiers in which customers paid for collaborators in bundles, and the bundles scaled larger as the total number of collaborators grew.

Current Pricing.png
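
To make the tier cliffs concrete, here is a minimal sketch of this style of bundle pricing. The tier boundaries and dollar amounts are hypothetical, except that the jump at 150 collaborators mirrors the doubling described in the customer quote below:

    # Illustrative sketch of bundle-tier pricing; boundaries and prices
    # are hypothetical, chosen only to show the "cliff" effect.
    TIERS = [  # (max collaborators, monthly price in USD)
        (5, 0), (10, 65), (25, 125), (50, 250),
        (100, 500), (150, 750), (300, 1500),
    ]

    def monthly_price(collaborators: int) -> int:
        """Return the price of the smallest bundle that fits the team."""
        for max_collaborators, price in TIERS:
            if collaborators <= max_collaborators:
                return price
        raise ValueError("contact sales")

    print(monthly_price(150))  # 750
    print(monthly_price(151))  # 1500: one extra person doubles the bill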

We were receiving downgrade feedback on a weekly basis in which customers explained that they were leaving specifically because of pricing. Our Customer Support team was also inundated with requests to add additional collaborators to projects.

“I was wondering if there is a way to add a few additional users instead
of doubling up what i'm paying monthly. Right now we are on the 150 plan and are looking at 100% increase to add ONE new staff to the account as a collaborator. Please do share as we are going to need to add a new user asap.”

Not only were we suffering revenue loss, we were also providing a poor user experience: we prevented teams from focusing on what really matters, building the right product, faster. Our pricing structure constantly slowed them down, and for price-sensitive startups especially, this was a big problem. When I joined the team, I was tasked with fixing it. Here is how I approached the problem.


Kick-off

I led a kick-off meeting with a group of eight colleagues, consisting of leadership, Customer Support staff, and the PM on my team. In the kick-off I led activities to:

  • Frame the problem statement

  • List hypotheses

  • List problems to solve (user experiment, pricing models, payment processor, etc.)

  • Ideate on measurable outcomes

Kick-off.png

In-App Experiment

Changing a company’s pricing is expensive, so it was critical that I find a way to run an experiment that was low-cost, yet realistic enough to gather unbiased results. Getting unbiased results on pricing is especially hard, because there is a disparity between what users say they will pay and the reality of entering a credit card and clicking “submit.”

To solve this, I ran an in-app experiment asking Account Owners whether they were interested in our new per-user pricing plan. I then displayed the new plan and asked if they would like to be part of a beta. We ran the experiment with multiple models, including per-user pricing and small bundled pricing. I made the deliberate decision to follow the stylistic paradigms that already existed in Tracker, which allowed the team to build faster, since we could reuse existing components. We captured two types of data from the experiment:

Quantitative

My PM configured Mixpanel events to fire on each page view and on each mouse click. This allowed us to capture how many users were (or were not) interested, both initially and after seeing the new plan. We then queried the specific accounts that responded to understand whether there was a correlation between account size and response.
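
The write-up above doesn’t include the event schema, so as a sketch only, the instrumentation might look like the following with Mixpanel’s Python client (event names, properties, and the token are placeholders; the real events presumably fired client-side):

    from typing import Optional

    from mixpanel import Mixpanel  # pip install mixpanel

    mp = Mixpanel("PROJECT_TOKEN")  # placeholder project token

    def track_step(account_id: str, step: str, interested: Optional[bool] = None):
        # One event per page view or click; the properties let us segment
        # responses by experiment step and, later, by account size.
        props = {"experiment": "pricing_beta", "step": step}
        if interested is not None:
            props["interested"] = interested
        mp.track(account_id, "pricing_experiment", props)

    track_step("account-42", "plan_shown")
    track_step("account-42", "beta_opt_in", interested=True)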

Quant Data.png

Qualitative

On the first iteration of the modals, I gave users the option to be contacted for an interview. This allowed me to give color to the quantitative data we obtained and to make decisions about which model to experiment with next. For example, one finding of the first round of interviews was that, while users did not like the large tiers Tracker was tying them to, there was a psychological component to having “free” seats left on the account. This led us to A/B test “small bundles” of 5 collaborators.
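
The write-up doesn’t describe how accounts were split between the two models; a common approach, sketched here under that assumption, is to hash the account ID so an account always sees the same variant without storing any extra state (experiment and variant names are illustrative):

    import hashlib

    VARIANTS = ["per_user", "small_bundles"]  # the two models under test

    def assign_variant(account_id: str, experiment: str = "pricing_ab") -> str:
        # Hash (experiment, account) so assignment is stable across sessions.
        digest = hashlib.sha256(f"{experiment}:{account_id}".encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    print(assign_variant("account-42"))  # same arm on every call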

Additionally, the interviews allowed me to learn:

  • How many collaborators a particular team wished to add but could not, due to pricing

  • How customers perceive a proposed model

  • What exchange of value customers expect (more features versus more collaborators)

  • Whether customers make decisions based upon the price they pay now or the price they will pay as their company grows


Competitive Analysis

I conducted a competitive analysis of similar companies’ pricing models, with a special emphasis on Jira, whose sophisticated algorithm for pricing collaborators beyond 100 I wanted to explore. I mapped out several possible per-user price points for Tracker. Then I drew a “line of best fit” based upon three inputs: quantitative survey results, smoothing the cliffs of our current tiers, and competitive price points.


After finding the line of best fit, my PM wrote a script to analyze the impact each candidate line had on revenue. Because of the nature of the model change, we knew we would have to take an initial hit to revenue. The goal, however, was to keep that hit no larger than what we would gain back from users’ willingness to purchase more collaborators and from reduced pricing-related attrition.
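
I don’t have the PM’s actual script, but its core comparison might look like this sketch, reusing the illustrative tiers from the earlier example (account sizes and prices are made up):

    # Hypothetical re-creation of the revenue script: for each candidate
    # per-user price, compare what accounts pay under the old bundle tiers
    # with what they would pay per collaborator.
    TIERS = [(5, 0), (10, 65), (25, 125), (50, 250),
             (100, 500), (150, 750), (300, 1500)]

    def old_price(collaborators: int) -> int:
        return next(p for cap, p in TIERS if collaborators <= cap)

    accounts = [12, 48, 150, 275]  # collaborator counts, one per account

    for per_user in (4, 5, 6, 7):  # candidate points along the line of best fit
        old = sum(old_price(n) for n in accounts)
        new = sum(per_user * n for n in accounts)
        print(f"${per_user}/user: monthly delta {new - old:+d}")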

Revenue Models


Survey Experiment

I sent out a survey to attack, from a different angle, the question of what price point users are willing to pay. This was critical for further validating our experiment findings and for determining how much wiggle room we had in the price point to capture optimal revenue. Tangentially, I also wanted a better grasp of what copy would best market our new pricing. The survey ran for one week and yielded 37 responses.

However, in order to really understand the results of questions #5 and #6, I had to match each answer with the number of collaborators in the respondent’s account. After all, a user who says $4 is a good deal for Tracker but has 275 collaborators on their plan means something different from a user who says $7 is a good deal but has only 50.
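
A sketch of that matching step, assuming the survey tool exports a CSV keyed by account ID (file and column names are hypothetical):

    import csv

    # Join each price-point answer to the respondent's collaborator count,
    # so a "$4 is fair" answer can be weighed against plan size.
    with open("accounts.csv") as f:
        size_by_account = {r["account_id"]: int(r["collaborators"])
                           for r in csv.DictReader(f)}

    with open("survey_responses.csv") as f:
        for r in csv.DictReader(f):
            size = size_by_account.get(r["account_id"])
            print(r["account_id"], r["fair_price"], size)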

Survey Data.png

User Experience

Research

Once the pricing model and price point were decided, I began work on crafting the user experience. I started by mapping out the flows of other companies that use per-user pricing. I wanted to ensure that my flow matched the mental model users would already have of a pricing flow.

Per User Flows_Competition.png
 

Lo-Fi Wireframes

I designed lo-fi wireframes to map out the flow and determine content organization on, and among, the pages.

 

Hi(er)-Fi Wireframes & Desirability Testing

I transformed the lo-fi wireframes into a few higher-fidelity concepts. I kept some of Tracker’s stylistic elements while updating older components to give the pages a fresh look and feel (per our current visual design strategy). I then conducted a Desirability Test with my team and the other designers to understand sentiment and to determine which design best matched our brand identity.

Final Flow & Usability Testing

Based on the results of the Desirability Test, I iterated on my designs and completed the remainder of the flow. Since I had continued adding form fields to the flow as new information became available, I conducted a final round of usability testing to catch any remaining glitches.

Usability Testing.PNG

Roll-Out

My PM drafted a plan for a phased roll-out of the new pricing. While the experiment mitigated the bulk of the risk, it was still critical to measure learning objectives over a set period of time to ensure that continuing to build post-MVP is worth the developer cost. For example, in the first phase of the roll-out, we will consider our learning objective met if 20% of users who have the feature flag enabled convert to the new plan within 3 weeks (240 total), and if 20% of those converted plans add at least one more user within 2 weeks of upgrading.
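
A small sketch of that phase-one check: the 20% thresholds and the 240-conversion figure come from the plan above (implying roughly 1,200 flagged accounts); the function itself is hypothetical:

    FLAGGED_ACCOUNTS = 1200   # accounts with the new-pricing flag enabled
    CONVERT_RATE = 0.20       # must convert within 3 weeks -> 240 accounts
    EXPAND_RATE = 0.20        # of converts, must add a user within 2 weeks

    def objective_met(converted: int, expanded: int) -> bool:
        return (converted >= FLAGGED_ACCOUNTS * CONVERT_RATE
                and expanded >= converted * EXPAND_RATE)

    print(objective_met(converted=250, expanded=55))  # True
    print(objective_met(converted=250, expanded=40))  # False: 40 < 50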

Part of the developer work includes migrating our payment processor to Stripe, because Stripe offers easy ACH payments, simpler implementation of per-user pricing, and an API that is easier for the devs to work with.
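
For illustration, per-user billing in Stripe is just a quantity on a recurring price, so adding a collaborator becomes a quantity update rather than a tier change. A minimal sketch with placeholder IDs:

    import stripe  # pip install stripe

    stripe.api_key = "sk_test_..."  # placeholder secret key

    # A per-user subscription: seats are the quantity on one recurring price.
    subscription = stripe.Subscription.create(
        customer="cus_XXXX",
        items=[{"price": "price_per_collaborator", "quantity": 12}],
    )

    # Adding a collaborator later is just a quantity update.
    item = subscription["items"]["data"][0]
    stripe.Subscription.modify(
        subscription.id,
        items=[{"id": item["id"], "quantity": 13}],
    )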

Phased Roll-Out Plan


Final Thoughts

I lost the battle over how many collaborators should be offered for free, with the result being one free collaborator. I disagree with this, because one person cannot collaborate. If the purpose of a free collaborator is to let users demo the product, why not offer a different type of incentive, such as gifting an additional 10 days of free trial for every collaborator they invite to the project?

I am currently ideating on a new experiment that will allow me to test my hypothesis that a single free collaborator does not significantly impact conversion.
