Anton Antonov
Hypothesis-Driven Development

In practice, hypothesis-driven development is the implementation of ideas as a series of experiments that determine whether the expected result can be achieved. The process is repeated until the desired result is obtained or the idea is deemed unsustainable.
It sounds simple, but you need to change your way of thinking and treat the proposed solution to the problem as a hypothesis, especially when creating new products or services.
What we have seen many times in the past (especially when working with startups) is that Product Owners tend to extend the scope of the project. It feels natural, when you want to implement something significant, to add a lot of features, and all of them seem critical and must-have. So you can't stop and keep adding more and more. As a result, some initially simple ideas and solutions become quite complicated. The overall scope of the project grows, but everybody is happy because they believe they know what they are doing.
Mobile Alerts
As an example, let's imagine we'd like to introduce mobile Alerts in our mobile banking app. Assume we already have a backend solution to send them. End-users could previously set up alerts on the website, but what we see, based on the analytics, is that they rarely actually turn them on for mobile. In general, we want our users to stay informed of critical money operations and consider this functionality necessary. So, one day, we decide to develop a new Epic: Mobile Alerts.
We see that our current web-based setup process for Alerts is not optimized for mobile. So, after an initial discussion, we decide to improve the user experience on mobile.
A list of Jira Stories for such Epic may look like:
Epic: Mobile Alerts
Stories:
Design the new mobile UI and flows for the mobile Alerts setup.
Define the new object model required to support the brand-new mobile Alerts user experience that we want to develop.
Implement the new object model on the backend.
Design a new REST API for Alerts setup and editing.
Implement Mobile UI.
Implement REST API on the mobile side.
Implement REST API on the backend side.
Sounds great so far. Backend developers are super excited about the idea of the new object model. We need to support various types of Alerts, and the previous version (built for the existing web experience) was not flexible enough to support all the new ideas and alert settings that we see as important.
The Design Team is also excited about the opportunity to rethink the old user experience and introduce a new mobile journey.
Raw estimates from teams:
Designers estimated their work at two weeks.
Backend engineers estimated all backend activities (support for the new object model and the new APIs) at one month.
The mobile team estimated the client-side changes at one month.
Plus a couple of weeks of testing.
Since the mobile team can start development only when the required API documentation and UI designs are delivered, the overall Epic, according to the provided estimates, should take two months.
Sounds great so far. The management team is happy; it looks like users will get a great mobile Alerts experience in just two months.
Two weeks later...
The backend team realized that it would be hard to implement the new object model without breaking backward compatibility, which is not acceptable, so an additional two weeks of development are required to support both versions. The changes will also affect deployment scripts. That is potentially a critical change that may break old clients, so at least one additional week of testing is required.
Backward compatibility, which was not taken into consideration, affects the design as well. The Design Team now needs to provide additional UI designs and user flows for the old mobile clients. The designs already delivered can't be used as-is, and a further two weeks will be required to address the new issues.
All those changes affect the mobile development schedule as well, since the mobile team now needs to support both versions of the backend object model (the old one and the new one). At least one additional week is required for development, and probably more for testing...
Almost imperceptibly, the initial two-month project has turned into a four-month project.
A month later...
The news about the new Mobile Alerts project (which now looks like a major release) reached the marketing department. They noticed that such substantial changes to the Alerts experience can't be released as mobile-only, since a lot of users still use only the website. So, what about backward compatibility for web users? The old web user experience has no support for the new features and the new object model for the extended alerts. Do we plan to support both versions? What if a user sets up a new type of Alert on mobile and then opens the Alerts settings on the web?
The CEO decided not to release the mobile-only changes. We must address the web users' backward-compatibility issues. After additional heated discussions, the management team agrees on a further two months of web development to bring the website to the same level of user experience.
The Mobile Alerts project now affects the entire company. The expected development time is rapidly approaching nine months; the project involves more and more people and money, and temporarily freezes or shifts all other releases. It looks like this will be the one and only release of the product this year.
Three months later...
The company decided to close the Mobile Alerts project as an inefficient waste of time and money.
End of story.
Hypothesis-Driven Development
At some point, we all face significant over-engineering. That is why the hypothesis-driven approach is like a breath of fresh air. The basic idea is quite simple: let's stop thinking of Epics, Stories, and Features and switch for a moment to Goals. Do we have a clear goal for the proposed Epic?
Goals definition
This brings us to the first step of the process: the definition of Goals. We always start with some ideas; in our case, we prefer to track them in a Jira Backlog of Ideas. The goal-definition process consists of the following steps:
Select an idea from the list of Ideas you have.
Formulate a clear and concrete Goal from the initial Idea.
The Goal must be measurable.
It is super important to define measurable goals; otherwise, you will never know whether you have achieved them.
Impact Analysis
The next step is the Impact Analysis. Use impact mapping to identify all visible actors, their impact, required deliverables, and possible ways to achieve the defined goal.
Impact mapping is a planning technique. It prevents companies from getting lost while developing and delivering projects, by helping teams align their activities with overall business goals and make better roadmap decisions.
Especially if you are a startup, at this step you need to choose the fastest way to achieve the goal with minimal impact. The shortest path is your roadmap!
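To make the technique more concrete, an impact map can be sketched as plain nested data, goal at the top, then actors, their impacts (behavior changes), and the deliverables each impact needs. Everything below is a hypothetical illustration loosely mirroring the Mobile Alerts example, including a crude "shortest path" pick based on deliverable count:

```python
# A minimal sketch of an impact map as nested data (hypothetical example data).
# Structure: goal -> actors -> impacts (behavior changes) -> deliverables.
impact_map = {
    "goal": "Users stay informed of critical money operations",
    "actors": {
        "end-users": {
            "enable push notifications": [
                "improved onboarding copy explaining the permission",
            ],
            "receive critical alerts without setup": [
                "minimal list of critical alerts",
                "alerts enabled by default on the backend",
            ],
        },
    },
}

def shortest_path(imap):
    """Pick the (actor, impact, deliverables) path with the fewest
    deliverables -- a crude stand-in for 'fastest way to the goal'."""
    candidates = [
        (actor, impact, deliverables)
        for actor, impacts in imap["actors"].items()
        for impact, deliverables in impacts.items()
    ]
    return min(candidates, key=lambda c: len(c[2]))

actor, impact, deliverables = shortest_path(impact_map)
print(actor, "->", impact, "->", deliverables)
```

In a real project the map lives on a whiteboard or in Confluence, not in code; the point of the sketch is only that the "roadmap" falls out of comparing paths, not out of listing features.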

After the impact-mapping analysis, having outlined all possible ways to achieve the goal, you need to select the shortest path and write a short, precise manifest using the following template:
We believe that building
{deliverables}
for these (people)
{actors}
will achieve
{goal}
We will know we are successful when we see
{criteria of success}
The criteria of success are a set of new analytics metrics that you need to implement on both ends (mobile client and backend) so that you have live numbers for the subsequent retrospective analysis.
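As a sketch of what those metrics boil down to, here is a toy in-memory counter with a hypothetical `track` helper; event names are invented, and a real app would forward these events to its analytics SDK instead:

```python
from collections import Counter

# Hypothetical in-memory stand-in for an analytics backend; a real app
# would send these events to its analytics SDK rather than a Counter.
events = Counter()

def track(event_name: str) -> None:
    """Record one occurrence of an analytics event."""
    events[event_name] += 1

# Emitted from the relevant places in the mobile client / backend:
track("push_notifications_enabled")
track("push_alert_opened")
track("push_alert_opened")

# At retrospective time, compare the counters against the pre-release baseline.
print(events["push_alert_opened"])
```

The only requirement is that each "criterion of success" maps to a concrete counter you can compare before and after the release.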
Anyone on the team can revisit such a manifest, read it, and get answers to the following questions:
What are we doing?
Why did we decide to do it this way?
What are our criteria for success?
In one of the subsequent articles, we will describe how to run such a project using Jira and Confluence. Stay tuned!
Retrospective analysis
Retrospective analysis is the final step of the process. Depending on the goal and the defined analytics requirements, you may run the retrospective analysis after two weeks, one month, or a few months; the exact date varies. In any case, schedule the retrospective analysis right after the release. By that date you will already have real analytics data, so you can base your review on numbers, not feelings. After that, you will be able to adjust your product, see what actually needs improvement, and come up with a new idea and goal.
Mobile Alerts 2.0
Let's get back to our example. Again, we'd like to introduce Mobile Alerts in our mobile banking app, but now we will use a different approach.
Idea: We want our users to stay informed of critical money operations and see this functionality as necessary. It is especially natural to have notifications on your mobile device, which is pretty much always in your pocket. That's our hypothesis.
Now we need to define a clear and measurable goal. Our billion-dollar advice: avoid relying on feelings. Gut feelings, or whatever other feelings you have: use numbers, they are unbiased. So, to have some numbers for the initial review and analysis, you must have mobile usage analytics. Do not move forward if you do not have real numbers. If you don't have any mobile analytics in your app... well, first of all, you need to fix that.
In our example, we assume that we have some real analytics data. So, based on analytics, we see the two main reasons why people do not use mobile alerts at all:
Surprisingly, the main reason: users chose not to enable push notifications at all. It was not evident to them why they needed them.
The Alerts setup experience is not great on mobile; people do not want to use a web interface from a mobile device.
Remember, the first time around we also had reason #2 in mind, but this time we will ask ourselves again: what's our goal?
Do we want to develop the best mobile experience for editing and setting up alerts, or do we want our users to stay informed of critical operations?
Goal: we want our users to stay informed of critical operations.
Ok, the next step is the impact-mapping analysis. We need to outline all the ways we can achieve this goal. After the impact-mapping analysis, we found that the most straightforward and shortest path is simply to define the critical set of alerts and enable them by default. Instead of improving the editing experience, we require no action from end-users at all. To fix the first issue, we also performed an impact analysis and decided to try to improve the onboarding experience: change the copy and explain why we actually need the push-notifications permission, to motivate users to enable it and stay informed of all critical money-movement notifications.
As a result, our manifest could be:
We believe that
Define the minimal list of critical alerts that we believe are essential for our product and end-users.
Improve the wording during onboarding (we need to clearly describe why we need the push-notifications permission and why our clients would want to enable it).
Enable the defined list of critical alerts by default on the backend.
for these (people)
End-Users
will achieve
Our users will stay informed of all critical operations.
We will know we are successful when we see
# of clients with push-notifications enabled increased.
# of opened push-notification alerts on mobile devices increased.
# of opening Alerts web-page and editing the existing alerts decreased.
etc
Now, to test our hypothesis, all we need to do is:
Improve the copy on the onboarding screen (ask users for push notifications again during the next major app update, but now clearly describing the benefit of enabling them): 1 day.
Define the list of alerts that we think are important: 1-2 days.
Update deployment scripts to enable the defined list of critical alerts by default: 1 day.
One week of QA.
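The "enable by default" step can be as small as a backend configuration change. Here is a hypothetical sketch of merging defaults with explicit user settings; the alert names and the `effective_alerts` helper are invented for illustration:

```python
# Hypothetical default-alert configuration, applied when a user profile
# has no explicit setting for a given alert type.
CRITICAL_ALERTS = ["large_withdrawal", "foreign_transaction", "low_balance"]

def effective_alerts(user_settings: dict) -> dict:
    """Return the user's alert settings, with critical alerts on by default."""
    defaults = {alert: True for alert in CRITICAL_ALERTS}
    # Explicit user choices (e.g. made earlier on the web) win over defaults.
    return {**defaults, **user_settings}

print(effective_alerts({}))                      # every critical alert enabled
print(effective_alerts({"low_balance": False}))  # an explicit opt-out is kept
```

The design choice worth noting: defaults are applied at read time, so users who already configured alerts on the web keep their choices untouched.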
Roughly speaking, two weeks later we can see it live in production, and a few weeks or a month after that we can revisit the subject: perform the retrospective analysis and analyze the new analytics data to understand whether we achieved the goal.
It is out of scope for this article and will be covered in subsequent materials, but it also makes sense to think about two additional aspects:
Rolling out such changes to a subset of end-users only (to try them before affecting all users).
Defining a rollback scenario, to be able to revert the changes if the initial hypothesis turns out to be wrong.
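Both aspects are commonly handled with a feature flag. Below is a minimal, hypothetical sketch of a deterministic percentage rollout whose kill switch doubles as the rollback mechanism; the flag names and the 10% figure are invented for illustration:

```python
import hashlib

# Hypothetical feature-flag sketch: deterministic percentage rollout
# with a kill switch that doubles as the rollback scenario.
ROLLOUT_PERCENT = 10   # start with 10% of users
KILL_SWITCH = False    # flip to True to roll the change back for everyone

def in_rollout(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministically bucket a user into the rollout cohort."""
    if KILL_SWITCH:
        return False
    # Hash the user id into a stable bucket in [0, 100).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# The same user always lands in the same bucket between app launches,
# so raising ROLLOUT_PERCENT only ever adds users to the cohort.
print(in_rollout("user-42"))
```

Hashing (rather than random sampling) keeps cohorts stable across sessions, which matters when you later compare the cohort's analytics against the control group during the retrospective analysis.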
Thanks for reading!