Our Methodology

THE RISE 2.0 METHODOLOGY

A proven process for growing online businesses using conversion rate optimization

If you are not sure how to increase your conversion rate further, you'll find this useful.

Below is an overview of the Shivook CRO RISE 2.0 Methodology, which we use with all of our clients.

Step 1: Observation
Observation helps us define a problem that we’d like to diagnose. We can do that with quantitative analysis.
Quantitative Analysis Tool = Google Analytics
Reviewing Google Analytics conversion rate data, we can find where we're earning the greatest (and least) revenue, and identify points in the purchasing funnel where drop-off is particularly high.
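To make this concrete, here's a minimal sketch (with entirely hypothetical funnel numbers, not real Google Analytics output) of the kind of drop-off review we do at this stage:

```python
# Illustrative only: hypothetical session counts for each funnel step.
funnel = [
    ("Product page", 50_000),
    ("Add to cart", 12_000),
    ("Checkout", 6_000),
    ("Payment", 4_500),
    ("Purchase", 3_600),
]

# Drop-off between each consecutive pair of funnel steps.
for (step, sessions), (next_step, next_sessions) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_sessions / sessions
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")

# Overall funnel conversion rate (purchases / product page sessions).
overall_cr = funnel[-1][1] / funnel[0][1]
print(f"Overall conversion rate: {overall_cr:.1%}")  # 7.2% with these numbers
```

In this example the Product page → Add to cart step loses 76% of sessions, which is where we'd start digging for opportunities.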
Step 2: Research
Research helps us define potential ways we could diagnose the problem. We can do that through a number of qualitative tools.
Qualitative Analysis Tool = HotJar
How to use HotJar:
Setting up surveys on high volume pages
Reviewing click/scroll/heat maps for high volume pages
User testing on userbob.com (using random audiences) on high volume pages
Step 3: Ideation
Using the research we’ve done in the previous step, we start to ideate.
At this stage, we list out the different categories/themes that our ideas fall into, which we can use to start building wireframes.
Once we have those ideas, we start conceptualizing the actual wireframes.
Step 4: Hypothesize
At this stage, we turn the strongest ideas into testable hypotheses, each following this template:
Benchmark: (Here, you'd add a link to the data source)
Goal: (XXX)% Increase → Absolute Increase
Potential Impact:
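Here's a short worked example (all numbers hypothetical) of how the goal and potential impact are derived from the benchmark:

```python
# Hypothetical inputs: replace with figures from your benchmark data source.
benchmark_cr = 0.020            # current conversion rate (2.0%)
goal_relative_increase = 0.15   # goal: 15% relative increase
monthly_sessions = 100_000      # illustrative traffic volume
average_order_value = 60.0      # illustrative AOV

# Relative goal -> absolute conversion rate target.
target_cr = benchmark_cr * (1 + goal_relative_increase)   # 2.30%

# Potential impact in extra orders and revenue per month.
extra_orders = monthly_sessions * (target_cr - benchmark_cr)
potential_impact = extra_orders * average_order_value

print(f"Target conversion rate: {target_cr:.2%}")
print(f"Potential impact: ~{extra_orders:.0f} extra orders, ~${potential_impact:,.0f}/month")
```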
Step 5: Wireframe
We take the ideas from the ideation phase and start building them into wireframes.
Step 6: Design & Prototype
At this stage, we take the wireframes we've built and have them designed into usable UIs (user interfaces).
Once they've been built into UIs, we use a tool like Figma (figma.com) to turn them into live prototypes that we can use for usability testing (explained next).
Step 7: Specification (for Engineers)
At this stage, we build a specification for engineers to build us a value test.
Unlike a usability test, which tests for page functionality, a value test is built to test for likelihood to convert (i.e. actual conversion rate optimization).
This is what is often referred to as "A/B testing".

The specification is built so that an engineer can take the entire project and turn it into a functioning version which can be used for testing.

For complex platforms, engineering support is required to get past this stage.

For less complex platforms, or ready-to-build platforms (e.g. Shopify), some of these tests can be performed without any engineering support.

Note: This is not the same as the specification we’d need later if the experiments are successful and we implement a change site-wide.
Some of the steps you often take when putting together a specification include:
Gathering PSD/Sketch design files and relevant images, labeling them, and placing them into a cloud-based folder for sharing
A full description of the process logic, including acceptance criteria for the engineers who will be working on the product. The logic should describe which elements are dynamic and what happens when changes occur
A clear explanation of the tracking we need in order to perform our analysis at the end of the experiment (see the sketch just below)
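As an illustration only (the names and format here are hypothetical, not a required template), the tracking portion of a specification might look something like this:

```python
# Hypothetical tracking plan handed to engineers alongside the specification.
tracking_plan = {
    "experiment_id": "checkout-cta-v2",   # hypothetical experiment name
    "events": [
        {"name": "experiment_viewed",  "fires_when": "visitor is bucketed into control or a variation"},
        {"name": "cta_clicked",        "fires_when": "visitor clicks the call-to-action being tested"},
        {"name": "purchase_completed", "fires_when": "order confirmation page loads"},
    ],
    "dimensions": ["variation_id", "device_type", "landing_page"],
    "acceptance_criteria": [
        "Every event carries the variation_id the visitor was assigned to",
        "Events are visible in the analytics tool within 24 hours of firing",
    ],
}
```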
Step 8: Value Test (AKA A/B Testing & Experimentation)
The value test is often set up using a testing & experimentation tool.
The tool we use most often is Google Optimize.
Using a pre-test sample size calculator, you can see exactly the uplift you will need in order to achieve the desired result in the desired timeframe.
Your experiment should run for a maximum of one month, and ideally less than two weeks.

Conversion Rate Control = Current conversion rate

Expected Improvement = The Improvement (in %) you are trying to achieve

Unique visitors on test page weekly = # of unique visitors going to the test page weekly

Max number of weeks for test: 1-4 weeks (depending on page traffic volume)

Hypothesis = One-sided

Power = 75%

Required Confidence Level = 90% Confidence

Required uplift = ______ (TBD based on the above inputs)



The required uplift tells you the minimum uplift (in %) you need to observe within the time frame you've defined in order to have 90% confidence that you've genuinely improved on your control conversion rate, i.e. that the result is statistically significant.
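If you'd rather sanity-check the calculator's output yourself, here's a minimal sketch (our own approximation, not any particular calculator's exact formula) of the required uplift for a one-sided two-proportion test using the inputs above; the traffic numbers in the example are hypothetical:

```python
from scipy.stats import norm

def required_uplift(baseline_cr, weekly_visitors, weeks, power=0.75, confidence=0.90):
    """Approximate minimum relative uplift detectable with the given traffic."""
    n = weekly_visitors * weeks / 2            # visitors per variation (50/50 split)
    z_alpha = norm.ppf(confidence)             # one-sided test, 90% confidence
    z_beta = norm.ppf(power)                   # 75% power
    # Standard error of the difference between two proportions,
    # approximated at the baseline rate for both arms.
    se = (2 * baseline_cr * (1 - baseline_cr) / n) ** 0.5
    absolute_lift = (z_alpha + z_beta) * se
    return absolute_lift / baseline_cr         # relative uplift required

# Example: 2% baseline conversion, 10,000 unique visitors/week, 2-week test.
print(f"Required uplift: {required_uplift(0.02, 10_000, 2):.1%}")  # roughly 19%
```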
Step 9: Analysis
There are two stages to test analysis.
Stage 1 = Quantitative analysis
Stage 2 = Qualitative analysis
Stage 1 is higher priority and defines the test results.

That is, from the variation/s tested, did you meet the required uplift during the time defined? Was there a winning variation? Did multiple variations have statistical significance?
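As a sketch of what that stage-1 check looks like in practice (illustrative numbers, and our own simplified calculation rather than any particular tool's output), here is a one-sided two-proportion z-test at the 90% confidence level used in the value test:

```python
from scipy.stats import norm

def one_sided_z_test(control_conversions, control_visitors,
                     variant_conversions, variant_visitors, confidence=0.90):
    """Return (relative uplift, p-value, significant?) for 'variant beats control'."""
    p_c = control_conversions / control_visitors
    p_v = variant_conversions / variant_visitors
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = (pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors)) ** 0.5
    z = (p_v - p_c) / se
    p_value = 1 - norm.cdf(z)          # one-sided: is the variant better than control?
    uplift = (p_v - p_c) / p_c
    return uplift, p_value, p_value < (1 - confidence)

# Illustrative numbers: 200/10,000 control vs 250/10,000 variant conversions.
uplift, p, significant = one_sided_z_test(200, 10_000, 250, 10_000)
print(f"Uplift: {uplift:.1%}, p-value: {p:.3f}, significant: {significant}")
```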

Once we’ve identified which variation/s had the greatest impact, we can move on to stage 2 to better understand why they performed better & why the losing variation/s performed worse.

This can be done via HotJar session recordings, heatmaps, clickmaps, etc.
The two concluding questions to every test are the following:
Did we achieve the desired impact from this test?
What else did we learn about our customers from this test?
IF we’ve achieved the desired impact, we can implement the winning variation site-wide & continue optimizing further.
IF we have not achieved the desired impact, we want to document the changes that we’ve made, and keep this information in a handy place to avoid performing the same experiment in the future.