Peep is a conversion optimization expert, and in this episode he talks to us about the importance of developing a user theory as opposed to simply trying random conversion tests and hoping they work. He also walks us through 11 common A/B testing mistakes.
TOPICS PEEP COVERS
- His background as a conversion optimization expert
- The importance of developing a user theory instead of running random conversion tests
- How he defines conversion optimization
- Whether he cares about new traffic sources or getting more traffic
- When a company should start thinking about conversion optimization
- The order of importance he works through when he sees a site
- Why qualitative data trumps quantitative data
- His article on 11 common A/B testing mistakes
- And a whole lot more
LINKS & RESOURCES
WATCH THE INTERVIEW
READ THE TRANSCRIPTION
Bronson: Welcome to another episode of Growth Hacker TV. I’m Bronson Taylor, and today I have Peep Laja with us. Peep, thanks for coming on the program.
Peep: Thanks for having me.
Bronson: Yeah, I think we’re going to have a really interesting conversation. You are a conversion optimization expert and you’ve done a lot. You’ve been doing digital marketing for over ten years in Europe, the Middle East, Central America, and the U.S. You ran a software company in Europe, an SEO company in Panama, a real estate portal in Dubai, and you’ve worked with an international nonprofit. Does that about sum it up?
Peep: That’s about it, yeah.
Peep: Now I’ve mixed all those experiences together and, you know, become a conversion optimizer. That’s what I’ve been doing full time — only conversion optimization — for the last three years. But of course, I was already doing conversion optimization back in 2006 while working on my portal in Dubai; I just didn’t call it conversion optimization.
Bronson: Yeah, and that’s great. And you have a company, an agency, Markitekt, and so businesses come to you when they need help with their conversion optimization, right?
Peep: Correct. So anything from startups to e-commerce sites to, you know, whatever transactional sites — where, of course, the acquisition channels are maxed out: PPC costs are so expensive, SEO is increasingly harder to do. So the only way to increase profit is really conversion — getting your traffic to buy more, sign up more. That’s what they come to ask us for. Conversion optimization is not an art. It’s not about, mm, let’s run this test. It’s a science, it is a methodology, it’s a process. And not many people are very skilled at it yet.
Bronson: Yeah, I think you’re right. When I look online, you’re one of the few people that has really carved out a niche for conversion optimization — that’s where you’re going to be and that’s what you’re going to be about. And when we talked to people before this interview, everyone had nothing but great things to say about your abilities and what you actually did for their sites. So I’m actually excited to get into some of the details. But first, tell us at kind of a high level: how do you define conversion optimization? What is it?
Peep: Hmm. It’s a great question, and there is no one single answer to it. It’s a mix. It’s problem solving — you know, a customer has a problem, and do you help them solve it or not, with the least amount of friction possible? And it’s also about user experience. People buy a lot of stuff from Amazon — not necessarily because the layout is the best or the calls to action are the best, but because they’ve had so many good experiences with it. It’s so easy to do returns, there are tons of reviews — it all adds up to, you know, a nice customer experience, and that drives their sales. So conversion optimization is part management consulting, meaning: one, how can we create awesome experiences for people that buy from us, and two, how can we lower friction in the purchasing process to help solve problems — help people get what they want.
Bronson: Yeah. And it kind of goes back to what you already said — that it’s not an art, that it really is a science. You know, you mentioned Amazon. Amazon’s not going to win any awards for their UI and UX, but they have tons of sales because they really turn the levers of what works, scientifically. And that’s kind of the way you see the world, right?
Peep: Right. Exactly right. And Amazon has kind of tapped out on traffic — I mean, it’s growing, but they’re in the phase where they’re optimizing for more revenue from a single buyer. So any new improvements you see on Amazon, it’s all about how to get, you know, more money out of you.
Bronson: Mm hmm. Now, as a conversion optimization expert, do you actually think about or care about new traffic sources or getting more traffic? Or do you go into a company and say you already have traffic? That’s done, let me optimize it?
Peep: Well, of course I care about traffic, because, you know, you can’t run statistically significant tests without traffic. So you need to have some. And we’re all impatient in this world — we all want to test as fast as possible — so again, traffic is necessary here. But so many companies are pouring money into AdWords or SEO when they could actually invest that money in conversion optimization, get their site to convert better, and then their customer acquisition costs go lower and they make more money than they would buying AdWords ads or whatnot. So it’s a cycle: if your site converts better, you can also afford to buy more traffic.
Bronson: Yeah. You know, it’s interesting. The more I think about growth and growth hacking and marketing and all these things, it seems like there’s traffic and there’s conversion and you cannot ignore either one. I mean, it’s just two sides of the same coin. And the companies that are really going to kill it are the ones that are going to do both extremely well, because they can pay more for AdWords, they can pay more for high quality inbound content because they got their cost down, like you just said, through conversions. Right. They really work together, don’t they?
Peep: Exactly. Right. So it’s all about cost per conversion. Like how much do you pay to acquire a customer? And if you can spend much less than a competitor, well, that’s how you dominate.
Bronson: Mm hmm. Now, you mentioned that traffic matters because you need enough to run statistically relevant tests and things like that. When should a company start thinking about conversion optimization? Day one, when they put up a website?
Peep: Thinking about conversion optimization? Yes, day one. It’s absolutely critical that even when you’re thinking about the design of your website, it’s built from the ground up to sell. Now, when to start doing A/B tests is another matter. It depends on two factors: a, how much traffic do you have, and b, what’s your conversion rate? If you’re asking for money right away, your conversion rate is going to be in the single digits typically — you know, like 1%. So if you have 5,000 visitors a month and 1% of them buy, how many is that in absolute numbers? In order to run one statistically valid test, you need at least, let’s say, a ballpark of a hundred conversions per variation — A and B — so 200 transactions per month. Now, how much traffic do you need to get 200 transactions per month? Sometimes getting there takes you six months, and if you’re that new, it is not reasonable to do A/B testing. You can do other stuff instead. But if the math works out so that you can do at least one test every 30 days with significant results, then, you know, you should go for testing.
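The ballpark math Peep walks through here can be sketched in a few lines. This is only an illustration of his rule of thumb — roughly 100 conversions per variation, and a test should be callable within about 30 days; the function names and the parameters are my own framing, not a tool he describes.

```python
# Rough A/B-testing feasibility check, following the ballpark from the
# interview: ~100 conversions per variation before a test can be called.
def monthly_conversions(visitors_per_month, conversion_rate):
    """Absolute conversions you collect in a month."""
    return visitors_per_month * conversion_rate

def months_to_complete_test(visitors_per_month, conversion_rate,
                            conversions_needed=100, variations=2):
    """How many months it takes to gather enough conversions for one test."""
    total_needed = conversions_needed * variations
    per_month = monthly_conversions(visitors_per_month, conversion_rate)
    return total_needed / per_month

# Peep's example: 5,000 visitors/month at a 1% conversion rate
# is 50 conversions/month, so a two-variation test needs ~4 months.
print(months_to_complete_test(5000, 0.01))  # -> 4.0
```

Four months per test is well past the one-test-every-30-days threshold he suggests, which is when he recommends doing "other stuff instead" — the sequential testing he describes later in the interview.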
Bronson: So when you start out, you’re thinking about conversion optimization, but just at a high level — best practices, let’s build it to convert, let’s do what we think will work. And then at some point you have the traffic and the conversions to actually run some tests and get more scientific about it. So you think about it the whole time, but you drill down a little later.
Peep: Right, exactly. So testing helps you validate ideas, and, you know, lean startup and conversion optimization are very similar concepts. You have a hypothesis about your design and copy — about what will work — you put it out there with real-life traffic, and you see what works. With A/B testing, you can just validate ideas faster, and multiple ideas at once. Yeah.
Bronson: Now, here’s a question that I’m really excited to ask you, because I think this is going to help get inside your mind. You know, when a new client comes to you or when you start a new project of your own, I’m sure there’s some kind of either implicit or explicit checklist that you’re going through with the site. And I want to know, what is it you look at first? What do you look at second? Are you looking at the headline first? Are you looking at the size of the elements? Are you. I mean, how do you see the world? Walk me through the order of importance with conversion optimization when you see a site.
Peep: Right. So the first thing is what I call a heuristic analysis. I walk through the site — walk through the funnel, or the home page, pricing page, features page — try to put myself in the shoes of the user, and I mark down the URLs and what the flow is like through the site. And then each page that I look at, I assess for several things. I assess the instant reaction to the design — like, oh man, this blows, this looks like my grandma designed it, versus, oh, that’s nice — because people make design judgments within, you know, 50 milliseconds. And if we dislike the design, if it looks amateurish, crappy, then the amount of friction the user feels instantly gets higher, and you’re less likely to get their money. It’s subconscious. So design first — pure gut feeling, emotional reaction. Then I assess it for clarity of the value proposition. The value proposition is the offer, the copy on the website — like, our software helps you solve this problem. The user needs to identify: yes, this is for me, I understand how I benefit, and I understand why I should use this service instead of the other guys’. So I assess the value proposition for how clear it is. Is it jargony? Do I even understand what the site is about? Next, I assess for friction. Let’s say the copy makes all kinds of claims, but there’s no proof to back it up — they say we’re number one; well, according to whom? Or maybe the signup flow is ridiculous, like seven steps and 25 fields, that kind of stuff. That’s friction. Then, distraction. The best websites focus on a single goal. Every page on your website should have a single goal — I want the user to do this on this page. You can have secondary goals too, but one primary goal. So I assess:
Do they have a clear primary goal? And what are the things on the page that don’t contribute directly to people taking that one action? If there’s a lot — like, I don’t know, an overload of testimonials and news and our history and partners and whatever, stuff that doesn’t make people more likely to click on the button — then ideally that stuff should be removed. Now, of course, in conversion optimization there are no rules that work every single time on every single website, so this is just the beginning. As I assess the value proposition, friction, and distraction, I’m taking notes. As I go through the website, I’m marking down areas of interest, problem areas. Now, everything that I see is my subjective view of what I think the problems are — this is not the absolute truth. So next I go and collect data to compare my findings with what the data says. The problems that I identify tell me where to look — what kind of data do I need to gather and assess to understand whether something is really a problem or not? So let’s say a home page is super, super long, and I’m thinking it’s a lot of distraction, a lot of useless content — and it increases the load time of the website, because speed is also a factor. So my hypothesis is: if we have less stuff on the page, people will click more, take more action — this one primary action, click on the button. Now, how do I make sure whether the page length is actually a problem? Of course, running a split test is the best way. But if I don’t have enough traffic, I would add mouse-tracking software — SessionCam or ClickTale or Crazy Egg, one of those — and they give me scroll map information, so I’ll see how far down people scroll. That’s one way.
Also, I would look in Google Analytics. When you open up Behavior and the list of pages, I would go click on the URL of the page I’m looking at, click on Navigation Summary, and I would see all the links that people click on from that particular page — and I’d see if anybody is clicking on the links that are way, way down below. You know, that tells you whether people care about that stuff. So there are many different ways I could assess whether the page length is an issue or not, without A/B testing.
Bronson: Yeah, I mean, that’s an incredible answer. And, you know, in some interviews there are those moments that need to be listened to over and over. I want people to go back and listen to that over and over, because I think you just described what you do for a living. Obviously, there’s a lot of detail to it; obviously, there’s a lot of skill and expertise that goes into those things. But at a high level, that is what you do: you look at a site, you have gut impressions, you work through the flow, and then you test to see if your hypothesis is correct with real data. Right?
Peep: Exactly. And there are two types of data. What I just mentioned is quantitative data — analytics, heat map data and so on. But there’s the other side of it, which is even more important: qualitative data. So user interviews, user surveys, you know, all the customer development work, user testing — all of that stuff. So once I identify all the problem areas, I put together — for instance, for user testing — a user testing plan where I have people walk through and complete tasks that go through those friction areas, and I see whether they experience the same friction that I think they will or not.
Bronson: Now, you said qualitative is more important. You know, I mean, you’re a conversion optimization expert, and yet you’re saying qualitative trumps quantitative. So why is that?
Peep: So the thing is that qualitative leads to more insight. When we look at Google Analytics, we see what people are doing, but we don’t see why they’re doing it. With conversion optimization, we have a customer theory — that is the main thing that we are building. Yes, we are going for uplifts and increases in conversion, but that’s actually a byproduct of improving the customer theory — validated learning, as Eric Ries says, right? So we’re always after learning, improving the customer theory: what works for this customer, what this customer does or doesn’t want, and so on. Qualitative surveys, user tests, interviews — they give us more insight, so we can understand why something is working or not. And then the quantitative data, you know, basically helps us double-check whether our conclusions from the qualitative data are correct.
Bronson: Yeah, that’s great. And I’m so glad you’re saying that because a lot of people watching this, they just want to stay in the data. They don’t want to watch somebody use their site when that might be the moment of breakthrough of, oh, that’s why this is happening. But we may have never gotten there with the data alone, even though it’s in the data.
Peep: Right, right, right. So in my line of work, I’m never working with a single source of data. Whatever user surveys tell me, I compare that to my analytics data, I compare that to user tests, I compare that to mouse-tracking heatmaps — whatever other data I have available. So it all comes together. Plus, of course, A/B tests — an A/B test is a learning tool. So I look at the data and I form a hypothesis: hey, based on what the users told me and what I see in analytics, I think doing these changes will lower friction, minimize distraction, and improve the clarity of the value proposition. So I create a treatment and run a test — and, as is often the case, it loses. My winning treatment, based on all these insights, loses. So what do you do now? Oh well, back to the drawing board? No, no. You need to segment the data. This is very important. If you use Optimizely or Visual Website Optimizer for the test, you need to tick the box that says integrate this particular test with Google Analytics, so it sends custom information into Google Analytics. And now when you look at the test results — let’s say A converts at 4% and B at 3% — you apply segments to those results, and you say, hey, my losing version actually works better for new visitors; it’s just worse for returning visitors. Or so many other segments. So I always look at segments — that’s where the insights are, in the segments. That’s how you improve your customer theory and run a follow-up test. It happens all the time: when we start optimizing a client’s site, the first test loses, maybe the second test shows no difference, and only with the third or fourth test on the same page — the third or fourth iteration — do we get, you know, a lift of 50% or whatever.
Bronson: Yeah, it’s really encouraging to hear that you still fail the first test and maybe the second. That experience is not abnormal — we’ve had the same experience. And this is kind of a perfect segue into an article that you recently wrote about 11 mistakes that you see in A/B testing all the time. I was reading it the other day and I thought, this is so good — I mean, this is like a clinic on A/B testing. And you actually already mentioned one of them, kind of iterating on the same page test after test. But let’s walk through these mistakes and talk about them a little bit. The first one is that A/B tests are called early. So what do you mean when an A/B test is called early, and why is that a mistake?
Peep: Yeah. So you set up an A/B test, A versus B, and B starts winning — and in Optimizely or whatever it says it’s at 75% confidence that B is going to win. So you stop the test, implement version B, make that the default version, and run a new test. Now you called it early — your statistical confidence was only at 75%, which is not enough.
Bronson: You would think that would be enough, right? I mean, 75%.
Peep: Think about it this way: 50% is flipping a coin, so 75% is just slightly better than flipping a coin. And also, we’re always dealing with sample sizes, so statistical confidence — the p-value — is only one factor here. You don’t want to call a test before it’s at 95% confidence or higher, but you also want to look at the absolute conversions — not visits, conversions. You want a minimum of 100 conversions per variation before you call it. Because the thing is, Optimizely and tools like it can also lead you astray. Let’s say you have 27 conversions on one variation and 32 on the other. They might already say, hey, this is significant. But from experience, if you just give it more time and let more traffic flow through the same test, the numbers can completely change. I have seen plenty of times a test that says with 99% confidence that this one will win, and it actually ends up losing if you just give the test more traffic. So it’s very, very important: at minimum 100 conversions per variation, ideally 250 or higher. And also, if you’re a high-traffic site, another factor to consider is how many days the test has run. You should not let a test run less than seven days, and you should actually test in weekly cycles.
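To see why 27 versus 32 conversions is too thin to call, you can run the numbers with a standard two-proportion z-test. This is a generic sketch, not the formula Optimizely itself uses, and the visitor counts are invented for illustration.

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test (normal approximation).
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF (math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Peep's example: 27 vs 32 conversions, assuming 500 visitors per variation.
_, p_small = z_test_two_proportions(27, 500, 32, 500)
print(f"small sample p-value: {p_small:.2f}")   # nowhere near significant

# The same conversion rates with 10x the traffic clear the 95% bar.
_, p_large = z_test_two_proportions(270, 5000, 320, 5000)
print(f"large sample p-value: {p_large:.2f}")
```

The identical 5.4% versus 6.4% split is statistical noise at 59 total conversions but becomes significant at ten times the sample — which is exactly why the absolute conversion count, not just the tool's reported confidence, matters.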
Bronson: Why is that?
Peep: The thing is that days are not brothers. Every day is different, and you have fluctuation in your conversion rate on a daily basis. Your Thursday might convert two times better, and on Friday people go partying, they don’t buy your product. So if you end your test on a Thursday, it ends on a high note — like, yeah, awesome — but if you had run it one more day, new data would have come in. So if you start a test on a Monday, you should always end it on a Sunday, so you have a full, full week. And if within the first week you don’t have 100 conversions per variation, or you don’t have 95% confidence, you let it run another seven days, and then another seven days, and another seven days. So you always test weeks at a time. Of course, for some businesses the cycle is different. You need to look at your Google Analytics traffic, and you’ll see these curves — it’s predictable. You measure that curve. Sometimes that curve is not exactly a week; it typically is, but sometimes it’s ten days, sometimes it’s two weeks. So that’s your business cycle, and you always need to test one business cycle at a time. So if you call a test early, what ends up happening is you think version B won, while in fact there’s a high chance it would have lost had you given it more traffic. Now you come up with these insights — oh well, version B won, so that means our customers are like this and this and this — while you’re actually making wrong conclusions. And then you run follow-up tests using these insights that are actually wrong, and you end up wasting many months of work and tests because your customer theory is wrong. And I see this one thing all the time with startups. They test a lot, which is great — they have the test-driven mentality — but they call the tests early all the time.
So after 12 months of running 25 tests, the conversion rate is exactly the same as it was a year ago.
Bronson: Which means something’s wrong.
Peep: Something is wrong, exactly. Even though their tests were winning, there’s no actual improvement in conversion rate, you know? That comes from implementing versions that actually didn’t win. And I work with startups sometimes where their board, their investors, are like, we need more progress — faster, faster, faster. So they feel the pressure to call tests early. So startup founders and marketers need to educate their board — whoever is cracking the whip — to explain that we can’t test faster. We test as fast as we can, but we can’t sacrifice data quality. If we call a test early and we declare the loser the winner, then we actually end up wasting more time.
Bronson: Yeah, it’s counterproductive. Now, you write that Visual Website Optimizer and Optimizely will lead you astray. And that’s partly because they have built-in limits in their software where they think a hundred visits or a thousand visits is enough to call a test winner — when really, if you just go by their set numbers, you won’t have enough data, because you’re saying you need 100 conversions on each variation before you can call it, and it needs to be at 95%.
Peep: Right, right. So, you know, of course, significance is a complicated issue. There are no absolute rules. A hundred conversions is —
Bronson: The rule of thumb.
Peep: Per variation — it’s not a universal rule, it’s a ballpark. A very good ballpark to go by. So the tools use a set of formulas, and they don’t lead you astray on purpose — although some people say they do, you know, that they want to get you addicted to testing so you keep paying the monthly fee. I don’t believe that’s true. But you just need to be a savvy tester. I understand that statistics was not your favorite subject in college, but you need to brush up — spend, you know, half a day learning about p-values and sample sizes and that kind of stuff.
Bronson: Yeah. So the first mistake was A/B tests are called early. The second mistake was they’re not run for full weeks, for all the reasons you gave — days aren’t brothers. The third mistake that you see is A/B split testing being done even when there isn’t enough traffic. And that’s kind of been a theme of our conversation so far, really.
Peep: Yeah. So if you’re an early-stage startup, you don’t have a lot of traffic, but you want to be data-driven, so it’s like, yeah, let’s run tests. But actually, your monthly conversions are like, you know, 20 transactions per month, or 30. So even if your version B is 15% better than the losing A — instead of 25 transactions per month, now you’re getting 27 — I mean, you won’t even notice that. Plus the sample size has to get really, really huge before you can call the test, meaning it could take you two more years to run this test. And that’s wasted time. As a startup, you don’t have much time — you need to do something. So as a startup, you need to calculate the needed sample size beforehand, and that will tell you how many days the test would run. There are many tools online for this — just go for it. I would say that if one test takes longer than 60 days, it’s not worth it. I usually go for 30 days, but sometimes more than 30 days is all right for a test as well. So what you have to do instead is something nonscientific — and of course, we’re in the business of making money, not in the business of science, right? So I advocate in this case sequential testing. What that means is that you have a version of your website, and you still do all the data gathering — analytics, user surveys, heat maps, interviews. You develop a hypothesis, design a treatment, and now you switch it live. So version B is now the default; version A is nowhere to be seen. And what you’re going for is an instant improvement — something that you can see right away, either in the number of signups or in the cash in your bank account. So you were doing 25 transactions per month, and now you’re doing 75 with this new one. You know it’s better. You don’t know exactly how much better — 35% or 200% — but it doesn’t matter.
It’s making you more money. And that’s why the version B, the treatment that you put out there, has to be radically different. Don’t change minor things like, oh, let’s change the color of the button or the wording of the call to action on this button — that’s not going to give you a 200% lift. You need to go for radical, bold, completely new approaches if you want to see a result that you’ll notice in the bank account right away.
Bronson: Yeah, it seems like startups think they’re Amazon, right? And they want to test little changes to where the button is or where they put the reviews, when really, as a startup, they don’t have the data to know if it matters. Instead you do, like you said, make these huge radical changes, and instantly you either have more money in the bank account or less. And that’s really the test.
Peep: Right, exactly. So you test radical approaches first, and when you find one radical approach that seems to be significantly, noticeably better than the others, then you start optimizing that one with smaller changes later on. Yeah.
Bronson: No, it makes a lot of sense. And the fourth mistake that you see is that tests are not based on a hypothesis. And you’ve mentioned that hypothesis a number of times already, and then you move into treatment. But why do you have to have a hypothesis before a treatment? What does that do for you?
Peep: Right, right. So as I said earlier, we’re improving our customer theory — validated learning, that’s what we’re after. So we need to have a clearly articulated hypothesis for a treatment. And the hypothesis is: by changing these things, I think it will have this effect on the user, which will result in, whatever, an increased number of signups. So you basically write down what you know about your customer, and your hypothesis is that these changes will minimize friction, improve the flow, and the improved copy makes it more clear why our service is beneficial — and hence, you know, more signups follow. So once you get the test results — let’s say your treatment wins — now you have an idea as to why.
Bronson: Mm hmm.
Peep: And everybody in your company — you know, it’s a small team — needs to be aware of your customer theory, what you know about the customer. So it’s not just that one guy who randomly sets up tests and says version B won, and the rest of the team is like, yeah, yeah, okay, but we have no idea where to go from here. We’re always improving the customer theory bit by bit. So let me give an example. On one site we were working on, we were first optimizing copy. We were writing different headlines and bullet points. We tried simple, straightforward language — this is the service, this is what you get — and we tried very marketing-ish language, all these benefits. And we put in the benefits that our target market told us in a survey mattered the most to them, so it wasn’t made up — they actually wanted those benefits. So we made a benefit-oriented headline, there were other versions, and we ran, I think, two or three follow-up tests on it, and the simple, straightforward-language version won every time. Okay, so now our customer theory improved: hey, a simple, straightforward approach works for this particular customer. Then what we did was, what if we cut down our page length, made it shorter, made it even simpler? We ran that test, we won — like a 45% lift or something. So your test —
Bronson: Is based on your theory now?
Peep: Exactly. And again, we learned that our hypothesis was right — making the page simpler worked. So using that insight — simple, direct, that works — all the follow-up tests were also bringing in results. Had we not had a hypothesis, and just randomly tested a button here or there, we would have ended up wasting so much time, because we’d be testing random crap that doesn’t matter.
Bronson: Yeah, it seems like the hypothesis is kind of in the same category as the user testing — watching them on the site, doing the heatmaps — because you’re actually getting insights into the why, not just the what.
Peep: Exactly. Exactly right. Because now, when you have insights and real knowledge, you’re able to make decisions at a high level and execute a direction. You’re actually leading, you’re moving — you’re not just being a slave to the tests.
Peep: And it’s not just about conversions anymore—you can make better product decisions, because you know what people want and how they want it.
Bronson: Yeah. Because when it comes to creating a new feature, you can’t A/B test big features—it’s going to take months of development. You have to bet the farm on where you’re going, and sometimes there’s no way to A/B test it. Your insights are what’s going to allow you to succeed, right?
Peep: Exactly. So it’s always about learning and insights.
Bronson: Great. The next mistake you see—and this is a great one—is that test data is not sent to Google Analytics. You know, we have data inside of Optimizely, so why do we need to send it to Google Analytics?
Peep: Yeah. So that’s the point I mentioned earlier: even if a test loses, we want to know why. Averages lie—averages are never correct—because if we have a 30-year-old male from Toronto and a 16-year-old female from California, the average is, you know, a sexually confused 23-year-old in Iowa or something.
Bronson: That’s awesome.
Peep: So averages lie. We need to go deeper and see what’s happening in segments. Segments like new versus returning visitors; segments like traffic sources—came from Google search, came from this pay-per-click ad, came from this landing page; segments like demographics, if you have that switched on in Google Analytics; or a custom segment of returning buyers. Lots of segments. And you want to see how version A and version B performed across the different segments. Because when you see that on average your version B won—or maybe lost—but it was especially high for returning visitors or for people who come from your pay-per-click campaigns, that’s an insight. Hey, that copy we have on all our AdWords ads seems to be working and driving qualified traffic—so why don’t we use the same copy across mediums?
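The segment analysis Peep describes can be sketched in a few lines. This is a minimal illustration with made-up data (the variant/segment labels and numbers are hypothetical, not from the interview): the point is that an overall winner can hide a segment where the other variant did better.

```python
# Hypothetical A/B results broken down by segment, the way Peep suggests
# viewing test data in Google Analytics instead of trusting the average.
from collections import defaultdict

# Each record: (variant, segment, converted 0/1) — made-up data.
records = [
    ("A", "new", 1), ("A", "new", 0), ("A", "returning", 0), ("A", "returning", 0),
    ("B", "new", 0), ("B", "new", 0), ("B", "returning", 1), ("B", "returning", 1),
]

# (variant, segment) -> [conversions, visitors]
totals = defaultdict(lambda: [0, 0])
for variant, segment, converted in records:
    totals[(variant, segment)][0] += converted
    totals[(variant, segment)][1] += 1

for (variant, segment), (conv, n) in sorted(totals.items()):
    print(f"version {variant} / {segment}: {conv / n:.0%}")
```

Here both variants tie on average, but B wins decisively with returning visitors—exactly the kind of insight an aggregate number hides.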
Bronson: Yeah, that makes a lot of sense. The next one, mistake number six, is that precious time and traffic are wasted on stupid tests being run. What’s a stupid test, so we don’t do this?
Peep: A stupid test is where you do the spaghetti method—let’s just test something. What if we make the button green? So now you test colors forever. I mean, colors are never going to give you a lift. You’ll find color test case studies on a Google search, but it’s always a no-brainer, because it’s about visual hierarchy: does your button stand out or not? If your site layout is green and your button is green, it doesn’t stand out; by making it red or whatever other color, it stands out more. So if you had tested the green button versus the red button, the button that stands out more would have won. But you don’t need to test that—just implement it right away. So a stupid test is a random idea. Don’t test random ideas. And you also need to know which page to run tests on. That comes down to funnels, and hopefully everybody has a funnel set up in their Google Analytics, because then you can see the drop-off per step. Where is the biggest drop-off? That’s where the problem is, because every website is like a leaking bucket, and you need to know where the holes are. And also, you want to test closer to the money—so not your home page first, but your checkout page first.
Bronson: Because they got further down the funnel, so you’re optimizing your most qualified traffic there.
Peep: Right. Say on your checkout page right now you have 75% of people putting in their credit card and paying you. If you could increase that to 90%—and it’s actually very typical for transaction pages to do 95% conversion—that’s a way bigger impact on your bottom line right away than testing the headline on your home page.
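Peep’s 75%-to-90% example is easy to put numbers on. A back-of-the-envelope sketch (the traffic volume and order value below are made up for illustration) of why the checkout lift hits the bottom line directly:

```python
# Hypothetical numbers: 1,000 visitors reach checkout each month,
# average order value $50. Lifting checkout completion from 75% to 90%
# (Peep's example) converts the same qualified traffic into more revenue.
visitors_at_checkout = 1000
order_value = 50.0

revenue_before = visitors_at_checkout * 0.75 * order_value
revenue_after = visitors_at_checkout * 0.90 * order_value
extra_revenue = revenue_after - revenue_before
print(f"${extra_revenue:,.0f}/month extra from the same checkout traffic")
```

A home-page headline win, by contrast, only adds value to the extent those extra visitors survive every later step of the funnel.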
Bronson: Yeah, although the headline seems like it’d be the place to start, because that’s just what you see when you come to the site, you know?
Peep: Yeah, and of course there are multiple approaches here. Some people say that if you have a funnel, you should start testing from the beginning of the funnel, so you get more people in and drive them down. But really you need to see what the drop-off rates are at each step. Say your funnel is home, features, pricing, enter credit card, done—you look at the drop-off rate at each step. If on the credit card page I see 75% converting to paying, I’d say, oh, this is where we test. The home page will of course have the most traffic, so it’s easy to test there, but you need to assess the drop-off rates for each step individually.
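The per-step analysis Peep describes can be sketched directly. The funnel steps match his example; the visitor counts are invented for illustration. The step with the worst drop-off rate—not the step with the most traffic—is where to test first:

```python
# Made-up funnel counts for Peep's home -> features -> pricing ->
# credit card -> done example. Compute each step's individual drop-off.
funnel = [
    ("home", 10000),
    ("features", 5000),
    ("pricing", 2500),
    ("credit card", 750),
    ("done", 600),
]

drop_offs = []
for (step, n), (_, n_next) in zip(funnel, funnel[1:]):
    rate = 1 - n_next / n  # share of visitors lost leaving this step
    drop_offs.append((step, rate))
    print(f"{step}: {rate:.0%} drop off")

worst_step, worst_rate = max(drop_offs, key=lambda pair: pair[1])
print(f"test first on: {worst_step}")
```

With these numbers the home page leaks 50% but the pricing step leaks 70%, so pricing is the hole to patch first—even though the home page has far more traffic.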
Bronson: Those are great insights. The next one you talked about: they give up after the first test fails. So you say tweak it, redo it, try again, because the second one might win.
Peep: Keep improving your customer theory and run follow-up tests, because your hunch might be right, and the loser might become a winner if you just change some little things—tweak the copy, add something, remove something.
Bronson: Yeah, it seems like it’s a patience game. It’s not move fast and break things—it’s really slow, like watching paint dry—but you get real results, isn’t it?
Peep: It is, it is. Because even guys with a lot of traffic—I have customers that do millions of uniques a month—even they are running a maximum of four tests per month, because you can’t run tests with overlapping traffic. Of course, if there are isolated areas within your site, then you can run multiple tests at the same time.
Bronson: Yeah, that’s actually one of your mistakes: running tests on overlapping traffic. Now, why is that? Say I try to test on this part of the site and that part of the site with the same traffic—what’s the problem there?
Peep: So, yeah, take a typical SaaS site. You run a headline test on your home page, and then some test on the credit card payment page and the features page. Seems reasonable—tests are running. But what’s actually happening is that for the people who finally end up on the credit card page, you don’t know which home page they saw, and you don’t know which pricing page they saw, and so on. So there is actually no learning here. Your test data is skewed because there are too many variables.
Bronson: Yeah, so you don’t know what made the change—version A or version B—exactly right.
Peep: Right, because what they saw on each step before paying matters to the conversion. Sometimes—let’s say we have a four-step funnel—version B gets more people from the home page onto the second step, but version A actually gets more transactions at the end. That happens. So you need to look at the whole funnel here. That’s why Optimizely has a great feature called multi-page experiments, where you can change multiple pages at once—meaning we’re testing a completely new funnel versus the old funnel.
Bronson: So users need to see one consistent funnel.
Peep: Right, yeah, exactly. Users see either only the old funnel pages or only the new funnel—there’s no mix and match here. So that is a great way to test more pages at a time.
Bronson: That really helps us understand why, and then how to do it. Another mistake you mentioned is that people don’t understand false positives. And what you just said leads to false positives, doesn’t it, when they test a lot of things?
Peep: If you have three tests running with overlapping traffic, there’s a very high chance that your winner is actually a false positive—not an actual winner. Another way it happens is when you’re doing A/B testing but you get greedy: hey, why stop there, why not do A/B/C/D/E/F/G testing, many more versions at a time? The problem with that is that with every additional variation you add, you increase the chance of false positives. That’s just how statistics work. So I recommend that you limit yourself to either A/B or A/B/C testing, because then you learn faster—your test duration is shorter. Say B wins: you look at the segments, you improve the customer theory, and you can create a better follow-up test. Otherwise, if you test five or six different variations, even if you get a real winner at the end, the test takes that much longer, and you could have learned so much more in between.
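The statistics Peep is pointing at—more variations, more false positives—follow from the multiple comparisons problem. A quick sketch at the usual 95% confidence threshold (5% false positive rate per comparison):

```python
# With a 5% per-comparison false positive rate, the chance that at least
# one of k variation-vs-control comparisons is a false positive grows as
# 1 - (1 - alpha)^k (assuming independent comparisons).
alpha = 0.05  # per-comparison false positive rate at 95% confidence

for variations in (1, 2, 5, 9):  # A/B, A/B/C, six-way, ten-way tests
    family_error = 1 - (1 - alpha) ** variations
    print(f"{variations} variation(s) vs control: "
          f"{family_error:.0%} chance of at least one false positive")
```

So an A/B test keeps you at the advertised 5%, but a ten-way test is closer to a coin flip on whether some “winner” is noise—on top of taking far longer to reach significance.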
Bronson: Especially with your threshold of wanting 95% accuracy—if you’re going to test ten different variations, that’s going to take forever to be sure that one of them is going to work.
Peep: I don’t recommend that. Yeah.
Bronson: And then mistake number ten is ignoring small gains. Something is 3% better, and we’re 95% sure it’s 3% better, but they don’t want to implement it because it takes time—and who cares about 3%, right?
Peep: Mm hmm, exactly. When you’re just starting out, you can go for the really big wins—a 100% win or a 50% win. But eventually your site is going to be pretty good, after you’ve done this for a while, after you know quite a bit about your customer and have run tests and so on. Then you stop achieving those 25% lifts—the law of diminishing returns kicks in—but the testing is not over. Now you start getting wins like 4%, 5%, 8% improvements. And compounding interest: the way it works is that if you improve your site’s conversion rate 5% per month for 12 months, that’s an 80% uplift for the whole year. 80% is a lot. So—law of compounding interest. Don’t ignore the small gains; implement them and just build on it.
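Peep’s compounding arithmetic checks out: twelve consecutive 5% lifts multiply rather than add, landing at roughly 80% for the year.

```python
# Compounding small conversion wins: twelve monthly 5% lifts
# multiply to (1.05)^12, not 12 * 5% = 60%.
monthly_lift = 0.05
months = 12

annual_uplift = (1 + monthly_lift) ** months - 1
print(f"{annual_uplift:.0%} uplift over the year")  # ~80%, as Peep says
```

The same logic is why skipping a “mere” 3% winner is expensive: each small implemented gain becomes the new baseline the next test builds on.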
Bronson: That leads into the next mistake, which is not running tests all the time. If you test all the time, it’s okay that the gains are small, because you’re always adding to that pool of gains.
Peep: Exactly right. Some tests take a long time to implement—if you’re testing a completely new design, it takes time for the designers to do it, and so on. So meanwhile, while you’re waiting for the big tests or designing treatments, you can test small but influential stuff. Unlike a big change, it’s really easy to set up a copy test. But think about the hypothesis—try to understand which language speaks better to the customer. Because traffic is a very precious resource, and we’re after learning here. Every day that you have traffic coming to your site that is not going through a test, you’re not learning—you’re just wasting the traffic. So you should always have a test going, and know that not every test has to be complicated; easy tests like testing copy can lead to very great insights.
Bronson: Yeah, that’s great. Thank you so much for walking us through those 11 mistakes—this has been such a great interview. I have two final questions for you. The first one might be the wrong question to ask, so forgive me if it is; you can tell me if it doesn’t have a place. In your experience with conversion optimization, is there a simple thing that a lot of people can do to get gains? I guess I’m asking, is there a silver bullet to start with—one thing that’s almost always wrong, that’s pretty simple to fix, and that usually comes out pretty well?
Peep: Yeah, there is one thing that is kind of universal, and that is the clarity of your value proposition and your offer on your home page or any major landing page where traffic is entering your site. Do users understand what the site is about, what they can do here, and why they should do it? Sometimes it’s like, “great local marketing—sign up here.” I mean, sign up for what? A confused mind hesitates, and by improving clarity in your copy—your headline, your subheadline, whatever copy you have there—you win, because clarity trumps persuasion, and clarity leads to higher conversion. So the easiest thing to do is go to Google Analytics, open up the landing pages report, look at your top ten landing pages, and go to those pages as if you were cold traffic that has never been to your site before: do I understand what the site is, what I can do here, and why I should do it?
Bronson: That’s great. What was it—“a confused mind hesitates”—is that the quote?
Peep: Yeah. Yeah, exactly.
Bronson: You know, it reminds me—I remember somebody talking about why the Gilligan’s Island intro theme music always sang the entire story up to that point. It talks about their travels, it talks about them being shipwrecked. They asked the creator why that is—he had multiple shows and they all did that—and he said, “confused people can’t laugh.” So he had to catch them up on everything that happened before that episode every time, and that’s why his theme songs recap the whole experience. And it’s kind of the same thing: if they’re confused when they’re on your landing page, they hesitate. They can’t get in the flow of what’s going on, because they’re always wondering, well, what does that mean?
Peep: Right. And people don’t bother to figure it out.
Bronson: If you give them work, they can’t laugh, they can’t buy, they can’t do anything productive. Don’t give them work; tell them exactly what you’re going to do for them, as clearly as humanly possible. The last question I have for you: what’s the best advice you have for any startup that’s trying to grow? It’s a vague question—you can take it any direction you want.
Peep: Yeah. Best advice—I guess I’m going to have to go with the typical “talk to your customers” answer here, because that is where you get the best insights. In-person interviews and in-person user tests—that’s the way to get the biggest insights for improving conversions and getting more customers.
Bronson: That’s awesome. Peep, thank you so much again for coming on Growth Hacker TV.
Peep: My pleasure. If you have any follow-up questions, come see me at ConversionXL.com.
Bronson: Yeah. Incredible blog with great content. Thanks again.
Peep: Thank you.