Shaun works at Atlassian where he has created a machine for testing product features that drive growth. In this episode he talks to us about what growth hacking is, the scientific method, and machine learning.
→ Atlassian is an Australian software company that makes tools to help product teams plan, build, and launch software
→ The most famous products are JIRA, Confluence, and GreenHopper
→ Been around since 2002, profitable for 40 quarters, with 25,000 customers in 130 countries
→ Growing revenue at over 35% year on year, exceeded $100 million a year in bookings
→ Over 450 staff with hockey stick growth
→ Growth hacking is a process of finding product features or acquisition channels that increase growth in users, engagement, or revenue per user
→ It is a rigorous, scientific process that is data-driven and not based on opinions
→ It is not just a series of tactics or ways to game systems, but a method for understanding the process of user engagement and growth
→ And a whole lot more
Bronson: Welcome to another episode of Growth Hacker TV. I'm Bronson Taylor and today I have Shaun Clowes with us. Shaun, thanks for coming on the show.
Shaun: Morning, Bronson. Great to be here.
Bronson: Absolutely. You say good morning because you're in Australia, while here it's actually the end of my workday, so we had to pick a time late in my day to make the timezones work. But thank you so much for coming on the show. You are currently a senior product manager at Atlassian, a company that has a few wins under its belt, and you focus a considerable amount of your energy on growth hacking. So I think it's going to be kind of a perfect conversation today, where you're doing growth hacking for a company that is acquiring users like crazy. But first, tell us: what is Atlassian? And tell us about some of your more well-known products.
Shaun: Sure, sure. So Atlassian is an Australian software company, as you mentioned. What we do is make tools that help product teams in any given company plan, build, and launch software. We're most famous for two tools: JIRA, which is our issue tracker, and Confluence, which is our team collaboration and knowledge-sharing platform. We also have another quite major product called GreenHopper, which is an agile planning tool used by agile teams.
Bronson: Yeah, absolutely. Now, you all have had, like I said earlier, just an incredible amount of growth as a company. I don't know what you disclose in terms of numbers, but give us some numbers. Give us an idea of how big you are in terms of staff, revenue, capital raised (I think you just closed a round), or users. What can you tell us about the size of Atlassian?
Shaun: Sure. Well, we've definitely been growing pretty incredibly. We've been around since 2002, so we're probably not really what you'd call a young company. We were founded on credit card debt, you know, the traditional bootstrapping approach to the world. But we've been profitable for 40 quarters, so that's ten years. We have over 25,000 customers in 130 countries. We're growing revenue at over 35% year on year, and we've exceeded $100 million a year in bookings. We have over 450 staff. The growth rates are astonishing; when I joined just a little while ago, there were half as many staff as there are now. So, you know, the whole story of hockey stick growth, basically.
Bronson: Yeah, it's quite a story. And in a minute we're going to talk about how you actually growth hack at Atlassian, because it's going to be really interesting to see how you're producing that growth. But before we do that, I want to talk about growth hacking in general, because, like we were saying before the show, not many people can talk about it philosophically. They can tell you how they're doing it for their product, but they can't really zoom back and talk about it as a whole. That's why I want to dig into this a little bit. So, in your opinion, how do you define growth hacking? What is it?
Shaun: So in my mind, growth hacking is a really explicit search for product features on one hand, or acquisition channels on the other, that must lead to a form of growth. And when I say growth, I mean any combination of the number of users, the engagement of those users, or the revenue per user. What I like to emphasize about growth hacking is the rigor of it. It really is a process, it should be reproducible, and it should be something that you can develop as a core competency. It's not a series of tactics; it's not just ways to game systems or take advantage of loopholes. It's a rigorous, scientific process, which means it's extremely data-driven; it's not really about opinion. To some degree, the growth hacks you develop are a byproduct of the process of understanding your users.
Bronson: When you talk about a process, it kind of reminds me of the scientific method. There's a method that you run things through, and yeah, you might have results as the end product of an individual experiment, but there was a process that led to that experiment. It's not just about the end results being applied wherever you want, however you want, without understanding how you got there.
Shaun: Yeah, that's exactly right. That's exactly how we do it here. Literally everything that we do has a hypothesis, so it is literally the scientific method you're talking about. What happens is we use various data; data is both the beginning and the end of the process. You use the data you have about customers, their behavior, their interests, and what they want, and you use that to form a series of hypotheses that you think might lead to growth in the number of users, engagement, or revenue. Then you execute the experiment. And most of the time it doesn't work. It doesn't prove your hypothesis, and it doesn't even necessarily disprove your hypothesis, because most of the time when you execute an experiment you don't get a statistically significant result at all, and that's frustrating. But on the other hand, because it's a rigorous process, when you do get a result you absolutely know that the result is real. And what's even better is that as you get better and better at this, you can execute more experiments. So the competency you're developing is not a series of tactics; it's the ability to do this repeatedly. And it becomes quite a significant competitive advantage, I would argue, because you can just move faster than everybody else.
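Shaun doesn't describe Atlassian's statistics tooling here, but the check he is alluding to, deciding whether an experiment produced a real result rather than "no result", is commonly done with a two-proportion z-test. A minimal sketch in Python; the conversion counts are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for conversion counts of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: control vs. variant sign-up conversion.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z={z:.2f}, p={p:.3f}")
```

With a p-value at or above the chosen threshold (0.05 is typical), the experiment is a "no result": the hypothesis is neither proved nor disproved, which is exactly the frustrating outcome Shaun describes.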
Bronson: Yeah, for sure. Now, I think this is going to be kind of obvious, but I want to ask it anyway. How is growth hacking different from, let's say, guerrilla marketing, or even just normal product development? What's special about growth hacking?
Shaun: Yeah. This is actually something that I really try to emphasize to people, because a lot of the most common, or most outlandish, anecdotes about growth hacking are all about user acquisition and using guerrilla marketing tactics to achieve it. The problem is that you're confusing the outcome of the activity with the activity itself. The outcome is a set of guerrilla marketing tactics that some people have found. The most obvious examples are the ones people always mention: Airbnb using Craigslist, Zynga using Facebook. What they did there was a growth hack, but the growth hack they ran was realizing that their growth constraint at the time was acquisition. Then they went out and conducted a search, found a good way to acquire a lot of users very quickly, and executed it. Now those tactics no longer work, which is kind of the nature of the beast, but you shouldn't go, okay, that means growth hacking has run its course. Some people say, oh, they've tried all these tactics, now they're just spamming everybody. But just because those particular tactics don't work anymore doesn't mean that tomorrow we won't see a whole bunch of amazing, even better acquisition growth hacks, because it's the search, not the destination, that matters. The other thing to emphasize is that guerrilla marketing is all about cheap user acquisition: reaching a target audience cheaply, preferably for free, usually by exploiting some loophole somewhere. And that's cool.
But you know, you can grow at any point in the funnel. The focus, particularly in the startup community, is almost wholly on acquisition. But if you look at the other stages, like activation, retention, revenue, and referral, all of those are great candidates too, and for some people that's actually where the problem lies. You've probably heard Aaron Batalion, one of the founders of LivingSocial, talk about the fact that they got a million users with no great difficulty and then went, "What do we do?" So their growth problem was absolutely not getting users; it was tackling the activation, revenue, referral, and retention stages. So you can definitely do more than just acquisition and guerrilla marketing. And I guess the reason I also like to compare and contrast it with normal development, and I've kind of mentioned this once already, is that growth hacking is the very explicit search for growth avenues, either through channels or through product features that cause people to like your stuff more. Traditional product development, if you boil it down, is all about, as people normally say, solving customer pain or solving user pain. But the problem is that building a feature that solves a user's pain doesn't guarantee that anyone will ever use it or ever see it, and in fact, in a lot of cases, a lot of features are never used. So you may be solving one person's pain, but you're introducing pain for a whole bunch of other people who have no idea what this additional widget sitting on the screen is, and you're making things worse for them by complicating it. So I kind of see growth hacking as the explicit and scientific extension of that.
And, you know, I also think it's kind of interesting to look at this with hindsight, which I certainly think we do here at Atlassian. You look back on the Agile movement and you go, okay, that's obviously awesome; it's much better to be able to iterate quickly, to deliver what customers want, and to be sure it's what they want. And I feel a little bit that way about growth hacking. I look back and I go, I can't believe we weren't explicitly looking for ways to grow the number of users, revenue, and engagement. Why was it nobody's job to do that? Clearly that's something we all should be doing now.
Bronson: Those are phenomenal points. I hope people just go back and listen to the last 5 minutes over and over, because I think you've summed up growth hacking, and compared it to the scientific method, as well as anybody could. I think you've hit upon the heart of what growth hacking is, and I think you're absolutely right: it's not going away. I can't possibly imagine a future where someone is not in charge of growth at any meaningful company. I mean, why would we ever return to waterfall development? Why would we ever return to not growth hacking, to not thinking about our funnel in very scientific ways? It would be a return to the Dark Ages, which would make no sense whatsoever. So I agree 100% with you there. Let me ask you this: when does it make sense to try growth hacking? For the people listening to this, when should they try it, and when should they stay on the sidelines?
Shaun: Yeah. So this is one of those things people often ask me at every conference. I think the standard answer, the playbook answer, is that you have to have achieved product-market fit before you can really do this. Because up until that moment, you might be acquiring people or getting them to do whatever it is you're asking, but you still don't have something that they desire, and you can't really build anything from that. But I would say that as soon as you've achieved product-market fit, you really, really, really need to have somebody who, at least part time, is thinking about this. Because once you can put your problem in a framework, for example where your growth problems lie in the funnel, then you can attack them. Up until that moment you're kind of flying in the dark. You go, let's do a bit of work here, let's do a bit of work there, and you're never really sure whether your initiatives are in fact pulling in opposite directions. So I'd say that growth hacking, at least as a part-time thing, should be something you look at very early on, as soon as you're sure you've got something people want.
Bronson: Yeah. Now, let me ask you this: how does growth hacking actually work in terms of leveraging it? Talk us through that process a little bit.
Shaun: Oh, okay. Cool. Yeah, as I said, we have a very explicit process and a very explicit framework for thinking about growth hacking here, so I wanted to give you a bit of a breakdown of what that looks like. As I've said, growth hacking is the way we talk about an explicit search for growth avenues across different parts of the funnel. So the first step is understanding where you're going to get bang for your buck. You need to understand your funnel, whatever it might be. Typically you're trying to move your users through a series of states. In other words, you might want to move them from having just signed up to having used the best features of your software, and from there you might get them to refer somebody, if that's your current constraint on growth. Once you understand where you can get bang for your buck, you have to use all of the data available to you. And in this area growth hacking is, as you were putting it earlier, a no-brainer, because we're in a big data, agile development, lean startup world, and growth hacking brings all of that awesome stuff together and lets us really leverage it and make it actionable. So in this case, once we understand whatever our problem is, we take all the data we have available to us, every single bit of analytics we can get our hands on, crunch it through the machine learning and all that cool stuff everybody is really into right now, and develop an understanding, or at least a set of hypotheses, about what it is that's stopping people from getting from one state to the next. Then we try to execute as many different tests as we possibly can.
So in our case, we will try to generate 20, 30, 40 different hypotheses, and then you just want to execute them as quickly as you can. You want to build something to test each hypothesis, then run an A/B test, a multivariate test, whatever type of testing takes your fancy, execute all of those, and let the data talk for itself. And you really want to be bold. The way I think about it is that any result is a good result, other than no result at all. So when you're executing your experiments, if you have 20 or 30 of them, you want those experiments to be reasonably, I guess, forceful is the way to put it. If you believe something to be true, then you want to test it in the most obvious way you possibly can. You really want to get it right in front of the user, because it would be better to see that users hate it, that it clearly gives you the opposite result of what you're looking for, than to see that it does nothing at all, which neither proves nor disproves the hypothesis. And then the other thing you need to do in order to be truly rigorous is to make sure you don't just measure the very immediate outcome, which is something I see people do often. They go, okay, I want to drive referrals. Say they want people to click an email button that shares something in the product with another person over email, and then they're going to email that person and put a little thing in the footer like Hotmail did with "P.S. I love you," and it's going to be great. Right?
Shaun: And that's okay; it might well work. But you need to be very careful when you're executing experiments like that that you're not only measuring the direct outcome, which is the use of that feature. Because if you push people hard enough, they will do it, since they don't really have a choice, but they might well hate you for it, right? So you always have to have a series of other measures, preferably as objective as you can possibly make them, that will tell you if you've in fact killed the experience rather than gotten the positive result you were hoping for, even though it looks like it on the surface.
Bronson: In the end, you have to see how different parts of the funnel interact with each other, because, yeah, you may force them to use that feature, but then your revenue might plummet. So you've got one number going up and to the right, and another one that's also very important going down and to the right. It's always a balancing act: really look at the whole funnel as one big equation and see how everything affects everything, not just the immediate thing right in front of you, right?
Shaun: Yeah, absolutely. Absolutely. And I guess, to some degree, there's one thing the funnel doesn't capture, at least not the funnel that everybody talks about, which is the idea of happiness. I understand there's retention, and retention might be seen, to a degree, as happiness. But you really need to be careful that you have some idea of what you think a regular user's behavior is. If you do, then you can compare apples to apples. If you don't, then even funnel and cohort analysis can hide reasonably significant problems, because it only tracks movement from one state to another. So you need to be careful that you're not, in fact, taking people who are already in the state you want them to be in and causing them to regress back up the funnel.
Bronson: That makes a lot of sense. Now let's dive into some specifics about how you've actually applied growth hacking at Atlassian. First, you mentioned this a little bit, but does growth hacking, in your opinion, work on B2B products? Because that's what Atlassian is: a whole suite of B2B products. Does it work in that context as well as B2C? Is it the same, or totally different? How do you see that?
Shaun: Yeah, the first answer is yes, it definitely works. The challenge is that we're still kind of struggling to understand all of the implications of growth hacking in B2B. If you think about it, there are two main differences for us, and the first one is that there's a whole extra dimension. In a traditional B2C product, your goal is to get the user, that one person, from one state to another. In our case, our goal is to get a set of people inside a company to like Atlassian. That might be one decision maker and three users, or three decision makers and ten users, or a hundred people. That level of interaction makes it quite a lot more complex to understand what it is that makes people happy and drives growth. And in our case we have it even a little bit harder because we have a suite of products, Confluence, JIRA, GreenHopper, and so on. So when a user is working with the software, it's not even necessarily clear what they're here for. Are they here for collaboration, or for issue tracking, or for agile planning? That adds a whole bunch of other dimensions. But if you look at it from the other side, when you have these ambiguities, that's where the gold lies. If you can answer these questions, then you're almost certain to find a pot of gold on the other side: you're going to find great growth hacks.
Bronson: Love it. So it does work with B2B. It has a new set of questions that have to be answered, but there's a lot of room for growth precisely because those questions have to be answered. I totally agree. Now, give me some specifics. What kinds of growth hacking do you find yourself applying at Atlassian? Maybe you can even give us specific anecdotes, like, with this product I tried this and this happened, or with that product I tried that and that happened. Talk me through some of the things you've done there.
Shaun: Sure, sure. So my team specifically, my growth hacking team, looks at in-product experiments. Basically, this is the point in the funnel after a company has signed up to try out a hosted version of the software. We do have other people running experiments higher up the funnel, on acquisition and so on. But because we're inside the software, most of our experiments are about trying to ensure that the new users who come into our software understand it, that they become engaged, and that they stay active with us over a period of time, basically that they remain our customer. We also look at ways to encourage them to try other things we offer that might be useful for them, or to get other people inside the company to want to use the software, to make the collaboration platform expand. On the engagement front, like I said, we try to be rigorous, so we use a mathematical model. I mentioned machine learning earlier. We use a mathematical model of all the events that occur in the first seven days that somebody is using the software, and it predicts for us, as a number between zero and one, the likelihood that this person will remain our customer for a long period of time.
Bronson: Based on the actual events they trigger through their activity. Okay. So if they get, like, 0.8, they're probably going to stay around for a long time; if they get 0.1, they're probably not.
Shaun: Yes, absolutely right. The advantage of this model is that it gives us an extremely objective way to measure the outcome of what we do. When we run an experiment, we can generate this number for the people who saw the experiment and for the people who did not. If the number is higher for the people who saw the experiment, that means more of them will remain our customers over the long term. And the implication is not just that they remained a customer, which is great; you also have to assume that people who are going to stay with you longer like your software. So the goal here is, in fact, to make people love your software, to make really great, engaging software; the way we measure it just happens to be whether they remain your customer for a long period of time. And this is really objective, so pretty much any given experiment we run can be measured using this model. There are some types of experiments for which it doesn't make sense, but it means there's no opinion in the outcome, no discussions about whether or not an experiment worked.
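Atlassian hasn't published this model, so the sketch below is only an illustration of the shape Shaun describes: a logistic score over counts of first-seven-day events that lands between zero and one. The event names and weights are invented; in practice the weights would be learned from historical churn data rather than hand-set:

```python
from math import exp

# Hypothetical per-event weights, as a trained model might learn them.
WEIGHTS = {
    "issues_created": 0.30,
    "pages_edited": 0.25,
    "teammates_invited": 0.60,
    "sessions": 0.10,
}
BIAS = -2.0

def retention_score(first_week_events):
    """Map counts of first-seven-day events to a 0-1 retention probability."""
    s = BIAS + sum(WEIGHTS.get(e, 0.0) * n for e, n in first_week_events.items())
    return 1 / (1 + exp(-s))  # logistic link keeps the score in (0, 1)

active = {"issues_created": 8, "teammates_invited": 3, "sessions": 5}
idle = {"sessions": 1}
print(round(retention_score(active), 2))  # 0.94: likely to stay
print(round(retention_score(idle), 2))    # 0.13: likely to churn
```

Evaluating an experiment then reduces to comparing the mean score of the users who saw the variant against those who didn't, which is the objective comparison Shaun describes.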
Bronson: A few follow-up questions, then we'll get to the anecdotes, because I think what you just said is so interesting that I want to dive in for a second. One thing this does, it seems, is really shorten the time frame needed to know if something was successful, because in a sense you already have historic data behind that zero-to-one number. You don't have to wait four months after a new feature ships to know if it's successful. Is that right?
Shaun: Oh, absolutely. In our case, most experiments would be developed in about 4 hours between deciding what we're going to do and having it built. Deployment is pretty much instantaneous. And then, basically, the model takes, you know, seven days to calculate, but it can be calculated periodically through the experiment and again at the end of it. It's all automated infrastructure. We have an experiment controller that we call Houston, and if you go to Houston, it will show you the experiment: it describes what the experiment was and shows you all of the results, all the great mathematical stuff, standard deviation, mean, which events were triggered, whether you managed to drive people to do what you were hoping they would do, and then also whether it moved the model. So it certainly makes things easier, and we don't have to wait a long period of time to know whether or not something is going to work.
Bronson: Yeah. You categorize this as machine learning, so I'm assuming it's actually taking the data that's being produced and rolling it back into the equation on an ongoing basis. Is that right?
Shaun: Yeah. So what happened was, we generated this model using a series of different machine learning algorithms over the historical period. We obviously have a lot of historical data on whether or not people stayed our customer. So we built this machine learning model, and then, as time goes on, we periodically take all the new data we have, feed it back into the machine learning pipeline, and regenerate a new model, and that model becomes the active model. And we also have something that compares real life, what actually happened, with what we predicted.
Bronson: So you can validate all of your predictions.
Shaun: Yeah, and in some cases what you find is that something we thought was not successful actually was, because realistically the model is only so accurate. And that's a good moment, right? When you happen upon an accidental success, that's an additional insight.
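The validation loop Shaun describes, comparing what the model predicted with what actually happened, can be sketched as a simple calibration check: bucket historical predictions and compare each bucket's mean predicted score with the retention rate actually observed. All of the prediction and outcome data below is fabricated:

```python
# Sketch of validating a retention model against reality: group historical
# predictions into buckets and compare the mean predicted score with the
# retention outcome actually observed. All data below is fabricated.

def calibration(predictions, outcomes, buckets=5):
    """Yield (mean predicted score, observed retention rate) per bucket."""
    pairs = sorted(zip(predictions, outcomes))
    size = len(pairs) // buckets
    for i in range(buckets):
        chunk = pairs[i * size:(i + 1) * size]
        preds = [p for p, _ in chunk]
        obs = [o for _, o in chunk]
        yield sum(preds) / len(preds), sum(obs) / len(obs)

preds = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
stayed = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]  # 1 = remained a customer
for predicted, actual in calibration(preds, stayed, buckets=5):
    print(f"predicted {predicted:.2f} vs observed {actual:.2f}")
```

Buckets where the observed rate strays far from the mean predicted score are where the model is "only so accurate", and where the accidental successes Shaun mentions tend to show up.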
Bronson: Yeah, the fact that you built in these fail-safes to know whether the model actually reflects reality, that's a level beyond what I've heard of companies doing, to be quite honest. I'm kind of surprised, because I think about machine learning a lot and I enjoy learning about it, but I've never had somebody come on the show and say they're building machine learning into their growth hacking itself. That's a new insight, and I've never seen anybody do it. And then the fact that you're comparing the machine learning algorithm to reality after the fact, to see how well your process actually works... no wonder you're growing like you are. It makes sense now. So let's get to some anecdotes. Tell us some of the biggest wins you've had using these incredible tools.
Shaun: Sure, sure. You know, we've had a lot of different successes, but to some degree I actually appreciate the counterintuitive results a little bit more than the successes where you drove, you know, X amount of growth. That's because when you let the numbers speak for themselves, you often find out that what you knew was wrong. One that I thought I would mention is related to our agile planning product, GreenHopper, which, as I mentioned earlier, is basically an add-on to JIRA that lets teams do beautiful, visual agile planning. We know that lots of people love it; a lot of people buy it, and it has a very high attach rate to JIRA. But we thought we would run an experiment anyway for people who did not have GreenHopper: when they created a new project in JIRA, we gave them the opportunity to see what the agile options would be if they had GreenHopper in their instance, and if they were interested, it said you can enable GreenHopper by doing the following things. Now, a lot of people might call that an upsell type of experiment; you're trying to encourage people to try something else, and to some degree there's a lot of cynicism around that. People are like, oh, you're just trying to push people into something. But the results of that experiment were completely the opposite, which is what I find so incredible. Yes, it is true that we basically doubled the rate at which people would add GreenHopper onto JIRA. That's an amazing outcome by itself.
But the bit that I loved the most was that not only did we achieve that particular growth outcome, it turned out that the mathematical model I talked about, the engagement model, showed that we actually increased the likelihood that those people would stay with us by over 10%. So I guess the way I think about this is that we were doing the customers we did not tell about these things a disservice. I look at our behavior and go, we were kind of making life harder for them by not telling them about this. That's obvious, because once we told them, they liked us for having told them, and they wanted to remain a customer.
Bronson: That’s such a good insight, because so many product people who are purists feel like anything you do to increase growth or retention is somehow always at the cost of the user, that everything has to be borderline spam. But it’s not. So many times, what we do to grow our product is actually what the end user wants us to do, for reasons other than growing the product. So they go hand in hand more than people realize. For every company that spams its users, there’s a bunch of other companies doing things that are in line with the interests of their users, right?
Shaun: Yeah, I completely agree. And the bit that I love about the model I’m referring to is that in the end it is a proxy for people’s engagement, for how much they like the software. It’s not a proxy for anything else, right? So if it goes up, then, okay, there could be arguments that in some way there might be some landmine somewhere that you’re stepping on as well, but most likely the answer is you have made them happier. So I guess my point of view is we should all stop forming opinions and going, oh yeah, that is spam, that is too far, that is bad for the customer. I’m like, let the customers speak for themselves. If you trust the data and you have a rigorous approach, then you don’t need to ask these questions or second-guess yourself. Just let the numbers speak, you know? Yeah. So another interesting anecdote I would share with you is around email. This is a little bit of a weird example, but I wanted to mention it because I was stunned by the outcome. A lot of people have had a lot of success with growth email; we haven’t done very much of it here. But in one case, we sent customers an email about their instance of whatever their products were. We gave them some statistics on how they’d been doing, the various actions that had been happening in their instance, and then we compared them. We said, compared to other people who are like you, you are 20% more active or 30% less. You might have seen these sorts of emails; they come from Salesforce, for example. And we thought this might inspire people to try out things they hadn’t previously used, which might be great, and it also might inspire them to drive up engagement inside their organization and evangelize to their colleagues.
And we only sent this out to not very many people, maybe a couple of thousand, and we included two big buttons at the bottom: this is a test, how much do you like it, would you like to receive it again, yes or no? And what I couldn’t believe was that this email got an open rate of over 40%. I mean, an unbelievable open rate for an out-of-the-blue email. And then 98% of the people who clicked a button chose yes. Wow. So some people might say, oh, you spam, blah, blah, blah, you’re going to annoy the customers. It turns out that their view of this was, in fact, exactly the opposite. And what’s even better is that this email developed a life of its own. Some of our executives were in a presentation with a customer the next week, and that customer presented that email back to them and said, this email is just the best thing ever, I’m so glad you guys have started sending it. And the executives are going, what email? We’ve never sent this email. Right, because it was just an experiment, so no one else had ever seen it before. So, yeah, it turns out that there are lots of cool things you can do that will make your customers very happy.
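A 98%-yes figure from "a couple of thousand" sends is small-sample territory, and a quick way to sanity-check such a proportion is a Wilson score confidence interval. A minimal sketch follows; the click count of 500 is hypothetical, since the interview only gives percentages:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    if n <= 0:
        raise ValueError("n must be positive")
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical: of 500 recipients who clicked a button, 490 (98%) chose "Yes".
lo, hi = wilson_interval(490, 500)
```

Even at the low end of the interval (roughly 96%), the conclusion, that recipients overwhelmingly wanted the email, would hold, which is the kind of check that separates a rigorous experiment from a hopeful anecdote.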
Bronson: Yeah. I love the insight of letting the customers decide what works and what doesn’t, and not having to preemptively handicap yourself because of bizarre views on what spam is.
Shaun: Yeah, yeah, definitely.
Bronson: That’s great. Now, let me ask you about something else. On your products, at least the ones I’m familiar with, and I’m not familiar with all of them, it seems like you’re transparent with the prices. You can go and see how much things cost. Is that a part of your growth hacking? Is that a tactic to let people feel at ease with coming on board?
Shaun: Oh, yeah. I mean, we’ve been around for a long time and we’ve always had full transparency. So certainly at the time when we first started doing that, enterprise software companies kept everything as big a secret as they possibly could, and I guess to some degree today a lot of them still do. But at that time, the open approach to pricing, and the really quite modest approach to pricing, made it possible for lots of small teams to use our software, which was a huge growth driver for us. But where it really exploded was in 2009, when we introduced our $10-for-ten-users perpetual license, where we donate the whole $10 to charity. We had expected at that time to sell $25,000 worth of those licenses in the five-day period we intended to run that promotion for. In the end, we achieved $25,000 in 18 hours rather than five days. We didn’t cancel the promotion, and it’s been running ever since: you can still buy a $10-for-ten-users perpetual license, and we still donate every dollar, which came to over $2.5 million. And what this does is give all these smaller teams the ability to use our software without friction. Even if they’re a small team in a big organization, they can take it on without needing to go through procurement and all the other processes that make things very difficult, and they can really just get things done and be successful.
Bronson: That’s great. Now, not only is the pricing transparent, but the products themselves seem to be self-serve. You don’t have to do a lot to get them installed and get them up and running. And Atlassian’s president has said, quote, really good white papers will sell the product. No need for a form. I like that quote. Tell me, what do you think he means by that? Because I think there’s something insightful going on there.
Shaun: Oh, yeah. I think a lot of people in the industry are into this thing right now where you have some white paper or some e-book, and you’ll put an email form in front of it so you can get their email and then kind of spam them forever to try and encourage them to convert, or do something further with you. I think our point of view is very different, which is that what we are trying to do is build really great software, and the software that we build is to help people build really great software. So our point of view is that, since we’re trying to be the best we can at what we do, we want to share that knowledge with everybody else, because that will help them be successful, and that will obviously make them want to consider our offering so they can be even more successful. So it’s more along the lines of, I was thinking about this, and I think this is about company marketing rather than product marketing. This is about the fact that Atlassian lives the life of its customers. We are trying to align with them in every way, encouraging their success by sharing our methodologies with them, and then taking the best of the industry’s methodologies and putting them back into our processes and our software as well. So there’s kind of a symbiotic relationship going on.
Bronson: Yeah. Now, Forbes last year published an article with, I think, kind of a humorous quote in the middle of it about the Atlassian products. It says their products may not win any design contests, but they are so easy to use that they’re flooding the world’s businesses with collaborative software. So let me ask you first: do you agree with that assessment, that they’re not going to win any design awards but they’re that easy to use? Or is that just wrong?
Shaun: Yeah. I mean, JIRA and Confluence have been around for a long time, so I can understand why people would feel that way. There certainly was a feeling of them being a bit older. But we have quite a stellar design and UX team here now, so I think if you were to look at some of the later releases, like JIRA 5 or Confluence 5.1, most people would take that back. But you’re right, there has been a journey to modernize and really make the products beautiful as well as functional.
Bronson: Yeah, but you guys grew a lot even before the UX team was really in place. So let me ask this: do you think that function should trump design for the sake of growth?
Shaun: Yeah, that one’s really interesting. I kind of would say that it’s a little bit the opposite, and what I mean by that is that really beautiful design is about helping a user engage with the software in a way that feels perfectly natural. So in a way, they’re inextricably linked together rather than contradictory. Really good design leads people to become so engaged that the later portions of the funnel are obvious, right? Of course I’m going to be retained, because I love this thing; it helps me do exactly what I want. Of course I’m going to pay you for it, because this is so awesome; I cannot imagine why you’re only charging me X dollars for it. And I’m going to refer it to other people, because it’s so awesome it’s changed my life. I’m going to want to post it on Twitter; you’re not going to have to build some integration which spams my Twitter followers, because I’m going to want to post that stuff myself. So basically, what I’m trying to say is no. I think design really is very important, and it’s getting more and more important, because we’re moving towards a world where people expect beautiful, great interactions and software that knows what they want to do. But on this particular question, I do think it’s important to distinguish the activities you conduct in growth hacking from a real product, or what would be a final feature. So when we talk about design and growth hacking, I would say the goal is to test your hypothesis, whatever it is. If your hypothesis is about some particular button somewhere in the user interface that will lead into some path of behavior, then the very first thing you need to do is test that button.
And a lot of people get quite into whether the button should be blue or red and all that sort of stuff, and my point of view is that most of the time that button will never ship anyway, right? So I really don’t mind if it’s blue or red, or if it looks exactly like all the other buttons in the interface. I think when you’re in the experiment phase and you’re testing these ideas, you don’t need a beautiful, amazing, perfectly rounded-edges design before you can test the idea, because most ideas just won’t work. So it’s important to be efficient with what you’re doing rather than getting too deep into it.
Bronson: Yeah, no, that’s a great point. So let me ask you this: have there been any lessons you learned the hard way? As you go through this process, there are going to be some things you try where, beyond whether the experiment worked or not, you actually learn how to do growth hacking itself better because of a mistake. What have you guys learned the hard way there?
Shaun: There definitely is one that I keep coming back to, which is that when you are conducting this search, your goal is always to test what I refer to as the kernel of truth, the very center of what it is you believe about the user’s behavior. So let’s say it’s in JIRA, and you think that people will like it if, when they first create a project, they get the opportunity to choose the workflow for that project, and then the ability to choose what the steps are in that workflow. You might believe that to be true. Now, if you do believe that to be true, then the first step of your hypothesis is actually that somebody, when they click create project, will choose a workflow if you offer them a button or a list of available workflows. Right? So your first test should always be just that: the minimum necessary to answer that question. Because, as I said, most of the time the user is not going to do it, or they’re not going to care about it there. And if that happens to you, then you have two choices. You can accept that your hypothesis was wrong, though that’s not necessarily the case: if they didn’t click it there, you don’t know one way or the other. Or you can conduct a search for where they might want that feature. So you can go, okay, maybe I’m still right. I’m going to put that select-workflow button on the project page, or I’m going to put it here, or I’m going to email people and say, now that you’ve created a new project, would you like to consider one of the following workflows? You can do the search for where they might care, and then, if you eventually find that no, they really don’t care at all, you can ditch it. Or you might have found a great place.
So it’s much better to test what you’re really trying to get at. And the other thing I would add, and sometimes I really emphasize this to product managers, is something we actually referred to earlier: in our experience with our experiments, people are much less sensitive than we think they are. Right. There’s always this view that if you put a button there, people are going to hate it, it’s going to cause them to write us hate mail, it’s going to be disastrous. If you send them that email, they’re going to be like, oh man, these guys always spam me. There’s always this pushback of, that isn’t a beautiful design, that isn’t a complete feature, that isn’t a whatever. But in our experience, as I said, the vast majority of things are not noticed by users at all. At times we’ve even put things right in the center of the screen in a dialog box, and users will literally click around it. It’s almost like they do not even see it. They’re so directed towards the way they always work with the software that they just don’t respond. Yeah. So when you’re experimenting, you really need to be bold, and you need to think of extreme versions of your ideas. And as I said earlier, it’s better to get a fail than to get no result at all. So, for example, if your hypothesis is that a user must do something in order to gain a bit of knowledge that will make them happy, then force them to do that. Some people will argue, oh no, we really should not force the customer, but either you believe it or you don’t. You can’t have it both ways. And so your experiment may as well be the extreme version.
You might as well force them to do it, because if it turns out that that’s a bad idea and customers absolutely hate it, the experiment will fail and you’ll never make that mistake ever again. But if it turns out that that’s exactly what you need the customer to do, then it will be a huge success, and forever after you get the benefit of that success. Yeah, that’s the real upside of growth hacking, right? You pay for failure exactly once, and you get the benefit of successes infinitely.
Bronson: Yeah, that’s the quote of the interview right there. And I love your second point: be extreme, be bold. And your first point almost made me think, you know, we have the MVP, the minimum viable product; it’s almost like the MVF, the minimum viable feature, just so we can test it and see. It’s almost like bringing the company as a whole into the growth hacking mindset itself: what’s the smallest amount of resource I can put into this test to get the essence of what I’m looking for, as you put it there. So I think that’s a great takeaway.
Shaun: Yeah, you know, Bronson, quite literally that is our goal here: to bring growth hacking to every single team, the idea that what we do should be validated in some way. And growth hacking is a great way to validate things.
Bronson: Yeah, absolutely. Well, Shaun, this has been an incredible interview. I’ve got one last question for you here. It’s kind of the high-level, fortune-cookie question; you can be vague if you want, or take it in any direction you want. What’s the best advice you have for any startup that’s trying to grow?
Shaun: Well, yeah. You know, I think it’s kind of common advice these days, a little bit of an old chestnut, but what you really want to be doing is optimizing the rate at which you can validate your ideas. We’ve been talking about it throughout this interview. There are a lot of opinions, and in early-stage startups there are a hell of a lot of opinions, but it’s really critical to make some attempt, whatever that attempt might be, to validate that they have some basis in fact, because in our growth hacking experience, most opinions just don’t work. They’re just not right. It’s unfortunate: you might think that you can think like a customer, but you just can’t, because you’re not in their shoes. You can try, but you need to be sure, and you need to test it. And I think what’s startling about this is that if you make this a core competency, then what you’re actually doing is optimizing your ability to optimize. And I know that’s meta, but really you’re optimizing your ability to optimize the output of your work. Because if you get to the point where it’s a core competency of everything you do, in development, in support, in marketing, to test quickly and in some easy way what it is you think to be true, then you guarantee that all the horses are pulling in the same direction. You cheaply and quickly guarantee that. And once you do that, you get like ten times the bang for your buck that a traditional organization does, where everybody’s pulling in different directions and no one’s even sure it’s the right thing: two steps forward, three steps back, that sort of thing. So you can be way more effective than other organizations can be.
And, you know, it’s a bit like continuous integration and releasing daily. If you had talked about that sort of idea five years ago, people would have laughed you out of the room, right? They’d be like, you’re kidding, right? We release once a year, and it’s a big-bang release, and all of that. But once you get better at it, you become better at it. Once it becomes a competency, it becomes something you do without thinking, so fast and easy that you don’t even consider it. So when you’re releasing every day, there is no friction to releasing every day, and it’s the same with validating your ideas. Once you reach the point where it’s something you always do, it takes very little time to find some way to validate an idea. Yeah. And you get the best possible outcome that way.
Bronson: Yeah. That’s incredible advice to end on. Again, Shaun, thank you so much for coming on Growth Hacker TV.
Shaun: Thank you very much. My pleasure.