Learn How Dan McKinley, Principal Engineer at Etsy, Approaches Product Experimentation and Analytics

Posted by Anant January 13, 2023

Dan is a Principal Engineer at Etsy who focuses on product experimentation and analytics. In this episode, he talks about the failure of bandit testing, why real-time analytics aren’t always the answer, and why continuous experimentation matters.

TOPICS DAN COVERS

→ His focus on search and recommendation systems, experimentation, and analytics

→ How big data is getting creative at Etsy

→ When he joined Etsy

→ His insights on the failure of bandit testing

→ What bandit testing is

→ What’s wrong with real-time analytics

→ Why continuous experimentation matters

→ His best advice for any startup that’s trying to grow

→ And a whole lot more

LINKS & RESOURCES

Dan McKinley’s Website

Experiment Calculator

WATCH THE INTERVIEW

READ THE TRANSCRIPTION

Bronson: Welcome to another episode of Growth Hacker TV. I’m Bronson Taylor and today I have Dan McKinley with us. Dan, thanks for coming on the show.

Dan: Thank you for having me.

Bronson: Yeah, I’m excited about this one. I’ve been reading your blog, and you have a lot of opinions, and a lot of reasons for those opinions, so I think we’ll have a good dialog here. But let me tell people about you a little bit. You’re a principal engineer at Etsy, and you can tell me if I’m wrong here, but online, at least in some places, it says that early on you focused on scaling the site, on scaling Etsy, but now you’re more focused on search and recommendation systems with a focus on experimentation and analytics. Is that a fair way to describe your trajectory?

Dan: Yeah, I think that’s fair. I’d say the reason for that is mostly, I mean, that was born out of necessity. I showed up at Etsy thinking I would do a lot of product hacking, but that wasn’t really true until we got out in front of the scaling and infrastructure hell of the earlier years. And those are all good problems to have, for sure. But until we, you know, got a bigger team and hired some great people, like John Osborn and people like that, people who really wanted to focus on infrastructure and scaling, it wasn’t really possible for me to do anything else. That’s just the way it was.

Bronson: Yeah. So this is kind of where you always wanted to be. You just had to go roundabout to get here.

Dan: Yeah, I would say so. Although when I was younger, I was very interested in all sorts of programming, things that can’t hold my attention these days.

Bronson: Yeah. Well, you were recently on the cover of Network World magazine, and the subtitle on the magazine was “Big Data Gets Creative at Etsy.” So talk to us a little bit about how big data is getting creative at Etsy.

Dan: Sure. It was kind of a broad article; it was about all the ways we use, quote unquote, big data at Etsy. And there are a lot of ways that we use it: to produce recommendations, and to do search ranking, or to create datasets for search ranking. Search ranking wasn’t literally a Hadoop thing, but those were the original reasons we started getting into it. And as we started to integrate clickstream data into those applications, we realized that we had accidentally built an analytics system. That’s probably the major reason we use it now: for experimental analysis after we’ve made products, and also for exploratory analysis before we’ve made them, to see if they make sense or not. And, you know, various business reporting reasons and stuff like that, and probably a bunch of other things I’m forgetting. We’ve used big data to look at performance logs from the web servers. We use it for a lot of things.

Bronson: Yeah. You said you accidentally built an analytics system, and you show a screenshot online of the back end that you guys look at, and it’s kind of comical. I think one of the graphs is the “three-armed sweater” graph, and there’s a “screwed customers” graph or something like that.

Dan: Yeah, those are all of our operational metrics, which is kind of a separate system. It’s all powered by Graphite. Those are all real-time things saying what’s happening on the website right this second; the Hadoop stuff is more like analyzing all of our data at once, and those kinds of things. Yeah.

Bronson: Now give us some examples of what kinds of experiments, specifically, you guys will be running. I know we’re going to get into a couple of them later when we talk about infinite scroll and the search and things like that, but give us an example of something where you guys would run that kind of analysis, so we have an idea of what we’re talking about here.

Dan: Okay. So I think that, for the most part, almost everybody who does user-facing work at Etsy is doing experiments when they release things. We use them for almost anything we release to the Internet at large; we release it as at least a 50/50 experiment for a while. It’s not completely universal. We’ll probably talk about this later, but it doesn’t always make sense to do that. You may not have enough data in some situations, or you’re just correcting an edge case or whatever. But for most major things, we’re releasing things as experiments. And the things that you look at for any of those vary depending on what you’re trying to accomplish. Most of the things that I’ve worked on recently are things related to increasing revenue. Etsy as a whole is nowhere near as focused on that; this is just me and one or two other people on my team, and this is mainly what we do. And even within that, the exact metric you look at for any given experiment varies. One specific thing I’ve done in the last year or so is I put recommendations in the checkout funnel. Doing that, you’re targeting people who are already checking out, so you don’t want to look at the ratio of people that check out; what you want to look at is the total value of their checkout. In other cases, like another thing I did was I started emailing people who started checking out but gave up. In that case, what you do is you figure out who’s eligible, you divide that population in half, email half of them and don’t email the other half, and then you measure the ratio of them that check out within some given time. So the metric really changes depending on what you’re trying to accomplish, although I usually look at some high-level things to make sure you haven’t screwed the pooch. If you’re trying to affect registrations, obviously, you look at the ratio of people who register. If you’re changing the site chrome of the mobile website, you probably want to look at bounce rate more than we might in other cases, and so on.
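
To make the mechanics concrete, here is a minimal sketch of the abandoned-checkout email experiment Dan describes, assuming a deterministic hash-based 50/50 split and a simple two-proportion z-test. The function names, experiment name, and numbers are hypothetical, not Etsy’s actual tooling.

```python
import hashlib
from math import sqrt
from scipy.stats import norm

def variant(user_id: str, experiment: str) -> str:
    """Deterministically assign an eligible user to one of two arms (50/50)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "email" if int(digest, 16) % 2 else "holdout"

def checkout_ratio_pvalue(n_email, checkouts_email, n_holdout, checkouts_holdout):
    """Two-sided p-value for the difference in checkout ratios between arms."""
    p_pool = (checkouts_email + checkouts_holdout) / (n_email + n_holdout)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_email + 1 / n_holdout))
    z = (checkouts_email / n_email - checkouts_holdout / n_holdout) / se
    return 2 * norm.sf(abs(z))

# Hypothetical usage: bucket eligible users, email only the "email" arm,
# then compare checkout ratios after a fixed window (e.g., one week).
arms = {uid: variant(uid, "abandoned_checkout_email") for uid in ["u1", "u2", "u3"]}
print(checkout_ratio_pvalue(5_000, 460, 5_000, 400))
```

The deterministic hash keeps assignment sticky across visits, which is what lets you measure the ratio “within some given time” without a user hopping between arms.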

Bronson: Yeah. So it seems like at this point it’s really baked into the ethos of Etsy that you’re testing just about anything that makes sense to test. Is that fair?

Dan: More or less, yeah. Definitely for things that are what we would call buyer-facing. Etsy is two very large pieces: there’s the buyer-facing site that most people see, and there’s a seller back end that only people who operate shops see. I think we do a little less experimentation with the seller back end. I would characterize the difference as: there’s one constituency we can’t talk to, and one constituency we can talk to. With the Internet at large, the only feedback we have is the experiments, so we really try to make sure we’re doing the right things there. The seller back end is a much more engaged user base; we do experiments where it makes sense, but oftentimes these are things where there’s a small amount of data anyway and it may not make sense. Yeah.

Bronson: Now, from your blog, I get the impression that Etsy wasn’t always this way, that there was a moment, because you’ve been there for a while now. What year did you join Etsy?

Dan: 2007.

Bronson: Yeah. So it seems like in the early years it was more just put stuff out there and assume it works. And then at some point you realized that you could remove a feature and it didn’t actually change anything, and that got you guys thinking, like, wait a second, did it actually matter in the first place? So what happened to change your perspective on this?

Dan: What started to get my attention was that early on, we would spend a ton of time upfront developing things, polishing and polishing them, gold-plating them, you could say, to the point where we thought they were perfect, and putting them out there. Then we would have, like, an all-hands meeting where we would tell everybody, hey, we just released this thing, and everybody would applaud. And then I’d realize, eventually, like two years later, that we’d turned that thing off because nobody had ever used it. And this happened for more things than not. So I wondered if there was any way we could avoid that, and I kind of settled on measuring things as the way. And I’d say that as we started to measure things as we released them, we were like, hey, our first stab at everything is mostly not working out; maybe we should not put five months into a project before we figure out if it’s going to work or not. So that was, like, stage two. And now I’d say we’ve adopted more of an iterative model, where we try a simple version of a thing and then make it more complicated over time, make sure we have the right placement for it on the site, make sure it’s the right thing to be building in the first place, and things like that. It’s been a gradual transition, I guess, but I think that’s the quick version of how we got there.

Bronson: Yeah. And in just a moment, I think we’ll talk about the iterative kind of continuous experimentation that you guys do. But first, I want to talk about what may be a side project that you’ve done that’s actually really cool. You built something for A/B testing: experimentcalculator.com. So tell us about this tool. What does experimentcalculator.com do? Because when I first saw it, I was like, this is genius. Like, this is so cool.

Dan: I don’t know if it’s genius exactly.

Bronson: But it’s cool.

Dan: You should credit the people who wrote the paper that I stole it from. It’s a sample size estimator, basically. So, as we talked about, we’ve gotten people to the point where they want to run experiments, and then from there, there’s a class of mistakes people make that are all errors of sample size. People either run 20 variants at once in an experiment, or they run it at 2% or what have you, because they think that if they go higher than 2%, people will start to notice it and start to yell at us, and things like that. Or, you know, any number of mistakes like that, where you’re really screwing yourself because you’re not going to have enough data to come to a decision, or you’re not going to have enough data to notice effects, even if they’re there. So the point of the tool is: you tell it how much traffic you have, and you tell it how it converts. It’s just called conversion in the tool because there’s not a better name for it, but you could envision that as being the number of people who buy, or the number of people who accomplish any goal that you have in mind. And then from there you can enter a guess for how you think you’ll change that number. There are a few other parameters, and at the end it’ll spit out an estimate for how long it’ll take you to run that experiment, in days or years or whatever it happens to be. It’s meant to get you in the ballpark. In some cases you’ll fill out this tool and it’ll say it’ll take you 17 years to run this experiment. In that case, you could say, well, let’s just ship it, or you could say, maybe this isn’t a terribly important thing to do in the first place. Both of those decisions are valid. The other parameters are the probability of type I and type II errors, so you can tweak those. If you’re changing, like, the checkout funnel, and you want to be really sure that you’re seeing any negative effect that you might have, you could say, I want to be 90% sure that there’s not a 1% drop. You can use it for those kinds of things.
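
For the curious, the arithmetic behind a calculator like this is roughly the following sketch, using the standard normal-approximation formula for comparing two proportions. The function name and example numbers are made up for illustration; this isn’t the exact math experimentcalculator.com runs.

```python
from scipy.stats import norm

def experiment_days(daily_visitors: float, base_rate: float, relative_lift: float,
                    alpha: float = 0.05, power: float = 0.80) -> float:
    """Days a 50/50 test needs to detect a relative lift in a conversion rate,
    via the standard normal-approximation formula for two proportions."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power = 1 - type II error rate
    n_per_arm = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                  + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
                 / (p2 - p1) ** 2)
    return 2 * n_per_arm / daily_visitors

# Hypothetical: 10,000 visitors/day, 2% baseline conversion, 5% relative lift.
print(f"{experiment_days(10_000, 0.02, 0.05):.0f} days")
```

With those made-up inputs the answer comes out around two months, which is exactly the kind of ballpark judgment Dan describes: feasible, barely feasible, or seventeen years.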

Bronson: Yeah. So, just to make sure the people watching or listening are really clear about this: they come here with an A/B test in mind, and it asks them a bunch of questions about that A/B test. You know, how many people are going to see this variant as opposed to that one? How sure do you need to be? What goal are you trying to reach? What kind of increase, you know, if it’s conversion, are you trying to raise it by 2%, 5%, or whatever? And then it tells you: here’s how long you need to run the experiment for. So it really is a safeguard, because if you need to run an experiment for 17 years, then you’ve probably done something wrong. Right?

Dan: Yeah. Or not necessarily. Some changes you just want to make because you want to make them, and it’s not that this will validate that; it’s that it’s probably not going to be that big of a deal in those cases. I don’t want to discount every project that doesn’t run as an experiment or doesn’t make sense to run as an experiment. But this can give you some idea of which category a certain thing falls into.

Bronson: No, I think that’s a cool way to look at it. I mean, I just had a conversation earlier today where somebody said, hey, why don’t you A/B test this on your site? And I said, that’s a great idea, but I don’t have the resources right now, so it’ll be down the road, maybe. You test what makes sense to test based on your resources and how important that feature is and all these different things; it all kind of comes together. Now, you said in your blog that when it comes to A/B testing, nearly everything fails. What did you mean by that?

Dan: Well, I think I talked about some of this already. A lot of the products we built early on at Etsy were things that sounded cool, but the way we built them, in our early iterations of them, just didn’t work. Early on, Etsy had a thing called Alchemy, where you could go to the site and write up a description of an item you wanted, and then sellers would bid on making that thing. And the first few iterations of that got basically no usage. There were many more support contacts generated by this thing than there were people successfully using it. It was just a mess. That was a huge failure; it was one of those things that we celebrated when we put it out, and then two years later we turned it off. In recent months, we’ve reintroduced kind of similar features, but we came about adding those in an iterative way that was validated along the way. Etsy has custom orders again, but it looks totally different than our first attempt. And I think that’s really what I’m driving at: you’re going to have some conception of how your site is used, and the model in your head for this is not going to match reality exactly. There may be many realities going on at once. When you put something out, it’s not necessarily going to be used, or not going to be used the way that you expect it to be. You can have better or worse guesses, but you’re never exactly on the mark. I think that’s what I’m talking about when I say nearly everything fails. Most products fail. Most first attempts at products fail. Most startups fail. Most things fail.

Bronson: There you go. That’s a very optimistic view of the world.

Dan: A very real view of the world, yeah. I wouldn’t want it to come across as extreme nihilism or anything like that. It’s just that I think people may have unrealistic expectations of how good they are when they’re designing things.

Bronson: No, that’s sober advice. That’s awesome. Now, you recently gave a presentation that kind of deals with the fact that most things fail, and what you guys do to try to work around that. It was a presentation on design for continuous experimentation. So first, you’ve already talked about it a lot, but what is continuous experimentation, in a nutshell?

Dan: Well, okay. I presented this at a design conference, where what I was driving at with the title may have been lost, and that’s not their fault, it’s mine. The title is an allusion to Etsy’s overarching development methodology, which is a thing called continuous deployment. That’s not related to experiments, exactly, but I think it’s integral: the way that we release these products is very related to the way we release code more generally. Etsy doesn’t do releases in the sense of releasing 20,000 lines of code at once. We release 100 lines of code at a time, and we do it 30 times a day. Now, there’s a lot of literature on the Internet about this; there’s a whole conference called Velocity where people get together and talk about it, so I won’t dive too far into explaining what it is. But I think that if you release code this way, it’s very easy to also tack experimentation onto that methodology. That’s really what I was going for with the title: you’re releasing things all the time, ramping them up constantly, and it’s very easy to then say that any ramp-up can be measured as an experiment; we just track the product metrics along with the exception rate or whatever else we’re measuring. And what the talk is about, really, is two different projects. One was the kind where we sank four to five months of effort into a thing and put it out there. The other was the kind where we had a four-to-five-month direction that we wanted to go in, but we did it piece by piece, so that we could adjust course as we went.
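
As an illustration of the ramp-up idea, here is a minimal sketch of a percentage-based feature flag with sticky, hash-based bucketing, so each stage of a ramp can be read as an experiment. This is a generic pattern under assumed names, not Etsy’s actual feature-flagging code.

```python
import hashlib

# Hypothetical ramp config: feature name -> percent of users who see it.
ROLLOUT = {"checkout_recs": 1.0}  # start at 1%, dial up release by release

def enabled(user_id: str, feature: str) -> bool:
    """Sticky assignment: a user's bucket never changes, so raising the
    percentage only adds new users, and each ramp stage is a measurable
    cohort that can be compared against the users still seeing the old
    experience."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100  # uniform in [0, 100]
    return bucket < ROLLOUT.get(feature, 0.0)
```

Because the bucket is derived from the feature name and user ID rather than stored state, the same mechanism serves both gradual operational rollout and experiment assignment.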

Bronson: Yeah. Tell me about the first one, because these two case studies are the thing that made the lightbulb go off for me as I was looking through your slide deck. So the first one, where you spent five months on it before you released it, that was the infinite scroll, right? Tell me about that process. What did you think would happen, and then what did happen, and all that.

Dan: Yeah. So, infinite scroll: as was the style at the time, a year ago we wanted to replace our paged search results on Etsy with the thing where you just scroll indefinitely and more items fill in. And I think the premise was essentially that we would make search easier to use and a little faster, and things would just get better. And with that, we put a lot of effort into making it happen before we tested it. As soon as we tested it, we thrashed around for a while, but we couldn’t find a way to do it that didn’t make everything worse. So we went back and examined some assumptions, and we said, okay, is making search slightly faster making it better? We played around with increasing search performance by 200 milliseconds, and we had a few other occasions where we increased performance and tried to see if that increased conversions; more or less, it didn’t really prove to be very sensitive. Now, some people have taken this to mean that I don’t think performance matters in any way, and a number of invectives have been written against me, but I don’t think it should be read that way. Clearly, if you made the page take 5 seconds to come back, I’m sure that fewer people would buy. But it doesn’t seem to be sensitive on the order of 100 to 200 milliseconds, at Etsy anyway, at least starting from where we are right now. So that premise was kind of flawed. And, you know, we never really sorted out what was wrong with it, because we had sunk a lot of time into it and we didn’t really have the stomach to keep going; there was no end in sight, basically. We weren’t sure what was going on, and we weren’t sure how we were going to get it into a state where we could really ship it.

Bronson: Yeah. And, you know, one way to sum some of this up is: infinite scroll was in vogue, everybody assumed it was the right thing to do, you’d be labeled a fool if you said it was the wrong thing to do, and it took five months to find out it was the wrong thing to do, and then you had to reverse it. That’s just a great case study. Not to, you know, enjoy y’all’s pain, but there’s something to take away from that.

Dan: Yeah. Well, okay, I would say that I do like to have at least one major failure a year to point out, because otherwise you’re probably not being aggressive enough, if you don’t have one, like, huge boondoggle failure to point at and say we could’ve done that better. But infinite scroll is fine as a thing. It could be that this context is wrong for it; it could be that we just never settled on the right design for it. I also don’t want to condemn infinite scroll in every instance, or come across as doing that.

Bronson: Sure. Now, tell me about the other example, kind of the other side of the coin. You have the search dropdown. And, you know, it still had a five-month goal, like you said, but you didn’t work five months behind the scenes and then unveil it. What did that look like, and what were the results there?

Dan: Right. So the search dropdown was basically a dropdown menu that was in the site’s global nav, and, as any designer watching this would probably expect, almost nobody used it or knew what it was for. It was just a crappy solution to a massive problem early on in the site’s history, and it just got way out of hand. It started with three things in it; by the end, I think in some cases there were, like, ten things in it. It just didn’t make any sense, and it wasn’t working as a design. So we knew we needed to get rid of it. But we also knew that it was going to be a ton of effort, because depending on what page you were on on the site, it was different; there were all these different use cases that had to be covered, so we knew it was going to be a ton of work. So we came up with a plan where, okay, this use case will be replaced by this little thing that we’ll do on this page, and this other use case will be handled in this way, maybe with a secondary search bar on this page, or whatever the case may be. And we took it piece by piece and did each of those as small, measured experiments, not necessarily serially; some of them happened in parallel with each other. But at any point we could say, okay, that part of the solution didn’t work out, so let’s either come up with a different way to do that, or let’s just say that use case isn’t that important. It turned out in some cases that was true. Its behavior would change from page to page, and in some cases we realized that some of the use cases were just people being confused, and anybody who searched from that page just exited the site. We found those in some cases. And I think you have to compare this with a big release that removes the dropdown in its entirety all over the site at once, and what that would look like: you would miss all of these cases if you did that. So yeah, the talk was contrasting those two projects. One worked out, one didn’t. And the one that worked out didn’t work out exactly as we thought it would, but along the way we gave ourselves points where we could make adjustments.

Bronson: Yeah, that makes a lot of sense, and I like the three things you kind of end the presentation with. We won’t dive into each of these, because you’ve been talking about these three the whole time, but the first one is: experiment with minimal versions of your idea. That’s the idea of, don’t spend five months in a cave and then come out and unveil something to the world; experiment with little incremental changes so you can see if you’re right or wrong along the way. The second one: plan on being wrong. We’ve already mentioned this a few times; most A/B tests are wrong, so plan on being wrong, don’t be surprised. And the third one is: prefer incremental redesigns. I think it’s just a good framework to use as people approach these, you know, overhauls of pieces of their site. Now, I want to talk about a couple of things that you’re against, because this will be a fun talk. In reading your blog, one of the things you’re against is bandit testing. First, a lot of people watching this or listening have never heard that phrase; they don’t know what bandit testing is. So tell us first, what is it? And then second, why are you against it?

Dan: Okay. So in addressing bandit testing, what I’m talking about is any sort of scheme where you keep all of your test variants live in production indefinitely, and an algorithm decides which one gets shown based on past performance. So you put them both out at 50/50, and then the ratios are adjusted over time. The loser is never completely eliminated; it stays in play to some extent, and if people’s behavior changes, the algorithm adjusts the ratio. Now, I am not against this in all contexts. I think that depending on the domain you’re working in, it may or may not make sense, and it’s fine as a theoretical construct. If you’re testing banner ads, or if you’re testing different ranking formulas for search, it could very well make a lot of sense to do it, and I don’t want to discount that other people may find it useful. But we’re not using it, and I would find it very surprising if we ever used it for products, for a bunch of reasons. The first reason would be complexity. Eventually, we want to remove code for things that are old and mostly not used anymore; that’s a major reason. What you realize when you operate a large site is that once things don’t have traffic going through them, there’s a clock ticking, and after a while, those things just don’t work anymore. So it’s not like the algorithm would be able to ramp a thing back up and everything would be fine. It would probably be more like: the algorithm ramps this thing back up at four in the morning, and suddenly the site is broken, because that code doesn’t work anymore. And maybe that wouldn’t happen, but you still have a philosophical problem where it’s like, is that code branch losing because it sucks, or is it losing because it’s broken and the code doesn’t work? So there’s that. And then user confusion is another major one. Unless we’re doing things where the visible changes are very small and discrete, like different banner ads or whatever, we typically want to experiment with things and then announce the change to the community and say, look, we experimented with this new design for a thing, the results were such that this is better, this is how it’s better, and this is the way it works now. And even for things that face people buying on Etsy, sellers want to know what they look like, because sellers want to make sure that they’ve entered the data in a way that looks nice. It just wouldn’t fly for us to say, well, most of the time people are going to see this listing page, but it’s possible that some people are going to see this other listing page, or what have you. That just wouldn’t work. And again, I don’t want to condemn bandit testing for every conceivable purpose, but I think that’s the long and short of why Etsy wouldn’t use it.
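
For readers who have never seen one, here is a minimal sketch of the kind of scheme Dan is describing: a Beta-Bernoulli Thompson-sampling bandit that keeps every variant live and shifts traffic toward whichever is performing better. The class and variant names are hypothetical, and this is a generic textbook version, not anything Etsy runs.

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling: every variant stays live, and the
    share of traffic each receives adapts to its observed conversion rate."""

    def __init__(self, variants):
        self.stats = {v: {"successes": 0, "failures": 0} for v in variants}

    def choose(self) -> str:
        # Draw a plausible conversion rate for each variant from its Beta
        # posterior; serve whichever variant got the best draw this time.
        draws = {v: random.betavariate(s["successes"] + 1, s["failures"] + 1)
                 for v, s in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, variant: str, converted: bool) -> None:
        key = "successes" if converted else "failures"
        self.stats[variant][key] += 1

bandit = ThompsonBandit(["banner_a", "banner_b"])
shown = bandit.choose()
bandit.record(shown, converted=True)  # the loser is never fully eliminated
```

Note how the sketch embodies Dan’s objection: a stale variant keeps a nonzero chance of being served, so dead code paths can be ramped back up by the algorithm long after anyone is maintaining them.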

Bronson: Yeah, no, it’s very helpful to know how Etsy would view it, even if there are contexts where it makes sense, because it gives you a way to understand it. A lot of people don’t even know what it is or where it makes sense.

Dan: And it’s a thing that comes up a lot internally, because inevitably it’s on Hacker News and someone says, why aren’t we doing bandit testing? That’s mainly my motivation for writing about it.

Bronson: Yeah, it’s on Hacker News. That’s enough reason not to do it, right? I didn’t say that; no comment. Now, another thing you’re against is real-time analytics. What’s wrong with real-time analytics? What could be wrong with that?

Dan: Well, okay, what I’m talking about there, specifically, is the idea of using real-time data to make product decisions. I’m very much in favor of using real-time data for operational stuff; that’s part and parcel of doing continuous deployment, because you have to know whether the site is working right now or not. So I want to delineate the kind of data that you use to know if a thing is working from the kind that you use to make product decisions. I would state the whole thing more simply, and maybe I should have written a smaller blog post: if the result of a measurement is that you’re going to go write code for a week, you’re kind of being an idiot if you think you need that measurement in real time. I mean, again, I wrote this because people ask for it. People think that they can release a thing and then just watch the data stream in over the course of a few hours and then pivot at that point, and that just doesn’t make sense, for a lot of reasons. You may think you’re doing something if you’re doing that, but you’re probably not; you’re probably making statistical errors, and you’re probably not doing what you think you’re doing. And if you go to Strata, there will be a lot of people trying to sell you products that offer to do it in real time, but that’s all it is; people are crazy.

Bronson: Yeah. It seems like real-time analytics kind of plays into the whole A/B testing tool that you made. You know, you may not have statistical significance, it may not be a big enough sample size, you may not have your controls in place. It may not be the right time to make a week-long coding decision based on what’s happened in the last few hours. But it’s very appealing, so I can see why the idea keeps surfacing.

Dan: It satisfies some ADD inclinations in some people, I think.

Bronson: Is it programmers wanting a challenge? What is it?

Dan: Oh, there’s also that side of it. At least half of the people who say we should build it are people who would get a real visceral thrill out of building it. And it’s a motivation I understand; as a younger man, I had those inclinations, and I completely understand how awesome it would be to build a real-time product analytics system. But I think that the decisions that come out of it would not be the best ones.

Bronson: Yeah. So as a 20-year-old, you would have built a bandit system with real-time analytics, and now you’ve seen a lot since then.

Dan: No doubt. No doubt. Yes. No comment as to how much I see myself in the commenters on Hacker News on these posts.

Bronson: Live and learn. Well, looking back, what have been some of the biggest lessons you’ve learned about experimentation, analytics, or kind of any piece of what you’re doing at Etsy? What are the big takeaways that you’re really going to keep with you for whatever you do from now on?

Dan: Okay. I guess, to summarize, I would say that you can usually go beyond saying this thing or that thing would be a cool idea. That’s, like, the minimum that you can say about anything, and you can almost always go a lot further than that. And I don’t want this to come across as if I’m against just doing things that sound awesome or anything like that; those things are useful. But as a way to do daily business, I think we can usually get a lot further than “this would be a cool idea.” You can usually say: if we built this thing, what’s the best thing that could possibly happen, based on just the inputs and outputs from it? People who do business modeling, people who do forecasting, are pretty familiar with these kinds of things, but I think engineers are not so much. And as an engineer, it’s a real force multiplier, I think, just to be able to do some basic opportunity analysis and say, okay, if we did this and the conversion rate increased by this amount, this is what that would be worth to us, and this is how much time it’s going to take; do we think this is a good use of everyone’s time? These are things that many people don’t do, and things that you can do better and better as you have more data about your website. I think that’s a great way to make sure you work on things that are more likely to work, or at least be spectacular in failure. And I think I have increased my batting average doing things like that. So lately that’s what I’m most interested in: how to do that kind of thing. I guess that’s the way I would answer that.
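
The opportunity analysis Dan describes is just arithmetic. Here is a minimal sketch with entirely made-up numbers, purely to illustrate the shape of the calculation.

```python
def annual_upside(visitors_per_year: float, conversion_rate: float,
                  avg_order_value: float, relative_lift: float) -> float:
    """Best-case annual revenue impact of a change that moves the
    conversion rate by `relative_lift` (e.g. 0.01 = a 1% relative lift)."""
    baseline_revenue = visitors_per_year * conversion_rate * avg_order_value
    return baseline_revenue * relative_lift

# Hypothetical inputs: 50M visitors/year, 2% conversion, $25 average order,
# and an optimistic 1% relative lift from the proposed feature.
upside = annual_upside(50e6, 0.02, 25.0, 0.01)
print(f"~${upside:,.0f} per year")  # weigh this against the engineering time
```

Comparing that best-case number against the engineering time the project would consume is exactly the “good use of everyone’s time” question Dan raises.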

Bronson: Well, Dan, this has been an awesome interview. I’ve got one last question for you here. What’s the best advice that you have for any startup that’s trying to grow?

Dan: Well, it’s been a while since I’ve been in what you could call a startup, I guess; Etsy has been kind of big for a while. But I would say, generally: show your work to people as soon as you can. That’s true for any product; you’re just not sure what’s going to happen until you actually have people try to use it. And back on the infrastructure and scaling thing, I’d say Etsy wasted a lot of time early on arguing about different things that might scale to a thousand X, and that just doesn’t make any sense. Try to write code that’ll scale to ten X; if you can’t manage that, don’t worry about it. You’re going to have to rewrite your infrastructure as you go anyway. That’s unrelated to experimentation, but I think if I could go back and tell people at Etsy one thing, it would be that a lot of the things you’re arguing about, like “this is what Google does” or “this is what Amazon does,” aren’t your concerns at your scale. Don’t work on those. And I’d say: don’t get in the weeds on experimentation until you know what your product is about. You may be able to start thinking about it relatively early, because you’re going to be going for bigger wins. It’s true that at a smaller scale, Etsy couldn’t have tested for effects of the size that we go after now, but Etsy doesn’t have a lot of very low-hanging fruit anymore. For a smaller site, that might not be true, and it’s very easy to notice a 50% increase in something with a small amount of data. So don’t worry about experimentation too early on.

Bronson: One less thing to worry about. Well, Dan, thank you so much for coming on Growth Hacker TV. It’s been awesome to kind of get inside your head for a few moments.

Dan: My pleasure. Had a great time.
