Building a competitive edge with mobile QA at Lyft

Testlio CEO Kristel Kruustük recently interviewed Lyft Sr. QA Engineer Heather Daigle at Mobile Dev + Test in San Diego. The following is a transcript of the interview, followed by audience Q&A. Enjoy!

Interview

Kristel: Hi everyone, thanks for coming! I’m so excited to be here with Heather, who’s a senior QA engineer at Lyft. We’re going to talk about how they’ve scaled their QA while delivering an amazing customer experience to their drivers and riders. I’m sure you all know Lyft; it has grown into a multi-billion-dollar company, and they’re changing millions of people’s lives on a monthly basis. Today, Heather will share Lyft’s best practices for mobile QA and what she’s learned while working at Lyft. I have a lot of questions to ask her myself, but feel free to follow up after the session as well. We’ll be here for you.

Let me introduce myself briefly. My name is Kristel and I’m one of the founders of Testlio, as well as its CEO. We’re proud to be sponsoring today’s event. I started Testlio four years ago. It was actually born out of my personal frustration with different mobile testing service providers, especially in the crowdsourced testing space. I’ve been a software tester my entire career, so my frustrations turned into an actual product, which is Testlio today. We are a global community of expert testers who provide amazing customer experiences. Today, we have a team of over 60 people, with offices in San Francisco and Tallinn, Estonia. We’re working with customers like Lyft, the NBA, Microsoft, and others. But let’s get started.

Thank you so much for coming today, Heather! 

Heather: Yeah, thank you.

Kristel: Can’t thank you enough. Maybe you could start off by telling us a little bit about yourself and how you got into QA and Lyft.

Heather: Yeah. I first started off as an intern at a company called Plaxo, right before graduating from college. Right when I graduated, I was lucky enough to be offered a full-time job at Plaxo, and it was right when they were being acquired by Comcast. It was a great opportunity, and from there my career at Comcast took off. I started as a junior tester, learning different technologies, platforms, and hardware with set-top boxes. From there I expanded my role, becoming a lead and eventually a manager during my time at Comcast.

About two and a half years in, I wanted to shift from set-top boxes to a faster, mobile platform. That’s when I decided to look into different opportunities. I had a friend talk to me about Lyft. I applied and went in, and it seemed like a really good fit, so I decided to go back to my testing roots and dive into testing again on a mobile platform. Two years later, here we are. Growing pretty fast.

Kristel: Very cool. You joined as employee number 400, and you were the first Heather on Lyft’s team.

Heather: Yes. I joined Lyft in May 2015. At the time, I was employee number 400. Now we’re at 1,600 employees and growing. Since I’ve joined, as of the end of Q1 this year, we’ve completed 70.4 million rides, and we now have all sorts of programs for employees. We have a whole help center for Lyft now. It’s always expanding and growing. Lots of opportunities, is what we call it.

Kristel: How is it? Lyft is in such a competitive space: ride sharing, with a lot of companies out there. How’s the culture, and what’s the most important thing for your team to deliver?

Heather: High quality is always the most important thing, and we have a platform that supports drivers and passengers; some use it as a way of life. From a QA and testing perspective, we want to exhaust all our options to make sure we have full test coverage for our teams and for our users. When you work in a competitive space, you definitely look at what the competition’s doing. Sometimes you have to be reactive. Other times, you want to be the ones that set the chain reaction. We definitely like our QA team to see how far we can push the limits of the Lyft app and what we can do.

Part of that is we have a beta program that runs before we release officially to the app stores. If you ever meet a beta driver, give them an extra fist bump, because we definitely put them through [crosstalk 00:05:27]. We definitely do that. We try to do as much internal testing as we can, with automation, with manual testing, and with unit and integration testing on the service side. Even then, sometimes it doesn’t cover everything. Whatever we think about how folks want to use the app, we always get surprised by the range of users we have. That’s why we really enjoy our beta program. We have about 2,000 drivers on that, with another 3,000 passengers. We have various groups and integrations set up where they can communicate with us directly and let us know what issues or interesting things they found while using our app.

Kristel: Very cool. Can you also tell us more about how the Lyft QA team is structured and organized?

Heather: Yeah, yes.

Kristel: That would be very interesting to know. I think a lot of bigger companies have moved towards combined engineering, where testers are part of the developer organization. Tell us a little bit about that.

Heather: We’re no longer isolated as a QA team. We’re now embedded in the different engineering teams and report to engineering managers. Depending on what projects they take on, QA engineers work directly with developers, UX, product managers, and our customer support team. We do have a weekly QA meeting where we check in on what the other teams are doing, mostly because there’s a lot of cross-team communication. In terms of engineering, we’re completely embedded. We definitely work one on one with developers, or at the rate we’re going, seven on one with developers. It’s not just throwing the ball over to QA. It’s: how would a developer want to go about developing this feature based on feedback from the team and from QA? Or even just from what they’ve heard so far from users, so knowing how a user would interact with the Lyft app.

Kristel: What are your team’s goals at Lyft?

Heather: To be as bug-free as possible. Always to have a high-quality app, of course. Making sure that we’ve tested all that we could before having to send it out externally. The big things there are no crashes, no glaring UI defects, and making sure anything that would produce a max-or-min result is handled. That way, any extreme cases or interesting scenarios that we hear about from our beta program or prod are things we probably couldn’t have caught unless they were outlined in the app. One example would be location-specific issues, particularly on iOS. It’s very hard to simulate location on a real device. Sometimes, if you’re in an area where Lyft is not supported, there might be an interesting interaction there in the app. At least, that’s the top one you hear a lot.

Kristel: What do you think are the key challenges in such a high growth environment for you guys? Is it all related to …

Heather: Definitely scalability. It’s hard to keep up at times with the growth and the changes that come in. As I said before, the priority for Lyft when I was hired was to hire more developers. Testing and QA was a second thought, and then it was, “Okay, we have a lot of developers now. Great. We can put all these features out.” Then it turns out that when you combine them all together, it causes a lot of breaks and crashes. It’s like, okay, now we need help testing that. So it’s definitely keeping up with all the new staff that we’re hiring and with the growth. They’re all good problems to have, but it’s just keeping up with it at times. You’re definitely just sweating by the end of the day to make sure it was all covered.

Kristel: What are the challenges related to testability? Can you give us an example situation where a feature for you guys wasn’t really testable?

Heather: Testability is definitely a challenge at Lyft. That’s part of scaling up. One example is a feature everybody loves and hates at the same time: Prime Time. For a while, there were so many changes coming into our pricing service that Prime Time was not testable on our dev boxes, since it’s based on supply and demand to trigger it, and we had no good system to simulate that in a test environment. For a good seven months, we had to do our testing for Prime Time on prod, around five p.m., during commute hours. Which, again, is why I’m really glad for the beta program. It’s such a popular feature, and it’s considered a 2.0 feature, and here we are testing it on prod, because that’s the only way to verify it. It’s just something you can’t keep doing as you mature as a company.

I finally talked to my manager about it. He was like, “If there’s one thing you could change, what would it be?” We have to make this testable in-house. I was very fortunate that our engineering managers are very informed and are good listeners. They were like, “Okay, this is considered high priority, and every time it breaks, it’s considered a blocker. If you want to increase in scale, this needs to be testable.” We were able to spend a week creating a tool to simulate Prime Time with different numbers of drivers and passengers. Now we have Prime Time testable, and now we can have it automated too. It’s just not the concern it used to be in the past, so: check.
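
To make that concrete, here is a minimal sketch, in Kotlin, of the kind of harness Heather describes: seed a region with a chosen driver/passenger balance, then check that the pricing logic applies a Prime Time multiplier. All of the names and thresholds here are hypothetical; Lyft’s actual simulation tool and pricing rules are not public.

```kotlin
// Hypothetical Prime Time test harness: a toy supply/demand rule
// stands in for the real pricing service.
data class RegionState(val availableDrivers: Int, val waitingPassengers: Int)

fun primeTimeMultiplier(state: RegionState): Double {
    if (state.availableDrivers == 0) return 2.0 // surge cap for this sketch
    val demandRatio = state.waitingPassengers.toDouble() / state.availableDrivers
    return when {
        demandRatio <= 1.0 -> 1.0  // enough drivers: no Prime Time
        demandRatio <= 2.0 -> 1.25
        demandRatio <= 4.0 -> 1.5
        else -> 2.0
    }
}

fun main() {
    // Simulate the situations QA previously had to wait for on prod at 5 p.m.
    check(primeTimeMultiplier(RegionState(10, 5)) == 1.0)  // off-peak
    check(primeTimeMultiplier(RegionState(10, 30)) == 1.5) // commute crunch
    check(primeTimeMultiplier(RegionState(5, 40)) == 2.0)  // heavy surge
    println("Prime Time simulation checks passed")
}
```

The point of such a tool is that supply and demand become test inputs rather than conditions you wait for in production.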

Kristel: Very cool. You’re building such a big product, one that millions of people use on a daily basis. How do you make decisions, and how do you prioritize which features or issues to fix? Or which features you should push live first? How do you do that? What’s your process?

Heather: There’s a lot of planning involved in that. It starts with sprint planning, which we do a lot of. We go over what product thinks is a priority. We also have our founders, John and Logan. Logan is a big product guy, and what he actually does is have a big meeting with all our product managers once or twice a month to go over their current road map. He’ll sit down and be like, “Okay, I want X, Y, and Z prioritized.” That trickles down to the rest of the teams, which is nice, because then you know what your founders want and what the top priorities are. That does have an effect on testing and what is considered high priority.

Usually, for testing, what is considered high priority is anything that affects the drivers, and completing the happy path for passengers, from requesting through completing a ride. It’s pretty much going through what the cost is if something doesn’t go correctly or if something breaks. That cost impact drives what we consider high priority and what gets taken care of first.

Kristel: Okay. What are some of the other things that help you make those weekly releases, like quick planning and getting organized? What else helps you with weekly …

Heather: Weekly releases.

Kristel: Releases and being up to speed.

Heather: We have weekly releases for Android and iOS, which is a nice, tight, fun testing schedule. It comes down to: how do you scale with that? That’s kind of where we tie in with Testlio, as we do have more features coming on a weekly basis. Again, we are only so many QA in-house, though we are hiring. We don’t necessarily have the bandwidth to both run regression every week and test new features. Automation does help with that, but on a weekly release schedule, the bandwidth for even automating is pretty compressed too. That’s actually why we brought in Testlio: to help alleviate our bandwidth in-house and have an external team help with regression while we catch up on automation and are able to test new features.

Kristel: How do you handle really big feature releases at Lyft? [crosstalk 00:14:47]

Heather: A lot of planning.

Kristel: A lot of planning. You have the beta group.

Heather: Yeah, we do have the beta group. It’s definitely a company effort for big features. You always have to have a lead for everything: a product lead, a test lead, a project manager lead. Have that all in place and plan it out. You address right away what needs to be covered and tested. Where will the weak spots be? Where are we going to have challenges? That also ties back in with: okay, we have this big project, we have this feature. How are we going to verify all the code that’s in place, and how will we know that this is going to work? It comes down to iterations and a lot of planning, and then: okay, when do we need to have Testlio come in? When do we need to incorporate regression testing into this?

I can’t emphasize planning enough. Even though we release on a weekly basis, it requires it. We’ve definitely had projects where we’ve had the leads sit in a room for the whole day and just battle-plan how it’s gonna go down, but it makes a big difference. From there, you can address things and you can plan. Then we can warn you guys three weeks down the road: “Hey, this feature’s coming.” Plan out those test cases and those test scenarios and make sure that it is workable in our test environments, so that we can avoid testing on production.

Kristel: It’s still such a high-growth environment; it’s crazy. In today’s mobile world, we’re facing so many different platforms and devices. How many devices do you test on?

Heather: We do have a range. We poll monthly for the top devices used on our platform. In-house, we have about a dozen each of iOS and Android devices to play around with. For anything that’s not covered, we’ll use an emulator or simulator. We definitely have quite a few real devices that we can use. It’s great to have them on hand, just in case someone needs a device to use, or if we have a lot of customer calls come in related to one specific device or platform. We’ll have that device, and we can check it out and verify.

Kristel: Okay. You’ve been talking a lot about gathering data and making decisions based on data: seeing some of the bad paths your drivers are hitting in the app, and then making your decisions based on that. Can you tell us more about how you make data useful for Lyft and how it helps you with testing the product? What tools are you using?

Heather: What’s nice is that we have a nice army of data scientists to help collect all that data for us. They go through where the most clicks are, and how much time a passenger or driver spends on each screen. It’s the map view, most of the time. We base decisions on that: where do we want to improve, or where could we enhance the experience, based on how long a driver or passenger is on certain screens? There were even some features we had only in the web view for when a driver logs in. They’ll collect data on how long drivers have been in that web view, and then ask: can we put that in the mobile app, so they don’t necessarily have to go to the website to grab it? Although it all works nicely, they’ve definitely heard from customers who want that in the mobile app experience.

There are certain things they see in the data about where drivers and passengers are in the app. They’ll sometimes want to run experiments and see what might change. That’s where they like to use our data, to see certain patterns. We’ll run experiments just tweaking the user flow here and there, and go from there: is it a good or bad thing, and does it improve the experience? Usually we give it to our wonderful beta team first. What’s nice is that for some of those experiments we can hear right away whether they really love it or really hate it, and we go from there.

Kristel: What about if there are issues happening in the app? How do you gather that? What tools are you using?

Heather: We use Bugsnag and Instabug for any reports that come in from the prod and beta apps. From there, we try to see if there’s a common theme in those complaints. That way, we can also verify, or see if we can reproduce, the issue they’re talking about. We also have Jira for our internal ticket collection. We’ll run bug triage with all the EMs, the engineering managers, once or twice a week.
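
For readers who haven’t used the tools Heather mentions: the basic wiring for crash reporting is small. Below is a minimal sketch of initializing Bugsnag in an Android app; the Application subclass and the helper function are hypothetical, while Bugsnag.start and Bugsnag.notify are the library’s standard entry points.

```kotlin
import android.app.Application
import com.bugsnag.android.Bugsnag

// Hypothetical app class: crash reporting is started once at app launch
// so uncaught exceptions from prod and beta builds reach the dashboard.
class RideApp : Application() {
    override fun onCreate() {
        super.onCreate()
        Bugsnag.start(this) // the API key is read from AndroidManifest.xml
    }
}

// Handled (non-fatal) errors can also be reported explicitly, e.g. when
// a flow recovers but ends up in a state worth triaging later.
fun reportRideStateMismatch() {
    Bugsnag.notify(IllegalStateException("ride state out of sync"))
}
```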

Kristel: Once or twice?

Heather: Yeah. We’ve been pretty good recently about not having as many tickets come in, which is good, but even from our internal testing, we do find a lot of issues. We just have to keep on course with the EMs and make sure that these issues are on their radar. With a weekly release schedule, they have to be. If not everything is fixed in time, the release is not going out. That’s the rule of the land. Then we see what goes into the next release.

Kristel: That’s insane. It’s really impressive as well, I must say. For newbies here, how do you start collecting data? Where do you recommend starting?

Heather: We have a customer experience team in Nashville, which is about 600 employees now. I would say they’re the first line of defense. They’re the ones who interact with our users [inaudible 00:21:23] first. Definitely talk to them and check in with them and see if there’s a recurring pattern. They could probably easily name the top five pain points that drivers complain about. I would definitely say start with your customer support team, because no one knows the app like they do, and no one hears about how the app is working like they do.

And then next, listen to your QA team, since we always see how far we can take the app. I would say just start from there. And definitely be open to listening wherever feedback is available.

Kristel: Yeah. And talking about resources: when was the point at your company when you decided to go for external resources, like Testlio, for example? When did you make that decision, and why? What made you …

Heather: What made that decision was that we wanted to build out our automation suite faster and more comprehensively, but there was no bandwidth for that on the tight release schedule we have. Because we’re in a competitive market, you need to be at feature parity with the rest of the competitors out there. At the same time, we need to make sure we’re releasing a high-quality app every time we do it. The only way to scale is to have automation for all the things that can be automated, so that you have bandwidth for all the things that can’t necessarily be automated, like voiceover. So you’ve got this nice, wonderful imbalance of, “When do you have time to automate? But you gotta test these new features and you gotta make sure [inaudible 00:01:30] every week.”

So that’s kind of when … We had an engineering manager who was like, “Okay, we need to alleviate some of this bandwidth.” And that’s when we talked to Joel. He used to work at Lyft but went back to Australia, and he was the one who said, “Oh, let’s talk to Testlio, they’re a great external test vendor.” It definitely takes time to adjust a closed perception of third-party testing, where you’re just like, “Oh, they don’t know how Lyft works; will they even know how to access our test servers?” Things like that.

But no, we’ve done a good job increasing that stability. Part of it also was that we couldn’t hire fast enough. It pretty much came down to sitting down with some of our execs and saying, “This is a good thing, this is better for the future.” And plus, it was accepting the fact that Lyft is no longer that tiny start-up company where you can do everything internally. It was at the point of, “No, we’re growing, we’re maturing. Let’s get some external help.” And yeah, we just talked and had a few meetings, and it took off.

Kristel: Yeah. Do you think that your role has changed at Lyft as well after you … Yeah [crosstalk 00:02:56]

Heather: Yeah. My role has definitely changed a lot since I joined two years ago. It’s definitely become more about making sure everything is testable and everything’s in place, so that if I need to hand off a project or a feature to someone else, I have everything ready for them to take it on. So instead of being in the field as much, testing the app, I’m looking at it more like, “I know how this app works. How do I plan it so someone else can test the app for me?” It’s more of that planning, and I love planning, so I’m really okay with taking that on.

Kristel: Do you also have any tips or tricks for choosing an outside vendor? What do you look out for?

Heather: The big thing is to always address: why do you need a third-party vendor? What problems are they solving for you? That goes deeper into: what are the most painful things in your job right now? If you wanted someone to take over stuff for you, what would you hand off? And that leads to an even deeper question: what’s your perfect day at work? What would you be happy not doing anymore?

Just ask those questions of yourself and write the answers down. Then, when you’re shopping for a third-party vendor, it’s, “Okay, can you solve these pain points, the things I don’t want to deal with anymore?” Like regression testing, because it gets repetitive, especially when it’s not automated and you can’t just click a button.

Kristel: But yeah. Talking about automation, how do you think about automation at Lyft? You mentioned that you do some, but …

Heather: Yeah. So we are on our third automation framework now. That comes with growing big. We had to move the Lyft app from Objective-C to Swift; that was a big change for us in 2015, and with that we had to create an automation framework. When we were at a smaller scale with fewer features, we were using [inaudible 00:05:09], which was working pretty well. But as we got more complicated, and especially with mocking locations, it was becoming more difficult to keep running that. So now we’ve built our own native framework on iOS, and Espresso for Android specifically.

Since we are still struggling with scaling, because we keep growing, we are trying to shift the culture so that maybe devs don’t necessarily need QA to automate and test everything for them. If they need to write a test, or if they have [inaudible 00:05:44] to write a test, they have that framework and they can plug and chug as they develop. So we’re coming to a point, we’re hoping, where a dev is able to do their own testing, and is encouraged to do their own testing, while we take care of all the extreme cases or scenarios that we know users will use the app for.
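
To illustrate the “plug and chug” idea on the Android side, here is a minimal Espresso test of the kind a developer could write alongside a feature. The activity and view IDs are hypothetical stand-ins, not Lyft’s actual code; onView, perform, and check are standard Espresso calls.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

// Hypothetical screen and view IDs, for illustration only.
class RequestRideTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(HomeActivity::class.java)

    @Test
    fun tappingRequestShowsPickupConfirmation() {
        // Drive the UI the way a passenger would…
        onView(withId(R.id.request_ride_button)).perform(click())
        // …and assert that the expected next state is visible.
        onView(withId(R.id.pickup_confirmation)).check(matches(isDisplayed()))
    }
}
```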

Kristel: So great customer experience has to be built into your team’s core thinking, right? Everyone has to take care of it and everyone has to be involved. [crosstalk 00:06:20]

Heather: Yeah. Quality is everybody’s problem.

Kristel: Yeah, exactly. I always like to say that as well: quality is everyone’s job. Last week I did a webinar with the [inaudible 00:06:30], a partner director of engineering at Microsoft, and he told me the same thing. He’s running the Outlook team, and they are all involved in achieving very high quality. They have engineers go work with customer support to see the exact issues that customers are complaining about, so everyone can feel the real pain of what is happening.

Heather: We have our voice-of-customer lead at headquarters in San Francisco. She comes to our engineering all-hands every other week, and she’ll go through the bug of the week or the complaints of the week with us. We walk through what customers are facing, if I haven’t heard it already.

Kristel: So how do you measure your team’s QA efforts?

Heather: That’s a great question.

Kristel: Because there are still a lot of companies out there that measure QA purely on the number of issues. How is it for you guys?

Heather: Yeah, so how we measure QA is probably by how often folks come to us with testing needs when they [inaudible 00:07:40] in our Slack channel if they need someone. Or, I would say, it’s more about how much the team trusts QA, where they’re like, “Oh, [inaudible 00:07:50] test,” and you’re like, “Yeah, give it to QA and we’ll be fine.” I would say the level of trust is probably how we measure QA. If folks are like, “Oh, I don’t think we can trust QA on this, I don’t think QA will be enough to have this feature go out,” that is probably a bad sign. And if that’s the case, then you have to go talk to the team and ask, “Okay, what are you uncomfortable with? If you don’t trust QA, do you want to write your own tests?” I would say that’s kind of how it is.

And for me, another good way to measure QA at Lyft is: do the developers and the engineering team help write test tools to make it easier to test? And the answer is not just “try it out on [inaudible 00:08:42] the beta program”; they want to put all the logging and tooling in place so that anybody can try it out on their test [inaudible 00:08:52] dev box first.

Kristel: Very cool. I don’t think I have any other questions, but thank you so much for coming.

Heather: Yeah. Thanks Kristel.

Kristel: Thank you. We’re also going together to Orlando in a couple of weeks to do a session there. But yeah, thanks so much for coming, and if you have any questions, then …

Audience Q&A

Kristel: Oh, awesome. That was good. We did very well. We had a lot of questions here [inaudible 00:09:27], and I think at a high level we got all the way through. But do any of you have questions about how Lyft does QA? Yeah, awesome.

Question 1: Hi. You mentioned your broad customer base. I’m just wondering what sort of consideration you’ve given to accessibility of the application?

Heather: Yeah, we do. Accessibility’s really important at Lyft. We have a part-time accessibility tester, Marco, and he’s blind. He’ll come in every two weeks and run through any new features that we want to ship, and he’ll make sure everything is voiceover friendly. And recently Android just [crosstalk 00:10:08] their accessibility efforts, so we’ve been working on making sure that Android is now accessible [inaudible 00:10:15] friendly.

And we also have features for deaf users as well. We have a lot of deaf drivers, and we have separate, dedicated customer support for them to talk about what they find useful or not useful as a deaf driver. What was really interesting was a case we talked to Marco about: he told us that a handful of times he’s had a deaf driver, and when you have a blind passenger with a deaf driver, it gets really difficult to communicate. But it can happen. We do consider voiceover to be a P0 test case, so if any changes in the release break voiceover, we’ll cancel the release. It’s pretty important, yeah.

Question 2: I think Lyft is going global right now. How do you handle testing and quality control in terms of different countries, different [inaudible 00:11:16], different address formats, and all those kinds of things? How do you test that?

Heather: We made sure in our fare pricing service that nothing is hard-coded in USD. That’s my first thing: make sure that everything on the service side is translated into cents, and that no place in the code refers to US dollars. When we were refactoring a lot of our code, that was the goal, so that when we did the Didi integration, for example, nothing was hard-coded in USD.

Yeah, that’s kind of how we did that check.
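
A minimal sketch of the “everything in cents, nothing hard-coded in USD” idea: represent money as integer minor units plus an ISO 4217 currency code, and only localize at the formatting edge. The Money type and function names are illustrative, not Lyft’s service code; Currency and NumberFormat are the standard JDK APIs.

```kotlin
import java.math.BigDecimal
import java.text.NumberFormat
import java.util.Currency
import java.util.Locale

// Fares as integer minor units ("cents") plus a currency code, so no
// service logic ever assumes US dollars.
data class Money(val minorUnits: Long, val currencyCode: String)

fun format(fare: Money, locale: Locale): String {
    val cur = Currency.getInstance(fare.currencyCode)
    // Shift by the currency's own fraction digits (2 for USD, 0 for JPY).
    val amount = BigDecimal.valueOf(fare.minorUnits)
        .movePointLeft(cur.defaultFractionDigits)
    val fmt = NumberFormat.getCurrencyInstance(locale)
    fmt.currency = cur
    return fmt.format(amount)
}

fun main() {
    println(format(Money(1250, "USD"), Locale.US))      // $12.50
    println(format(Money(1250, "EUR"), Locale.GERMANY)) // 12,50 €
}
```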

Question 2.1: Do you have anyone physically in those different regions, or [crosstalk 00:00:21]

Heather: No, I don’t know that anyone is necessarily in those countries. I mean, we do have some international testing, but they all simulate their locations in that [inaudible 00:00:32]. If there’s something that we absolutely need to verify from a different country, we might just call a friend, have them check it, and give them some credit. But if Lyft is ever willing to take us internationally to verify a test, I’m up for it.

Question 2.2: Has it changed your device profiles? Going international makes compatibility much harder. Have you changed your device profiles as a result of that yet?

Heather: No. We discovered, when we did the Didi integration, that a lot of international users have much older phones. There was a huge portion of folks still on Samsung S2s, and we had to go into the old Lyft closet and pull out those S2s, which we were glad still worked. Part of that was, when we discovered that, that’s when we invested in creating the mobile web version of Lyft, for those cases where we really don’t want to support those older platforms. That was something we kept pushing off, but again, we were [inaudible 00:01:41] like, “No, we have to invest in this,” and now we have mobile web, and that’s our solution for any old or international devices.

Question 2.3: Do you have a percentage … [inaudible 00:02:00] What percentage of your group’s testing is automated?

Heather: We have some percentage. I would say …

Question 2.4: 50/50? 50%?

Heather: No, it’s not 50. It’s more like 30% automated and [crosstalk 00:02:12] … yeah, 70% manual or [inaudible 00:02:18]. Yeah.

Question 2.5: Do you have any requirements process?

Heather: Requirements?

Question 2.6: Yeah. How do you define the features from the beginning for the QA testers …?

Heather: Yeah. So usually our product managers will do a walkthrough of the mock-ups and the designs, with our UX designer as well, and that’s when we have the opportunity to ask, “Okay, what’s the goal of this flow or this feature?” From there we can see what we’ll need to test, and go from, “Okay, what would we consider the positive, high-priority test cases, and what are the high-priority error cases?” Some product managers will be good about reviewing those test cases with QA, but if product has no time, then they’ll entrust it to us in QA at Lyft to say, “Oh, this test case failed, this is not going out.” And then we’re like, “OK.” So it’s nice to have a lot of trust in that from the product side.

Question 3: So are you always involved? Is QA always involved in requirements from the get-go?

Heather: We should be. There are times when we’re moving so fast that they’re just like, “Oh, build this feature,” and we’re like, “Where are the mock-ups or designs for this?” And they’re like, “Oh yeah, it’s just going to be this feature and that’s it.” OK, but you know that breaks airport queues, right? That’s where we have to escalate it and say there wasn’t the time or the testing involved to realize that this was going to break, and so that’s when you have to say, “Well, there wasn’t enough preparation. There wasn’t enough thought put into it. It’s not going in.”

Question 4: Can you also tell us some of your biggest learnings from the past few years at Lyft? Some things to do and not to do?

Heather: I think it’s great that we move fast, and we want to stay very competitive and provide a great experience. Along with that, if it’s too much too soon, it’s okay to slow down and rethink it. What I’ve learned at Lyft is that it’s all very much a learning process, and it’s not a blame culture, which I really appreciate. It’s just, “OK, it looks like I messed up, y’all. How do we do better next time?” Probably the biggest thing I’ve learned at Lyft is to be very adaptive to change. When I joined two years ago, it was still relatively small, and it was always just making sure passengers and drivers were able to meet for their rides. Now it’s, “OK, we have the Amp now. How do we make this a really cool experience for drivers and passengers?” It just keeps changing. Plus the integration that we did with [inaudible 00:05:29] and all that. Now, with our partnership with GM, it’s always, “OK, what else can we do?” It’s always changing. So, be open to change.

Question 4.1: How many [inaudible 00:05:49]?

Heather: Well, for the UI, for the apps, it’s weekly releases. On the server side, they release two to three times a day.

Question 4.2: So since you’re doing weekly releases, I’m assuming most of your tests are manual, right?

Heather: Yeah.

Question 4.3: [inaudible 00:06:08] They’re supposed to do the automation for you. Is that kind of how …?

Heather: No, Testlio does the manual testing for us. And we keep our automation engineers in-house; they are the ones who [crosstalk 00:06:17]

Question 4.4: Eventually, hopefully they’ll want to [inaudible 00:06:21]

Heather: That’s the goal. We also have new features coming in every week, so it’s just a lot to stay on par with all the changes.

Question 5: What’s the timeframe between the beta test [inaudible 00:06:38] and going to the actual [inaudible 00:06:39]?

Heather: We cut a release branch every Wednesday. I think we’re moving to Tuesdays at the end of May. The day we cut the release branch, that gives us a week before we submit it to the App Store. That’s when we give it to the beta users, so the betas have about a week playing with it before we submit it, or release it, to Apple or Google.

Question 6: How do you engage with your beta users? How have you gotten them to be part of your beta program?

Heather: It started off with just friends and family who use Lyft. We have a Facebook group where we interact; we post pictures, and other beta users can chime in on those posts with their screenshots. That’s mostly how we interact with the beta group. We also have an email address where they can submit all their feedback, and we have quite a few beta users in San Francisco, so I’ll usually drive with beta drivers [inaudible 00:07:41] and ask them how they’ve liked any feature we’ve put on them for the beta.

Question 7: Is it possible to automate the voiceover tests?

Heather: We haven’t found a way yet, so that’s still very manual at this point.

Question 7.1: I’ve looked into a few [inaudible 00:08:06]

Heather: I haven’t seen anything for comparing audio … I’m sure there’s a way, but I haven’t had time for that. Right now it’s just one high-priority call at a time.

Question 8: So weekly releases haven’t eliminated the need for emergency patches or hot fixes?

Heather: I wish. There’s still a hot fix that happens every now and then, and we’ll have to expedite it when it happens. It’s a little bit easier on the [inaudible 00:08:41] side, but with Apple, you have to know someone inside to be able to get it through quickly. We definitely weigh that: what’s the cost of this hot fix versus waiting for the next release to get it in? You just have to balance it out and see what the call is. But we don’t hot fix too often. It’s usually for something that would completely ruin a ride for a user or driver, which doesn’t happen often. Usually what ends up [inaudible 00:09:18] a hot fix is someone merging something without telling anybody, thinking they’d get away with it.

Question 9: We’ve just implemented, like you said, that [inaudible 00:09:28] change as well. It’s a good feeling to have that camaraderie, and even the developers are now talking about how the confidence level has gone up. Are you seeing that start to rise as well?

Heather: Oh yes.

Question 9.1: OK.

Heather: Yeah. I think it definitely helps when a developer sees how QA goes about testing and how they look at the app. They’re definitely just going, “Oh wow, you know, I never thought of it that way.”

Question 10: How long have you been [inaudible 00:09:55] with Lyft?

Heather: So, I joined Lyft right when they were starting that culture change. I was interviewing in April of 2015, and I mentioned that the QA at Comcast would work with the developer teams [inaudible 00:10:15]. They were like, “OK, if we do this, would you be fine with it?” I’m like, “Yeah, I think it’s a great idea.” Some folks on the team when I joined were very unsure of it, but you know, it’s all just a learning process.

Question 10.1: The developers were unsure about it?

Heather: I think everyone was. I think even QA was just like, “Why would [inaudible 00:10:34] even know what to do?” So it was a learning change for everybody. Product really appreciated it too, just so they could see what developers are trying to figure out and how testing complements that. It was a good change, and I’m glad I joined right when they were doing that.

Question 10.2: Who drove the change? Did it come from top management?

Heather: Yes. They had just hired our VP of engineering at the time, Pete Morelli. He had joined Lyft earlier in 2015, and he was the one who said, “No, no, no, we have to have QA as part of the engineering team.” He came from Twitter, so that’s kind of how they did it there too.

Question 10.3: I think with this combined engineering culture a lot of manual testers have started to feel that they’re being removed from engineering, or they’re gonna lose their jobs. But I don’t think that [inaudible 00:11:31] forever.

Heather: That’s what we had a couple of months ago, and they were so afraid. They were like, “We can’t do that.” Then there was also a group who reminded them that the more you get involved, the more you can learn, and you can progress your career as well. You can become …

Kristel: The more testable everything becomes.  [crosstalk 00:11:53]

Question 11: Talk about working together with [inaudible 00:11:57]. What’s the working model for how you guys work together? Do you have people sitting in your office together?

Michelle: No. [inaudible 00:12:13] got a [inaudible 00:12:08]

Heather: There’s a universal messaging app that we use that [inaudible 00:12:15] all the time during a test cycle. It will let me know if the test boxes are down or if something’s not working right.

Question 11.2: So you guys are not physically sitting together in a room.

Michelle: No but these guys come into my office all the time.

Kristel: Thanks for coming. [inaudible 00:12:33] But is it time now?

Michelle: It is time.  [inaudible 00:12:37] Thank you so much.