Users want sleek, elegant apps that are priced right and action-focused. Effective mobile application testing strategies are built around one end goal: pleasing the user. Mobile application testing is about more than just finding bugs. Testers need to communicate user desires to developers and stakeholders. And there are also environment concerns: different screen resolutions, operating systems, and network limitations all top the list of common issues with mobile testing. How do testers handle all of that while advocating for the user? From the esoteric to the oops-should've-thought-of-that, here's a round-up of every major mobile app testing fail.

Saving testing for the end

Developing purely agile or purely waterfall is a bit of a hoax; the vast majority of teams operate in some sort of hybrid mode. But even when testing is supposed to be built in, it's easy to fall prey to the temptation of leaving it until the end (especially on small projects). We know that practice isn't optimal. Testing early, often, and fast in the development process allows for test-driven requirements and goals. Testers become champions for the app along its course, as opposed to nagging critics slowing the project down (which is how it can feel when testing is left until right before launch). Mobile apps are typically expected to release quickly, so there's no time to save testing until the end. By then, it's too late to make the high-level changes that users might really appreciate.

Not paying enough attention to the state of the device

Even though teams know in theory how important device states are to the performance of an app, they still might skip this type of testing, or not delve deeply enough. Users put their devices into new states every day, so not paying attention to them isn't an option.
Here are some of the things to test for:

- Location services on/off
- Battery level
- Interactions with background apps
- Permissions and settings for notifications
- Screen brightness
- Incoming calls and texts
- Actions of any physical buttons on the device

For a more exhaustive list (the best I've seen), check out the book App Quality: Secrets for Agile App Teams by Jason Arbon.

Wasting resources by purchasing a variety of devices

While a testing lab full of dozens of different devices sounds super fun, it's likely a waste of funds. Devices change all the time, and you need access not only to different devices, but to the networks they run on in other countries. You'll still want to test with whatever you have access to in-house, but when it comes to going "all out" for device coverage, find another way than purchasing devices whose relevance won't last. You can crowdsource your testing with us and gain access to a variety of devices all around the world by going the manual testing route. Or you can use services like Perfecto or AWS Device Farm to access real devices in the cloud via automated, scripted testing.

Hanging on to a traditional, function-driven test mentality

The "it does what it's supposed to do" mentality works for lots of types of testing, like ERP and EAI products. But for mobile? Not really. Testers need to adopt a user-advocate mindset. Never before have testers been required to be so creative and so user-focused. Mobile users are pickier and more critical because devices are personal. They're an extension of our natural capabilities, and we need them to do what we want, how we want, and when we want, every single day. From an organizational standpoint, keeping everyone involved, whether employees or contractors, mindful of user expectations is how you build a brand and a product that disrupts the market. You empower everyone on the team to be customer-focused.
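Returning to the device-state checklist above: one lightweight way to keep that coverage honest is to generate a matrix of state combinations and run a smoke pass against each one. This is only a sketch; the states and values below are illustrative assumptions, and the actual "configure the device, then check the app" step is a placeholder for whatever driver your team really uses (Appium, Espresso, a manual test charter, etc.).

```python
import itertools

# Illustrative device states drawn from the checklist above; extend with
# background apps, screen brightness, incoming calls, and so on.
DEVICE_STATES = {
    "location_services": [True, False],
    "battery_level": [5, 50, 100],          # percent
    "notifications_allowed": [True, False],
}

def state_matrix(states):
    """Yield every combination of device-state settings as a dict."""
    keys = list(states)
    for combo in itertools.product(*(states[k] for k in keys)):
        yield dict(zip(keys, combo))

if __name__ == "__main__":
    for state in state_matrix(DEVICE_STATES):
        # In a real suite: put the device into this state, then run smoke tests.
        print(state)
```

Even printing the matrix is useful: it turns a vague "test device states" item into an explicit, reviewable list of scenarios.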
Relying on users to be testers

For development teams with a small testing budget and no access to testers, it's tempting to proof the app yourself and let users be your initial testers. But you need a second set of eyes on UX and device concerns, and you don't want to risk the time it takes to implement fixes after launch. Reviews are not bug submittals. Imagine the horror: your app is approved by the store, it goes live, and you get a few positive reviews, along with some frustrated bug finds. You can't erase those reviews and just make them go away. You also can't fix the problems quickly. Even if you upload a fix within a day, you don't control when that fix gets deployed to users, and you can't rush the app store's process. The only thing worse than saving testing until the end is not testing at all.

Focusing on localization without globalization efforts

A typical development process is to design an app in the native or primary language and then translate it, having testers check that the translations are in the right place. Saving localization until the end can mean that text gets wrapped, cut off, or split, depending on the length of the translated words. Buttons and menus might not look at all like they were intended to. Instead, have testers look early (in the initial phases!) at a pseudo-localized build that shows the longest possible strings and exercises both left-to-right and right-to-left concerns.

Releasing mobile app updates without customer transition testing

When releasing a new update or version, your testers must look at the last couple of versions (even though they tested them already). It's important to check versions against each other. Is an old feature missing? If so, this should be explained during the release of the new version: in the app store info, in pop-up explainer messages inside the app, on the company blog, or all three. Is customer data suddenly missing after the upgrade? That could be an error!
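To make that transition check concrete, here's a minimal sketch of an upgrade test: simulate migrating a user's stored data from the old schema to the new one and assert that nothing is silently dropped. The profile fields and the `migrate()` function are hypothetical stand-ins for whatever storage layer and schema changes your app actually has.

```python
# A user's data as stored by the previous app version (hypothetical schema).
OLD_PROFILE = {"name": "Sam", "theme": "dark", "saved_items": [1, 2, 3]}

def migrate(old):
    """Pretend v2 renames 'theme' to 'appearance' but must keep everything else."""
    new = dict(old)
    new["appearance"] = new.pop("theme")
    return new

def check_no_data_loss(old, new):
    """Fail if any old field vanished in the upgrade (renames excepted)."""
    missing = set(old) - set(new) - {"theme"}   # 'theme' was renamed, not lost
    assert not missing, f"fields lost in upgrade: {missing}"
    assert new["saved_items"] == old["saved_items"], "customer data changed"

check_no_data_loss(OLD_PROFILE, migrate(OLD_PROFILE))
```

The point is less the code than the habit: every release, run the previous version's data through the new version and prove the customer's stuff survived.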
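Circling back to the pseudo-localized build idea from the localization section: you can approximate one with a tiny string transformer that swaps in accented look-alike characters and pads every string to mimic longer translations. The accent mapping and the roughly 40% expansion factor below are illustrative assumptions, not a real localization pipeline, but running UI strings through something like this early surfaces truncation and wrapping bugs long before real translations arrive.

```python
# Map ASCII vowels to accented look-alikes so untranslated strings are
# obvious on screen while remaining readable to testers.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudo_localize(text, expansion=0.4):
    """Return an accented, padded, bracketed version of a UI string."""
    padded = text.translate(ACCENTED)
    extra = int(len(text) * expansion)          # mimic longer translations
    return "[" + padded + "~" * extra + "]"     # brackets expose clipping

if __name__ == "__main__":
    print(pseudo_localize("Save changes"))
```

If the brackets or tildes get clipped in a button or menu, you've found a layout bug that a real German or Finnish translation would have hit later.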
Users often wake up to brand-new versions of their favorite apps, and unless you're Instagram, it's hard to get away with shocking people before breakfast. So always consider the transition when testing.

Ok testers, over to you! What else do you do (or not do) when testing mobile apps?