Five Steps to Make Mobile Compatibility Testing More Agile and Future-Ready
Today, the most crucial point for mobile and smart device application testers is to ensure the tested product works smoothly on all types of devices used by end-users.
February 2, 2017
Pavel Novik is Mobile Apps Testing Manager for A1QA.
With the sheer number of devices and operating systems appearing on the market, fragmentation poses a particular challenge to software testing and quality assurance specialists. It turns out that many companies are not ready to face the ever-growing number of OSs, platforms, and devices.
Android seems to be the most complicated case. Why so? Consider the following data: in 2012 there were about 4,000 Android device models on sale. By the end of 2015, that figure had increased sixfold, reaching a total of 24,000 distinct Android devices.
Obviously, companies eager to take the lead in the industry should focus not only on constant innovation, but also on renewing their set of test devices on a regular basis.
The most crucial task for mobile and smart-device application testers, then, is to ensure the product works smoothly on all types of devices used by end users. To meet this demand, they need to take into account the various network conditions under which the apps are used and the experience users derive from them. This is commonly referred to as the compatibility of the application, and the quality assurance process that ensures it is usually called "compatibility testing."
There are three factors that steadily increase the complexity of compatibility testing:
The frequent launch of new device models that incorporate new mobile technologies. To make things worse, new models sometimes reach the market after testing has come to its logical end, which poses new challenges to product owners and managers. Changes to the UI, font sizes, CSS styles, and colors made while the product is being tested for compatibility complicate the procedure further.
Testing can't be limited to browser-layer checks and must cover additional layers, such as operating systems and technology features.
Application functionality depends on the device's hardware features, which necessitates understanding how those hardware features impact that functionality.
To make this point clear, let's consider a case involving the popular smartphone app Pokémon GO, which combines augmented reality and GPS tracking.
When the application was tested (of course, it's impossible to provide full testing coverage without access to the app's backend), it was found that on some devices the application didn't work with the camera drivers and therefore switched off the augmented reality mode, the very feature behind the app's incredible success.
With all this in mind, the main question for mobile testers should be how to run compatibility testing and:
Cover the maximum number of devices and come close to 100 percent of my end users' device base?
Test for those functionalities that are more likely to fail in the event of a technology upgrade or a new device appearing on the market?
Trace the detected issue to the corresponding layer: OS, browser, device features, or skins?
The following five-step approach will help answer these questions and make compatibility testing more agile and scalable. At first glance, it may look time-consuming, but it will pay off in the end.
Step One – Create the Device Compatibility Library: Take every device or model available on the market and structure the following information: platform details; technology features supported by the device (audio/video, image, and document formats, etc.); hardware features included in the device; and network and other technology features the device supports. Most of this data is easy to find on the manufacturer's website or in product release notes. This list will be helpful across numerous projects.
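As an illustration, here is a minimal sketch of what one library entry might look like, assuming a simple in-house catalogue kept in Python; the DeviceEntry structure, its field names, and the spec values are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceEntry:
    model: str
    manufacturer: str
    platform: str                                         # OS name and version
    media_formats: set = field(default_factory=set)       # audio/video/image/document formats
    hardware_features: set = field(default_factory=set)   # camera, GPS, NFC, gyroscope, ...
    network_features: set = field(default_factory=set)    # LTE, Wi-Fi Direct, ...

# One entry, filled in from a manufacturer's spec sheet (values invented):
library = [
    DeviceEntry(
        model="Galaxy S7",
        manufacturer="Samsung",
        platform="Android 7.0",
        media_formats={"mp4", "h264", "jpeg", "pdf"},
        hardware_features={"camera", "gps", "nfc", "gyroscope"},
        network_features={"lte", "wifi-direct"},
    ),
]
```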
Step Two – For every project, shortlist devices according to the peculiarities of the target region or country so as to cover the maximum number of end users there: Consider the available poll results and market analysis. You can also use sites like DeviceAtlas, StatCounter, or Google Analytics to identify the most popular devices in the region.
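One way to turn such popularity data into a shortlist is a simple greedy cut-off, sketched below; the models, the market-share figures, and the 25 percent coverage target are placeholders, not real data.

```python
# Hypothetical market-share figures for the target region (e.g. hand-copied
# from StatCounter or a Google Analytics export); numbers are placeholders.
regional_share = {              # model -> % of end users in the region
    "Galaxy S7":   14.2,
    "iPhone 6s":   11.8,
    "Redmi Note 3": 6.5,
    "Nexus 5X":     1.1,
}

def shortlist(share_by_model, coverage_target):
    """Pick the most popular models until the coverage target (%) is met."""
    chosen, covered = [], 0.0
    for model, share in sorted(share_by_model.items(),
                               key=lambda kv: kv[1], reverse=True):
        if covered >= coverage_target:
            break
        chosen.append(model)
        covered += share
    return chosen, covered

# Picks the top models until roughly 25% of regional users are covered:
print(shortlist(regional_share, coverage_target=25.0))
```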
Step Three – Divide all devices into two lists, fully compatible vs. partially compatible: Fully compatible devices support all the technology features required to make every application function work seamlessly, while partially compatible devices may lack one or more features and therefore produce error messages.
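A minimal sketch of this split, assuming each device is described by the feature sets from the Step One library; the required-feature set and the device entries are invented for illustration.

```python
# Features the app needs for every function to work end to end (invented):
REQUIRED = {"camera", "gps", "h264"}

# Feature sets as recorded in the Step One library (invented entries):
devices = {
    "Galaxy S7": {"camera", "gps", "nfc", "h264"},
    "Nexus 5X":  {"camera", "gps", "h264"},
    "Budget X1": {"camera", "h264"},          # no GPS
}

fully, partially = [], {}
for model, features in devices.items():
    missing = REQUIRED - features
    if missing:
        partially[model] = missing            # remember which features are absent
    else:
        fully.append(model)

print("fully compatible:", fully)             # ['Galaxy S7', 'Nexus 5X']
print("partially compatible:", partially)     # {'Budget X1': {'gps'}}
```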
The question that forms the bulk of the debate here is whether it's worthwhile to run tests on emulators. Real devices are always the better option, as they give a true feel for the app.
But if a real device is rare or prohibitively expensive, "half a loaf is better than no bread." Android and iOS emulators are mainly designed for native applications, but their default browsers accurately reproduce how the app will look on a real device.
Step Four – Run tests on fully compatible devices: When prioritizing testing, verify 100 percent of the app's functionality on selected devices from this list. If you can't run tests on every device on the list, cover at least one from each manufacturer.
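If the list has to be thinned out, the "one per manufacturer" fallback can be as simple as the sketch below; it assumes the fully compatible list is already ordered by testing priority, and the device names are invented.

```python
# Fully compatible devices, ordered by testing priority (invented pairs):
fully_compatible = [
    ("Samsung", "Galaxy S7"),
    ("Samsung", "Galaxy A5"),
    ("LG",      "Nexus 5X"),
    ("Apple",   "iPhone 6s"),
]

# Keep the first (highest-priority) model seen for each manufacturer:
one_per_maker = {}
for maker, model in fully_compatible:
    one_per_maker.setdefault(maker, model)

print(sorted(one_per_maker.values()))
# ['Galaxy S7', 'Nexus 5X', 'iPhone 6s']
```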
Step Five – Run tests on partially compatible devices to the extent possible: Try to perform testing on the latest and most widely used set of devices. Place initial focus on the functionality that might be influenced by unsupported features.
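To decide where that initial focus should fall, it can help to keep a map from device features to the app functions that depend on them, as in this sketch; the feature names and function names are purely illustrative.

```python
# Hypothetical map from device features to the app functions relying on them:
FEATURE_TO_FUNCTIONS = {
    "gps":    ["map view", "nearby search"],
    "camera": ["AR mode", "photo upload"],
    "nfc":    ["tap-to-pay"],
}

def at_risk_functions(missing_features):
    """List the app functions to test first on a partially compatible device."""
    risky = []
    for feature in missing_features:
        risky.extend(FEATURE_TO_FUNCTIONS.get(feature, []))
    return risky

# For a device that lacks GPS, start with the location-dependent functions:
print(at_risk_functions({"gps"}))             # ['map view', 'nearby search']
```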
This simple approach has proven very efficient in handling multifaceted compatibility testing.
After going through all of these steps, it should be clear that mobile compatibility testing, important as it is, can end up less time-consuming, more agile, and more future-ready if you follow this plan of action. Remember, starting testing early always pays off! In the case of mobile compatibility, it should begin as soon as the build is stable enough to support testing.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.