Today’s consumers want to interact with businesses through their smartphones, and many modern services and products are exclusively app-based. This makes a mobile app a prerequisite for any successful consumer-centric business.
Consequently, a lot of businesses view apps as a primary digital channel that attracts customers and generates revenue. The most significant challenge stems from this fact: consumers have ever-increasing quality standards for mobile apps. If your app is broken in some way or doesn’t meet those standards, the chances are high that you’re going to lose your user base.
Such a negative customer experience also erodes brand loyalty: people flock toward products that work, and a poorly executed app will increase churn at an alarming rate.
Many apps get installed, but only a fraction of them are used more than once, and a substantial portion gets deleted after the first use simply because they weren’t user-friendly. Many factors lead to this, including the design and the app testing pipeline applied to the software.
Testing apps is a complicated and challenging process. But most importantly, it’s very taxing on financial and human resources. That’s why automated testing routines are slowly taking over the process, and AI is viewed as a solution for complicated app testing problems.
In this article, we’ll talk about the concepts that are involved in AI-driven app testing. We’ll also discuss some of the bottlenecks and misconceptions associated with this monumental shift in app testing playbooks.
App Testing AI is Not AI
Today’s artificial intelligence scene is mostly a collection of buzzwords attached to concepts that actually have nothing to do with AI. When businesses try to lure clients with the ‘AI’ marketing speak, the underlying technology is most likely an elaborate machine learning platform with bells and whistles.
There’s no real AI involved in app testing: no actual cognitive functions sit behind the software, although advanced machine learning platforms manage to mimic them.
So when companies advertise AI-based app testing, they most likely mean machine learning (ML-based) software that automates certain parts of the pipeline. This is an important distinction, as you, being the customer, have to know the difference to make the right purchasing decision.
So you’re presented with two major options: dedicate your team’s effort to building out that pipeline yourself, or hire a professional team of testers that will either complement the software of your choosing or already have such software in their arsenal. Let’s take a closer look at some of these tools to get a better understanding of their functions.
Using AI-Driven App Testing Tools
This is one of the paths taken by companies that don’t have the necessary talent to set up a testing pipeline with AI. The pros of this approach are pretty straightforward, as you don’t have to know much about testing or have the infrastructure in place.
However, it’s important to realize that these tools still rely on human input, so having professional testers on board for troubleshooting might be the preferred course of action, especially if you don’t know much about optimized app testing. Many tests require human judgment because software systems aren’t prepared to identify specific errors. Testing localized app versions is one example: only a person can judge whether CTAs and text are used appropriately in context.
These tools also usually require human input at the starting point, so testing teams have an advantage here, as they already have testing scenarios and scripts figured out.
Eggplant AI
This is part of the Eggplant app testing bundle. The product automates test case creation using machine learning algorithms. The high-level overview is pretty simple: you create an optimal behavioral model for your users, with desired and undesired outcomes, and that model serves as a blueprint for the algorithms to comb through your app. Now let’s take a closer look.
You start out by building a model of user behavior. This model incorporates the basic behavioral patterns that a user might exhibit throughout your application. The patterns are represented by states and actions, which correspond to pages in your app, specific transition effects, or UI elements on those pages.
Each of these categories also has subcategories. For example, some actions are non-sequential, meaning they don’t have to be performed in a particular order, while others do. There are also return states to which the app has to revert after a certain action has been performed; going to your product catalog and then back to the homepage is one such action.
You can also set a ‘weight’ for each action to mark which of them are more important. This helps the AI within Eggplant determine which actions to prioritize during the execution of the model.
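Eggplant’s modeling language is proprietary, but the underlying idea is easy to sketch. Below is a minimal Python illustration of a weighted state/action model; the page names, actions, and weights are all hypothetical, and a weighted random walk stands in for the model execution.

```python
import random

# A minimal sketch of a weighted behavioral model, in the spirit of
# Eggplant's state/action graphs. All names and weights are hypothetical.
MODEL = {
    "home": [
        # (action, target state, weight) -- higher weight = higher priority
        ("open_catalog", "catalog", 3.0),
        ("open_profile", "profile", 1.0),
    ],
    "catalog": [
        ("view_product", "product", 2.0),
        ("go_home", "home", 1.0),       # a 'return state' transition
    ],
    "product": [
        ("add_to_cart", "cart", 2.0),
        ("go_back", "catalog", 1.0),
    ],
    "cart": [
        ("checkout", "home", 1.0),      # desired outcome, then revert home
    ],
    "profile": [
        ("go_home", "home", 1.0),
    ],
}

def run_model(start="home", steps=10, seed=None):
    """Walk the model, favoring higher-weight actions, and log the path."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        actions = MODEL[state]
        weights = [w for _, _, w in actions]
        action, state, _ = rng.choices(actions, weights=weights, k=1)[0]
        path.append((action, state))
    return path

for action, state in run_model(seed=42):
    print(f"{action:>14} -> {state}")
```

In a real product, the walk would be guided by learned relevance rather than a fixed random seed, but the structure of states, transitions, and weights is the same.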
Once everything is set up, you can start running the model(s). During the execution stage, Eggplant AI uses machine learning to identify the most relevant test cases and adjusts the testing behavior accordingly.
As a result, you receive maximum coverage for your codebase, as the model covers as many user activity scenarios as possible.
At the same time, you don’t have to waste any resources compiling and verifying test scenarios/cases that your app might need to go through. Of course, everything is being logged during the test, so you can always track back and review error reports after each run.
Testim ML Automation
This product takes a different approach. Instead of relying on internal frameworks, it uses your input as the starting point for testing. With traditional test automation, you often have to record the test or define actions, then run it through standard automation software that simply replays the recorded input. When the run hits an unexpected action that breaks the defined cycle, the test fails, either immediately or at the end of the programmed run.
For example, your test case is to get from page A to page B in the app, but in the process a popup appears. Many automation tools categorize this as a failure since the popup was not in the script. These cases often occur when you make changes to your app but don’t update the test scripts.
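To make the failure mode concrete, here’s a minimal Selenium sketch with hypothetical URLs and element IDs. A blindly scripted test fails when the popup covers the link; the tolerant variant dismisses it and moves on, which is exactly the kind of adjustment an ML-based tool is supposed to learn on its own.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()
driver.get("https://example.com/page-a")  # hypothetical app URL

# A brittle script clicks blindly; if a promo popup covers the link,
# the click fails and the whole run is marked as a failure.
# A slightly more tolerant script checks for the popup first:
try:
    # hypothetical popup close-button ID
    driver.find_element(By.ID, "promo-popup-close").click()
except NoSuchElementException:
    pass  # no popup this run; carry on with the scripted flow

driver.find_element(By.ID, "go-to-page-b").click()  # hypothetical link ID
assert "page-b" in driver.current_url
driver.quit()
```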
Testim uses machine learning to identify crucial performance bottlenecks by analyzing your input and comparing your older code to the newer version. The more you test the app, the smarter the algorithm gets, identifying new actions within the app as you update the code. It will also ‘understand’ when an intermittent action won’t break the sequence and complete the test error-free. Because it gathers this app usage data constantly, the tests become more stable over time, as the software ‘learns’ the usage patterns and the standard/desired functionality.
Rainforest Machine Learning Test Verification
Rainforest QA takes yet another approach to using AI in the QA process. Instead of directly affecting the tests, it uses machine learning algorithms to identify false positive cases of errors and broken test sequences.
That way the development team doesn’t have to spend time reviewing ‘errors’ that are really just issues with the test flow or mistakes by testers.
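Rainforest hasn’t published its models, but the triage idea can be sketched with an off-the-shelf classifier. In this toy example, the features and labels are entirely made up: each reported failure is described by tester disagreement, the step’s historical flakiness, and the time spent on the step.

```python
from sklearn.ensemble import RandomForestClassifier

# A toy sketch of ML-based failure triage. Hypothetical features per
# reported failure: [tester disagreement rate, historical flakiness of
# the step, seconds spent on the step]. Labels: 1 = real bug,
# 0 = false positive (test-flow issue or tester mistake).
X_train = [
    [0.1, 0.05, 40.0],
    [0.8, 0.60, 5.0],
    [0.2, 0.10, 35.0],
    [0.9, 0.70, 4.0],
]
y_train = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Score a new failure report before it reaches the dev team.
new_failure = [[0.7, 0.5, 6.0]]
print("likely a real bug" if clf.predict(new_failure)[0] else
      "likely a false positive -- route back to QA, not to devs")
```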
What About Those Bots Everybody’s Been Talking About?
Some services run app testing with the help of bots: software that mimics the actions of real users. Over time, it supposedly becomes better at using the app and exploring the various test cases and routines through machine learning. Some of these tools even promise total app coverage, including UX testing, which is hard to imagine, since bots can’t perceive the app the way humans do, no matter how much AI/machine learning is put into them.
Let’s now talk about the problems with the solutions that promise advanced machine learning functionality for your application testing needs.
What’s Wrong with These Tools
While all of the products that we listed above offer specific advanced AI functions, they can’t possibly reproduce the complexity of the testing process. And here’s why:
■ These tools can’t entirely mimic the human experience. Say you have an older version of your logo in the app: you’d still need someone to go over the app pages and make sure the new logo is displayed correctly everywhere. It’s a simple example, but it illustrates the problem well. You still need humans for this part of the testing process.
■ They can’t possibly know the exact purpose of your app, so they can’t fully grasp the small differences that make your app special or the ‘human’ use cases specific to it.
Why App Testing AI is Out of Reach for SMBs
Building out a machine learning pipeline is a complicated process. First, you need data. Lots of it. Then you need to build features out of that data. This means creating datasets with specific data points used for the model.
This is the hardest part of the process, as you somehow need to convert your app data, say user logs, into rows and columns that can be fed into the model. Sure, you can try playing around with unsupervised machine learning, where this kind of featurization is not required, but it takes a lot of mathematical knowledge to make a working model that way.
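To make ‘featurization’ concrete, here’s a minimal pandas sketch that turns raw user-log lines into a per-page feature table. The log format and column names are assumptions for illustration, not any particular product’s schema.

```python
import pandas as pd

# Hypothetical raw log lines: session id, page, action, outcome.
raw_logs = [
    "s1,home,tap_catalog,ok",
    "s1,catalog,tap_product,ok",
    "s1,product,add_to_cart,crash",
    "s2,home,tap_profile,ok",
    "s2,profile,edit_avatar,error",
]

rows = [line.split(",") for line in raw_logs]
df = pd.DataFrame(rows, columns=["session", "page", "action", "outcome"])

# Featurize: one row per page, with counts a model could learn from.
features = (
    df.assign(failed=df["outcome"].isin(["crash", "error"]).astype(int))
      .groupby("page")
      .agg(events=("action", "count"), failures=("failed", "sum"))
      .assign(failure_rate=lambda d: d["failures"] / d["events"])
)
print(features)
```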
Even then, nothing guarantees results: these models are typically built with neural networks, a specific subtype of machine learning algorithms, and it’s impossible to decode the exact logic the network has learned. In effect, you get a black-box solution, just like any of the tools above.
Eventually, you’ll also need to operationalize that model. It’s useless if you can’t apply it to real-life cases, like predicting which pages or functions on the page are likely to fail.
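Operationalizing might then look something like the sketch below: train a simple scikit-learn classifier on per-page features like the ones built above and score pages from the current build. All numbers and page names are invented for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Toy operationalization sketch: given per-page features like those
# built above -- [events, failures, failure_rate] -- predict whether a
# page is likely to fail in the next release. All data is made up.
X = [
    [120, 2, 0.016],
    [40, 9, 0.225],
    [300, 4, 0.013],
    [55, 14, 0.254],
]
y = [0, 1, 0, 1]  # 1 = page broke in the following release

model = LogisticRegression()
model.fit(X, y)

# Score pages from the current build and surface the risky ones first.
candidates = {"checkout": [60, 10, 0.167], "home": [500, 3, 0.006]}
for page, feats in candidates.items():
    risk = model.predict_proba([feats])[0][1]
    print(f"{page}: predicted failure risk {risk:.0%}")
```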
Then there’s the cost of hiring data science or analytics talent to solve this problem for you; we’re talking tens of thousands of dollars. If you’re testing just this one app, this option doesn’t make financial sense.
Your Options
■ You can choose to try one of these services and rely on their generalized approach to machine learning. But keep in mind that they’re not going to offer full coverage for your app’s code, UX, and UI, unless you’re ready to sign up for several of these services and somehow combine their functionality.
■ You can try building an AI-driven app testing solution in-house, but then you might as well offer it as a product to other businesses if you manage to build something viable. It’s going to be incredibly expensive, time-consuming, and devastating for your production deadlines.
■ Your final option is to find an experienced app testing team that also has its hands in the machine learning pie. Such teams have the historical data, they run tests all the time, and they probably already experiment with machine learning, so this would be a relatively simple feat for them, using open-source regression algorithms designed to find dependencies.
An Afterthought
At this point, artificial intelligence remains an incredibly complicated proposition for application testing, and there aren’t many products that offer real AI/machine learning functionality for app QA.
Your best bet is to find a QA team that has in-house machine learning solutions or uses one of the tools that we mentioned and their alternatives. This way, your app testing needs will get the maximum coverage that they deserve.
It’s also important to remember that traditional QA automation still works. You don’t have to jump on the AI bandwagon just because everyone is using it in their marketing nowadays; you don’t need machine learning or AI for successful app QA. A solid QA team can foresee the obstacles and bottlenecks in your app, because this is their bread and butter. Pro tip: take a look at their blog. Check whether their mobile app testing content is on par with the tech standards you employ at your business, and whether they cover the specific type of apps you’re looking to test.