AI Insurance Quality Assurance

Defining Extreme Automation: Cases in AI-Driven Testing

In today’s hyper-connected world, where data is continuously generated by a multitude of devices, apps, and ecosystems, Artificial Intelligence (AI) has emerged as the disruptor in the field of advanced analytics. Key to this disruption has been the umbrella concept of Extreme Automation: the confluence of disruptive technologies and tools to revamp workflows and operating models. Insurers today recognize that extreme automation will be critical to address changing demands, industry disruptions, and the speed-to-market of products. Despite this recognition, many struggle to clearly define Extreme Automation and to understand how it informs the use of AI technology, especially in testing. This blog digs deeper into the concept of Extreme Automation and focuses on three main uses of AI-driven testing in the insurance landscape.

What is Extreme Automation?

Extreme Automation is a fundamental re-imagination of how a company works across the entire enterprise in a digital world. It means developing new IT business operating models and workflows that are end-to-end, flexible, and driven by technologies such as Machine Learning (ML) and Natural Language Processing (NLP). These technologies distill critical data into real-time insights across all stages of production, accelerating efficiency and productivity and enhancing customer experiences. Especially now, as both workforces and customer bases become increasingly virtual, Extreme Automation provides scalable platforms and decision-making capabilities that cater to a hybrid workforce.

Importance of Extreme Automation in Testing

Extreme Automation addresses the increasing need to launch new products and software fast and often. As the software development life cycle grows more complex, with tight delivery schedules, weekly releases, and near-daily updates, an Extreme Automation-informed workflow infuses AI-driven analytics across the full testing lifecycle, catching costly mistakes and bugs early in the product development process. Here are a few use cases:

Use Case 1: Identify Duplicate Test Cases

To make testing cost-effective and avoid wasted effort, a machine learning model can be trained with NLP to recognize test cases written by different users for the same purpose. It can then perform this task repeatedly and without bias, gradually getting better at recognizing duplicates.

Case in point: a large direct carrier had thousands of test cases, hundreds of them duplicates, creating redundancy and wasting time in the testing process. An NLP-based AI model, trained on test case descriptions, read through the thousands of test cases and classified each as unique or duplicate with a probability score. Instead of combing through every test case, a human reviewer now focuses on the results, confirms or overrides each recommendation, and updates the test case repository. That feedback is then fed back to the AI model so its predictions keep improving.

AI model based on NLP predicts unique test cases vs duplicates
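The core of this duplicate-detection idea can be sketched with plain text similarity: score every pair of test case descriptions and flag pairs above a threshold for human review. The sketch below is a minimal, standard-library-only illustration; the `find_duplicates` helper, the 0.5 threshold, and the sample descriptions are illustrative assumptions, not the carrier's actual model (which would use a trained NLP classifier).

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a test case description and split it into word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two token-count vectors (0.0 to 1.0)."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def find_duplicates(test_cases, threshold=0.8):
    """Flag pairs of descriptions whose similarity meets the threshold,
    returning (index_a, index_b, score) for a human reviewer to confirm."""
    tokens = [tokenize(t) for t in test_cases]
    pairs = []
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens)):
            score = cosine_similarity(tokens[i], tokens[j])
            if score >= threshold:
                pairs.append((i, j, round(score, 2)))
    return pairs

# Hypothetical test case descriptions for illustration.
cases = [
    "Verify premium calculation for auto policy renewal",
    "Validate premium calculation when an auto policy is renewed",
    "Check claim submission for a new homeowner policy",
]
print(find_duplicates(cases, threshold=0.5))
```

In practice, the flagged pairs and the reviewer's confirm/override decisions become the labeled data that trains a proper NLP model, which is the feedback loop the case study describes.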

Use Case 2: Defect Root Cause Identification Using AI

A tester identifies a defect and assigns it to a dev team to resolve, only to get the reply: “Sorry, not an issue with my application.” The tester then assigns it to another team, and soon the defect is bouncing back and forth like a ping-pong ball. This problem is widespread in testing. An AI-based model can read through historical defect descriptions and use that data to learn and predict the root cause of new defects as testers identify and create them. Based on the keywords in a defect description, the model predicts the application most likely responsible, saving significant time for both development and testing teams.

AI model uses data to learn and predict root cause of new defects
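Keyword-based root cause prediction of this kind is often framed as text classification. The sketch below uses a small multinomial Naive Bayes classifier over defect-description keywords, a common baseline for this task; the class names, sample defects, and `RootCauseClassifier` API are hypothetical, assumed only for illustration.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase a defect description and split it into word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

class RootCauseClassifier:
    """Multinomial Naive Bayes over defect-description keywords."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # root cause -> token counts
        self.class_counts = Counter()            # root cause -> defect count
        self.vocab = set()

    def train(self, defects):
        """defects: iterable of (description, root_cause) pairs."""
        for description, root_cause in defects:
            tokens = tokenize(description)
            self.class_counts[root_cause] += 1
            self.word_counts[root_cause].update(tokens)
            self.vocab.update(tokens)

    def predict(self, description):
        """Return the root cause with the highest posterior log-probability."""
        tokens = tokenize(description)
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for cause, count in self.class_counts.items():
            # Log prior plus Laplace-smoothed log likelihood of each token.
            score = math.log(count / total)
            denom = sum(self.word_counts[cause].values()) + len(self.vocab)
            for t in tokens:
                score += math.log((self.word_counts[cause][t] + 1) / denom)
            if score > best_score:
                best, best_score = cause, score
        return best

# Hypothetical historical defects labeled with their confirmed root cause.
history = [
    ("Premium amount wrong after rate table update", "rating-engine"),
    ("Incorrect premium shown on renewal quote", "rating-engine"),
    ("Policy document PDF fails to generate", "document-service"),
    ("Renewal letter PDF missing agent name", "document-service"),
]
clf = RootCauseClassifier()
clf.train(history)
print(clf.predict("Quoted premium is incorrect for new business"))
```

Each confirmed assignment feeds back into training, so the model's routing suggestions improve as the defect history grows, which is what cuts down the ping-pong between teams.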

Use Case 3: Predict Which Test Case Will Likely Find a Bug

It is common knowledge that exhaustive testing is humanly impossible. Testers therefore rely on techniques such as combinatorial testing or risk-based testing, or simply on gut feeling, to select which test cases to execute.

Using historical information such as release data, bug data, and test cases, AI can identify patterns and recommend the right set of test cases to execute. For example, suppose the model learns from historical data that whenever file 1 is checked in, file 2 has always been checked in with it. If a subsequent code delivery checks in file 1 but not file 2, the model recommends executing the test cases related to file 2, to ensure the changes needed in file 2 were not accidentally left out by the developer. NLP combined with a Bayesian model is used here.

AI uses historical information to identify patterns and recommend sets of test cases
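The file 1 / file 2 example above amounts to mining co-change patterns from commit history: estimate how often file B accompanies file A, then flag B's tests when A changes alone. The sketch below is a simplified frequency-count version of that idea; the file names, `test_map` structure, and 0.9 confidence cutoff are illustrative assumptions, and a production system would layer NLP and Bayesian modeling on top.

```python
from collections import defaultdict
from itertools import permutations

def cochange_confidence(commits):
    """Estimate P(file b changed | file a changed) from historical commits.
    commits: list of lists of file names changed together."""
    file_count = defaultdict(int)
    pair_count = defaultdict(int)
    for files in commits:
        for f in set(files):
            file_count[f] += 1
        for a, b in permutations(set(files), 2):
            pair_count[(a, b)] += 1
    return {pair: n / file_count[pair[0]] for pair, n in pair_count.items()}

def recommend_tests(new_commit, commits, test_map, min_confidence=0.9):
    """Recommend tests for files that historically co-change with the
    files in new_commit but were left out of it."""
    conf = cochange_confidence(commits)
    changed = set(new_commit)
    recs = set()
    for f in changed:
        for (a, b), c in conf.items():
            if a == f and b not in changed and c >= min_confidence:
                recs.update(test_map.get(b, []))
    return sorted(recs)

# Hypothetical commit history: rating.py and rate_tables.py always co-change.
history = [
    ["rating.py", "rate_tables.py"],
    ["rating.py", "rate_tables.py"],
    ["quotes.py"],
]
tests = {"rate_tables.py": ["test_rate_tables"], "rating.py": ["test_rating"]}
print(recommend_tests(["rating.py"], history, tests))
```

Here a delivery touching only rating.py triggers a recommendation to run the rate-table tests, since the two files have always shipped together; a delivery containing both files triggers nothing extra.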


Extreme Automation is not just the use of AI to automate a particular step in testing. It is an approach infused into every step of testing, embedding automated analytics and real-time data into the product development cycle. As insurers increasingly adopt AI-based technologies to help them work smarter and faster, they must also re-engineer intelligent workflows to continuously create agile automation solutions. With the expansion of digital ecosystems and the growth in data volumes, Extreme Automation will be a critical approach for enterprises to leverage AI and get the insights they need to compete in a digital world.

Take your QA and Testing practices to the next level. Learn how ValueMomentum’s QualityLeap Services can help you get there.