SmartBear Adds More Generative AI Testing Tools to Platform

SmartBear this week extended the generative artificial intelligence (AI) capabilities of its test automation portfolio to include test data and application programming interface (API) contracts.

These latest additions to the SmartBear HaloAI family of generative AI tools automate a set of previously manual tasks as part of a larger effort to reduce the labor required to test applications by leveraging foundation large language models originally developed by OpenAI.

Dan Faulkner, chief product officer at SmartBear, said the company has been working to master techniques such as retrieval-augmented generation (RAG) that enable developers to use natural language to automatically create valid tests based on data collected via its platform. The goal, he noted, is to increase the level of determinism that can be applied to testing with a generative AI platform that, by definition, takes a probabilistic approach to generating content, and now synthetic test data as well.
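Faulkner did not detail SmartBear's implementation, but the general RAG pattern he describes can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example: it assumes a small corpus of API documentation collected from a platform, uses naive keyword retrieval in place of a real vector search, and uses the OpenAI Python client as a stand-in for whatever foundation model is actually employed. The point is only to show how retrieved context grounds the generated test in real platform data rather than the model's guesses.

```python
# Minimal sketch of retrieval-augmented test generation (illustrative only;
# not SmartBear's implementation). Assumes the openai package is installed
# and OPENAI_API_KEY is set; the corpus and endpoints are hypothetical.
from openai import OpenAI

# Toy "knowledge base" of API documentation snippets gathered from the platform.
CORPUS = [
    "GET /orders/{id} returns 200 with JSON {id, status, total}; 404 if the order does not exist.",
    "POST /orders requires an auth token and a body with {customer_id, items[]}; returns 201 on success.",
    "GET /customers/{id} returns 200 with {id, name, email}.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a production system would use vector search."""
    terms = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda doc: len(terms & set(doc.lower().split())), reverse=True)
    return scored[:k]

def generate_test(request: str) -> str:
    """Ask the model for a pytest test, grounded in the retrieved documentation."""
    context = "\n".join(retrieve(request))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": "Generate a pytest test. Use only the API behavior described in the context."},
            {"role": "user", "content": f"Context:\n{context}\n\nRequest: {request}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_test("Write a test that a missing order returns 404"))
```

Because the prompt is constrained to the retrieved documentation, the output is more deterministic than free-form generation, which is the trade-off Faulkner describes.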

SmartBear earlier this year acquired Reflect, a provider of a no-code testing platform for web applications that leverages generative AI, to jumpstart its efforts.

The AI Test Data Generation capability for the company’s TestComplete platform will be available in beta next month, while AI-augmented contract testing for its PactFlow platform will be beta-tested from July through August. SmartBear last month added an AI tool for generating API tests that eliminates the need for developers to write and debug scripts.
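Contract testing, which the PactFlow addition targets, verifies that a provider's responses still satisfy the expectations its consumers have recorded. The sketch below shows the idea in plain Python rather than PactFlow's own tooling; the contract fields and the fetch_order function are hypothetical stand-ins for a consumer's recorded expectations and a real HTTP call. Generating and maintaining these expectations is the kind of manual work the AI augmentation is meant to reduce.

```python
# Conceptual illustration of consumer-driven contract testing (not PactFlow's API).
# Field names and fetch_order are hypothetical.

# Consumer-side expectation: for GET /orders/{id}, the provider must return
# these fields with these types.
ORDER_CONTRACT = {
    "id": int,
    "status": str,
    "total": float,
}

def fetch_order(order_id: int) -> dict:
    """Stand-in for a real HTTP call to the provider under test."""
    return {"id": order_id, "status": "shipped", "total": 42.50}

def test_provider_honours_order_contract():
    """Provider-side verification: the live response must satisfy the consumer contract."""
    response = fetch_order(123)
    for field, expected_type in ORDER_CONTRACT.items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], expected_type), f"wrong type for {field}"

if __name__ == "__main__":
    test_provider_honours_order_contract()
    print("contract satisfied")
```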

In general, the use of natural language will make testing more accessible to application stakeholders who want to participate in the testing process, noted Faulkner.

Generative AI should also make it simpler to strike the right balance between shifting some testing responsibilities further left toward developers when appropriate and running tests that are better suited to DevOps engineers, he added.

Ultimately, organizations should be able to shift responsibility for testing both further left and further right to improve the overall quality of the applications they deploy, noted Faulkner.

It’s still early days as far as the adoption of generative AI is concerned. However, it won’t be long before these tools are pervasively employed not only to generate code but also to automate DevOps workflows. The next major challenge will be orchestrating all the AI assistants optimized to perform specific tasks, such as testing APIs, that will soon be embedded into those workflows.

In the meantime, DevOps teams will need to determine to what degree they can extend their existing workflows to incorporate AI assistants versus replacing their existing platforms. Regardless of approach, DevOps engineers will soon find they have access to a range of AI assistants trained to automate specific tasks, which they will need to orchestrate asynchronously at various stages of a pipeline to enable organizations to deploy more software faster.

The challenge is that the amount of software an organization deploys over the next few years is likely to far exceed what it deployed in the past decade. The less that software is tested before being deployed, however, the more likely it becomes that DevOps engineers will find themselves spending more time than ever troubleshooting applications long after they have been deployed.

Author: Rayne Chancer