5 ChatGPT prompts to try as a QA Engineer

This article will explain what ChatGPT is and how it can help you in your everyday role as a QA engineer.

If you’ve been paying attention to the news at all over the last six months, you’ve probably heard of ChatGPT, the chat-based AI tool developed by OpenAI. Enough time has passed since its release for people to come up with hundreds of ways to put it to work, including for quality assurance (QA) engineers.

What is ChatGPT?

ChatGPT is an AI-powered tool that takes inputs and gives responses in a chat format. The GPT in its name stands for Generative Pre-trained Transformer, which describes the type of artificial intelligence it employs: generative because it generates the responses it gives, pre-trained because it learned from a massive dataset pulled from all over the internet, and transformer for the neural-network architecture it’s built on.

How does ChatGPT work?

In simplest terms, it takes an input from a user, processes it, and outputs text it thinks will be an acceptable response to the input. This can mean answering a question, doing math problems, or even generating simple pieces of code.

ChatGPT also takes past inputs and outputs in the same conversation into account, and it can be asked to modify an output to get a better result. For example, if you ask it to generate a list of ten words and then ask for another list of ten words, it will typically avoid reusing words from the first list in the second.
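
ChatGPT itself is used through its chat interface, but the same idea is easy to see programmatically. As a rough illustration only (it assumes the openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable, and it calls the API rather than the ChatGPT product), context is carried by sending the earlier turns back with each new prompt:

  # Rough sketch: carrying conversation context with the OpenAI Python library.
  # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
  from openai import OpenAI

  client = OpenAI()

  # Every request includes the conversation so far; that is how the model can
  # take past inputs and outputs into account.
  messages = [{"role": "user", "content": "Generate a list of ten words."}]
  first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
  messages.append({"role": "assistant", "content": first.choices[0].message.content})

  messages.append({"role": "user", "content": "Generate another list of ten words."})
  second = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
  print(second.choices[0].message.content)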

What problems does ChatGPT have?

Like all other AI tools, ChatGPT doesn’t have common sense. However, it is good at mimicking common sense. Some outputs it gives will sound like they make sense, but they aren’t necessarily true.

It can also carry biases that were present in the material it was trained on, and the dataset it learned from was far too large to be fully filtered for inappropriate or incorrect content.

How can ChatGPT help quality assurance engineers?

While it does have some limitations, ChatGPT can help QA engineers test software. It can generate test plans, use cases, descriptive text, and small datasets, and write small amounts of code, all of which can reduce the workload on a QA engineer. The rest of this article will focus on effective prompts for QA engineers to try, and the pitfalls to watch out for.

Five prompts for your QA team to try

If you’re new to experimenting with ChatGPT, you’ll need to head over to OpenAI’s website or download the ChatGPT app (currently available for Android and iOS). You’ll also need to create an OpenAI account to access ChatGPT, which you can do for free if you can make do without the most advanced version. ChatGPT 3.5 is free, while access to the version running on GPT-4 requires a paid subscription.

Once you have access to ChatGPT, try asking it a few questions to get a sense of what it can do, then try to get it to do some of the following:

  • Generate a test plan
  • Generate some basic code
  • Create a small dataset
  • Create descriptive text
  • Create a use case

It’s important to give ChatGPT enough information to generate a good response, so if you’re looking for something specific, make sure to add those details to your prompt. We’ll go over how to get ChatGPT to handle each of the scenarios above in the sections below.

Generate testing plans

When asking ChatGPT to generate a testing plan, be sure to include the purpose of the test, what a successful outcome looks like, and any metrics you plan to measure. An example would be something like:

Generate a test plan for checking email functionality in multiple email clients. The email needs to be free of bugs and spelling errors.
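
Whatever plan comes back still has to be turned into concrete checks by an engineer. As a hedged sketch of where that might end up, the outline below uses pytest with entirely hypothetical client names and helper functions; it’s a skeleton to fill in, not working test code:

  # Hypothetical skeleton: turning a ChatGPT-generated email test plan into pytest checks.
  # The client list, helper function, and misspelling list are all placeholders.
  import pytest

  EMAIL_CLIENTS = ["gmail", "outlook", "apple_mail"]  # placeholder client list

  def render_email_in(client_name):
      """Placeholder: render the campaign email in the given client and return its text."""
      raise NotImplementedError("Wire this up to your own preview/rendering tooling")

  @pytest.mark.parametrize("client_name", EMAIL_CLIENTS)
  def test_email_renders(client_name):
      body = render_email_in(client_name)
      assert body, f"Email failed to render in {client_name}"

  @pytest.mark.parametrize("client_name", EMAIL_CLIENTS)
  def test_email_has_no_known_misspellings(client_name):
      body = render_email_in(client_name)
      for typo in ("recieve", "seperate"):  # placeholder spell-check list
          assert typo not in body.lower()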

Generate basic testing code

ChatGPT can write some basic code, and it works best with small sections that can be easily looked over, refined, and tested individually before being combined into a larger piece of code. Make sure to specify the language you’re using and what you want the code to do. Here’s an example input:

Write code in Python to check if links are working.
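
ChatGPT’s answer will vary from run to run, but a reviewed and cleaned-up version of that kind of script might look like the sketch below. It assumes the requests package is installed, and the URL list is just a placeholder:

  # Sketch of a simple link checker along the lines of what ChatGPT might produce.
  # Assumes the `requests` package; the URLs below are placeholders.
  import requests

  URLS = [
      "https://example.com",
      "https://example.com/docs",
  ]

  def check_links(urls):
      """Return a dict mapping each URL to True if it responds without a 4xx/5xx status."""
      results = {}
      for url in urls:
          try:
              response = requests.head(url, allow_redirects=True, timeout=10)
              results[url] = response.status_code < 400
          except requests.RequestException:
              results[url] = False
      return results

  if __name__ == "__main__":
      for url, ok in check_links(URLS).items():
          print(f"{'OK  ' if ok else 'FAIL'} {url}")

Even a snippet this small is worth running against a few known-good and known-bad links before you rely on it.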

Create small datasets for testing

Some of your tests may require simulated users, and using ChatGPT to quickly come up with a fake dataset can save a lot of time. When doing this, it’s best to keep the dataset small and simple so you can easily check the results. An example:

Create a dataset of 10 people with ages between 23 and 60, first and last names, job occupations, and salaries.
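
Whatever ChatGPT returns still deserves a quick sanity check before it goes into a test. A minimal sketch of doing that in plain Python is below; the two records are placeholders standing in for ChatGPT’s output:

  # Sketch: sanity-checking a ChatGPT-generated dataset before using it in tests.
  # The records below are placeholders standing in for whatever ChatGPT returned.
  people = [
      {"first_name": "Maria", "last_name": "Lopez", "age": 34,
       "occupation": "Teacher", "salary": 48000},
      {"first_name": "James", "last_name": "Chen", "age": 59,
       "occupation": "Accountant", "salary": 72000},
  ]

  REQUIRED_FIELDS = {"first_name", "last_name", "age", "occupation", "salary"}

  def validate(records):
      """Flag missing fields and ages outside the 23-60 range requested in the prompt."""
      problems = []
      for i, person in enumerate(records):
          missing = REQUIRED_FIELDS - person.keys()
          if missing:
              problems.append(f"Record {i}: missing fields {sorted(missing)}")
          elif not 23 <= person["age"] <= 60:
              problems.append(f"Record {i}: age {person['age']} is outside 23-60")
      return problems

  print(validate(people) or "Dataset looks OK")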

Create descriptive text

Another way to make documentation a little easier is to ask ChatGPT to do some of the heavy lifting and then clean up the results yourself. You’ll want to be clear about what you’re asking it to do, and stick to smaller chunks of text. Here’s an example:

Briefly describe the purpose of this code snippet (insert the snippet here).
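
For instance, you might paste in a small, self-contained helper like the hypothetical one below, ask ChatGPT for a short description, and then edit its answer into your documentation:

  # Hypothetical snippet of the size that works well in a prompt like the one above.
  def retry(func, attempts=3):
      """Call func, retrying up to `attempts` times if it raises an exception."""
      for attempt in range(attempts):
          try:
              return func()
          except Exception:
              if attempt == attempts - 1:
                  raise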

Create a use case

You can ask ChatGPT to generate use cases as well, although they will need to be checked over and you might have to ask it to modify the results a few times. Try something like this:

Write a use case for a client receiving an email, opening it, and clicking a link to purchase a book.
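
Once the use case reads the way you want, it can double as the outline for an automated check. The sketch below is only a skeleton under that assumption: fetch_latest_email and open_link are hypothetical placeholders for your own email and browser tooling, and the field names are made up:

  # Hypothetical skeleton: the use case above as an automated test outline.
  # fetch_latest_email() and open_link() are placeholders for real tooling.

  def fetch_latest_email(inbox):
      """Placeholder: return the most recent email for the given inbox."""
      raise NotImplementedError

  def open_link(url):
      """Placeholder: open the URL and return the resulting page."""
      raise NotImplementedError

  def test_purchase_link_in_email():
      # 1. The client receives the email
      email = fetch_latest_email("client@example.com")
      assert email is not None

      # 2. The client opens it and finds the purchase link
      assert "purchase" in email["html"].lower()

      # 3. Clicking the link leads to the book's purchase page
      page = open_link(email["links"][0])
      assert "book" in page.title.lower()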

Pitfalls to avoid when using ChatGPT for QA

While ChatGPT can do some interesting things, it isn’t a replacement for quality assurance professionals. The responses it generates need to be looked over at the very least, if not modified. There are several critical pitfalls QA engineers may run into when using ChatGPT, including:

  • Lack of context
  • Prompt quality
  • Proprietary information
  • Inaccurate statements
  • Lack of specificity

ChatGPT doesn’t know what’s relevant

The only information ChatGPT has about what you’re trying to achieve is what you’ve told it, and depending on the level of technical complexity and specificity of your goals, it might not have the training or context to provide useful answers. Ask it to generate a test plan with no other details and you could end up with something as unhelpful as a plan for taste-testing croissants rather than anything related to your software.

Dependent on prompt quality

ChatGPT relies on the quality of the prompts it’s given to generate good responses. This means it’s still dependent on humans to figure out what information it needs to have, translate that into the prompt, check the responses, and iterate if necessary. It’s not a standalone tool that can automatically understand what you want, and writing good prompts is a skill that has to be developed like any other.

Avoid inputting proprietary information

Any data you put into ChatGPT is no longer under your control, since OpenAI may use inputs and outputs to continue training and improving its models. Do not put proprietary information into ChatGPT. Even if you don’t work with proprietary information, think carefully about what you’re willing to disclose.

Always check over the outputs

It’s critical to review ChatGPT’s outputs with a careful eye. Even if something is phrased in a way that sounds plausible, it might not be true. AI tools like this are known to have this problem, usually referred to as hallucination. Keep in mind that ChatGPT has no way of knowing whether its outputs are correct; it’s essentially estimating a response based on a large amount of other text it has seen.

Lack of specificity

ChatGPT is a generalist, and that becomes obvious when you ask it to do something specific and technical, which describes a lot of QA engineering. It’s helpful as a starting point, but most of its responses for QA work will require an expert to spot what’s missing, identify what’s wrong, and build on them.

Get help testing emails

Mailosaur helps people test their email and SMS campaigns with software tools that make testing easier to automate, faster to perform, and less work. Reach out to us today with any questions!