WTF is QA Testing?

First of all, apologies for not writing in over a year, and shout out to @theproductpup on TikTok for motivating me to get back into it (someone needs to write a WTF is TikTok article for old farts like me!)

As a software engineer, I spend around 80% of my “coding time” testing what I wrote. Even pro coders can’t just type code and expect it to work (a rare moment of programming zen occurs when something, usually a small change, works on the first try). Coding is an iterative process; even if you have a Quality Assurance (QA) team, engineers will always spend time testing their code before handing it off to them.

If my engineers test their code, why tf do I need to hire QA testers?

Most engineering professionals agree that the engineer writing the code shouldn’t be the only person to test it before it goes into production. While leaders expect and encourage engineers to thoroughly test all of the code they write, a second pair of eyes is always useful. Engineers think differently from non-engineers (hey, someone should write a blog about that!), and diversity is key to coming up with outside-the-box use cases. In general, you want to test as many use cases as possible, and different people with different mindsets are likely to cover more ground than any individual.

While engineers should always think about the code that surrounds the feature under development, they tend to focus their testing on that specific feature. Someone dedicated to QA testing will have a test plan – that is, a collection of test cases that are run every time they test a new update – to test functionality peripheral or seemingly unrelated to the new feature. This is a form of “regression testing” that I’ll discuss later.

At the very early-stage startups I’ve worked at, QA testing often fell on the leadership team or Product Managers, but most companies eventually find that they need people dedicated to testing.

Can’t this be automated?

These days, that question is being asked about nearly every job function. While some engineering teams strive to have such strong automated testing that manual QA is not needed, most companies require some manual testing before code goes into production. Automation can be an excellent mechanism for some forms of testing, such as unit testing and regression testing.

Many companies land on a hybrid of automated tests that supplement but do not replace manual testing. Software products tend to become more and more complex over time, making it harder and more time-consuming for manual testers to cover all the bases. Automated tests help with this; as more functionality is added, the collection of automated tests – we refer to this collection as a test suite – grows to cover the new and existing use cases. Typically, the engineer writing the code is expected to write automated tests to go along with it, as they are most familiar with the intended functionality of the code.

When these automated tests exist, engineers run the entire suite before releasing their code to the next phase of development. Tests may fail either because they legitimately caught a bug or because the new functionality invalidates the old test (e.g., a test designed to confirm that a button appears as blue is invalidated when a rebranding initiative requires the button to be green). The engineer will fix either the code or the test (often resisting the urge to just delete the test) until the full suite passes. It’s not uncommon for a test that seems completely unrelated to the new feature being developed to fail. That’s a good thing: the test suite caught a bug that the engineer and QA testers were unlikely to have caught otherwise.
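To make that concrete, here’s a minimal sketch of what such a test might look like, written in Python’s pytest style. The app and the get_submit_button() helper are made up for illustration; real test suites look different, but the idea of encoding an expectation that can later become outdated is the same.

```python
# A minimal sketch of the kind of test described above (pytest style).
# The app under test and get_submit_button() are hypothetical.

def get_submit_button():
    """Pretend this renders the app's signup form and returns the button."""
    return {"label": "Sign up", "color": "blue"}

def test_submit_button_is_blue():
    # This test passes today, but a rebrand to green buttons would
    # "break" it -- not because of a bug, but because the expectation
    # encoded in the test is now out of date and needs updating.
    button = get_submit_button()
    assert button["color"] == "blue"
```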

Building a comprehensive automated testing suite takes time and effort, and the benefits take time to materialize. Writing tests to cover a piece of code can take the engineer as long as, or longer than, writing the code itself, and manually testing that code in the moment it is written could get it to production faster. However, the benefits of continuous investment in automated tests compound over time, leading to more confidence in the quality of software and shortened testing cycles in the future. Also, since the test suite is run as soon as an engineer finishes their first draft of some code, it’s quicker to fix any bugs that arise. The code is still fresh in the engineer’s mind and you avoid the overhead of someone else filing a ticket, providing reproduction steps, validating the fix, etc.

QA Engineering is a large umbrella that includes various titles, including QA Testers, QA Analysts, QA Engineers and QA Automation Engineers. While there isn’t a uniform standard for these titles, the former typically suggest manual testing roles while the latter are more technical roles involving at least some coding to automate tests.

Types of Testing

There are so many different categories and subcategories of testing that I can’t possibly list them all (this article makes a valiant attempt). I’ll just highlight a few of the main categories. Any of these types of testing could be automated, manual, or a combination of both, though some lend themselves more to one or the other.

Sanity testing

Also known as smoke testing, this is the bare minimum amount of testing an engineer could perform to check that their code works. While a junior engineer might consider this sufficient for handing off to the QA team or even pushing to production, it is almost never enough. Still, there’s a place for it in the development process. An engineer working on a multifaceted project might sanity test parts of the code they just wrote to make sure they’re on the right track, then come back later to perform deeper testing after the pieces are all in place. It can also be appropriate when multiple engineers are working on different parts of a project – say, a backend engineer building an API and a frontend engineer writing code that uses that API. One engineer might release their code to a staging environment after only sanity testing in order to unblock the other, with the understanding that the code is a work in progress and will be tested more thoroughly later. Sanity testing typically only covers the very obvious and easy-to-test cases, leaving complicated cases and regression testing for the future.

Regression Testing

Have you ever seen a new feature introduced to your product that caused a bug that seemed completely unrelated? This is quite common; one reason is refactoring, where engineers restructure existing code to accommodate a new feature and inadvertently change behavior that was supposed to stay the same. Regression testing means testing things that were not supposed to change with the introduction of the new code.

Many aspects of regression testing are good candidates for automated testing. Engineers are encouraged to write unit tests as soon as they write the code in question. In fact, a coding methodology called Test Driven Development (TDD) requires that the tests be written first, then the engineer develops the feature until the tests all pass. With unit tests covering all cases implemented in the past, a future engineer can focus their testing on the new functionality.
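Here’s a hedged sketch of what TDD looks like in practice, in Python with a made-up discount calculator: the test at the bottom is written first, then apply_discount() is implemented until every assertion passes.

```python
# A sketch of Test Driven Development with a hypothetical discount
# calculator. The test below existed first; the function was then
# implemented until the test passed.

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Written before apply_discount() existed; it failed until the
    # implementation above satisfied every case.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.0, 100) == 0.0
```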

Regression testing can be accomplished with a combination of the following three types of testing: unit testing, integration testing, and functional testing.

Unit testing

Unit testing is the most granular level of testing, aimed at pinpointing very specific functionality without consideration of the world beyond that specific piece of code. Unit tests typically refer to automated tests, as manual testing usually implicitly tests the surrounding code. For example, to test a login button in an app, a manual tester will need the app to open successfully, they’ll need to be able to navigate to the login screen, they’ll need to type a username and password before clicking the button, etc. An automated unit test lives in its own little world, unconcerned with whether anything other than the login button works correctly – there are other unit tests to make sure everything else does.
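A minimal sketch of that idea in Python is below. The validate_password() function is hypothetical and deliberately tiny; the point is that these tests know nothing about the login screen, the navigation flow, or the API, only about this one unit of code.

```python
# A sketch of unit tests for one small, hypothetical piece of the login
# feature, tested in complete isolation from the rest of the app.

def validate_password(password):
    """Toy rule: at least 8 characters and at least one digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

def test_validate_password_accepts_valid_input():
    assert validate_password("s3cretpassword")

def test_validate_password_rejects_short_or_digitless_input():
    # No app, no login screen, no network -- just this one function.
    assert not validate_password("short1")
    assert not validate_password("nodigitshere")
```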

Integration Testing

Zooming out, the next higher level of testing is called integration testing. A bunch of separate components each working correctly on their own does not mean that everything will work perfectly when assembled. Whereas unit testing ensures each component works as it’s supposed to on its own, integration testing ensures that adjacent components fit together.
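A hedged sketch of the difference: the two components below (a made-up UserStore and authenticate() function) might each pass their own unit tests, but the integration test checks that they actually agree on details like how passwords are stored.

```python
# A sketch of an integration test exercising two hypothetical components
# together rather than in isolation.
import hashlib

class UserStore:
    """Stores password hashes keyed by username."""
    def __init__(self):
        self._users = {}

    def add_user(self, username, password):
        self._users[username] = hashlib.sha256(password.encode()).hexdigest()

    def get_password_hash(self, username):
        return self._users.get(username)

def authenticate(store, username, password):
    """Checks a login attempt against the stored hash."""
    return store.get_password_hash(username) == hashlib.sha256(password.encode()).hexdigest()

def test_signup_then_login():
    # Each piece may pass its own unit tests, but this test verifies
    # they fit together: both sides must hash passwords the same way.
    store = UserStore()
    store.add_user("jane", "s3cretpassword")
    assert authenticate(store, "jane", "s3cretpassword")
    assert not authenticate(store, "jane", "wrongpassword")
```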

Functional Testing

Functional testing refers to testing a single use case, such as a user story. It may span many different code units and even separate systems (e.g., a backend API and a frontend website).
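For example, a functional test for the user story “a visitor can sign up and then log in” might drive the real system end to end. The sketch below uses Python’s requests library against a hypothetical staging URL and endpoints.

```python
# A sketch of a functional test covering one user story end to end.
# The base URL and endpoints are hypothetical.
import requests

BASE_URL = "https://staging.example.com/api"

def test_visitor_can_sign_up_and_log_in():
    # Step 1: sign up through the same API the frontend would call.
    signup = requests.post(f"{BASE_URL}/signup",
                           json={"email": "jane@example.com",
                                 "password": "s3cretpassword"})
    assert signup.status_code == 201

    # Step 2: log in with the newly created account.
    login = requests.post(f"{BASE_URL}/login",
                          json={"email": "jane@example.com",
                                "password": "s3cretpassword"})
    assert login.status_code == 200
    assert "token" in login.json()
```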

Unit tests, integration tests, and functional tests are sometimes depicted as a pyramid (think: food pyramid) with unit tests on the bottom, integration tests in the middle and functional tests on top, signifying the quantity of each type of test in a suite. Unit tests typically make up the largest portion of a test suite because they are expected to cover the most ground and are easiest to write. Functional tests encompass the most functionality per test, include a lot of steps, and are the most complex to create. Also, when tests fail, unit tests are often the most helpful in pinpointing what went wrong, since each failing unit test immediately narrows the range of possible problems to the specific piece of code it covers.

Alpha and beta testing

The term “beta testing” has become widely used outside engineering circles so you probably already know what it means: releasing a version of your software to people who accept that it’s still a work in progress. This could be a beta version of a brand new product or a new version or new feature built on top of an existing product. Some people really enjoy testing your software for you, because they like being on the cutting edge, because they are fans of your product, or because they enjoy the thrill of being the first to find bugs. Some companies even reward these finds with “bug bounties.”

Alpha testing, as the name implies, comes before beta testing. While beta testers are people outside the organization, alpha testing is done by internal employees. Even after engineers and QA testers do their best work, more eyes can find more bugs or other usability suggestions. Alpha testing is also known as “dogfooding” as in “eat your own dogfood,” the idea being that a product isn’t ready for external customers until it is ready for internal users, who are more forgiving and more likely to provide feedback. Alpha and beta testing are used for more than just finding bugs; they are also a great way to gather feedback about functionality and usability even if everything functions correctly according to the spec.

A related term is User Acceptance Testing, which refers to the requester of the project testing that the code meets the stated objectives and signing off on its release. If the software is being built for a specific external client who requested it, this would be considered a beta test whereas if the requester is an internal employee like a Product Manager, this would be considered an alpha test.

Performance and load testing

While often omitted or only sanity tested for early-stage MVPs, performance testing is important in order to determine whether any aspects of the product are sluggish. For applications that are expected to be used by thousands or millions of people, or have the chance of going viral, performance testing is a good idea.

A subcategory of performance testing is load testing. An application can work perfectly fine when a few people are using it but completely fall apart when lots of users use it simultaneously. Scaling is something engineers spend a lot of time thinking about when architecting a system, but they can’t be confident about its capacity without testing. Load testing tests how many users can use the system simultaneously before things start to slow down or break altogether; for example, overwhelming a database to the point that it’s unresponsive. It is typically performed by writing scripts that programmatically hammer an application with a lot of simultaneous requests.
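A minimal sketch of that kind of script in Python is below. The URL is hypothetical, and real teams often reach for dedicated load-testing tools (Locust and JMeter are popular examples) rather than rolling their own.

```python
# A sketch of a load test: hammer a hypothetical endpoint with
# concurrent requests and report successes and the slowest response.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://staging.example.com/api/products"  # hypothetical endpoint

def timed_request(_):
    start = time.time()
    try:
        ok = requests.get(URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.time() - start

if __name__ == "__main__":
    # Fire 1,000 requests across 100 concurrent workers.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(timed_request, range(1000)))
    successes = sum(1 for ok, _ in results if ok)
    slowest = max(elapsed for _, elapsed in results)
    print(f"{successes}/{len(results)} succeeded; slowest took {slowest:.2f}s")
```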

Mocks, Stubs, and Fixtures

You may hear the terms mocks, stubs, or fixtures when your engineer talks about automated tests. These tools are “fake” versions of a piece of code’s dependencies that enable tests to operate in their own little world. For example, a test for frontend code will likely have a mock API that returns hard-coded responses to certain calls; it might be programmed such that “when the /user/123 API is called, return Jane Doe’s information.” This enables the test engineer to test specific use cases without worrying about whether the dependent code is working. Presumably, that dependent code has its own tests to ensure it works correctly. This helps keep tests as focused as possible and assists in creating convoluted use cases that are hard to reproduce manually.
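Here’s a sketch of that exact example using Python’s built-in unittest.mock. The ApiClient and render_profile() code under test are made up for illustration.

```python
# A sketch of mocking an API dependency so the test never touches the
# network. ApiClient and render_profile() are hypothetical.
from unittest.mock import MagicMock

class ApiClient:
    def get(self, path):
        # The real implementation would make a network call here.
        raise RuntimeError("network calls are not allowed in unit tests")

def render_profile(client, user_id):
    """The code under test: fetches a user and formats a greeting."""
    user = client.get(f"/user/{user_id}")
    return f"Welcome back, {user['name']}!"

def test_render_profile_with_mocked_api():
    # The mock stands in for the real API: when /user/123 is requested,
    # it returns Jane Doe's hard-coded information, as in the example above.
    mock_client = MagicMock(spec=ApiClient)
    mock_client.get.return_value = {"id": 123, "name": "Jane Doe"}

    assert render_profile(mock_client, 123) == "Welcome back, Jane Doe!"
    mock_client.get.assert_called_once_with("/user/123")
```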

So how do I know when a product is ready for production?

That depends a lot on the stakes of your application. If you’re building a pacemaker where a small bug could literally kill someone, you’ll spare no expense and spend months testing before release. If you’re building an MVP of a fun app, you probably won’t want to spend too much time and effort on testing. Most companies fall somewhere in between.

Another factor is how easily you can distribute bug fixes. If your software is shipped as part of a physical device, you may need to send replacements in order to fix bugs. Intel’s infamous FDIV bug in the ’90s prompted a recall that cost the company $475 million for a bug that would only affect 1 in 9 billion operations. At the other end of the spectrum, websites are quite easy to update: as soon as you fix the bug, all users will be using the updated version as soon as they refresh the page or revisit your site. Locally run applications such as native iOS and Android apps fall in between: Apple and Google have approval and distribution processes that take time, and then every one of your app’s users has to install the update (many, but not all, allow updates automatically) before the bug is completely exterminated. If it takes a long time to roll out a software update, you’re probably going to want to put extra effort into testing.

The reality is that software is never bug-free. Even if you have a QA team that spends months testing each release, a bug could still appear. At the end of the day, it’s a judgement call: how exhaustively does the product need to be tested before you’re comfortable enough to release it? It’s a tradeoff between speed of iteration and tolerance to production glitches.

Unfortunately, when push comes to shove and an engineering team is pressured to deliver more features faster, testing is often the first thing to fall through the cracks. If you’re a leader at your company, you can set the tone: if you push for faster development, you’re increasing the likelihood of bugs in production. That may be ok for your low-stakes website MVP but not for your Intel processor. If a small bug being released to production would significantly harm your company, invest in QA engineers and encourage your team to take the time they need to be confident in the software’s quality before it ships, even at the expense of development speed.
