The temptation is great. A new feature is developed and the tester goes right for the GUI test. After all, it tests the complete stack of code just like the customer is going to use it. While true, any promises of time or money saved from writing that test are greatly exaggerated. One sure way to not save any time or money and actually cause yourself more work is to write all of your automated tests at the GUI level. By writing your tests in the right place for the right job you will make your automated testing suite faster, more reliable, more robust, and more runnable.
When doing all your testing at the GUI, the story of your automated testing suite usually goes like this.
Phase 1: You have a few tests that work pretty well.
Phase 2: You have created a bunch of tests. You get some that fail unpredictably, but you just run them again. Tests take about 20–30 minutes to run.
Phase 3: You have a huge suite of tests. Only you can run them because you have to nurse them through and only you know which ones are “Ok” if they fail.
Phase 4: You have too many tests that fail too often. It starts taking more of your time to maintain what you have than to write new tests or keep up on testing the new functionality. Maintenance starts to slip. More and more tests slide into decay and ruin. You end up being killed by the weight of your own GUI tests. Your soul is also crushed.
So What Can You Do?
First, understand the weaknesses of a GUI test
We tend to force a GUI test to do something it is not good at and never will be.
GUI tests are fragile, slow, expensive, lack humanity, and often do not prove as much as one might think. These are weaknesses that cannot be overcome with the current state of available tools.
Fragile, slow, and expensive are self-explanatory. I will briefly touch on the other two weaknesses I mentioned.
A GUI is the place where a human interacts with the code. The automated GUI test can send input to a form field, click a button and make sure data comes back into the GUI. You cannot, however, make sure it works well from a user experience point of view. (That is why a QA engineer should understand design—but that is another post.) The GUI test lacks the humanity necessary to actually test the interface. Trying to make a GUI test do so will result in failure.
A GUI test also may not test as much as you might think you are testing (depending on your architecture). For example: Suppose all of your data is served via services to your web browser. You use a GUI test to check various inputs. You have successfully verified that the GUI receives and sends data to the service. But you have done nothing to verify that the service is doing everything it is supposed to do. You have not checked the security, the formatting, the validation of the service on the server side, etc. You might argue that you never intended to test these things. If that is the case, you are on the right track. The problem arises when you think you are doing a more comprehensive test of the application than you actually are—because you are using it from the GUI as a customer would.
Fill Out Your Test Suite at the Appropriate Levels
This is best illustrated with an example. Suppose you have a form with a field that accepts a numeric quantity. Some test cases to make sure it is doing what it is supposed to might include:
- Positive integers
- Long input
- No input
- Negative integers
- Negative floats
- Validation messages of the form field
- Wording of the validation
- Testing the responsiveness of the design
...and on and on (and on).
The temptation exists to automate all of this with GUI tests—since that is what you know, where you are, and what you have, and a GUI test will exercise the whole stack from DB to GUI in one go. Don’t fall for it.
Let’s look at where we could do the testing in a more efficient way.
At the Unit Test Level
In the code, you could cover all of the different types of input to make sure it is handled correctly by the appropriate methods. You can have tens or hundreds of tests here and they will run in seconds (compared to minutes for a GUI test).
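As a concrete sketch, here is what that might look like with Python's built-in `unittest` module. The `parse_quantity` function is hypothetical—a stand-in for whichever method in your code actually handles the form field's input—but the point stands: each of the input variations from the list above becomes a unit test that runs in milliseconds.

```python
import unittest

# Hypothetical validation function under test -- a stand-in for whatever
# method in your code actually handles the form field's input.
def parse_quantity(raw):
    """Parse a quantity field, accepting only non-negative integers."""
    if raw is None or raw.strip() == "":
        raise ValueError("input required")
    if len(raw) > 10:
        raise ValueError("input too long")
    try:
        value = int(raw)
    except ValueError:
        raise ValueError("not an integer")
    if value < 0:
        raise ValueError("must be non-negative")
    return value

class TestParseQuantity(unittest.TestCase):
    def test_positive_integer(self):
        self.assertEqual(parse_quantity("42"), 42)

    def test_no_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("")

    def test_long_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("9" * 50)

    def test_negative_integer(self):
        with self.assertRaises(ValueError):
            parse_quantity("-5")

    def test_negative_float(self):
        with self.assertRaises(ValueError):
            parse_quantity("-5.5")
```

Run with `python -m unittest`; the whole class finishes before a single GUI test could even open a browser.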
At the Service Level
Just because the GUI rejects something by way of validation doesn’t mean that the service won’t take it and pass it right along to the server. Here you can check the security of who can call the service and many variations of input checking (positives, decimals, letters, negatives, et cetera). If you have covered this sufficiently at the unit test level, you may not need to test this at the service level. Depending on the logic and rules around the services, you will need to decide what else can be tested here.
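A minimal sketch of the idea, using a hypothetical endpoint handler (`set_quantity`, the token set, and the response shape are all invented for illustration): service-level tests call the service directly, bypassing the GUI, to prove that the server re-validates input and enforces authorization on its own.

```python
# Stand-in for a real auth system: hypothetical set of valid tokens.
VALID_TOKENS = {"alice-token"}

def set_quantity(token, raw_value):
    """Hypothetical service endpoint: returns (status_code, body)."""
    if token not in VALID_TOKENS:
        return 403, {"error": "forbidden"}
    # Server-side validation: never trust that the GUI filtered the input.
    if not isinstance(raw_value, str) or not raw_value.isdigit():
        return 400, {"error": "quantity must be a non-negative integer"}
    return 200, {"quantity": int(raw_value)}

# Service-level checks exercise the endpoint directly -- no browser involved.
assert set_quantity("bad-token", "5")[0] == 403      # security: who can call it
assert set_quantity("alice-token", "-5")[0] == 400   # GUI validation bypassed
assert set_quantity("alice-token", "5.5")[0] == 400  # decimals rejected
assert set_quantity("alice-token", "abc")[0] == 400  # letters rejected
assert set_quantity("alice-token", "12") == (200, {"quantity": 12})
```

In a real suite these calls would go over HTTP to the running service, but the structure of the tests is the same.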
At the GUI Level
At the GUI level we do only what we have to do—testing what we can test in no other way. We could write a single test to check the happy path. We could write a single test to check that validation fires. What we do not test is all the variations of the input, the validation, et cetera. You end up with 80–90 percent of your tests at lower, more reliable levels and keep your GUI tests to a minimum. In so doing, you can run far more tests in far less time.
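To make that concrete, here is Selenium-style pseudocode for the only two GUI tests that survive (the URL, element ids, and driver setup are all hypothetical):

```
# Pseudocode, Selenium-style: the entire GUI suite for this feature.

def test_happy_path(driver):
    driver.get("https://example.test/quantity-form")
    driver.find_element(By.ID, "quantity").send_keys("5")
    driver.find_element(By.ID, "submit").click()
    assert "Saved" in driver.find_element(By.ID, "status").text

def test_validation_fires(driver):
    driver.get("https://example.test/quantity-form")
    driver.find_element(By.ID, "quantity").send_keys("-5")
    driver.find_element(By.ID, "submit").click()
    assert driver.find_element(By.ID, "quantity-error").is_displayed()
```

Two tests prove the GUI is wired to the stack; the dozens of input variations live at the unit and service levels where they run fast and fail loudly.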
By writing your tests at the appropriate place, you get an automated testing suite that runs faster. If your tests are faster they are likely to be run more often. If they are reliable (no more flaky failures) you can write more tests, resulting in a more robust testing suite. If you do all of this within the actual code you are testing (so that anyone can run your tests—like a developer or other QA engineers) your tests will be more runnable.
The more your tests are run the more likely they are to find a bug quickly. Using them all the time forces you to keep them maintained and in good working order.
Don’t start at the GUI. Just don’t. We want our automated testing suite to run fast, work well, and cover as many useful and correct test cases as possible. By understanding where GUI tests work and don’t work, you can write your tests in the appropriate place. By so doing you will be able to ship more software more quickly with more confidence. All of this will result in the things that matter most—quality software, happier engineers, and you not having your soul crushed by your own tests.