Your Fastest Path to Constructing Good Tests
A guide to help you define and create good tests, quickly. Assertion Based Tests are basic building blocks for good tests and give you the flexibility needed for creating several types of tests.
Wouldn't it be great if you could define and build good test cases that serve as building blocks for all of the types of testing projects you'll need to do? Whether you are writing Functional or System tests or trying your hand at advanced test projects such as Stress or Continuous Hours of Operation testing, you should start with building blocks that make this easy. Assertion Based tests are your bricks and will enable you to raise your testing structure faster.
To put it in simple terms, Assertion Based Testing is testing aimed at verifying the Assertions or Promises made by System and Software Developers. Developers make assertions in their specifications, in their code documentation, and in any other media written about the code they create. A Test Assertion then specifies a single unit of functionality or behavior, derived from the Assertions or Promises contained in the specification, to be tested.
Examples of Assertions would be something like:
When the ct_connect() command executes successfully, it returns a status of CS_SUCCEED.
If the asynchronous flag is set, then the command returns a process identifier.
An error upon execution will be written to the application log and the command will return a negative integer value.
Pressing the exit button will end this session of execution of the program.
1. Sybase Open Client Library Routines, ct_connect() command, version 15.0, Sybase Inc., 2008. https://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc32840.1500/html/ctref/X31502.htm
Testing with Convex Computer Corp
I was looking across the desk at my boss as he handed me my first assignment in my first job as a new hire at Convex Computer Corporation. The assignment: create a set of initial tests to exercise the command shell of a real-time Operating System we were building. The shell was a pared-down version of the standard User-Shell you find in Unix, and I was to write a set of C language tests to exercise it. Of course, this story wouldn't be a story without a Pitfall, or Snapping Alligators. My boss stood to leave my office, turned to me and said, "I'll be in Texas for the next week. I want those tests written by the time I get back." What? By the time he gets back? My heart sank!
I spent that next week working 18-hour days getting to know the command shell. I knew I had to get to know my test subject well. But at the end of the week, I still didn't have a single test case written. By Sunday I had drafted a single test case and got it to compile and run, even though it seemed to hobble across the finish line.
On Monday morning, my boss returned and looked a little bit more than just perturbed. I had arrived at the office and was settling in for the day when he briefly popped into my office and announced, "I've got meetings until noon. When I get back, I want to see those tests. You better have some tests …" I swallowed hard. I had A TEST running and nothing more. I had a much stronger understanding of how things worked with this shell, but just a single test.
So, to mitigate this "lack of tests", I opened a file and typed:
/*
* Test <test_name_###>
* Objective
* The ____ command will return a result code of 0 and the output returned will contain ________.
*
* Steps
* Execute the _____ command
* Capture the return code
* Capture collateral information
*
* If the return code is 0, set first part of test to PASS
* If the returned value contains ________, set the second
* part of the test to PASS
* If part 1 and part 2 results are both PASS, set test disposition to PASS
* Otherwise, set the test to FAIL
* Print return codes and all values returned
*
*/
I added the code from my first working test, which I had finished the day before, and made fifty copies of that file. Then I fell back on the developer documentation I had studied all week, parsed out the promised behavior, and translated it into simplified "if this, then that" test cases. I went back through the copies, substituting in the commands and expected values I had parsed out and tweaking the code so that each test executed its own individual shell command and checked for the correct output. By 11:50am, I had 50 good test cases defined.
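For illustration, here is a minimal sketch of what one of those filled-in, cookie-cutter cases might have looked like in C. The command name ("status") and the expected output ("READY") are hypothetical placeholders, not the actual Convex shell commands:

/* A hypothetical cookie-cutter test case: the command ("status")
 * and the expected output ("READY") are placeholders. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *pipe;
    char output[4096];
    size_t n;
    int rc;
    int part1_pass = 0, part2_pass = 0;

    /* Execute the command and capture its output */
    pipe = popen("status", "r");
    if (pipe == NULL) {
        printf("FAIL: could not execute command\n");
        return 1;
    }
    n = fread(output, 1, sizeof(output) - 1, pipe);
    output[n] = '\0';

    /* Capture the return code */
    rc = pclose(pipe);

    /* Part 1: the command returns a result code of 0 */
    if (rc == 0)
        part1_pass = 1;

    /* Part 2: the returned output contains the expected value */
    if (strstr(output, "READY") != NULL)
        part2_pass = 1;

    /* Print return codes and all values returned, then the disposition */
    printf("return code: %d\noutput: %s\n", rc, output);
    printf("disposition: %s\n", (part1_pass && part2_pass) ? "PASS" : "FAIL");
    return (part1_pass && part2_pass) ? 0 : 1;
}

From one copy to the next, only the command, the expected value, and the objective comment changed; the structure of every case stayed the same.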
Noon came and the boss was standing at my door. His scowl had grown from the last time I saw him, but my confidence had greatly increased. Before he could sit down, I handed a printout of the first test to him and explained, "Boss, I have 50 cases. I won't take up your time because I know you have reports to write from your trip. But since I'm still debugging them, let me demo one for you!" A look of surprise overcame the scowl and he hastily pulled the chair over to my side of the desk. I showed him the organization of comments and code and then compiled and executed the case. He had the look of shock on his face and patted me on the back. "Wow, if your other cases work like that, then we are in great shape. I'll let you get back to debugging and we can sync up later in the week."
I won't tell you that I felt like a hero, but I was able to stave off the job-killing dragon at that moment. I spent the remainder of the week honing my tests. Later, looking back at my methods, I realized that this Cookie-Cutter approach deserved more attention.
In 1996, I was employed at Sybase, Inc., the creator of Client-Server computing and Transact-SQL databases. A recent organizational move had put me in the Communications Products group, where I was tasked with helping improve the overall quality of the product line. A quality audit of our APIs and Libraries revealed that the entire line had quite a few bugs that had made it through previous testing processes and into the hands of customers. After looking at several options, we entered into an agreement with Unisoft Corporation, now a Millbrae, California-based software developer, to help us identify and quickly create tests to capture these bugs and improve subsequent releases.
Right away, we knew we had to build a large base of test cases that could be used in API-functional, system, stress, and duration testing. We wanted to implement Functional Tests that we could employ in a set of Regression Test scenarios. Unisoft taught us a novel methodology for identifying and creating 11,000 test cases to cover the five products already in the field. We used this methodology to identify the "promises" made about product functionality, then carried that forward with our own harnesses and test libraries.
Eventually, all 11,000 cases were run across 32 different system platforms. We successfully increased our coverage and captured data for hundreds of defects that we were able to eradicate from our products.
Building Assertions to test starts with the product documentation, where developers and documenters write things like the following:
Returns
ct_connect returns the following values:

CS_SUCCEED    The routine completed successfully.
CS_FAIL       The routine failed.

* Actual Product documentation. See: https://infocenter.sybase.com/help/index.jsp
This table stated that the ct_connect command returns CS_SUCCEED to indicate success. I would then write test case code to exercise this assertion, and each of the other return values as well.
An example based on that documentation would look like this:
Objective
If the ct_connect() command succeeds, and the value of CS_SUCCEED is returned, then set the Test Disposition to PASS.
Otherwise, set the Test Disposition to FAIL.
Then the code would exercise the ct_connect() command and, upon success, verify that the call to ct_connect() actually returned CS_SUCCEED, a symbolic constant that maps to an integer value in the client library headers.
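Here is a sketch of how such a case might look in CT-Library C code, assuming the standard context and connection allocation (cs_ctx_alloc, ct_init, ct_con_alloc) has already been performed by the harness; the server name "SYBASE" is a placeholder:

/* Sketch: exercise ct_connect() and check the CS_SUCCEED assertion.
 * Assumes the harness has already allocated and configured the
 * CS_CONNECTION. "SYBASE" is a placeholder server name. */
#include <stdio.h>
#include <ctpublic.h>

CS_RETCODE test_ct_connect_succeed(CS_CONNECTION *connection)
{
    CS_RETCODE ret;

    /* Exercise the command under test */
    ret = ct_connect(connection, (CS_CHAR *)"SYBASE", CS_NULLTERM);

    /* Assertion: a successful ct_connect() returns CS_SUCCEED */
    if (ret == CS_SUCCEED)
        printf("Test disposition: PASS\n");
    else
        printf("Test disposition: FAIL (ct_connect returned %ld)\n",
               (long)ret);

    return ret;
}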
Looking further at that documentation, we see usage information that helps us understand the behavior of the command.
Information about the connection is stored in a CS_CONNECTION structure, which uniquely identifies the connection. In the process of establishing a connection, ct_connect sets up communication with the network, logs into the server, and communicates any connection-specific property information to the server.
We learned to look at these blocks of information and parse out the promises. For example, you read:
Information about the connection is stored in a CS_CONNECTION structure …
This tells us that the connection information is stored in this structure. The next excerpt from the reference shows more about this command:
ct_connect
Connect to a server.
CS_RETCODE ct_connect(connection, server_name,
snamelen)
CS_CONNECTION *connection;
CS_CHAR *server_name;
CS_INT snamelen;
The CS_CONNECTION structure is a C data structure that we have to provide before calling the command. This data structure is passed in as a parameter, and various fields are filled in for us upon a successful execution. The assertion made about the command is that information is written to the structure and stored there, so we test the information and ensure that it matches expectations. Digging into the collateral documentation, we find that the CS_CONNECTION data structure stores this specific information:
A CS_CONNECTION structure stores information about a particular client/server connection, including the user name and password for the connection, the packet size the connection will use, and whether the connection is synchronous or asynchronous.
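A sketch of how that stored-information promise could be tested, assuming ct_con_props() and the CS_USERNAME property behave as the CT-Library reference describes; the user name "testuser" and the helper function name are inventions for illustration, and the property must be set before ct_connect() is called:

/* Sketch: verify the promise that connection information is stored
 * in the CS_CONNECTION structure by setting the user name and
 * reading it back. "testuser" is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <ctpublic.h>

CS_RETCODE test_connection_stores_username(CS_CONNECTION *connection)
{
    CS_CHAR name[CS_MAX_NAME];
    CS_INT  outlen = 0;

    /* Store the user name in the connection structure */
    if (ct_con_props(connection, CS_SET, CS_USERNAME,
                     (CS_VOID *)"testuser", CS_NULLTERM, NULL) != CS_SUCCEED)
        return CS_FAIL;

    /* Read it back out of the CS_CONNECTION structure */
    if (ct_con_props(connection, CS_GET, CS_USERNAME,
                     (CS_VOID *)name, CS_MAX_NAME, &outlen) != CS_SUCCEED)
        return CS_FAIL;

    /* Assertion: the value read back matches the value stored */
    printf("Test disposition: %s\n",
           strncmp((char *)name, "testuser", 8) == 0 ? "PASS" : "FAIL");
    return CS_SUCCEED;
}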
11,000 Cases
Having done this 11,000 times, we created tests for 11,000 different Assertions found in the documentation. We were able to push development fixes into the code branch and raise the pass rate to over 98%. We then gathered these tests into suites for Regression, System, Stress, and other types of testing. Suites were added to various testing processes, our harnesses were expanded to cover the wide range of systems supported at that time, and we were well on our way to success.
With this project completed, the good folks at Unisoft had taught us how to create tests with this methodology. Looking back at my stint with Convex, I realized I had been developing good tests with confidence in just the same way. This time, it was at a grand scale.
If you take a second look at the grammar in the example I gave above, you might think that I'm describing the grammar of Behavior Driven Development (BDD) tests. The When/Then syntax you see here has been around a lot longer than the BDD methodology. In a subsequent section, we'll explore the grammar used in Assertion Based Testing.
You may be facing a challenge like this right now. Maybe you started a new job, as I had, and are challenged with defining tests. Maybe you are looking at systems that are not the traditional systems you have used before. Either way, testing can be a daunting task.
Count the cost and pay the price to do the work of breaking up the promises into testable chunks.
To get started, look at:
Documentation
Specifications
Wiki Pages
Code comments and flow
ReadMe files
What you cannot do is make assumptions about the code.
Start with a simple test grammar and keep your test objectives very short. Be very clear, and don't combine too many conditions. Focus on what makes a test Pass or Fail, and specify how to achieve both.
Your grammar is a reflection of your image. Good or bad, you have made an impression.
And like all impressions, you are in total control.
Jeffrey Gitomer - Author, Salesman
A good grammar becomes the bricks used to build your Testing Strategy. Learning the essential grammar is necessary, and is easily mastered in Assertion Based Testing.
Basic Assertions
<Assertion>
Conditional Assertions
IF <condition> THEN <action>
WHEN <condition> THEN <action>
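Applied to the ct_connect() promises quoted earlier, these forms yield objectives like:

ct_connect() stores connection information in the CS_CONNECTION structure.
IF the asynchronous flag is set THEN the command returns a process identifier.
WHEN ct_connect() executes successfully THEN it returns a status of CS_SUCCEED.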
Assertion Based Tests are the basis for many types of testing. You can leverage this paradigm to create various types of tests, such as:
Unit Tests
Functional Tests
System Tests
GUI/Interface Tests
Regression Tests
Integration Tests
Acceptance Tests
Continuous Hours of Operation (CHO) Tests
Stress Tests
You start with your test cases and execute them in these varying environments. You might have to modify either the tests or your harness libraries to check for other factors, such as searching logs for events that occur during CHO or Stress Testing.
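As one hypothetical example of such a harness extension, here is a small helper that scans an application log for an event string; the function name, log path, and event text are inventions for illustration:

/* Hypothetical harness helper for CHO/stress runs: returns 1 if the
 * event string appears anywhere in the log, 0 otherwise. */
#include <stdio.h>
#include <string.h>

int log_contains_event(const char *logpath, const char *event)
{
    FILE *log = fopen(logpath, "r");
    char line[1024];
    int found = 0;

    if (log == NULL)
        return 0;
    while (fgets(line, sizeof(line), log) != NULL) {
        if (strstr(line, event) != NULL) {
            found = 1;
            break;
        }
    }
    fclose(log);
    return found;
}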