Testing is the red-headed stepchild of the BA world. It is not emphasized in the BABOK and, for many, it is not the most exciting part of the job. While we may prefer the excitement of front-end interactions and analysis during the requirements phase of a project, testing is a necessary evil. Some organizations are fortunate enough to have dedicated QA/UAT roles. Even so, at some point you will likely find yourself in an organization where hands-on testing is necessary. On the bright side, it’s a great way to come into a new organization and learn the intricacies of its software and processes. Below are some thoughts on organizing a testing plan that will help keep you on track to success.
The first step I take is laying out an introduction that reviews the purpose, scope, and assumptions of my testing. While the purpose may seem obvious to you, it is helpful to write a couple of sentences on how your plan will verify that the total system (both software and non-software deliverables) functions successfully as a whole. The scope can lay out where the testing occurs and which departments will participate. I start with something like, “This User Acceptance Testing (UAT) will be coordinated and managed by…The following departments will participate…” For UAT, I also write out my assumptions about the types of testing that should occur beforehand. These could include (but may not be limited to): unit testing, integration testing, regression testing, system testing, and functional testing.
From here, I like to lay out a high-level outline of the software applications (or functional areas), documents, processes, forms, etc. that are to be tested within the scope of my test plan, along with their high-level pass/fail criteria. Others may or may not find this useful, but I feel it helps, especially if you are engaging other departments in your testing efforts.
A risk assessment can also be helpful in your test plan. Significant risks are inherent in any system design or environment. I typically outline the risks that may impact the ability to complete the UAT process successfully and on time. For each, I assess a Severity Level (High, Medium, Low) to represent its significance. To me, the significance of each risk is a combination of its likelihood of occurrence and its degree of impact if it occurs. It’s helpful to remember both pieces.
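The likelihood-times-impact idea can be sketched as a small scoring helper. The three-point scales and the thresholds below are illustrative assumptions, not a standard model; adapt them to your own organization’s risk matrix.

```python
# Sketch of a simple risk scoring helper. The numeric scales and the
# score thresholds are illustrative assumptions, not a standard model.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def severity(likelihood: str, impact: str) -> str:
    """Combine a risk's likelihood and impact into one severity level."""
    score = LEVELS[likelihood] * LEVELS[impact]  # ranges from 1 to 9
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# A risk that is likely but low-impact lands in the middle, reflecting
# that both pieces matter, not just one.
print(severity("High", "Low"))  # → Medium
```

A simple table like this in the plan itself keeps severity ratings consistent when several people are assessing risks.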
I also like to lay out the types of testing that will be performed as part of my plan. This can include smoke tests to verify there are no critical issues that will hinder the execution of the testing effort. Usability tests can help verify the ease of use of the system and its associated documentation and procedures. Control tests verify the system’s ability to produce appropriate audit trails. My testing should also include both positive and negative tests. Positive tests verify responses to valid user actions and data; negative tests verify that invalid user actions and data generate the appropriate error messages.
Timelines can be extremely helpful in any testing plan. If you can coordinate with the project manager and IT partners, you should be able to flesh out start and end dates to include in your plan for things such as plan development (which is hopefully a breeze with this article); test case development; environment set-up; software delivery to your environment; execution timelines; etc.
Last, it is important to touch on the testing procedures used to control the effort. These include problem reporting, change management, test execution control, and user acceptance. If you are coordinating testing with other users, they should know what to do in case they hit a blocking defect and cannot test any further. You should set expectations for sharing testing progress reports with your fellow testers, project team, and stakeholders. It is also important to define defect severity to help your team. A model we typically use is as follows:
- Critical (showstopper, cannot do any work)
- High (needs to be fixed prior to launch)
- Medium (should be fixed prior to launch)
- Low (may be handled after launch)
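If your defect tracker or reporting spreadsheet needs the model in machine-readable form, the four levels above can be sketched as an ordered enum, so reports sort showstoppers to the top. This is a minimal illustration, not a prescribed implementation:

```python
# A sketch of the four-level defect severity model as an ordered enum.
# The comments paraphrase the definitions in the list above.
from enum import IntEnum

class DefectSeverity(IntEnum):
    CRITICAL = 1  # showstopper: no further testing is possible
    HIGH = 2      # needs to be fixed prior to launch
    MEDIUM = 3    # should be fixed prior to launch
    LOW = 4       # may be handled after launch

# Sorting a defect list by severity surfaces showstoppers first.
defects = [DefectSeverity.LOW, DefectSeverity.CRITICAL, DefectSeverity.HIGH]
assert sorted(defects)[0] is DefectSeverity.CRITICAL
```

Agreeing on these definitions before execution starts avoids arguments later about whether a given defect really blocks the launch.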
For me, every test plan is a little different. You may find some of these thoughts more or less useful in your own test planning. I hope that this advice helps you as you outline the strategy that will define your testing. The test plan can really ease the execution of your individual test cases. It is a great way to keep everyone (stakeholders, fellow testers, and project team members) on the same page. When coordinating larger testing efforts, this is a great way to organize and set the right expectations.
Please feel free to comment on what test plan information and outlines have been helpful for you in your organization.