The proper testing of software takes a lot of work, and therefore a lot of time. While I don't agree with this tactic, when projects get delayed, the time reserved for testing is one of the most likely candidates to take a hit when the schedule is squeezed. "Two weeks delayed? We test two weeks shorter, and presto, we're back on schedule."
When the time frame available for testing is short, you have to make the best of the time and resources you have. A software test strategy that takes this into account is risk and requirements based testing.
This strategy assumes that it is infeasible to test everything. From an economic point of view it doesn't even make sense: why spend lots of time on parts of a system where the chances of a bug are low, or where the impact of a bug, if one is found, would be small? Risk and requirements based testing helps you determine what to test first, and in which sequence, so that you spend the time you have on the parts that really matter.
The strategy starts with a risk analysis to determine the functions (requirements) with the highest risk; you then plan your test activities guided by this analysis.
To help you identify the risks involved in your requirements, consider the following aspects (a small scoring sketch follows the list):
• Functions often used by the users
• Complex functions
• Functions that have a lot of updates or bugfixes
• Functions that require high availability
• Functions that require a consistent level of performance
• Functions that are developed with new tools
• Functions that require interfacing with external systems
• Functions with requirements with a low level of quality
• Functions developed by several programmers at the same time
• New functions
• Functions developed under extreme time pressure
• Functions that are most important to the stakeholders
• Functions that reflect a complex business process
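One informal way to turn this checklist into a test order is to score each requirement's likelihood and impact of failure and rank by their product. The sketch below is only an illustration of that idea, not part of any formal method; the requirement names and scores are invented.

#include <stdio.h>
#include <stdlib.h>

/* One requirement with estimated likelihood and impact of failure (1 = low, 3 = high). */
struct requirement {
    const char *name;
    int likelihood;
    int impact;
};

/* Risk exposure: the higher the product, the earlier we test. */
static int risk(const struct requirement *r) { return r->likelihood * r->impact; }

static int by_risk_desc(const void *a, const void *b)
{
    return risk((const struct requirement *)b) - risk((const struct requirement *)a);
}

int main(void)
{
    /* Hypothetical requirements; real scores would come from the risk analysis. */
    struct requirement reqs[] = {
        { "Complex function, heavily used",         3, 3 },
        { "New function built under time pressure", 3, 2 },
        { "Stable function, unchanged for years",   1, 1 },
    };
    size_t n = sizeof reqs / sizeof reqs[0];

    qsort(reqs, n, sizeof reqs[0], by_risk_desc);

    for (size_t i = 0; i < n; i++)
        printf("%zu. [risk %d] %s\n", i + 1, risk(&reqs[i]), reqs[i].name);
    return 0;
}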
For proper testing of software, the project manager or test manager should plan the test activities in advance. To assist with this task, I have listed below an outline for a software test plan.
Test organization
This section of the test plan describes the organization around the testing activities. Responsibilities and used resources should be named under this topic.
Communication & procedures
An overview of the communication during the testing activities should be provided in this section. The procedures used during the software test for bug fixing, version control, and so on should also be listed.
Test strategy
This section contains an overview of the test strategy used for the software testing, the acceptance criteria, and a statement of the level to which the system will be tested.
Test items
An overview of the functions to be tested and their priorities should be listed in this section of the software test plan.
Test deliverables
A description of the products used and produced by the testing, such as:
• Test input
• Test reports
• Infrastructure to be used
• Progress reports
Test activities
An overview of the activities needed for testing, e.g.
• Installation of the infrastructure
• Writing of test scripts
• The actual execution of the tests
• Monitoring progress
• Creation of reports
Schedule
The actual planning of the software test activities and resources used.
Testing where the user plays a role or is required:
User Acceptance Testing:
In this type of testing, the software is handed over to the user to find out whether it meets the user's expectations and works as expected.
Alpha Testing:
In this type of testing, the users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.
Beta Testing:
In this type of testing, the software is distributed as a beta version to the users, who test the application at their own sites. As the users explore the software, any exception or defect that occurs is reported to the developers.
Smoke test
Smoke test is a term used in plumbing, electronics, and computer software development. It refers to the first test made after repairs or first assembly to provide some assurance that a device, plumbing, or software will not catastrophically fail. After the smoke test proves that the pipes will not leak or the circuit will not burn, the assembly is ready for more stressful testing.
• In a plumbing smoke test, actual smoke is forced through newly plumbed pipes to find leaks, before water is allowed to flow through the pipes.
• In electronics, a smoke test is the first time a circuit is attached to power, which will sometimes produce actual smoke if a design or wiring mistake has been made.
• In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.
Sanity test
A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or calculation, specifically a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing.
Sanity tests are sometimes mistakenly equated to smoke tests. Where a distinction is made between sanity testing and smoke testing, it's usually in one of two directions. Either sanity testing is a focused but limited form of regression testing – narrow and deep, but cursory; or it's broad and shallow, like a smoke test, but concerned more with the possibility of "insane behavior" such as slowing the entire system to a crawl, or destroying the database, but is not as thorough as a true smoke test.
Generally, a smoke test is scripted (either using a written set of tests or an automated test), whereas a sanity test is usually unscripted.
With the evolution of test methodologies, sanity tests are useful both for initial environment validation and for future iterative increments. The process of sanity testing begins with the execution of some online transactions of various modules, and batch programs of various modules, to see whether the software runs without any hindrance or abnormal termination. This practice can help identify most environment-related problems. A classic example of this in programming is the hello world program. If a person has just set up a computer and a compiler, a quick sanity test can be performed to see if the compiler actually works: write a program that simply displays the words "hello world".
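In C, the "hello world" sanity test described above is simply:

#include <stdio.h>

int main(void)
{
    /* If this compiles, links, and prints, the toolchain basically works. */
    printf("hello world\n");
    return 0;
}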
A sanity test can also refer to various order-of-magnitude checks and other simple rule-of-thumb devices applied to cross-check mathematical calculations. For example:
• If one were to evaluate 738² and came up with the answer 53,874, a quick sanity check would show this to be wrong, since the square of 500, a smaller number to start with, is 250,000, which is greater than the incorrect 53,874.
• In multiplication, 918 × 155 is not 142,135, since 918 is divisible by three but 142,135 is not (its digits do not add up to a multiple of three); a sketch mechanizing this kind of check follows this list.
• When talking about quantities in physics, the power output of a car cannot be 700 kJ since that is a unit of energy, not power (energy per unit time).
• If someone calculates that a genealogy containing four generations spans only 32 years, the calculation is probably in error, since eight-year-olds can't have children.
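The divisibility argument in the multiplication example can be mechanized. The sketch below is an illustration, using the slightly stronger mod-9 variant ("casting out nines"): a number is congruent to its digit sum modulo 9, so a claimed product must satisfy (a mod 9) × (b mod 9) ≡ product (mod 9).

#include <stdio.h>

/* Returns 1 if the claimed product passes the mod-9 check, 0 if it is definitely wrong.
   Passing does not prove the product correct; this is only a sanity test. */
static int product_sanity(long a, long b, long claimed)
{
    return (a % 9) * (b % 9) % 9 == claimed % 9;
}

int main(void)
{
    /* The example from the text: 918 * 155 cannot be 142,135. */
    printf("918 * 155 == 142135? sanity check %s\n",
           product_sanity(918, 155, 142135) ? "passes" : "fails");
    return 0;
}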
Smoke testing in software development
Smoke testing is done by developers before the build is released or by testers before accepting a build for further testing.
In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense a smoke test is the process of validating code changes before the changes are checked into the larger product's official source code collection. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software; some even believe that it is the most effective of all.
In software testing, a smoke test is a collection of written tests that are performed on a system before it is accepted for further testing. This is also known as a build verification test. This is a "shallow and wide" approach to the application. The tester "touches" all areas of the application without getting too deep, looking for answers to basic questions like "Can I launch the test item at all?", "Does it open to a window?", and "Do the buttons on the window do things?". There is no need to get down to field validation or business flows. If you get a "No" answer to basic questions like these, then the application is so badly broken that there's effectively nothing there to allow further testing. These written tests can be performed either manually or using an automated tool. When automated tools are used, the tests are often initiated by the same process that generates the build itself.
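As a minimal illustration of the "Can I launch the test item at all?" question, a build verification step can simply run the candidate binary and check its exit status. The command name below is a placeholder; this is a sketch of the idea, not a real harness.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Placeholder command: run the freshly built application in a trivial mode.
       A nonzero status means the build fails the smoke test outright. */
    int status = system("./app --version");
    if (status != 0) {
        fprintf(stderr, "smoke test FAILED: application did not launch cleanly\n");
        return EXIT_FAILURE;
    }
    printf("smoke test passed: application launches\n");
    return EXIT_SUCCESS;
}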
Test Plan Sample
______________________________________
Table of Contents
1. Introduction
Description of this Document
Related Documents
Schedule and Milestones
2. Resource Requirements
Hardware
Software
o Test Tools
Staffing
o Responsibilities
o Training
3. Features To Be Tested / Test Approach
New Features Testing
Regression Testing
4. Features Not To Be Tested
5. Test Deliverables
6. Dependencies/Risks
7. Entrance/Exit Criteria
1. Introduction
Description of this Document
This document is a Test Plan for the -Project name-, produced by Quality Assurance. It describes the testing strategy and the approach to testing that QA will use to validate the quality of this product prior to release. It also lists the resources required for the successful completion of this project.
The focus of the -Project name- is to support those new features that will allow easier development, deployment and maintenance of solutions built upon the -Project name-. Those features include:
[List of the features]
This release of the -Project name- will also include legacy bug fixes, as well as redesigned or previously missing functionality from the prior release:
[List of the features]
The following implementations were made:
[List and description of implementations made]
Related Documents
[List of related documents such as: Functional Specifications, Design Specifications]
Schedule and Milestones
[Schedule information QA testing estimates]
2. Resource Requirements
Hardware
[List of hardware requirements]
Software
[List of software requirements: primary and secondary OS]
Test Tools
Apart from manual tests, the following tools will be used:
-
-
-
Staffing
Responsibilities
[List of QA team members and their responsibilities]
Training
[List of training required]
3. Features To Be Tested / Test Approach
[List of the features to be tested]
Media Verification
[The process will include installing all possible products from the media and subjecting them to basic sanity testing.]
4. Features Not To Be Tested
[List of the features not to be tested]
5. Test Deliverables
[List of the test cases/matrices or their location]
[List of the features to be automated]
6. Dependencies/Risks
Dependencies
Risks
7. Entrance/Exit Criteria
In computing, malloc is a subroutine provided in the C programming language's standard library for performing dynamic memory allocation.
Rationale
The C programming language manages memory either statically or automatically. Static-duration variables are allocated in main (fixed) memory and persist for the lifetime of the program; automatic-duration variables are allocated on the stack and come and go as functions are called and return. However, both these forms of allocation are somewhat limited, as the size of the allocation must be a compile-time constant. If the required size will not be known until run-time — for example, if data of arbitrary size is being read from the user or from a disk file — using fixed-size data objects is inadequate.
The lifetime of allocated memory is also a concern. Neither static- nor automatic-duration memory is adequate for all situations. Stack-allocated data can obviously not be persisted across multiple function calls, while static data persists for the life of the program whether it is needed or not. In many situations the programmer requires greater flexibility in managing the lifetime of allocated memory.
These limitations are avoided by using dynamic memory allocation in which memory is more explicitly but more flexibly managed, typically by allocating it from a heap, an area of memory structured for this purpose. In C, one uses the library function malloc to allocate a block of memory on the heap. The program accesses this block of memory via a pointer which malloc returns. When the memory is no longer needed, the pointer is passed to free which deallocates the memory so that it can be reused by other parts of the program.
Dynamic memory allocation in C
The malloc function is the basic function used to allocate memory on the heap in C. Its prototype is
void *malloc(size_t size);
which allocates size bytes of memory. If the allocation succeeds, a pointer to the block of memory is returned.
malloc returns a void pointer (void *), which indicates that it is a pointer to a region of unknown data type. This pointer is typically cast to a more specific pointer type by the programmer before being used. Note that because malloc returns a void pointer, it needn't be explicitly cast to a more specific pointer type: ANSI C defines an implicit conversion between the void pointer type and other pointer types. An explicit cast of malloc's return value is sometimes performed because malloc originally returned a char *, but this cast is unnecessary in modern C code, and some programmers consider it bad style.[1][2]
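Both styles, for comparison (the first is the idiomatic modern-C form):

#include <stdlib.h>

int main(void)
{
    int *a = malloc(100 * sizeof *a);        /* implicit conversion from void *: idiomatic C   */
    int *b = (int *)malloc(100 * sizeof *b); /* explicit cast: required in C++, redundant in C */
    free(a);
    free(b);
    return 0;
}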
Memory allocated via malloc is persistent: it will continue to exist until the program terminates or the memory is explicitly deallocated by the programmer (that is, the block is said to be "freed"). This is achieved by use of the free function. Its prototype is
void free(void *pointer);
which releases the block of memory pointed to by pointer. pointer must have been previously returned by malloc or calloc (or by a function that uses one of these, e.g. strdup), and must only be passed to free once.
Usage example
The standard method of creating an array of ten integers on the stack is:
int array[10];
To allocate a similar array dynamically, the following code could be used:
#include <stdio.h>  /* fprintf */
#include <stdlib.h> /* malloc, exit */
/* Allocate space for an array with ten elements of type int. */
int *ptr = malloc(10 * sizeof (int));
if (ptr == NULL) {
/* Memory could not be allocated, so print an error and exit. */
fprintf(stderr, "Couldn't allocate memory\n");
exit(EXIT_FAILURE);
}
/* Allocation succeeded. */
Related functions
malloc returns a block of memory that is allocated for the programmer to use, but is uninitialized. The memory is usually initialized by hand if necessary, either via the memset function or by one or more assignment statements that dereference the pointer. An alternative is to use the calloc function, which allocates memory and then initializes it. Its prototype is
void *calloc(size_t nelements, size_t bytes);
which allocates a region of memory large enough to hold nelements of size bytes each. The allocated region is initialized to zero.
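For example, the ten-int array from the earlier usage example could be allocated with calloc so that every element starts at zero:

#include <stdlib.h>

int main(void)
{
    /* Ten ints, all zero-initialized. */
    int *ptr = calloc(10, sizeof (int));
    if (ptr == NULL)
        return EXIT_FAILURE;
    free(ptr);
    return 0;
}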
It is often useful to be able to grow or shrink a block of memory. This can be done using malloc and free: a new block of the appropriate size can be allocated, the content can be copied over, and then the old block can be freed. However, this is somewhat awkward; instead, the realloc function can be used. Its prototype is
void *realloc(void *ptr, size_t bytes);
realloc returns a pointer to a memory region of the specified size, which contains the same data as the old region pointed to by ptr (truncated to the minimum of the old and new sizes). If realloc is unable to resize the memory region in-place, it allocates new storage, copies the required data, and frees the old pointer. If this allocation fails, realloc returns the null pointer and leaves ptr unchanged.
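Because a failed realloc leaves the original block allocated, assigning its result directly to the only pointer to that block would leak the block on failure. A common safe pattern uses a temporary pointer:

#include <stdlib.h>

int main(void)
{
    int *ptr = malloc(10 * sizeof (int));
    if (ptr == NULL)
        return EXIT_FAILURE;

    /* Grow the array to 20 elements without losing ptr if realloc fails. */
    int *tmp = realloc(ptr, 20 * sizeof (int));
    if (tmp == NULL) {
        free(ptr);  /* the old block is still valid and must still be freed */
        return EXIT_FAILURE;
    }
    ptr = tmp;

    free(ptr);
    return 0;
}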
Common errors
Some programmers find that the improper use of malloc and related functions in C can be a frequent source of bugs.
Allocation failure
malloc is not guaranteed to succeed — if there is no memory available, or if the program has exceeded the amount of memory it is allowed to reference, malloc will return a NULL pointer. Depending on the nature of the underlying environment, this may or may not be a likely occurrence. Many programs do not check for malloc failure. Such a program would attempt to use the NULL pointer returned by malloc as if it pointed to allocated memory, and the program would crash. This has traditionally been considered an incorrect design, although it remains common, as memory allocation failures only occur rarely in most situations, and the program frequently can do nothing better than to exit anyway. Checking for allocation failure is more important when implementing libraries — since the library might be used in low-memory environments, it is usually considered good practice to return memory allocation failures to the program using the library and allow it to choose whether to attempt to handle the error.
Memory leaks
When a call to malloc, calloc or realloc succeeds, the return value of the call should eventually be passed to the free function. This releases the allocated memory, allowing it to be reused to satisfy other memory allocation requests. If this is not done, the allocated memory will not be released until the process exits — in other words, a memory leak will occur.
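A typical leak occurs when the only pointer to a block is overwritten, making the block unreachable:

#include <stdlib.h>

int main(void)
{
    int *ptr = malloc(10 * sizeof (int));
    ptr = malloc(20 * sizeof (int)); /* the first block is now unreachable: a leak */
    free(ptr);                       /* frees only the second block */
    return 0;
}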
Use after free
After a pointer has been passed to free, it references a region of memory with undefined content, which may not be available for use. However, the pointer may still be used, for example:
int *ptr = malloc(sizeof (int));
free(ptr);
*ptr = 0; /* Undefined behavior! */
Code like this has undefined behavior — after the memory has been freed, the system may reuse that memory region for storage of unrelated data. Therefore, writing through a pointer to a deallocated region of memory may result in overwriting another piece of data somewhere else in the program. Depending on what data is overwritten, this may result in data corruption or cause the program to crash at a later time. A particularly bad example of this problem is if the same pointer is passed to free twice, known as a double free. To avoid this, some programmers set pointers to NULL after passing them to free: free(NULL) is safe (it does nothing).
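The idiom mentioned above looks like this; after the assignment, an accidental second free is harmless:

#include <stdlib.h>

int main(void)
{
    int *ptr = malloc(sizeof (int));
    free(ptr);
    ptr = NULL; /* a later free(ptr) is now free(NULL), which does nothing */
    free(ptr);  /* safe: no double free */
    return 0;
}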
Test plan
A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.
Contents
• 1 Test plans in software development
o 1.1 Outline
o 1.2 Test plan identifier
o 1.3 References
o 1.4 Introduction
1.4.1 Test items (functions)
1.4.2 Features to be tested
1.4.3 Features not to be tested
1.4.4 Approach (strategy)
1.4.5 Item pass/fail criteria
1.4.6 Remaining test tasks
1.4.7 Environmental needs
1.4.8 Staffing and training needs
1.4.9 Responsibilities
1.4.10 Schedule
1.4.11 Planning risks and contingencies
1.4.12 Approvals
1.4.13 Glossary
o 1.5 Regional differences
• 2 Test plans in hardware development
• 3 Test plans in economics
• 4 See also
• 5 External link
Test plans in software development
Cem Kaner, co-author of Testing Computer Software (ISBN 0-471-35846-0), has suggested that test plans are written for two very different purposes. Sometimes the test plan is a product; sometimes it's a tool. It's too easy, but also too expensive, to confuse these goals.
In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including
• Scope of testing
• Schedule
• Test Deliverables
• Release Criteria
• Risks and Contingencies
Test plan template, IEEE 829 format
Outline
1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Entry & Exit Criteria
11. Suspension Criteria and Resumption Requirements
12. Test Deliverables
13. Remaining Test Tasks
14. Environmental Needs
15. Staffing and Training Needs
16. Responsibilities
17. Schedule
18. Planning Risks and Contingencies
19. Approvals
20. Glossary
Test plan identifier
Master test plan for the Line of Credit Payment System.
References
List all documents that support this test plan.
Documents that are referenced include:
• Project plan
• System requirements specification
• High-level design document
• Detailed design document
• Development and test process standards
• Methodology guidelines and examples
• Corporate standards and guidelines
Introduction
Objective of the plan
Scope of the plan
State how this plan relates to the overall software project plan. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and for communication and coordination of key activities.
As this is the "executive summary", keep the information brief and to the point.
Test items (functions)
These are the things you intend to test within the scope of this test plan: essentially, a list of what is to be tested. It can be developed from the software application inventories as well as from other sources of documentation and information.
The list can be controlled by a local Configuration Management (CM) process, if you have one. This information includes version numbers and configuration requirements where needed (especially if multiple versions of the product are supported). It may also include key delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the Client.
This section can be oriented to the level of the test plan. For higher levels it may be by application or functional area; for lower levels it may be by program, unit, module or build.
Software risk issues
Identify what software is to be tested and what the critical areas are, such as:
1. Delivery of a third party product.
2. New version of interfacing software.
3. Ability to use and understand a new package/tool, etc.
4. Extremely complex functions.
5. Modifications to components with a past history of failure.
6. Poorly documented modules or change requests.
There are some inherent software risks such as complexity; these need to be identified.
1. Safety.
2. Multiple interfaces.
3. Impacts on Client.
4. Government regulations and rules.
Another key area of risk is a misunderstanding of the original requirements. This can occur at the management, user and developer levels. Be aware of vague or unclear requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during unit testing will help identify potential areas within the software that are risky. If the unit testing discovered a large number of defects, or a tendency towards defects in a particular area of the software, this is an indication of potential future problems. It is the nature of defects to cluster and clump together: if an area was defect-ridden earlier, it will most likely continue to be defect-prone.
One good approach to defining where the risks are is to hold several brainstorming sessions.
• Start with ideas such as "What worries me about this project/application?"
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does. This is not a technical description of the software, but a USER'S view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High, Medium and Low. These types of levels are understandable to a User. You should be prepared to discuss why a particular level was chosen.
Sections 4 and 6 are very similar; the only true difference is the point of view. Section 4 is a technical description, including version numbers and other technical information, while Section 6 is from the user's viewpoint. Users do not understand technical software terminology; they understand functions and processes as they relate to their jobs.
Features not to be tested
This is a listing of what is 'not' to be tested from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why each feature is not to be tested; there can be any number of reasons.
• Not to be included in this release of the Software.
• Low risk, has been used before and was considered stable.
• Will be released but not tested or documented as a functional part of the release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are directly affected by the levels of acceptable risk within the project, and what does not get tested affects the level of risk of the project.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the level of the plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of plans. Overall rules and processes should be identified.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• At which level is each metric to be collected?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or are untestable be processed?
If this is a master test plan the overall project testing approach and coverage requirements must also be identified.
Specify if there are special requirements for the testing.
• Only the full component will be tested.
• A specified segment or grouping of features/components must be tested together.
Other information that may be useful in setting the approach is:
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the information is available.
How will meetings and other organizational processes be handled?
Item pass/fail criteria
State how it will be decided whether each test item has passed or failed; for example, no open showstopper (critical) defects remain, and a specified percentage of test cases has been executed and passed. Suspension criteria and resumption requirements are covered in their own section of the plan.
Remaining test tasks
If this is a multi-phase process or if the application is to be released in increments there may be parts of the application that this plan does not address. These areas need to be identified to avoid any confusion should defects be reported back on those future functions. This will also allow the users and testers to avoid incomplete functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a portion of the total functions/features. This status needs to be identified so that those other areas have plans developed for them and to avoid wasting resources tracking defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions of those test tasks belonging to both the internal groups and the external groups.
Environmental needs
Are there any special requirements for this test plan, such as:
• Special hardware such as simulators, static generators etc.
• How will test data be provided? Are there special collection requirements or specific ranges of data that must be provided?
• How much testing will be done on each component of a multi-part feature?
• Special power requirements.
• Specific versions of other supporting software.
• Restricted use of the system during testing.
Staffing and training needs
Training on the application/system.
Training for any test tools to be used.
Sections 4 and 15 also affect this section: what is to be tested determines who is responsible for the testing and the training.
Responsibilities
Who is in charge?
This issue includes all areas of the plan. Here are some examples:
• Setting risks.
• Selecting features to be tested and not tested.
• Setting overall strategy for this level of plan.
• Ensuring all required elements are in place for testing.
• Providing for resolution of scheduling conflicts, especially, if testing is done on the production system.
• Who provides the required training?
• Who makes the critical go/no go decisions for items not covered in the test plans?
Schedule
A schedule should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip, and since testing is part of the overall project plan, the test schedule will slip with it.
• As we all know, the first area of a project plan to get cut when it comes to crunch time at the end of a project is the testing. It usually comes down to the decision, ‘Let’s put something out even if it does not really work all that well’. And, as we all know, this is usually the worst possible decision.
How slippage in the schedule is to be handled should also be addressed here.
• If the users know in advance that a slippage in the development will cause a slippage in the test and the overall delivery of the system, they just may be a little more tolerant, if they know it’s in their interest to get a better tested application.
• By spelling out the effects here you have a chance to discuss them in advance of their actual occurrence. You may even get the users to agree to a few defects in advance, if the schedule slips.
At this point, all relevant milestones should be identified, along with their relationship to the development process. This will also help in identifying and tracking potential slippage in the schedule caused by the test process.
It is always best to tie all test dates directly to their related development activity dates. This prevents the test team from being perceived as the cause of a delay. For example, if system testing is to begin after delivery of the final build, then system testing begins the day after delivery. If the delivery is late, system testing starts from the day of delivery, not on a specific date. This is called dependent or relative dating.
Planning risks and contingencies
What are the overall risks to the project with an emphasis on the testing process?
• Lack of personnel resources when testing is to begin.
• Lack of availability of required hardware, software, data or tools.
• Late delivery of the software, hardware or tools.
• Delays in training on the application and/or tools.
• Changes to the original requirements or designs.
• Complexities involved in testing the applications.
Specify what will be done for various events, for example:
Requirements definition will be complete by January 1, 20XX, and, if the requirements change after that date, the following actions will be taken:
• The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
• The number of tests performed will be reduced.
• The number of acceptable defects will be increased.
o These two items could lower the overall quality of the delivered product.
• Resources will be added to the test team.
• The test team will work overtime (this could affect team morale).
• The scope of the plan may be changed.
• There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.
Approvals
Who can approve the process as complete and allow the project to proceed to the next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
• The audience for a unit test level plan is different from that of an integration, system or master level plan.
• The levels and type of knowledge at the various levels will be different as well.
• Programmers are very technical but may not have a clear understanding of the overall business process driving the project.
• Users may have varying levels of business acumen and very little technical skills.
• Always be wary of users who claim high levels of technical skills and programmers that claim to fully understand the business process. These types of individuals can cause more harm than good if they do not have the skills they believe they possess.
Company Name
Test Plan
Revision C
Revision History
Date | Rev | Author | Description
5/14/98 | A | | First Draft
5/21/98 | B | | Second Draft
5/25/98 | C | | Added FTBT
Table of Contents
1. Introduction
1.1. Test Plan Objectives
2. Scope
2.1. Data Entry
2.2. Reports
2.3. File Transfer
2.4. Security
3. Test Strategy
3.1. System Test
3.2. Performance Test
3.3. Security Test
3.4. Automated Test
3.5. Stress and Volume Test
3.6. Recovery Test
3.7. Documentation Test
3.8. Beta Test
3.9. User Acceptance Test
4. Environment Requirements
4.1. Data Entry workstations
4.2 Mainframe
5. Test Schedule
6. Control Procedures
6.1 Reviews
6.2 Bug Review meetings
6.3 Change Request
6.4 Defect Reporting
7. Functions To Be Tested
8. Resources and Responsibilities
8.1. Resources
8.2. Responsibilities
9. Deliverables
10. Suspension / Exit Criteria
11. Resumption Criteria
12. Dependencies
12.1 Personnel Dependencies
12.2 Software Dependencies
12.3 Hardware Dependencies
12.4 Test Data & Database
13. Risks
13.1. Schedule
13.2. Technical
13.3. Management
13.4. Personnel
13.5 Requirements
14. Tools
15. Documentation
16. Approvals
1. Introduction
The company has outgrown its current payroll system and is developing a new system that will allow for further growth and provide additional features. The software test department has been tasked with testing the new system.
The new system will do the following:
Provide the users with menus, directions, and error messages to guide them through the various options.
Handle the update/addition of employee information.
Print various reports.
Create a payroll file and transfer the file to the mainframe.
Run on the Banyan Vines network using IBM compatible PCs as data entry terminals.
1.1. Test Plan Objectives
This Test Plan for the new Payroll System supports the following objectives:
Define the activities required to prepare for and conduct System, Beta and User Acceptance testing.
Communicate to all responsible parties the System Test strategy.
Define deliverables and responsible parties.
Communicate to all responsible parties the various dependencies and risks.
2. Scope
2.1. Data Entry
The new payroll system should allow the payroll clerks to enter employee information from IBM compatible PC workstations running DOS 3.3 or higher. The system will be menu driven and will provide error messages to help direct the clerks through various options.
2.2. Reports
The system will allow the payroll clerks to print 3 types of reports. These reports are:
A pay period transaction report
A pay period exception report
A three month history report
2.3. File Transfer
Once the employee information is entered into the LAN database, the payroll system will allow the clerk to create a payroll file. This file can then be transferred, over the network, to the mainframe.
2.4. Security
Each payroll clerk will need a userid and password to login to the system. The system will require the clerks to change the password every 30 days.
3. Test Strategy
The test strategy consists of a series of different tests that will fully exercise the payroll system. The primary purpose of these tests is to uncover the system's limitations and measure its full capabilities. A list of the various planned tests and a brief explanation follows below.
3.1. System Test
The System tests will focus on the behavior of the payroll system. User scenarios will be executed against the system as well as screen mapping and error message testing. Overall, the system tests will test the integrated system and verify that it meets the requirements defined in the requirements document.
3.2. Performance Test
Performance tests will be conducted to ensure that the payroll system's response times meet user expectations and do not exceed the specified performance criteria. During these tests, response times will be measured under heavy stress and/or volume.
3.3. Security Test
Security tests will determine how secure the new payroll system is. The tests will verify that unauthorized user access to confidential data is prevented.
3.4. Automated Test
A suite of automated tests will be developed to test the basic functionality of the payroll system and perform regression testing on areas of the systems that previously had critical/major defects. The tool will also assist us by executing user scenarios thereby emulating several users.
3.5. Stress and Volume Test
We will subject the payroll system to high input conditions and a high volume of data during peak times. The system will be stress tested using twice the number of expected users (20 users).
3.6. Recovery Test
Recovery tests will force the system to fail in various ways and verify that recovery is properly performed. It is vitally important that all payroll data is recovered after a system failure and that no corruption of the data occurs.
3.7. Documentation Test
Tests will be conducted to check the accuracy of the user documentation. These tests will ensure that no features are missing, and the contents can be easily understood.
3.8. Beta Test
The Payroll department will beta test the new payroll system and will report any defects they find. This will subject the system to tests that could not be performed in our test environment.
3.9. User Acceptance Test
Once the payroll system is ready for implementation, the Payroll department will perform User Acceptance Testing. The purpose of these tests is to confirm that the system is developed according to the specified user requirements and is ready for operational use.
4. Environment Requirements
4.1. Data Entry workstations
20 IBM compatible PCs (10 will be used by the automation tool to emulate payroll clerks).
286 processor (minimum)
4 MB RAM
100 MB hard drive
DOS 3.3 or higher
Attached to Banyan Vines network
A Network attached printer
20 user ids and passwords (10 will be used by the automation tool to emulate payroll clerks).
4.2 Mainframe
Attached to the Banyan Vines network
Access to a test database (to store payroll information transferred from LAN payroll system)
5. Test Schedule
Ramp up / System familiarization 6/01/98 - 6/15/98
System Test 6/16/98 - 8/26/98
Beta Test 7/28/98 - 8/18/98
User Acceptance Test 8/29/98 - 9/03/98
6. Control Procedures
6.1 Reviews
The project team will perform reviews for each Phase. (i.e. Requirements Review, Design Review, Code Review, Test Plan Review, Test Case Review and Final Test Summary Review). A meeting notice, with related documents, will be emailed to each participant.
6.2 Bug Review meetings
Regular weekly meetings will be held to discuss reported defects. The development department will provide status/updates on all defects reported, and the test department will provide additional defect information if needed. All members of the project team will participate.
6.3 Change Request
Once testing begins, changes to the payroll system are discouraged. If functional changes are required, these proposed changes will be discussed with the Change Control Board (CCB). The CCB will determine the impact of the change and if/when it should be implemented.
6.4 Defect Reporting
When defects are found, the testers will complete a defect report on the defect tracking system. The defect tracking system is accessible to testers, developers, and all members of the project team. When a defect has been fixed or more information is needed, the developer will change the status of the defect to indicate the current state. Once a defect is verified as FIXED by the testers, the testers will close the defect report.
7. Functions To Be Tested
The following is a list of functions that will be tested:
Add/update employee information
Search / Lookup employee information
Escape to return to Main Menu
Security features
Scaling to 700 employee records
Error messages
Report Printing
Creation of payroll file
Transfer of payroll file to the mainframe
Screen mappings (GUI flow). Includes default settings
FICA Calculation
State Tax Calculation
Federal Tax Calculation
Gross pay Calculation
Net pay Calculation
Sick Leave Balance Calculation
Annual Leave Balance Calculation
A Requirements Validation Matrix will “map” the test cases back to the requirements. See Deliverables.
8. Resources and Responsibilities
The Test Lead and Project Manager will determine when system test will start and end. The Test lead will also be responsible for coordinating schedules, equipment, & tools for the testers as well as writing/updating the Test Plan, Weekly Test Status reports and Final Test Summary report. The testers will be responsible for writing the test cases and executing the tests. With the help of the Test Lead, the Payroll Department Manager and Payroll clerks will be responsible for the Beta and User Acceptance tests.
8.1. Resources
The test team will consist of:
A Project Manager
A Test Lead
5 Testers
The Payroll Department Manager
5 Payroll Clerks
8.2. Responsibilities
Project Manager: Responsible for project schedules and the overall success of the project. Participates on the CCB.
Lead Developer: Serves as the primary contact/liaison between the development department and the project team. Participates on the CCB.
Test Lead: Ensures the overall success of the test cycles. He/she will coordinate weekly meetings and communicate the testing status to the project team. Participates on the CCB.
Testers: Responsible for performing the actual system testing.
Payroll Department Manager: Serves as liaison between the Payroll department and the project team. He/she will help coordinate the Beta and User Acceptance testing efforts. Participates on the CCB.
Payroll Clerks: Will assist in performing the Beta and User Acceptance testing.
9. Deliverables
Deliverable | Responsibility | Completion Date
Develop test cases | Testers | 6/11/98
Test case review | Test Lead, Dev. Lead, Testers | 6/12/98
Develop automated test suites | Testers | 7/01/98
Requirements Validation Matrix | Test Lead | 6/16/98
Obtain user ids and passwords for payroll system/database | Test Lead | 5/27/98
Execute manual and automated tests | Testers & Test Lead | 8/26/98
Complete defect reports | Everyone testing the product | Ongoing
Document and communicate test status/coverage | Test Lead | Weekly
Execute Beta tests | Payroll Department Clerks | 8/18/98
Document and communicate Beta test status/coverage | Payroll Department Manager | 8/18/98
Execute User Acceptance tests | Payroll Department Clerks | 9/03/98
Document and communicate Acceptance test status/coverage | Payroll Department Manager | 9/03/98
Final Test Summary Report | Test Lead | 9/05/98
10. Suspension / Exit Criteria
If any defects are found which seriously impact the test progress, the QA manager may choose to suspend testing. Criteria that will justify test suspension are:
Hardware/software is not available at the times indicated in the project schedule.
Source code contains one or more critical defects which seriously prevent or limit testing progress.
Assigned test resources are not available when needed by the test team.
11. Resumption Criteria
If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the test department before testing is resumed.
12. Dependencies
12.1 Personnel Dependencies
The test team requires experienced testers to develop, perform, and validate tests. The test team will also need the following resources available: application developers and payroll clerks.
12.2 Software Dependencies
The source code must be unit tested and provided within the scheduled time outlined in the Project Schedule.
12.3 Hardware Dependencies
The Mainframe, 10 PCs (with specified hardware/software) as well as the LAN environment need to be available during normal working hours. Any downtime will affect the test schedule.
12.4 Test Data & Database
Test data (mock employee information) and a test database should also be made available to the testers for use during testing.
13. Risks
13.1. Schedule
The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in one of the other phases could result in a subsequent slip in the test phase. Close project management is crucial to meeting the forecasted completion date.
13.2. Technical
Since this is a new payroll system, the old system can be used in the event of a failure. We will run our tests in parallel with the production system so that there is no downtime of the current system.
13.3. Management
Management support is required so that when the project falls behind, the test schedule does not get squeezed to make up for the delay. Management can reduce the risk of delays by supporting the test team throughout the testing phase and by assigning people with the required skill set to this project.
13.4. Personnel
Due to the aggressive schedule, it is very important to have experienced testers on this project. Unexpected turnover can impact the schedule. If attrition does happen, every effort must be made to replace the experienced individual.
13.5 Requirements
The test plan and test schedule are based on the current Requirements Document. Any changes to the requirements could affect the test schedule and will need to be approved by the CCB.
14. Tools
The Acme Automated test tool will be used to help test the new payroll system. We have the licensed product onsite and installed. All of the testers have been trained on the use of this test tool.
15. Documentation
The following documentation will be available at the end of the test phase:
Test Plan
Test Cases
Test Case review
Requirements Validation Matrix
Defect reports
Final Test Summary Report
16. Approvals