CHAPTER 1

INTRODUCTION TO SOFTWARE TESTING

1.0.INTRODUCTION

Testing is the process of executing a program with the intent of finding faults. Who should do this testing and when it should start are important questions that are answered in this text. In the classical waterfall view, testing is the fourth phase of the software development life cycle (SDLC). About 70% of development time is spent on testing. We explore this and many other interesting concepts in this chapter.

1.1.THE TESTING PROCESS

Testing is different from debugging. Removing errors from your programs is debugging; testing aims to locate as-yet-undiscovered errors. We test our programs with both valid and invalid inputs and then compare the expected outputs with the observed outputs (obtained after execution of the software). Please note that testing starts at the requirements analysis phase and continues through the maintenance phase. During requirements analysis and design we do static testing, wherein the SRS is tested to check whether it meets user requirements. We use techniques such as code reviews, code inspections, walkthroughs, and software technical reviews (STRs) for static testing. Dynamic testing starts once the code, or even a single unit (module), is ready; it is called dynamic because the code is actually executed. We use various techniques for dynamic testing, such as black-box, gray-box, and white-box testing. We will study these in the subsequent chapters.

1.2.WHAT IS SOFTWARE TESTING?

The concept of software testing has evolved from simple program “check-out” to a broad set of activities that cover the entire software life-cycle.

There are five distinct levels of testing that are given below:

a.Debug: It is defined as the successful correction of a failure.

b.Demonstrate: The process of showing that major features work with typical input.

c.Verify: The process of finding as many faults in the application under test (AUT) as possible.

d.Validate: The process of finding as many faults in requirements, design, and AUT.

e.Prevent: To avoid errors in development of requirements, design, and implementation by self-checking techniques, including “test before design.”

There are various definitions of testing that are given below:

“Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements.”

[IEEE 83a]

OR

“Software testing is the process of executing a program or system with the intent of finding errors.”

[Myers]

OR

“It involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.”

[Hetzel]

Testing is NOT:

a.The process of demonstrating that errors are not present.

b.The process of showing that a program performs its intended functions correctly.

c.The process of establishing confidence that a program does what it is supposed to do.

All of these definitions are incorrect because, with them as guidelines, one would tend to operate the system in a normal manner to see if it works, unconsciously choosing normal/correct test data that would prevent the system from failing. Besides, it is not possible to certify that a system has no errors, simply because it is almost impossible to detect all errors.

So, simply stated: “Testing is basically a task of locating errors.”

It may be:

a.Positive testing: Operate the application as it should be operated. Does it behave normally? Use a proper variety of legal test data, including data values at the boundaries, to test if it fails. Check actual test results against the expected ones. Are the results correct? Does the application function correctly?

b.Negative testing: Test for abnormal operations. Does the system fail/crash? Test with illegal or abnormal data. Intentionally attempt to make things go wrong and to discover/detect—“Does the program do what it should not do? Does it fail to do what it should?”

c.Positive view of negative testing: The job of testing is to discover errors before the user does. A good tester is one who is successful in making the system fail. Mentality of the tester has to be destructive—opposite to that of the creator/author, which should be constructive.
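The positive/negative distinction can be made concrete with a small sketch (the `divide` function and its checks are hypothetical, written here only to illustrate the two styles):

```python
def divide(a, b):
    """Return a / b; reject the illegal case b == 0."""
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

# Positive testing: legal data, including a boundary value (b = 1).
assert divide(8, 2) == 4.0
assert divide(8, 1) == 8.0

# Negative testing: illegal data. Does the program fail gracefully,
# or does it do what it should not do?
try:
    divide(8, 0)
    handled = False
except ValueError:
    handled = True
assert handled
```

A good tester spends most of the effort on the negative side, trying to make the system fail before the user does.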

One very popular equation of software testing is:

Software Testing = Software Verification + Software Validation
 

As per IEEE definition(s):

 

Software verification: “It is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.”

OR

“It is the process of evaluating, reviewing, inspecting and doing desk checks of work products such as requirement specifications, design specifications and code.”

OR

“It is a human testing activity as it involves looking at the documents on paper.”

Whereas software validation: “It is defined as the process of evaluating a system or component during or at the end of development process to determine whether it satisfies the specified requirements. It involves executing the actual software. It is a computer based testing process.”

Both verification and validation (V&V) are complementary to each other.

As mentioned earlier, good testing expects more than just running a program. Consider a leap-year function written for MS SQL Server:

CREATE FUNCTION f_is_leap_year (@ai_year smallint)
RETURNS smallint
AS
BEGIN
    -- If year is illegal (NULL or negative), return -1
    IF (@ai_year IS NULL) OR (@ai_year <= 0)
        RETURN -1
    IF (((@ai_year % 4) = 0) AND ((@ai_year % 100) <> 0))
        OR ((@ai_year % 400) = 0)
        RETURN 1    -- leap year
    RETURN 0        -- not a leap year
END

We execute the above program with a number of inputs:

TABLE 1.1 Database Table: Test_leap_year

image

In the database table given above there are 15 test cases. But these are not sufficient, as we have not tried all possible inputs. We have not considered trouble spots like:

  i.Removing the statement (@ai_year % 400 = 0), which would result in a Y2K-style century leap-year problem.

 ii.Entering the year in float format, like 2010.11.

iii.Entering the year as a character or as a string.

iv.Entering the year as NULL or zero (0).

This list can also grow further. These are our trouble spots or critical areas. We wish to locate these areas and fix these problems before our customer does.
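The same trouble spots can be exercised with a table-driven test in Python (this Python re-implementation and its test table are a sketch, not part of the original SQL example):

```python
def is_leap_year(year):
    """Return 1 for a leap year, 0 otherwise, -1 for illegal input."""
    # Guard against the trouble spots: NULL (None), zero or negative,
    # float-format, and string years.
    if isinstance(year, bool) or not isinstance(year, int) or year <= 0:
        return -1
    if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0:
        return 1
    return 0

# Table-driven tests: each row is (input, expected output).
test_table = [
    (2000, 1),      # century leap year: catches the Y2K-style bug
    (1900, 0),      # century non-leap year
    (2012, 1),
    (2011, 0),
    (None, -1),     # NULL
    (0, -1),        # zero
    (-4, -1),       # negative
    (2010.11, -1),  # float format
    ("2012", -1),   # string
]
for year, expected in test_table:
    assert is_leap_year(year) == expected, (year, expected)
```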

1.3.WHY SHOULD WE TEST? WHAT IS THE PURPOSE?

Testing is necessary. Why?

1.The Technical Case:

a.Competent developers are not infallible.

b.The implications of requirements are not always foreseeable.

c.The behavior of a system is not necessarily predictable from its components.

d.Languages, databases, user interfaces, and operating systems have bugs that can cause application failures.

e.Reusable classes and objects must be trustworthy.

2.The Business Case:

a.If you don’t find bugs your customers or users will.

b.Post-release debugging is the most expensive form of development.

c.Buggy software hurts operations, sales, and reputation.

d.Buggy software can be hazardous to life and property.

3.The Professional Case:

a.Test case design is a challenging and rewarding task.

b.Good testing allows confidence in your work.

c.Systematic testing allows you to be most effective.

d.Your credibility is increased and you have pride in your efforts.

4.The Economics Case: Practically speaking, defects get introduced in every phase of SDLC. Pressman has described a defect amplification model wherein he says that errors get amplified by a certain factor if that error is not removed in that phase only. This may increase the cost of defect removal. This principle of detecting errors as close to their point of introduction as possible is known as phase containment of errors.

image

FIGURE 1.1 Efforts During SDLC.

5.To Improve Quality: As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle systems to go awry, and halted trading on the stock market. Bugs can kill. Bugs can cause disasters.

In a computerized embedded world, the quality and reliability of software is a matter of life and death. This can be achieved only if thorough testing is done.

6.For Verification and Validation (V&V): Testing can serve as metrics. It is heavily used as a tool in the V&V process. We can compare the quality among different products under the same specification based on results from the same test.

Good testing can provide measures for all relevant quality factors.

7.For Reliability Estimation: Software reliability has important relationships with many aspects of software, including the structure and the amount of testing done to the software. Based on an operational profile (an estimate of the relative frequency of use) of various inputs to the program, testing can serve as a statistical sampling method to gain failure data for reliability estimation.
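As an illustration only, with an invented operational profile and invented per-class failure rates, statistical sampling for reliability estimation might look like this:

```python
import random

random.seed(1)  # make the sampling reproducible

# Hypothetical operational profile: relative frequency of input classes.
profile = {"query": 0.70, "update": 0.25, "admin": 0.05}
# Hypothetical failure probability observed for each input class.
failure_rate = {"query": 0.001, "update": 0.010, "admin": 0.050}

def estimate_reliability(n_samples=100_000):
    """Draw inputs per the profile; estimate P(a run does not fail)."""
    classes = list(profile)
    weights = [profile[c] for c in classes]
    failures = 0
    for _ in range(n_samples):
        cls = random.choices(classes, weights=weights)[0]
        if random.random() < failure_rate[cls]:
            failures += 1
    return 1 - failures / n_samples

print(round(estimate_reliability(), 3))  # close to the weighted rate 0.9943
```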

Recent Software Failures

a.On May 31, 2012, HT reported the failure of the air traffic management software, Auto Trac-III, at Delhi Airport, describing the system as unreliable. This ATC software was installed in 2010 as Auto Trac-II (its older version). Since then it had faced many problems due to inadequate testing. Some of the snags were:

1.May 28, 2011, snag hits radar system of ATC.

2.Feb. 22, 2011, ATC goes blind for 10 minutes with no data about arriving or departing flights.

3.July 28, 2010, radar screens at ATC go blank for 25 minutes after system displaying flight data crashes.

4.Feb. 10, 2010, one of the radar scopes stops working at ATC.

5.Jan. 27, 2010, a screen goes blank at ATC due to a technical glitch.

6.Jan. 15, 2010, radar system collapses due to software glitch. ATC officials manually handle the aircraft.

b.The case of a 2010 Toyota Prius that had a software bug that caused braking problems on bumpy roads.

c.In the case of the Therac-25 radiation therapy machine, six cancer patients were given massive overdoses because of a software fault.

d.A breach of the PlayStation Network caused a loss of $170 million to Sony Corp.

Why did these failures happen?

As we know, software testing constitutes about 40% of the overall effort and 25% of the overall software budget. Software defects are introduced during the SDLC due to poor-quality requirements, design, and code. Sometimes, due to lack of time and inadequate testing, some defects are left behind, only to be found later by users. Software is a ubiquitous product; 90% of people use software in their everyday life. Software has high failure rates due to poor quality.

Smaller companies that don’t have deep pockets can get wiped out because they did not pay enough attention to software quality and conduct the right amount of testing.

1.4.WHO SHOULD DO TESTING?

As mentioned earlier, testing starts right at the beginning, which implies that testing is everyone’s responsibility. By “everyone,” we mean all project team members, so we cannot rely on one person alone. Naturally, it is a team effort. We cannot designate only the tester as responsible; the developers are responsible too. But developers tend not to spot errors in code they have written themselves, which is why independent testers are also needed.

1.5.HOW MUCH SHOULD WE TEST?

Consider a while loop whose body has three paths. If this loop is executed once, we have 3 paths; if it is executed twice, (3 × 3) paths; and so on. Counting also the path that skips the loop entirely, the total number of paths through such code will be:

= 1 + 3 + (3 × 3) + (3 × 3 × 3) + ...

= 1 + Σ 3^n

(where n > 0)

This means an infinite number of test cases. Thus, testing can never be 100% exhaustive.
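The growth of this sum can be checked numerically (a small sketch; the cutoff of 20 iterations is arbitrary):

```python
def total_paths(n):
    """Paths through a 3-path loop body executed 0..n times:
    1 + 3 + 3**2 + ... + 3**n."""
    return sum(3 ** k for k in range(n + 1))

for n in (1, 2, 5, 10, 20):
    print(n, total_paths(n))
# By n = 20 the count already exceeds 5 billion, which is why
# exhaustive path testing is infeasible.
```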

1.6.SELECTION OF GOOD TEST CASES

Designing a good test case is a complex art. It is complex because:

a.Different types of test cases are needed for different classes of information.

b.All test cases within a test suite will not be equally good; test cases may be good in a variety of ways.

c.People create test cases according to certain testing styles like domain testing or risk-based testing. And good domain tests are different from good risk-based tests.

Brian Marick coined a new term for a lightly documented test case: the test idea. According to Brian, “A test idea is a brief statement of something that should be tested.” For example, if we are testing a square-root function, one test idea would be “test a number less than zero.” The idea here is to check whether the code handles an error case.
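Marick’s square-root test idea translates directly into an executable check (the wrapper `safe_sqrt` and its error convention are assumptions for illustration):

```python
import math

def safe_sqrt(x):
    """Return the square root of x, or None for the error case x < 0."""
    if x < 0:
        return None  # the error case the test idea targets
    return math.sqrt(x)

# Test idea: "test a number less than zero."
assert safe_sqrt(-1) is None
# A companion positive check on a known value.
assert safe_sqrt(9) == 3.0
```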

Cem Kaner said, “The best test cases are the ones that find bugs.” Our efforts should be on the test cases that find issues. Do broad or deep coverage testing on the trouble spots.

A test case is a question that you ask of the program. The point of running the test is to gain information like whether the program will pass or fail the test.

1.7.MEASUREMENT OF TESTING

There is no single scale available to measure testing progress. A good project manager (PM) wants the worst conditions to occur at the very beginning of the project instead of in the later phases. If errors are large in number, we can say either that testing was not done thoroughly or that it was done so thoroughly that all the errors were flushed out. So there is no standard way to measure our testing process. But metrics can be computed at the organizational, process, project, and product levels. Each set of these measurements has its value in monitoring, planning, and control.

NOTE

Metrics are assisted by four core components: schedule, quality, resources, and size.

1.8.INCREMENTAL TESTING APPROACH

To be effective, a software tester should be knowledgeable in two key areas:

1.Software testing techniques

2.The application under test (AUT)

For each new testing assignment, a tester must invest time in learning about the application. A tester with no experience must also learn testing techniques, including general testing concepts and how to define test cases. Our goal is to define a suitable list of tests to perform within a tight deadline. There are 8 stages for this approach:

Stage 1: Exploration

Purpose: To gain familiarity with the application

Stage 2: Baseline test

Purpose: To devise and execute a simple test case

Stage 3: Trends analysis

Purpose: To evaluate whether the application performs as expected when actual output cannot be predetermined

Stage 4: Inventory

Purpose: To identify the different categories of data and create a test for each category item

Stage 5: Inventory combinations

Purpose: To combine different input data

Stage 6: Push the boundaries

Purpose: To evaluate application behavior at data boundaries

Stage 7: Devious data

Purpose: To evaluate system response when specifying bad data

Stage 8: Stress the environment

Purpose: To attempt to break the system

The schedule is tight, so we may not be able to perform all of the stages. The time permitted by the delivery schedule determines how many stages one person can perform. After executing the baseline test, later stages could be performed in parallel if more testers are available.

1.9.BASIC TERMINOLOGY RELATED TO SOFTWARE TESTING

We must define the following terminologies one by one:

1.Error (or mistake or bug): People make errors. When people make mistakes while coding, we call these mistakes bugs. Errors tend to propagate: a requirements error may be magnified during design and further amplified during coding. So, an error is a mistake made during the SDLC.

2.Fault (or defect): A missing or incorrect statement in a program resulting from an error is a fault. So, a fault is the representation of an error, where representation means the mode of expression, such as narrative text, data flow diagrams, hierarchy charts, etc. Defect is a good synonym for fault. Faults can be elusive, and they require fixes.

3.Failure: A failure occurs when a fault executes. The manifested inability of a system or component to perform a required function within specified limits is known as a failure. A failure is evidenced by incorrect output, abnormal termination, or unmet time and space constraints. It is a dynamic process.

So, Error (or mistake or bug) → Fault (or defect) → Failure.
For example,

Error (e.g., * replaced by /) → Fault (e.g., C = A/B) → Failure (e.g., C = 2 instead of 8)
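The chain can be reproduced in a few lines (a sketch mirroring the `*`-replaced-by-`/` mistake):

```python
def compute_c(a, b):
    # Error: the programmer typed / where * was intended.
    # Fault: the resulting incorrect statement now sits in the code.
    return a / b

# Failure: the fault executes and the incorrect output is observed.
c = compute_c(4, 2)
assert c == 2.0      # observed: 2
assert c != 4 * 2    # expected: 8, so the failure is evident
```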

4.Incident: When a failure occurs, it may or may not be readily apparent to the user. An incident is the symptom associated with a failure that alerts the user to the occurrence of a failure. It is an unexpected occurrence that requires further investigation. It may not need to be fixed.

5.Test: Testing is concerned with errors, faults, failures, and incidents. A test is the act of exercising software with test cases. A test has two distinct goals—to find failures or to demonstrate correct execution.

6.Test case: A test case has an identity and is associated with program behavior. A test case also has a set of inputs and a list of expected outputs. The essence of software testing is to determine a set of test cases for the item to be tested.

The test case template is shown below.

image

FIGURE 1.2 Test Case Template.

There are two types of inputs:

a.Preconditions: Circumstances that hold prior to test case execution.

b.Actual inputs: Those identified by some testing method.

Expected outputs are also of two types:

a.Post conditions

b.Actual outputs

The act of testing entails establishing the necessary preconditions, providing the test case inputs, observing the outputs, and then comparing these with the expected outputs to determine whether the test passed.
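The anatomy just described (identity, preconditions, actual inputs, expected outputs) can be sketched as a simple record; the field names and the sample leap-year check are illustrative, not a standard template:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: str             # identity
    purpose: str             # reason for being
    preconditions: list      # circumstances holding before execution
    inputs: dict             # actual inputs from some testing method
    expected_output: object  # expected output / post-condition

def run_test(tc, function):
    """Establish inputs, observe the output, compare with expected."""
    observed = function(**tc.inputs)
    return "PASS" if observed == tc.expected_output else "FAIL"

def is_leap(year):
    return year % 400 == 0 or (year % 4 == 0 and year % 100 != 0)

tc = TestCase("TC-01", "century boundary for leap years",
              ["system is started"], {"year": 2000}, True)
assert run_test(tc, is_leap) == "PASS"
```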

The remaining information in a test case primarily supports the testing team. Test cases should have an identity and a reason for being. It is also useful to record the execution history of a test case, including when and by whom it was run, the pass/fail result of each execution, and the version of the software on which it was run. This makes it clear that test cases are valuable, at least as valuable as source code. Test cases need to be developed, reviewed, used, managed, and saved. So, we can say that test cases occupy a central position in testing.

Test cases for ATM:

Preconditions: System is started.

image

image

image

image

7.Test suite: A collection of test scripts or test cases that is used for validating bug fixes (or finding new bugs) within a logical or physical area of a product. For example, an acceptance test suite contains all of the test cases that were used to verify that the software has met certain predefined acceptance criteria.

8.Test script: The step-by-step instructions that describe how a test case is to be executed. It may contain one or more test cases.

9.Testware: It includes all of the testing documentation created during the testing process, for example, the test specification, test scripts, test cases, test data, and the environment specification.

10.Test oracle: Any means used to predict the outcome of a test.

11.Test log: A chronological record of all relevant details about the execution of a test.

12.Test report: A document describing the conduct and results of testing carried out for a system.

1.10.TESTING LIFE CYCLE

image

FIGURE 1.3 A Testing Life Cycle.

In the development phase, three opportunities arise for errors to be made resulting in faults that propagate through the remainder of the development process. The first three phases are putting bugs IN, the testing phase is finding bugs, and the last three phases are getting bugs OUT. The fault resolution step is another opportunity for errors and new faults. When a fix causes formerly correct software to misbehave, the fix is deficient.

1.11.WHEN TO STOP TESTING?

Testing is potentially endless. We cannot test until all defects are unearthed and removed. It is simply impossible. At some point, we have to stop testing and ship the software. The question is when?

Realistically, testing is a trade-off between budget, time, and quality. It is driven by profit models.

The pessimistic approach is to stop testing whenever some or any of the allocated resources—time, budget, or test cases—are exhausted.

The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost.

[Yang]
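The optimistic rule can be written as a one-line decision function (the thresholds and estimates fed to it are hypothetical):

```python
def should_stop_testing(reliability, required_reliability,
                        expected_benefit, testing_cost):
    """Optimistic stopping rule: stop when reliability meets the
    requirement, or when further testing no longer pays for itself."""
    return (reliability >= required_reliability
            or expected_benefit < testing_cost)

assert should_stop_testing(0.999, 0.995, 500, 2000)       # reliability met
assert should_stop_testing(0.90, 0.995, 500, 2000)        # benefit < cost
assert not should_stop_testing(0.90, 0.995, 5000, 2000)   # keep testing
```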

1.12.PRINCIPLES OF TESTING

To make software testing effective and efficient we follow certain principles. These principles are stated below.

1.Testing should be based on user requirements: This is in order to uncover any defects that might cause the program or system to fail to meet the client’s requirements.

2.Testing time and resources are limited: Avoid redundant tests.

3.Exhaustive testing is impossible: As stated by Myers, it is impossible to test everything due to the huge data space and the large number of paths that a program flow might take.

4.Use effective resources to test: This represents the most suitable tools, procedures, and individuals to conduct the tests. The test team should use tools like:

a.Deja Gnu: It is a cross-platform testing framework for interactive or batch-oriented applications. It is designed for regression and embedded system testing and runs on UNIX platforms.

5.Test planning should be done early: This is because test planning can begin independently of coding and as soon as the client requirements are set.

6.Testing should begin “in small” and progress toward testing “in large”: The smallest programming units (or modules) should be tested first and then expanded to other parts of the system.

7.Testing should be conducted by an independent third party.

8.All tests should be traceable to customer requirements.

9.Assign best people for testing. Avoid programmers.

10.Test should be planned to show software defects and not their absence.

11.Prepare test reports including test cases and test results to summarize the results of testing.

12.Advance test planning is a must and should be updated in a timely manner.

1.13.LIMITATIONS OF TESTING

1.Testing can show the presence of errors—not their absence.

2.No matter how hard you try, you will never find the last bug in an application.

3.The domain of possible inputs is too large to test.

4.There are too many possible paths through the program to test.

5.In short, maximum coverage through minimum test-cases. That is the challenge of testing.

6.Various testing techniques are complementary in nature and it is only through their combined use that one can hope to detect most errors.

NOTE

To see some of the most popular testing tools of 2017, visit the following site: https://www.guru99.com/testing-tools.html

1.14.AVAILABLE TESTING TOOLS, TECHNIQUES, AND METRICS

There are an abundance of software testing tools that exist. Some of the early tools are listed below:

a.Mothra: It is an automated mutation testing toolset developed at Purdue University. Using Mothra, the tester can create and execute test cases, measure test case adequacy, determine input-output correctness, locate and remove faults or bugs, and control and document the test.

b.NuMega’s BoundsChecker and Rational’s Purify: These are run-time checking and debugging aids. They can both check for and protect against memory leaks and pointer problems.

c.Ballista COTS Software Robustness Testing Harness [Ballista]: It is a full-scale automated robustness testing tool. It gives quantitative measures of robustness comparisons across operating systems. The goal is to automatically test and harden commercial off-the-shelf (COTS) software against robustness failures.

SUMMARY

1.Software testing is an art. Most of the testing methods and practices are not very different from those of 20 years ago. Testing is nowhere near maturity, although there are many tools and techniques available to use. Good testing also requires a tester’s creativity, experience, and intuition, together with proper techniques.

2.Testing is more than just debugging. It is not only used to locate errors and correct them. It is also used in validation, verification process, and reliability measurement.

3.Testing is expensive. Automation is a good way to cut down cost and time. Testing efficiency and effectiveness are the criteria for coverage-based testing techniques.

4.Complete testing is infeasible. Complexity is the root of the problem.

5.Testing may not be the most effective method to improve software quality.

MULTIPLE CHOICE QUESTIONS

1.Software testing is the process of

a.Demonstrating that errors are not present

b.Executing the program with the intent of finding errors

c.Executing the program to show that it executes as per SRS

d.All of the above.

2.Programmers make mistakes during coding. These mistakes are known as

a.Failures

b.Defects

c.Bugs

d.Errors

3.Software testing is nothing else but

a.Verification only

b.Validation only

c.Both verification and validation

d.None of the above.

4.Test suite is a

a.Set of test cases

b.Set of inputs

c.Set of outputs

d.None of the above.

5.Which one is not the verification activity?

a.Reviews

b.Path testing

c.Walkthroughs

d.Acceptance testing

6.A break in the working of a system is called a(n)

a.Defect

b.Failure

c.Fault

d.Error

7.One fault may lead to

a.One failure

b.No failure

c.Many failures

d.All of the above.

8.Verification is

a.Checking product with respect to customer’s expectations

b.Checking product with respect to SRS

c.Checking product with respect to the constraints of the project

d.All of the above.

9.Validation is

a.Checking the product with respect to customer’s expectations

b.Checking the product with respect to specification

c.Checking the product with respect to constraints of the project

d.All of the above.

10.Which one of the following is not a testing tool?

a.Deja Gnu

b.TestLink

c.TestRail

d.SOLARIS

ANSWERS

1.b.

2.c.

3.c.

4.a.

5.d.

6.b.

7.d.

8.b.

9.a.

10.d.

CONCEPTUAL SHORT QUESTIONS WITH ANSWERS

Q. 1.Are there some myths associated to software testing?

Ans.Some myths related to software testing are as follows:

1.Testing is a structured waterfall idea: Testing may be a purely independent, incremental, and iterative activity. Its nature depends upon the context and the adopted strategy.

2.Testing is trivial: Adequate testing requires a complete understanding of the application under test (AUT) and an appreciation of testing techniques.

3.Testing is not necessary: One can minimize the programming errors but cannot eliminate them. So, testing is necessary.

4.Testing is time consuming and expensive: We remember a saying—“Pay me now or pay me much more later” and this is true in the case of software testing as well. It is better to apply testing strategies now or else defects may appear in the final product which is more expensive.

5.Testing is a destructive process: Testing software is, in fact, a diagnostic and creative activity that promotes quality.

Q. 2.Give one example of

a.Interface specification bugs

b.Algorithmic bugs

c.Mechanical bugs

Ans.a.Examples of interface specification bugs are:

 i.Mismatch between what the client needs and what the server offers.

ii.Mismatch between requirements and implementation.

b.Examples of algorithmic bugs are:

 i.Missing initialization.

ii.Branching errors.

c.Examples of mechanical bugs are:

 i.Documentation not matching with the operating procedures.

Q. 3.Why are developers not good testers?

Ans.A person checking his own work using his own documentation has some disadvantages:

 i.Misunderstandings will not be detected. This is because the checker will assume that what the other individual heard from him was correct.

 ii.Whether the development process is being followed properly or not cannot be detected.

iii.The individual may be “blinded” into accepting erroneous system specifications and coding because he falls into the same trap during testing that led to the introduction of the defect in the first place.

iv.It may result in underestimation of the need for extensive testing.

v.It discourages the need for allocation of time and effort for testing.

Q. 4.Are there any constraints on testing?

Ans.The following are the constraints on testing:

 i.Budget and schedule constraints

 ii.Changes in technology

iii.Limited tester’s skills

iv.Software risks

Q. 5.What are test matrices?

Ans.A test matrix shows the inter-relationship between functional events and tests. A complete test matrix defines the conditions that must be tested during the test process.

The left side of the matrix shows the functional events and the top identifies the tests that occur in those events. Within the matrix, cells are the process that needs to be tested. We can even cascade these test matrices.

Q. 6.What is a process?

Ans.It is defined as a set of activities that represent the way work is performed. The outcome of a process is usually a product or service. For example:

Examples of IT processes        Outcomes
1.Analyze business needs        Needs statement
2.Run job                       Executed job
3.Unit test                     Defect-free unit

Q. 7.Explain PDCA view of a process?

Ans.A PDCA cycle is a conceptual view of a process. It is shown in Figure 1.4.
It has four components:

image

FIGURE 1.4

i.P-Devise a plan: Define your objective and find the conditions and methods required to achieve your objective. Express a specific objective numerically.

ii.D-Execute (or Do) the plan: Create the conditions and perform the necessary teaching and training to execute the plan. Make sure everyone thoroughly understands the objectives and the plan. Teach workers all of the procedures and skills they need to fulfill the plan, along with a thorough understanding of the job. Then they can perform the work according to these procedures.

iii.C-Check the results: Check to determine whether work is progressing according to the plan and that the expected results are obtained. Also, compare the results of the work with the objectives.

iv.A-Take necessary action: If the check finds an abnormality, i.e., if the actual value differs from the target value, then search for its cause and try to mitigate it. This will prevent the recurrence of the defect.

Q. 8.Explain the V-model of testing?

Ans.According to the waterfall model, testing is a post-development activity. The spiral model took this one step further by breaking the product into increments, each of which can be tested separately. The V-model, however, brings in a new perspective: different types of testing apply at different levels. The V-model splits testing into two parts, design and execution. Please note that test design is done early, while test execution is done at the end. This early design of tests reduces overall delay by increasing parallelism between development and testing, and it enables better and more timely validation of individual phases. The V-model is shown in Figure 1.5.

image

FIGURE 1.5

The levels of testing echo the levels of abstraction found in the waterfall model of the SDLC. Please note here, especially in terms of functional testing, that the three levels of definition (specification, initial design, and detailed design) correspond directly to three levels of testing: unit, integration, and system testing.

A practical relationship exists between the levels of testing versus black- and white-box testing. Most practitioners say that structural testing is most appropriate at the unit level while functional testing is most appropriate at the system level.

REVIEW QUESTIONS

1.What is software testing?

2.Distinguish between positive and negative testing?

3.Software testing is software verification plus software validation. Discuss.

4.What is the need of testing?

5.Who should do testing? Discuss various people and their roles during development and testing.

6.What should we test?

7.What criteria should we follow to select test cases?

8.Can we measure the progress of testing?

9.“Software testing is an incremental process.” Justify the statement.

10.Define the following terms:

a.Error

b.Fault

c.Failure

d.Incident

e.Test

f.Test case

g.Test suite

h.Test script

i.Testware

11.Explain the testing life cycle?

12.When should we stop testing? Discuss pessimistic and optimistic approaches.

13.Discuss the principles of testing.

14.What are the limitations of testing?

15.a.What is debugging?

b.Why is exhaustive testing not possible?

16.What are modern testing tools?

17.Write a template for a typical test case.

18.Differentiate between error and fault.

19.What is software testing? How is it different from debugging?

20.Differentiate between verification and validation?

21.Explain the concept of a test case and test plan.

22.a.Differentiate between positive testing and negative testing.

b.Why is 100% testing not possible through either black-box or white-box testing techniques?

c.Name two testing tools used for functional testing.

d.What is static testing? Name two techniques to perform static testing.

23.“Software testing is an incremental process.” Justify the statement.

24.a.Why are developers not good testers?

b.Which approach should be followed to stop testing?

25.a.Discuss the role of software testing during the software life cycle and why it is so difficult?

b.What should we test? Comment on this statement. Illustrate the importance of testing.

c.Will exhaustive testing (even if possible for very small programs) guarantee that the program is 100% correct?

d.Define the following terms:

  i.Test suite

 ii.Bug

iii.Mistake

iv.Software failure