Testing Tools in Software Testing
Software testing is the process of executing a program or system with the intent of finding errors. Its purpose can be quality assurance, verification and validation, or reliability estimation; testing can also serve as a generic metric. Correctness testing and reliability testing are two major areas of testing. In practice, software testing is a trade-off among budget, time, and quality.
Where software differs from physical systems is in the manner in which it fails. Most physical systems fail in a fixed and reasonably small set of ways. By contrast, software can fail in many bizarre ways, and detecting all of the different failure modes of software is generally infeasible. Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete.
So once the software is shipped, the design defects -- or bugs -- remain buried and latent until activation. Bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable, and humans have only a limited ability to manage complexity.
It is also true that for any complex system, design defects can never be completely ruled out. Discovering the design defects in software is equally difficult, for the same reason: complexity.
Because software and other digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All possible values would need to be tested and verified, but such complete testing is infeasible. For a realistic software module, the complexity can be far beyond the simple example sketched below. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions all become possible input parameters.
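A tiny, hypothetical sketch makes the point (the function, its valid range, and the defect are all invented for illustration): the code passes tests at both boundaries of its input range yet fails on an interior value, a failure mode a continuous physical system could not exhibit.

```python
def scaled_inverse(x: int) -> float:
    """Hypothetical function, specified for integer inputs 1..100."""
    return 100.0 / (x - 37)  # latent defect: division by zero when x == 37

# Boundary-value tests pass, so the function looks correct...
assert scaled_inverse(1) < 0      # lower boundary: OK
assert scaled_inverse(100) > 0    # upper boundary: OK

# ...but an interior input still fails:
# scaled_inverse(37) raises ZeroDivisionError.
```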
A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it didn't work for previously.
But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted; the expense of doing so is often prohibitive (a regression-testing sketch follows this passage). An interesting analogy parallels the difficulty of software testing with pesticides, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone will not guarantee better software, because the Complexity Barrier [Beizer90] principle states: software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity.
By eliminating the previous easy bugs you allowed another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs.
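The re-testing discipline described above can be made concrete with a minimal regression-testing sketch using Python's built-in unittest; the function and the bug it once had are invented for illustration. Old passing tests are kept and re-run after every fix, and each repaired defect gets its own test so the failure can never return unnoticed.

```python
import unittest

def parse_percentage(text: str) -> float:
    """Hypothetical function under test; a trailing-whitespace bug was fixed here."""
    return float(text.strip().rstrip("%"))

class RegressionSuite(unittest.TestCase):
    # Pre-existing tests: re-run after every change, because a fix for
    # one case can silently break another.
    def test_plain_number(self):
        self.assertEqual(parse_percentage("42"), 42.0)

    def test_percent_sign(self):
        self.assertEqual(parse_percentage("42%"), 42.0)

    # Test added together with the fix, pinning the repaired defect down.
    def test_trailing_space_regression(self):
        self.assertEqual(parse_percentage("42% "), 42.0)

if __name__ == "__main__":
    unittest.main()
```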
Regardless of these limitations, testing is an integral part of software development, broadly deployed in every phase of the software development cycle. Testing is usually performed for the purposes discussed below: quality assurance, reliability estimation, and as a source of metrics. As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse.
Bugs can kill. Bugs can cause disasters. The so-called Y2K bug has given birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world doesn't come to a screeching halt on the first day of the next century.
Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by programmers to find design defects. The imperfection of human nature makes it almost impossible to get a moderately complex program correct the first time.
Finding the problems and getting them fixed [Kaner93] is the purpose of debugging in the programming phase. Testing can also serve as a metric: testers can make claims based on interpretations of the testing results, namely that the product works under certain situations or that it does not. We can also compare quality among different products built to the same specification, based on results from the same test. We cannot test quality directly, but we can test related factors to make quality visible.
Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail: among the most frequently cited quality considerations are correctness, reliability, usability, and integrity (functionality); efficiency and testability (engineering); and flexibility and maintainability (adaptability). Good testing provides measures for all relevant factors, and the importance of any particular factor varies from application to application.
Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In the typical business system usability and maintainability are the key factors, while for a one-time scientific program neither may be significant.
Our testing, to be fully effective, must be geared to measuring each relevant factor, thus forcing quality to become tangible and visible. Tests whose purpose is to validate that the product works are called clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases; a finite number of tests cannot validate that the software works for all situations.
On the contrary, a single failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aimed at breaking the software, or showing that it does not work; a sketch contrasting the two kinds of tests follows below. A piece of software must have sufficient exception-handling capability to survive a significant level of dirty tests. A testable design is a design that can be easily validated, falsified, and maintained. Because testing is a rigorous effort that requires significant time and cost, design for testability is also an important design rule for software development.
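In this sketch, a hypothetical withdraw function gets one clean test confirming it works on valid input, and two dirty tests confirming it survives invalid input by raising an exception instead of misbehaving.

```python
import unittest

def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function: returns the new balance, or raises on bad input."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_valid_withdrawal(self):        # clean (positive) test
        self.assertEqual(withdraw(100.0, 30.0), 70.0)

    def test_negative_amount(self):         # dirty (negative) test
        with self.assertRaises(ValueError):
            withdraw(100.0, -5.0)

    def test_overdraft(self):               # dirty (negative) test
        with self.assertRaises(ValueError):
            withdraw(100.0, 500.0)

if __name__ == "__main__":
    unittest.main()
```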
Software reliability has important relations with many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program [Lyu95]), testing can serve as a statistical sampling method to gain failure data for reliability estimation.
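A rough sketch of that sampling idea, with an entirely invented operational profile and an artificially injected failure: test inputs are drawn with the same relative frequencies real users would produce, and the observed failure fraction serves as a crude per-run reliability estimate.

```python
import random

# Hypothetical operational profile: relative frequency of input classes.
profile = {"small_order": 0.70, "bulk_order": 0.25, "refund": 0.05}

def process(kind: str) -> bool:
    """Stand-in for the system under test; True means correct behavior."""
    # Injected defect: refunds fail about 10% of the time.
    return not (kind == "refund" and random.random() < 0.10)

random.seed(1)
trials = 10_000
kinds = random.choices(list(profile), weights=list(profile.values()), k=trials)
failures = sum(not process(k) for k in kinds)

# Failure probability per run, weighted by how the software is actually used.
print(f"estimated per-run failure rate: {failures / trials:.4f}")
```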
Software testing is not mature. It remains an art, because we have not yet been able to make it a science. We are still using the same testing techniques invented years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially where human lives are at stake.
Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct.
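The classic diagonalization argument behind that claim can even be written as code. The sketch below assumes, for contradiction, a perfect halts oracle (which cannot actually be implemented) and derives the contradiction in the comments.

```python
def halts(program, data) -> bool:
    """Assumed, for contradiction, to decide perfectly whether
    program(data) ever terminates. No such function can exist."""
    ...

def paradox(program):
    # Loop forever exactly when the oracle predicts the program halts on itself.
    if halts(program, program):
        while True:
            pass

# Consider paradox(paradox): if it halts, halts() returned False and thus
# predicted non-termination; if it loops forever, halts() returned True and
# predicted termination. Either way the "perfect" oracle is wrong.
```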
No verification system can verify every correct program, and we can never be certain that a verification system is itself correct. There is a plethora of testing methods and techniques, serving multiple purposes in different life-cycle phases.
Classified by purpose, software testing can be divided into: correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing can be classified into the following categories: requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.
Correctness is the minimum requirement of software and the essential purpose of testing. Correctness testing needs some type of oracle to tell right behavior from wrong. The tester may or may not know the inside details of the software module under test. Therefore, either a white-box or a black-box point of view can be taken in testing software, as the sketch below illustrates.
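In the simplest case the oracle is a trusted reference implementation. The black-box sketch below (the hand-written sort is invented for illustration) compares a module under test against Python's built-in sorted on many random inputs, without looking at the module's internals.

```python
import random

def my_sort(items):
    """Hypothetical module under test: a hand-written insertion sort."""
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

# Black-box correctness testing: random inputs, built-in sorted() as the oracle.
random.seed(0)
for _ in range(1000):
    data = [random.randint(-50, 50) for _ in range(random.randrange(20))]
    assert my_sort(data) == sorted(data), f"oracle disagrees on {data}"
```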
We must note that the black-box and white-box ideas are not limited to correctness testing only.

JMeter
JMeter is open-source software used for performance testing, load testing, and functional testing of web applications. JMeter simulates a heavy load on a server by creating large numbers of virtual concurrent users of the web server.
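Conceptually, what such load-testing tools automate can be sketched with Python's standard library alone; the URL and the number of virtual users below are placeholders, and real tools add ramp-up schedules, assertions, and reporting on top.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"   # placeholder: the server under test
VIRTUAL_USERS = 50               # placeholder: concurrency level

def virtual_user(_):
    """One simulated user: a single timed request."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

# All virtual users fire concurrently, like a miniature load test.
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

failures = sum(1 for ok, _ in results if not ok)
mean_latency = sum(t for _, t in results) / len(results)
print(f"{failures} failures, mean latency {mean_latency * 1000:.1f} ms")
```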
LoadNinja
LoadNinja is a cloud-based load and performance testing tool for web applications. With LoadNinja, QA teams can check whether their web servers can sustain a heavy load of users and whether the servers are robust and scalable.
NeoLoad
NeoLoad is an automated performance testing framework for organizations that test continuously, from APIs to applications. It provides testers and developers with automatic test design and maintenance, realistic simulation of user behavior, fast root-cause analysis, and built-in integrations with the software development lifecycle toolchain.
Storage and Retrieval — Beyond the ability to view the history and logs of bugs (handled or not), each bug should be assigned a unique ID, to enable searching for individual issues or tasks.
Intuitive Workflow — This is important for any testing tool: you want your team to learn it quickly and keep using it because they are pleased with it. Otherwise, you will just find yourself back here looking for a bug tracker again.
Open Source or Proprietary Software — While open source can be cheaper, and can (by its nature, with the help of developers) be exactly the solution you need, there are also tangible costs to consider, including setup and maintenance by knowledgeable professionals.
Proprietary software, on the other hand, often comes with built-in support and maintenance, which makes the whole process more pleasant and reliable.
Software-as-a-Service or Self-Hosted — While this decision might not be up to you, and will be influenced by the nature of your organization and management, it is a criterion to be aware of.
Self-hosted might give you a sense of control and security, but how easy will it be to maintain and update, and how much will it cost? In both cases (hosted or cloud), you should consider how easy or complicated migrating elsewhere might be.
Integration with Existing Tools — This should be a core consideration. Unless you are setting something up for the first time, you will want to be sure that the solution you choose integrates with your existing testing tools (other automation tools, test management tools, task management, etc.).
So, ensure that your CI server integrates properly. Other criteria worth comparing:
Script Creation Time — fast or slow?
Object Storage and Maintenance
Image-Based Testing
Continuous Integration Possibilities
Source Code Requirements and Build Security — make sure you can share only what you want, with whom you want.
Result and Error Logging — you must be able to keep track of your efforts and changes.
Continuous Testing — with rapid changes, it is critical to recognize how new code changes have impacted the existing system.
Calabash
When evaluating an accessibility tool, pay attention to the following and compare these points to your requirements:
Guideline Compatibility — different organizations and governments may require conformance with different accessibility standards, and different tools support different standards.
Languages Support
Type of Tool — is it a plugin, does it run as an add-on in your browser, or is it a designated online service?
Reporting Capabilities, Scope, and Format
Wave
Use this dynamic features checklist to help you make a wise decision.
Please find the download links for Software Testing Methodologies listed below:
Introduction: Purpose of testing, dichotomies, the model for testing, consequences of bugs, the taxonomy of bugs.
Flow Graphs and Path Testing: Basic concepts of path testing, predicates, path predicates and achievable paths, path sensitizing, path instrumentation, application of path testing. Transaction Flow Testing: Transaction flows, transaction flow testing techniques. Dataflow Testing: Basics of dataflow testing, strategies in dataflow testing, application of dataflow testing. Logic-Based Testing: Overview, decision tables, path expressions, KV charts, specifications.