Software testing is a validation process that confirms a system works as per the business requirements. It qualifies a system on various aspects such as usability, accuracy, completeness, and efficiency. ANSI/IEEE 1059, the Guide for Software Verification and Validation Plans, is a widely referenced standard in this area.
The testing activity ends when the testing team completes the following milestones:
Test case execution
The successful completion of a full test cycle after the final bug fix marks the end of the testing phase.
If no critical or high-priority defects remain in the system, the planned end date of the validation stage also marks its closure.
Code coverage (CC) ratio
It is the proportion of code covered by automated tests. If the team achieves the intended level of the code coverage (CC) ratio, then it can choose to end the validation.
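The ratio itself is simple arithmetic; a minimal sketch with made-up line counts (real projects obtain these from a coverage tool such as coverage.py or JaCoCo):

```python
def coverage_ratio(lines_executed, lines_total):
    """Percentage of source lines exercised by the automated test suite."""
    return round(100 * lines_executed / lines_total, 1)

# e.g. the test suite executes 850 of 1000 lines -> 85.0% coverage
print(coverage_ratio(850, 1000))
```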
Mean Time Between Failure (MTBF) rate
- Mean time between failure (MTBF) refers to the average amount of time that a device or product functions before failing. This unit of measurement includes only operational time between failures and does not include repair times, assuming the item is repaired and begins functioning again. MTBF figures are often used to project how likely a single unit is to fail within a certain period of time
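Since MTBF counts only operational time, the computation reduces to total uptime divided by the number of failures; a sketch with illustrative numbers:

```python
def mtbf(operational_hours, failures):
    """Average operational time between failures (repair time excluded)."""
    return operational_hours / failures

# A device that accumulated 500 operational hours across 4 failures
print(mtbf(500, 4))  # 125.0 hours
```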
In software testing, verification is a process to confirm that product development is proceeding as per the specifications and using the standard development procedures. The process comprises activities such as reviews, walkthroughs, and inspections.
Validation is a means to confirm that the developed product doesn’t have any bugs and is working as expected. It comprises the following activities:
- Functional testing
- Non-functional testing
Static testing is a white-box testing technique that directs developers to verify their code with the help of a checklist to find errors in it. Developers can start static testing without actually finalizing the application or program. Static testing is more cost-effective than dynamic testing as it covers more areas in a shorter time.
It is a standard software testing approach that requires testers to assess the functionality of the software as per the business requirements. The software is treated as a black box and validated as per the end user’s point of view.
A test plan stores all possible testing activities to ensure a quality product. It gathers data from the product description, requirement, and use case documents.
The test plan document includes the following:
- Testing objectives
- Test scope
- Testing time frame
- Reason for testing
- Entry and exit criteria
- Risk factors
Test coverage is a quality metric to represent the amount (in percentage) of testing completed for a product. It is relevant for both functional and non-functional testing activities. This metric is used to add missing test cases.
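One way to picture this metric is as the share of requirements traced to at least one test case; the leftover set is exactly where missing test cases get added. The requirement IDs below are made up:

```python
# All requirements for the product vs. those already traced to test cases
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
covered = {"REQ-1", "REQ-2", "REQ-4"}

missing = requirements - covered  # gaps that need new test cases
coverage_pct = 100 * len(covered & requirements) / len(requirements)
print(coverage_pct, sorted(missing))  # 75.0 ['REQ-3']
```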
Testing any product 100% is considered practically impossible, but the following steps bring you closer.
- Set a hard limit on the following factors:
- Percentage of test cases passed
- Number of bugs found
- Set a red flag if:
- Test budget is depleted
- Deadlines are breached
- Set a green flag if:
- The entire functionality gets covered in test cases
- All critical and major bugs have a ‘CLOSED’ status
Unit testing has many names such as module testing or component testing.
Many times, it is the developers who test individual units or modules to check if they are working correctly.
Whereas, integration testing validates how well two or more units of software interact with each other.
There are three ways to validate integration:
- Big Bang approach
- Top-down approach
- Bottom-up approach
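A minimal sketch of the unit-versus-integration distinction, using `unittest` and two hypothetical functions (the functions and values are illustrative, not from any real system):

```python
import unittest

# Hypothetical units under test
def to_cents(amount):
    """Unit A: convert a currency amount to integer cents."""
    return round(amount * 100)

def apply_discount(cents, pct):
    """Unit B: apply a whole-percent discount to a price in cents."""
    return cents - cents * pct // 100

class UnitTests(unittest.TestCase):
    # Unit tests validate each module in isolation
    def test_to_cents(self):
        self.assertEqual(to_cents(19.99), 1999)

    def test_apply_discount(self):
        self.assertEqual(apply_discount(1000, 10), 900)

class IntegrationTest(unittest.TestCase):
    # An integration test validates how the two units interact
    def test_discounted_price(self):
        self.assertEqual(apply_discount(to_cents(19.99), 10), 1800)
```

Run with `python -m unittest <file>`; the unit tests would pass even before the other module exists, while the integration test only makes sense once both are in place.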
No. System testing should start only if all modules are in place and they work correctly. However, it should be performed before UAT (user acceptance testing).
Various testing types used by manual testers are as follows:
- Unit testing
- Integration testing
- Regression testing
- Shakeout testing
- Smoke testing
- Functional testing
- Performance testing
- Load testing
- Stress testing
- Endurance testing
- White-box and Black-box testing
- Alpha and Beta testing
- System testing
The test driver is a section of code that calls a software component under test. It is useful in testing that follows the bottom-up approach.
The test stub is a dummy program that integrates with an application to complete its functionality. It is relevant for testing that uses the top-down approach.
- Let’s assume a scenario where we have to test the interface between Modules A and B, but only Module A has been developed. We can still test Module A if we have either the real Module B or a dummy module standing in for it. In this case, that dummy module is called the test stub.
- A stub, however, can’t exchange data with Module A on its own. In such a scenario, we have to move data between the modules using an external piece of code called the test driver.
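The scenario above can be sketched in a few lines; the module names and payloads are hypothetical:

```python
# Real Module A is ready; Module B is not, so we fake it with a stub.
def module_a(payload, module_b):
    """Module A under test: delegates to Module B, then post-processes."""
    return module_b(payload) + "!"

def stub_module_b(payload):
    # Dummy stand-in for the unfinished Module B: returns canned data
    return f"processed:{payload}"

def test_driver():
    # The driver is throwaway code that invokes the unit under test
    # and moves data into it.
    result = module_a("order-42", stub_module_b)
    assert result == "processed:order-42!"
    return result

print(test_driver())
```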
Agile testing is a software testing process that evaluates software from the customers’ point of view. It is favorable as it does not require the development team to complete coding for starting QA. Instead, both coding and testing go hand in hand. However, it may require continuous customer interaction.
It is one of the white-box testing techniques.
Data flow testing emphasizes designing test cases that cover control-flow paths around variable definitions and their uses in the modules. It expects test cases to have the following attributes:
- The input to the module
- The control flow path for testing
- A pair of an appropriate variable definition and its use
- The expected outcome of the test case
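A tiny hypothetical module makes the definition-use pairs concrete: the variable `rate` is defined on two different branches and used once, so each definition-use pair needs its own covering test case.

```python
def shipping_cost(weight, express):
    if express:
        rate = 10   # definition 1 of `rate`
    else:
        rate = 4    # definition 2 of `rate`
    return weight * rate  # use of `rate`

# One test case per definition-use pair:
assert shipping_cost(2, express=True) == 20   # covers definition 1 -> use
assert shipping_cost(2, express=False) == 8   # covers definition 2 -> use
```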
End-to-end testing is a testing strategy to execute tests that cover every possible flow of an application from its start to finish. The objective of performing end-to-end tests is to discover software dependencies and to assert that the correct input is getting passed between various software modules and sub-systems.
When a bug occurs, we can follow the below steps.
- We can run more tests to make sure that the problem has a clear description.
- We can also run a few more tests to ensure that the same problem doesn’t exist with different inputs.
- Once we are certain of the full scope of the bug, we can add details and report it.
Here are the two principal reasons that make it impossible to test a program entirely.
- Software specifications can be subjective and can lead to different interpretations.
- A software program may require too many inputs, outputs, and path combinations.
If the required specifications are not available for a product, then a test plan can be created based on the assumptions made about the product. But we should get all assumptions well-documented in the test plan.
It is suggested to perform regression testing and rerun tests for all the other modules as well. Finally, QA should also carry out system testing.
If the standard documents like System Requirement Specification or Feature Description Document are not available, then QAs may have to rely on the following references, if available.
- A previous version of the application
Another reliable way is to have discussions with the developer and the business analyst. It helps in solving the doubts, and it opens a channel for bringing clarity on the requirements. Also, the emails exchanged could be useful as a testing reference.
Smoke testing is yet another option that would help verify the main functionality of the application. It would reveal some very basic bugs in the application. If none of these work, then we can just test the application from our previous experiences.
Possible differences between retesting and regression testing are as follows:
- We perform retesting to verify defect fixes, whereas regression testing assures that a bug fix does not break other parts of the application.
- Regression test cases verify the functionality of some or all modules.
- Regression testing involves the re-execution of passed test cases, whereas retesting involves the execution of test cases that are in a failed state.
- Retesting has a higher priority than regression testing, but in some cases both get executed in parallel.
Following are some of the key challenges of software testing:
- The lack of availability of standard documents to understand the application
- Lack of skilled testers
- Understanding the requirements: Testers need good listening and comprehension skills to discuss the application requirements with customers.
- The decision-making ability to analyze when to stop testing
- Ability to work under time constraints
- Ability to decide which tests to execute first
- Testing the entire application using an optimized number of test cases
Functional testing covers the following types of validation techniques:
- Unit testing
- Smoke testing
- Sanity testing
- Interface testing
- Integration testing
- System testing
- Regression testing
- Functional testing: It tests the ‘functionality’ and behavior of the software or application under test. A document based on the client’s requirements, called a software specification or requirement specification, is used as a guide to test the application.
- Non-functional testing: When an application works smoothly and efficiently under any condition, as per the user’s expectation, it is stated to be a reliable application. Testing these quality parameters is critical; this type of testing is called non-functional testing.
Software testing life cycle (STLC) proposes the test execution in a planned and systematic manner. In the STLC model, many activities occur to improve the quality of the product.
The STLC model lays down the following steps:
- Requirement Analysis
- Test Planning
- Test Case Development
- Environment Setup
- Test Execution
- Test Cycle Closure
A fault is a condition that causes the software to fail while performing the considered function.
A slip in coding is indicated as an error. An error spotted by a manual tester becomes a defect. A defect that the development team admits is known as a bug. If the built code misses the requirements, then it is a functional failure.
Severity: It represents the gravity/depth of a bug and describes the bug from the application’s point of view.
Priority: It specifies which bug should get fixed first and reflects the user’s point of view.
The criticality of a bug can be low, medium, or high depending on the context.
- User interface defects – Low
- Boundary related defects – Medium
- Error handling defects – Medium
- Calculation defects – High
- Misinterpreted data – High
- Hardware failures – High
- Compatibility issues – High
- Control flow defects – High
- Load conditions – High
Defect detection percentage (DDP) is a type of testing metric. It indicates the effectiveness of a testing process by measuring the ratio of defects discovered before the release and reported after the release by customers.
For example, let’s say the QA team detected 70 defects during the testing cycle and the customer reported 20 more after the release. Then, DDP would be: 70/(70 + 20) = 77.8%
Defect removal efficiency (DRE) is one of the testing metrics. It is an indicator of the efficiency of the development team to fix issues before the release.
It is measured as the ratio of defects fixed to the total number of issues discovered.
For example, let’s say there were 75 defects discovered during the test cycle, and 62 of them had been fixed by the development team at the time of measurement. The DRE would be 62/75 = 82.7%
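Both metrics are simple ratios; a sketch using the defect counts from the two examples, with helper names of our own choosing:

```python
def ddp(found_internally, found_by_customers):
    """Defect detection percentage: share of all defects caught before release."""
    return round(100 * found_internally / (found_internally + found_by_customers), 1)

def dre(defects_fixed, defects_found):
    """Defect removal efficiency: share of discovered defects fixed pre-release."""
    return round(100 * defects_fixed / defects_found, 1)

print(ddp(70, 20))  # 77.8
print(dre(62, 75))  # 82.7
```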
Defect age is the time elapsed between the day the tester discovered a defect and the day the developer got it fixed.
While estimating the age of a defect, consider the following points:
- The day of birth of a defect is the day it got assigned and accepted by the development team.
- The issues which got dropped are out of the scope.
- Age can be both in hours or days.
- The end time is the day the defect got verified and closed, not just the day it got fixed by the development team.
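The rules above reduce to a date difference between acceptance and closure; the dates below are made up:

```python
from datetime import date

def defect_age_days(accepted, closed):
    """Age from the day the defect was assigned and accepted by the dev team
    to the day it was verified and closed (not merely fixed)."""
    return (closed - accepted).days

print(defect_age_days(date(2024, 3, 1), date(2024, 3, 11)))  # 10 days
```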
Automation testing is a process of executing tests automatically. It reduces the human intervention to a great extent. We use different test automation tools like QTP, Selenium, and WinRunner. Testing tools help in speeding up the testing tasks. These tools allow you to create test scripts to verify the application automatically and also to generate the test reports.
Quality Assurance (QA) refers to the planned and systematic way of monitoring the quality of the process which is followed to produce a quality product. QA tracks the test reports and modifies the process to meet the expectation.
Quality Control (QC) is relevant to the quality of the product. QC not only finds the defects but suggests improvements too. Thus, a process that is set by QA is implemented by QC. QC is the responsibility of the testing team.
Software testing is the process of ensuring that the product which is developed by developers meets the users’ requirements. The aim of performing testing is to find bugs and make sure that they get fixed. Thus, it helps to maintain the quality of the product to be delivered to the customer.
A QA or Test Lead should have the following qualities:
- Well-versed in software testing processes
- Ability to accelerate teamwork to increase productivity
- Improve coordination between QA and Dev engineers
- Provide ideas to refine QA processes
- Skill to conduct RCA meetings and draw conclusions
- Excellent written and interpersonal communication skills
- Ability to learn fast and to groom the team members
Here are some facts about the Silk Test tool:
- Silk Test is developed for performing regression and functional testing of an application.
- It is used for testing Windows-based, Java, web, and traditional client/server applications.
- Silk Test helps in preparing the test plan and managing it, and provides direct access to the database and validation of fields.
Choosing automated testing over manual testing depends on the following factors:
- Tests require periodic execution.
- Tests include repetitive steps.
- Tests execute in a standard runtime environment.
- Automation is expected to take less time.
- Automation increases reusability.
- Automation reports are available for every execution.
- Small releases like service packs include a minor bug fix. In such cases, executing the regression test is sufficient for validation.
An ideal bug report should consist of the following key points:
- A unique ID
- Defect description: A short description of the bug
- Steps to reproduce: They include the detailed test steps to reproduce the issue, along with the test data and the time when the error occurred
- Environment: Add any system settings that could help in reproducing the issue
- Module/section of the application in which the error has occurred
- Responsible QA: This person is a point of contact in case you want to follow-up regarding this issue
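The fields listed above can be modeled as a small record; the field names and values below are illustrative, not from any specific bug tracker:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_id: str                      # unique ID
    description: str                 # short description of the bug
    steps_to_reproduce: list = field(default_factory=list)
    environment: str = ""            # system settings that help reproduce it
    module: str = ""                 # section of the application affected
    responsible_qa: str = ""         # point of contact for follow-up

report = BugReport(
    bug_id="BUG-101",
    description="Login button unresponsive on second click",
    steps_to_reproduce=["Open login page", "Click 'Login' twice"],
    environment="Chrome 124 / Windows 11",
    module="Authentication",
    responsible_qa="jane.doe",
)
print(report.bug_id)
```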
Bug leakage: Bug leakage occurs when a bug is missed by the testing team during testing and is later discovered by the end user/customer, i.e., a defect exists in the application, goes undetected by the tester, and is eventually found by the customer/end user.
Bug release: A bug release is when a particular version of the software is released with a set of known bugs. These bugs are usually of low severity/priority. It is done when a software company can afford the existence of the bugs in the released software but not the time/cost of fixing them in that particular version.
Performance testing checks the speed, scalability, and/or stability characteristics of a system. Performance is identified with achieving response time, throughput, and resource-utilization levels that meet the performance objectives for a project or a product.
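A minimal sketch of measuring two of those characteristics, response time and throughput, against a stand-in operation (real performance tests use dedicated tools and realistic load):

```python
import time

def operation():
    """Stand-in for the system call being measured."""
    return sum(range(1000))

N = 1000
start = time.perf_counter()
for _ in range(N):
    operation()
elapsed = time.perf_counter() - start

avg_response_ms = 1000 * elapsed / N   # mean response time per call
throughput_per_s = N / elapsed         # completed operations per second
print(f"avg response: {avg_response_ms:.4f} ms, "
      f"throughput: {throughput_per_s:.0f} ops/s")
```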
Monkey testing is a technique in software testing where the user tests the application by providing random inputs, checking the behavior of the application (or trying to crash the application).
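A tiny monkey test against a hypothetical input parser: feed it random strings and check only that it never crashes, since any unhandled exception on random input is a finding.

```python
import random
import string

def parse_quantity(text):
    """Function under test: returns an int quantity, or None for bad input."""
    return int(text) if text.strip().isdigit() else None

random.seed(0)  # reproducible randomness
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_quantity(junk)
    except Exception as exc:  # a crash on random input is a defect
        raise AssertionError(f"crashed on {junk!r}") from exc
print("survived 1000 random inputs")
```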