FAQ on Software Engineering (Unit 4)
Multiple Choice Questions (MCQ)
1) Acceptance testing is ________________________.
a) running the system with live data by the actual user
b) making sure that the new programs do in fact process certain transactions according to specifications
c) checking the logic of one or more programs in the candidate system
d) testing changes made in an existing or a new program
2) Unit testing is ___________________.
a) running the system with live data by the actual user
b) making sure that the new programs do in fact process certain transactions according to specifications
c) checking the logic of one or more programs in the candidate system
d) testing changes made in an existing or a new program
3) Alpha and Beta Testing are forms of __________
a) Acceptance Testing
b) Integration Testing
c) System Testing
d) Unit Testing
4) Which testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects?
a) Unit Testing
b) Regression Testing
c) Integration Testing
d) Thread-based Testing
5) Which of the following are indirect measures of software?
a) Quality
b) Efficiency
c) Accuracy
d) All the above
Descriptive Questions
Question 1:
Explain how object-oriented software testing differs from conventional software testing.
- Object-Oriented (OO) software testing differs significantly from conventional (procedural or functional) software testing due to the fundamental differences in how the software is structured and developed.
- OO testing focuses on the concepts inherent in OO programming, such as encapsulation, inheritance, and polymorphism.
| Feature | Conventional Testing | Object-Oriented Testing |
| --- | --- | --- |
| Primary Focus | Algorithms, functions, procedures, and data separation. | Objects, classes, their states, behaviors, and interactions. |
| Unit of Test | A module, function, procedure, or subroutine. | A class (which bundles data and operations) or a single method within a class. |
| Approach | Primarily algorithmic-centric (focuses on sequential execution and data flow). | Primarily data-centric (focuses on object state and manipulation). |
| Decomposition | Focuses on functional decomposition (breaking the system into sequential steps/functions). | Focuses on composition (assembling objects/classes to form the system). |
The distinct structure of OO programs changes the nature of testing at different levels:
Unit-Level Testing
Conventional Software:
- A unit (a function or procedure) is typically tested in isolation, and its output generally depends only on the input data.
Object-Oriented Software:
- The "unit" is a class. Testing a class's methods is harder because the output of one method can depend on the current state (data values) of the object, which may have been set by a sequence of previous method calls.
- Testers must therefore focus on testing the sequences of operations that change the object's state, as in the sketch below.
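A minimal sketch of such state-based testing, assuming a hypothetical `BankAccount` class (the class and test names here are illustrative, not from the original text): the same `withdraw()` call fails or succeeds depending on the state set by earlier calls, so the test exercises a method sequence rather than a single input/output pair.

```python
import unittest

class BankAccount:
    """Hypothetical class whose method results depend on object state."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount

class TestBankAccountStateSequence(unittest.TestCase):
    def test_withdraw_depends_on_prior_deposits(self):
        account = BankAccount()
        # The same withdraw() call fails or succeeds depending on
        # the state set by the preceding method sequence.
        with self.assertRaises(ValueError):
            account.withdraw(50)          # state: balance == 0, so it fails
        account.deposit(100)              # state transition
        self.assertEqual(account.withdraw(50), 50)  # now it succeeds
        self.assertEqual(account.balance, 50)

if __name__ == "__main__":
    unittest.main()
```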
Integration-Level Testing
Conventional Software:
- Integration testing focuses on the interface between modules, often following a well-defined hierarchical control structure (like top-down or bottom-up).
- The goal is to ensure data and control flow correctly between independent modules.
Object-Oriented Software:
- Integration testing focuses on verifying the interactions and collaborations between different classes and objects via message passing (method calls).
- The presence of inheritance and polymorphism makes the control flow more dynamic and less hierarchical, requiring techniques such as thread-based or use-case-based testing; a collaboration-style test is sketched below.
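A small sketch of such interaction-focused testing, assuming hypothetical `Order` and `Notifier` classes: the test verifies that the correct message (method call) is passed to a collaborating object, with a mock standing in polymorphically for any concrete `Notifier` subclass.

```python
from unittest.mock import Mock

class Notifier:
    """Hypothetical collaborator; any subclass may be passed in (polymorphism)."""
    def send(self, message):
        raise NotImplementedError

class Order:
    def __init__(self, notifier):
        self.notifier = notifier
        self.confirmed = False

    def confirm(self):
        self.confirmed = True
        # Integration point: Order collaborates with Notifier via message passing.
        self.notifier.send("order confirmed")

# Integration-style test: verify the interaction, not just a return value.
mock_notifier = Mock(spec=Notifier)
order = Order(mock_notifier)
order.confirm()
mock_notifier.send.assert_called_once_with("order confirmed")
print("collaboration verified")
```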
Question 2:
Define software quality and explain the metrics used to assess it.
Software quality is generally viewed along two dimensions:
- Conformance to Requirements: Does the software do what it is supposed to do, as defined in the requirements and design? (This is often called functional quality.)
- Fitness for Use: Does the software meet user expectations in terms of its behaviour, performance, and attributes? (This covers structural quality, i.e., non-functional aspects such as reliability, usability, and maintainability.)
- Metrics in software engineering are quantifiable measures used to assess the quality of the software product, the development process, and the project itself.
- Metrics help developers/testers to monitor and improve the overall quality of the software being developed.
- Metrics are typically categorised into three main groups: Product, Process, and Project metrics.
i) Product Metrics (Focus on the Software Product)
These metrics evaluate the characteristics of the software product itself.
| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Defect Density | The number of defects (bugs) found in a specific amount of code or documentation. | Calculated as: Total Defects/Size (e.g., KLOC or Function Points). A lower number indicates better code quality. |
| Mean Time to Failure (MTTF) | The average time the system operates without any failure. | Measured in time units (e.g., hours). A higher MTTF indicates better reliability. |
| Mean Time to Repair (MTTR) | The average time required to diagnose and fix a failure and restore the system to full operation. | Measured in time units (e.g., minutes). A lower MTTR indicates better maintainability and response efficiency. |
| Test Coverage | The percentage of the source code that is executed by test cases (e.g., Line, Branch, or Function coverage). | Calculated as: Tested Code Units/Total Code Units * 100. A higher percentage suggests more thorough testing, but doesn't guarantee quality. |
| Code Complexity | Measures the complexity of the code's control flow, often using Cyclomatic Complexity. | A lower score (e.g., less than 10 for a function) suggests code that is easier to read, understand, and maintain. |
| Security Vulnerability Density | The number of security flaws or vulnerabilities per unit of code size. | A lower density indicates a more secure product. |
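A brief sketch of how the product-metric formulas above are applied, using invented sample figures (all counts here are illustrative only):

```python
# Illustrative product-metric calculations with made-up sample data.
total_defects = 45
size_kloc = 30.0                 # thousands of lines of code
uptime_hours = [120, 200, 95]    # operating periods between failures
repair_minutes = [30, 45, 25]    # time to restore service per failure
tested_units, total_units = 850, 1000

defect_density = total_defects / size_kloc        # defects per KLOC; lower is better
mttf = sum(uptime_hours) / len(uptime_hours)      # mean time to failure; higher is better
mttr = sum(repair_minutes) / len(repair_minutes)  # mean time to repair; lower is better
coverage = tested_units / total_units * 100       # percent of code exercised by tests

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"MTTF          : {mttf:.1f} hours")
print(f"MTTR          : {mttr:.1f} minutes")
print(f"Test coverage : {coverage:.1f}%")
```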
ii) Process Metrics (Focus on the Development Process)
These metrics evaluate the effectiveness and efficiency of the processes used to develop and maintain the software.
| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Defect Removal Efficiency (DRE) | A measure of the effectiveness of the defect filtering process (like testing and reviews). | Calculated as: Defects found before release/Total defects (found before and after release). A higher DRE (closer to 1 or 100%) indicates effective internal quality assurance. |
| Defect Leakage | The number of defects that "leak" or escape the testing process and are found by the customer/end-user after deployment. | A lower number indicates better testing effectiveness. |
| Change Failure Rate | The percentage of deployments to production that result in a degraded service or require immediate remediation (e.g., a hotfix or rollback). | A lower rate indicates safer and more stable deployment and release practices. |
| Bug Reopen Rate | The percentage of defects that are logged as "fixed" but are later reopened because the fix was inadequate or incomplete. | A lower rate indicates better quality of defect resolution. |
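The same idea applied to the process metrics, again with invented counts:

```python
# Illustrative process-metric calculations with made-up sample data.
defects_before_release = 180
defects_after_release = 20           # found by users post-deployment (leakage)
deployments, failed_deployments = 50, 4
fixed_defects, reopened_defects = 160, 12

dre = defects_before_release / (defects_before_release + defects_after_release)
change_failure_rate = failed_deployments / deployments * 100
bug_reopen_rate = reopened_defects / fixed_defects * 100

print(f"DRE                : {dre:.1%}")   # closer to 100% is better
print(f"Defect leakage     : {defects_after_release} defects")
print(f"Change failure rate: {change_failure_rate:.1f}%")
print(f"Bug reopen rate    : {bug_reopen_rate:.1f}%")
```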
iii) Project Metrics (Focus on Project Management)
These metrics relate to the project characteristics, resource allocation, and overall success.
| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Customer Satisfaction (CSAT/NPS) | Measures how satisfied users are with the software product and their overall experience. | Derived from surveys (e.g., Net Promoter Score - NPS, or Customer Satisfaction Score - CSAT). Higher scores directly reflect the perceived external quality. |
| Lead Time for Changes | The time it takes for a code change to go from initial commit to successfully deployed and running in production. | A shorter time indicates high agility, efficient Continuous Integration/Continuous Delivery (CI/CD), and fast feedback loops. |
| Cost of Quality (CoQ) | The total cost associated with maintaining or achieving quality, often broken into Prevention, Appraisal (testing/reviews), Internal Failure (bugs found pre-release), and External Failure (bugs found post-release). | A desirable trend is to shift costs from Failure to Prevention and Appraisal. |
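Finally, a sketch of a Cost of Quality breakdown with hypothetical cost figures, showing the split between prevention/appraisal spending and failure costs:

```python
# Illustrative Cost of Quality (CoQ) breakdown with hypothetical figures.
coq = {
    "prevention": 20_000,        # training, standards, tooling
    "appraisal": 35_000,         # testing, reviews, inspections
    "internal_failure": 25_000,  # bugs found and fixed before release
    "external_failure": 40_000,  # bugs found by customers after release
}

total = sum(coq.values())
failure_share = (coq["internal_failure"] + coq["external_failure"]) / total

print(f"Total CoQ         : ${total:,}")
print(f"Failure cost share: {failure_share:.0%}")
# A desirable trend is for the failure share to shrink over time as
# spending shifts toward prevention and appraisal.
```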
