
Wednesday, 29 October 2025

Questions and Answers (Q&A) on SE (Unit IV)


 FAQ on Software Engineering (Unit 4)


Multiple Choice Questions (MCQ)

1) Acceptance testing is ________________________.

a) running the system with live data by the actual user 

b) making sure that the new programs do in fact process certain transactions according to Specifications 

c) checking the logic of one or more programs in the candidate system 

d) testing changes made in an existing or a new program


2) Unit testing is ___________________.

a) running the system with live data by the actual user 

b) making sure that the new programs do in fact process certain transactions according to Specifications 

c) checking the logic of one or more programs in the candidate system 

d) testing changes made in an existing or a new program

  

3) Alpha and Beta Testing are forms of __________

a) Acceptance Testing

b) Integration Testing

c) System Testing

d) Unit Testing


4) Which testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects?

a) Unit Testing

b) Regression Testing

c) Integration Testing

d) Thread-based Testing


5) Which of the following are indirect measures of software?

        a) Quality

        b) Efficiency

        c) Accuracy

        d) All the above

 

Descriptive Questions

Question 1: 

Explain how Object Oriented software testing is different from conventional software testing.

  • Object-Oriented (OO) software testing differs significantly from conventional (procedural or functional) software testing due to the fundamental differences in how the software is structured and developed.
  • OO testing focuses on the concepts inherent in OO programming, such as encapsulation, inheritance, and polymorphism.
Key Differences between Conventional and Object-Oriented Testing:

| Feature | Conventional Testing | Object-Oriented Testing |
| --- | --- | --- |
| Primary Focus | Algorithms, functions, procedures, and data separation. | Objects, classes, their states, behaviours, and interactions. |
| Unit of Test | A module, function, procedure, or subroutine. | A class (which bundles data and operations) or a single method within a class. |
| Approach | Primarily algorithm-centric (focuses on sequential execution and data flow). | Primarily data-centric (focuses on object state and manipulation). |
| Decomposition | Functional decomposition (breaking the system into sequential steps/functions). | Composition (assembling objects/classes to form the system). |

Differences in Testing Levels:

The distinct structure of OO programs changes the nature of testing at different levels:

1. Unit Testing (Class Testing)

Conventional Software: 

A unit (function/procedure) is typically tested in isolation. Its output is generally dependent only on the input data.

Object-Oriented Software: 

  • The "unit" is a class. Testing a class's methods is harder because the output of one method can depend on the current state (data values) of the object, which might have been set by a sequence of previous method calls. 
  • Testers must focus on testing the sequences of operations that change the object's state.
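For example, the sketch below uses a hypothetical BankAccount class (illustrative only, not taken from the syllabus) to show why a method cannot be judged from its inputs alone: the outcome of withdraw() depends on the state left behind by earlier calls, so test cases must exercise sequences of operations.

```python
import unittest

class BankAccount:
    """Hypothetical class used only to illustrate state-dependent behaviour."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        # The outcome depends on the object's current state, not just on 'amount'.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class TestBankAccountSequences(unittest.TestCase):
    def test_withdraw_after_deposit(self):
        acc = BankAccount()
        acc.deposit(100)      # state-setting call
        acc.withdraw(40)      # behaviour depends on the earlier deposit
        self.assertEqual(acc.balance, 60)

    def test_withdraw_on_fresh_object_fails(self):
        acc = BankAccount()
        # Same method, same argument, different state -> different outcome.
        with self.assertRaises(ValueError):
            acc.withdraw(40)

if __name__ == "__main__":
    unittest.main()
```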
 
2. Integration Testing

Conventional Software: 

  • Integration testing focuses on the interface between modules, often following a well-defined hierarchical control structure (like top-down or bottom-up). 
  • The goal is to ensure data and control flow correctly between independent modules.

Object-Oriented Software: 

  • Integration testing focuses on verifying the interactions and collaborations between different classes and objects via message passing (method calls).  
  • The presence of inheritance and polymorphism makes the control flow more dynamic and less hierarchical, requiring techniques like thread-based or use-case-based testing.
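As a small illustration (the classes below are hypothetical, not part of any prescribed example), the sketch shows how polymorphism defers the choice of the executed method to run time; each concrete collaboration therefore needs its own integration scenario (a thread or use case) rather than a fixed top-down or bottom-up order.

```python
class Notifier:
    """Base interface; the concrete subclass is bound at run time."""
    def send(self, message: str) -> str:
        raise NotImplementedError

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"EMAIL: {message}"

class SMSNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"SMS: {message}"

class OrderService:
    """Collaborates with whichever Notifier it is given (message passing)."""
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def place_order(self, item: str) -> str:
        # Which send() actually runs is decided dynamically at run time.
        return self.notifier.send(f"Order placed for {item}")

# Integration-style checks: one scenario per concrete collaborator.
assert OrderService(EmailNotifier()).place_order("book") == "EMAIL: Order placed for book"
assert OrderService(SMSNotifier()).place_order("book") == "SMS: Order placed for book"
```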

 

Question 2:
Define Software Quality.  Write notes on different quality metrics. 
 
Software quality refers to the degree to which a software product meets its specified requirements (both functional and non-functional) and satisfies the stated and implied needs of its users and stakeholders when used under specified conditions. 
 
Quality is assessed mainly against the following criteria:
  • Conformance to Requirements: Does the software do what it's supposed to do as defined in the requirements and design? (This is often called functional quality).
  • Fitness for Use: Does the software meet user expectations in terms of its behaviour, performance, and attributes? (This includes structural quality or non-functional aspects like reliability, usability, and maintainability). 
 
Software Quality Metrics:
  • Metrics in software engineering are quantifiable measures used to assess the quality of the software product, the development process, and the project itself.  
  • Metrics help developers/testers to monitor and improve the overall quality of the software being developed.  
  • Metrics are typically categorised into three main groups: Product, Process, and Project metrics.
 
i) Product Metrics (Focus on the Software Itself)

These metrics evaluate the characteristics of the software product.

| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Defect Density | The number of defects (bugs) found in a specific amount of code or documentation. | Calculated as Total Defects / Size (e.g., KLOC or Function Points). A lower number indicates better code quality. |
| Mean Time to Failure (MTTF) | The average time the system operates without any failure. | Measured in time units (e.g., hours). A higher MTTF indicates better reliability. |
| Mean Time to Repair (MTTR) | The average time required to diagnose and fix a failure and restore the system to full operation. | Measured in time units (e.g., minutes). A lower MTTR indicates better maintainability and response efficiency. |
| Test Coverage | The percentage of the source code that is executed by test cases (e.g., line, branch, or function coverage). | Calculated as Tested Code Units / Total Code Units × 100. A higher percentage suggests more thorough testing, but does not guarantee quality. |
| Code Complexity | Measures the complexity of the code's control flow, often using Cyclomatic Complexity. | A lower score (e.g., less than 10 for a function) suggests code that is easier to read, understand, and maintain. |
| Security Vulnerability Density | The number of security flaws or vulnerabilities per unit of code size. | A lower density indicates a more secure product. |
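A minimal sketch of how a few of these product metrics could be computed from raw project data (the figures are made-up illustrative values, not measurements from a real project):

```python
# Illustrative values only (assumed, not real project data).
defects_found  = 45                      # total defects logged against the release
size_kloc      = 30.0                    # size in thousands of lines of code (KLOC)
uptime_hours   = [120, 95, 200, 150]     # operating periods between failures
repair_minutes = [30, 45, 20, 25]        # time to diagnose and fix each failure
lines_executed = 8500                    # lines exercised by the test suite
lines_total    = 10000

defect_density = defects_found / size_kloc                 # defects per KLOC
mttf           = sum(uptime_hours) / len(uptime_hours)     # Mean Time to Failure
mttr           = sum(repair_minutes) / len(repair_minutes) # Mean Time to Repair
test_coverage  = lines_executed / lines_total * 100        # line coverage, %

print(f"Defect density : {defect_density:.2f} defects/KLOC")
print(f"MTTF           : {mttf:.1f} hours")
print(f"MTTR           : {mttr:.1f} minutes")
print(f"Test coverage  : {test_coverage:.1f} %")
```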
 

ii) Process Metrics (Focus on Development/QA Activities)

These metrics evaluate the effectiveness and efficiency of the processes used to develop and maintain the software.

| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Defect Removal Efficiency (DRE) | A measure of the effectiveness of the defect filtering process (such as testing and reviews). | Calculated as Defects found before release / Total defects (found before and after release). A higher DRE (closer to 1, or 100%) indicates effective internal quality assurance. |
| Defect Leakage | The number of defects that "leak" or escape the testing process and are found by the customer/end user after deployment. | A lower number indicates better testing effectiveness. |
| Change Failure Rate | The percentage of deployments to production that result in a degraded service or require immediate remediation (e.g., a hotfix or rollback). | A lower rate indicates safer and more stable deployment and release practices. |
| Bug Reopen Rate | The percentage of defects that are logged as "fixed" but are later reopened because the fix was inadequate or incomplete. | A lower rate indicates better quality of defect resolution. |
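The process metrics above are simple ratios; the sketch below shows the arithmetic with assumed, illustrative counts:

```python
# Illustrative counts only (assumed).
defects_pre_release  = 180   # found by reviews and testing before release
defects_post_release = 20    # reported by customers after release
deployments          = 50    # total production deployments
failed_deployments   = 4     # needed a hotfix or rollback
fixed_defects        = 160   # defects marked as fixed
reopened_defects     = 12    # fixes that did not hold

dre                 = defects_pre_release / (defects_pre_release + defects_post_release)
defect_leakage      = defects_post_release
change_failure_rate = failed_deployments / deployments * 100
bug_reopen_rate     = reopened_defects / fixed_defects * 100

print(f"DRE                 : {dre:.0%}")        # closer to 100% is better
print(f"Defect leakage      : {defect_leakage}") # lower is better
print(f"Change failure rate : {change_failure_rate:.1f} %")
print(f"Bug reopen rate     : {bug_reopen_rate:.1f} %")
```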


iii) Project Metrics (Focus on Project Management)

These metrics relate to the project characteristics, resource allocation, and overall success. 

| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Customer Satisfaction (CSAT/NPS) | Measures how satisfied users are with the software product and their overall experience. | Derived from surveys (e.g., Net Promoter Score (NPS) or Customer Satisfaction Score (CSAT)). Higher scores directly reflect the perceived external quality. |
| Lead Time for Changes | The time it takes for a code change to go from initial commit to successfully deployed and running in production. | A shorter time indicates high agility, efficient Continuous Integration/Continuous Delivery (CI/CD), and fast feedback loops. |
| Cost of Quality (CoQ) | The total cost associated with achieving or maintaining quality, often broken into Prevention, Appraisal (testing/reviews), Internal Failure (bugs found pre-release), and External Failure (bugs found post-release). | A desirable trend is to shift costs from Failure to Prevention and Appraisal. |
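As a rough worked example of the Cost of Quality breakdown (all cost figures assumed for illustration only):

```python
# Illustrative cost figures (assumed), all in the same currency unit.
prevention       = 20000   # training, standards, process improvement
appraisal        = 35000   # reviews and testing
internal_failure = 25000   # rework on defects found before release
external_failure = 40000   # hotfixes, support, and rework after release

cost_of_quality = prevention + appraisal + internal_failure + external_failure
failure_share   = (internal_failure + external_failure) / cost_of_quality * 100

print(f"Total Cost of Quality : {cost_of_quality}")
print(f"Failure cost share    : {failure_share:.1f} %")
# A desirable trend shifts spending from the failure categories
# towards prevention and appraisal.
```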

