Wednesday, 29 October 2025

Quality Management (Unit V)




Quality Concepts

Quality is a concept that can be defined from different points of view:
  • From the user's point of view, quality is determined by the ability of the product (software) to meet its customers' needs (goals)
  • From the manufacturer's point of view, a quality product conforms to the original specification of the product (system specifications)
  • From the product view, quality refers to the inherent characteristics (e.g., functions and features) of the product
  • From the value-based view, quality is determined by how much the product is valued in the marketplace
In software development, quality is imparted to the software during every phase of its development: Analysis, Design, and Development.
  • During analysis, requirements are identified and documented to capture the needs of the user
  • In the design phase, the blueprint of the software is prepared based on the specifications identified.
  • Development is carried out based on the design documents and then the testing takes place to verify and validate the quality of the software

Process View of Software Quality:

In general, software quality can be defined as "an effective software process applied in a manner that creates a useful product (software) that provides measurable value (quality software) for those who produce it and those who use it."
  • An effective software process establishes the infrastructure that supports any effort at building a high-quality software product.
  • Quality of conformance is a metric that determines the degree to which the implementation (3rd phase of the SDLC) follows the design and the resulting system meets its requirements and performance goals.
  • Quality of design is the degree to which the design (2nd phase of the SDLC) meets the functions and features specified in the requirements model.

Software Quality is basically measured based on the following criteria:
  • Conformance to Requirements: Does the software do what it's supposed to do as defined in the requirements and design? This is often called functional quality.
  • Fitness for Use: Does the software meet user expectations in terms of its behaviour, performance, and attributes? This includes non-functional aspects like reliability, usability, and maintainability (characteristics) of the software.
  • A useful product delivers the content, functions, and features that the end user desires in a reliable, error-free way, and satisfies the requirements that have been explicitly stated by stakeholders.

 
Software Quality Dilemma

  • The software quality dilemma refers to the constant pressure to compromise the desire for high-quality software in order to yield to the constraints of development work in terms of time, cost, and scope.
  • The dilemma manifests most clearly in the trade-offs development teams have to make.

Speed vs. Quality: 

  • Rushing to release a product or some portion of the product quickly often means cutting corners on testing, code reviews, and proper design. 
  • This leads to technical debt—a hidden cost of fixing bugs and maintaining the poorly built system later, which ultimately slows down future development.

Cost vs. Quality: 

  • Investing in quality processes (hiring senior engineers, implementing rigorous testing, setting up robust automation) costs money and time upfront.
  • A smaller budget often forces a team to skimp on these investments, leading to a cheaper, faster initial release but a much more expensive product to maintain in the long run.

Scope Creep vs. Stability: 

Trying to pack too many features into a release (expanding scope) inevitably puts pressure on the schedule and budget, leading to the same shortcuts that degrade quality.

The "Good, Fast, Cheap" Triad: 

In this triad of good quality, fast delivery, and low cost, a team can realistically pick only two of the three when deciding how to produce quality software:

  • Good (High Quality): Reliable, maintainable, secure, and user-friendly.
  • Fast (Quick Delivery): Meeting tight deadlines and time-to-market.
  • Cheap (Low Cost/Resource Use): Staying within a strict budget.

For instance:

  • Developing software with a low budget (cheap) and quick delivery (fast) will likely result in software that lacks the expected quality
  • Sacrificing either quick delivery (fast) or a low budget (cheap) makes it possible to develop quality software either cheaply (but slowly) or quickly (but at higher cost)

 


Software Quality Assurance (SQA)

  • Software Quality Assurance (SQA) is a systematic process that ensures software development processes and developed products meet pre-defined standards and requirements. 
  • It's an umbrella activity applied throughout the entire software development lifecycle (SDLC) to prevent defects and improve the overall quality of the product and the process used to create it. 
 
Goals of Software Quality Assurance:

SQA encompasses testing, but it is not only testing.  The primary goals of SQA include:

  • Defect Prevention: Sticking to procedures, standards, and guidelines to avoid introducing errors (defects) during development
  • Process Improvement: Continuously evaluating and refining the development process to increase its efficiency and quality
  • Verification and Validation (V&V):
  • Verification: Answers the question, "Are we building the product right?", by checking that the product conforms to its stated specifications
  • Validation: Answers the question, "Have we built the right product?", by confirming that the product meets the needs of the user, both functional and non-functional
  • Standard Conformance: Ensures that the software and the process adhere to internal organizational standards, external regulatory standards (like ISO 9000), and best practices.
 
Core Activities involved in SQA:

SQA is broader than just testing and involves various activities as follows:

  • Defining Standards and Metrics: Establishing a set of engineering practices, development standards, and metrics (e.g., defect density, mean time to failure) that the project must adhere to
  • Reviews and Audits: Conducting formal technical reviews, quality audits, and inspections of artifacts like requirements, design documents, and code to catch errors early in the development process
  • Process Monitoring: Tracking the development process against the defined plan to ensure compliance and identify deviations
  • Testing: Performing various types of testing (unit, integration, system, acceptance, etc.) to confirm functionality and identify defects
  • Configuration Management: Monitoring and controlling changes to the software artifacts throughout the SDLC

 

SQA Vs. Software Testing:

While related, SQA and Software Testing are distinct concepts: 

| Feature | Software Quality Assurance (SQA) | Software Testing |
|---|---|---|
| Focus | Process management and prevention of defects | Product management and identification of defects |
| Goal | Improving the development methodology | Ensuring that the product meets the user's requirements |
| Scope | Encompasses the entire SDLC (requirements, design, coding, testing, etc.) | Primarily a phase within the SDLC (systematically executing the product) |
| Question | How can we build the software better? | Does the software being developed work as expected? |

 




Questions and Answers (Q&A) on SE (Unit IV)


 FAQ on Software Engineering (Unit 4)


Multiple Choice Questions (MCQ)

1) Acceptance testing is ________________________.

a) running the system with live data by the actual user 

b) making sure that the new programs do in fact process certain transactions according to Specifications 

c) checking the logic of one or more programs in the candidate system 

d) testing changes made in an existing or a new program


2) Unit testing is ___________________.

a) running the system with live data by the actual user 

b) making sure that the new programs do in fact process certain transactions according to Specifications 

c) checking the logic of one or more programs in the candidate system 

d) testing changes made in an existing or a new program

  

3) Alpha and Beta Testing are forms of __________

a) Acceptance Testing

b) Integration Testing

c) System Testing

d) Unit Testing


4) Which testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects?

a) Unit Testing

b) Regression Testing

c) Integration Testing

d) Thread-based Testing


5) Mention any two indirect measures of software.

        a) Quality

        b) Efficiency

        c) Accuracy

        d) All the above

 

Descriptive Questions

Question 1: 

Explain how Object Oriented software testing is different from conventional software testing.

  • Object-Oriented (OO) software testing differs significantly from conventional (procedural or functional) software testing due to the fundamental differences in how the software is structured and developed.
  • OO testing focuses on the concepts inherent in OO programming, such as encapsulation, inheritance, and polymorphism.
Key Differences between Conventional and Object-Oriented Testing:

| Feature | Conventional Testing | Object-Oriented Testing |
|---|---|---|
| Primary Focus | Algorithms, functions, procedures, and data separation. | Objects, classes, their states, behaviors, and interactions. |
| Unit of Test | A module, function, procedure, or subroutine. | A class (which bundles data and operations) or a single method within a class. |
| Approach | Primarily algorithm-centric (focuses on sequential execution and data flow). | Primarily data-centric (focuses on object state and manipulation). |
| Decomposition | Functional decomposition (breaking the system into sequential steps/functions). | Composition (assembling objects/classes to form the system). |

Differences in Testing Levels:

The distinct structure of OO programs changes the nature of testing at different levels:

1. Unit Testing (Class Testing)

Conventional Software: 

A unit (function/procedure) is typically tested in isolation. Its output is generally dependent only on the input data.

Object-Oriented Software: 

  • The "unit" is a class. Testing a class's methods is harder because the output of one method can depend on the current state (data values) of the object, which might have been set by a sequence of previous method calls. 
  • Testers must focus on testing the sequences of operations that change the object's state.
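The state dependence described above can be illustrated with a small sketch. The `Stack` class and its tests are hypothetical, written in Python with the standard `unittest` module, and show why a method's result cannot be checked in isolation from the sequence of calls that preceded it:

```python
import unittest

# Hypothetical class used only to illustrate state-dependent testing.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class TestStackStateSequences(unittest.TestCase):
    def test_pop_depends_on_prior_pushes(self):
        # The result of pop() depends on the object's current state,
        # i.e., on the sequence of earlier push() calls.
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)  # the most recent push determines the result
        self.assertEqual(s.pop(), 1)

    def test_invalid_sequence_raises(self):
        # Calling pop() before any push() is an invalid state transition.
        s = Stack()
        with self.assertRaises(IndexError):
            s.pop()
```

Running the file with `python -m unittest` exercises both a valid and an invalid sequence of operations, which is the essence of class (unit) testing in OO software.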
 
2. Integration Testing

Conventional Software: 

  • Integration testing focuses on the interface between modules, often following a well-defined hierarchical control structure (like top-down or bottom-up). 
  • The goal is to ensure data and control flow correctly between independent modules.

Object-Oriented Software: 

  • Integration testing focuses on verifying the interactions and collaborations between different classes and objects via message passing (method calls).  
  • The presence of inheritance and polymorphism makes the control flow more dynamic and less hierarchical, requiring techniques like thread-based or use-case-based testing.

 

Question 2:
Define Software Quality.  Write notes on different quality metrics. 
 
Software quality refers to the degree to which a software product meets its specified requirements (both functional and non-functional) and satisfies the stated and implied needs of its users and stakeholders when used under specified conditions. 
 
Quality is basically measured based on the following criteria:
  • Conformance to Requirements: Does the software do what it's supposed to do as defined in the requirements and design? (This is often called functional quality).
  • Fitness for Use: Does the software meet user expectations in terms of its behaviour, performance, and attributes? (This includes structural quality or non-functional aspects like reliability, usability, and maintainability). 
 
Software Quality Metrics:
  • Metrics in software engineering are quantifiable measures used to assess the quality of the software product, the development process, and the project itself.  
  • Metrics help developers/testers to monitor and improve the overall quality of the software being developed.  
  • Metrics are typically categorised into three main groups: Product, Process, and Project metrics.
 
i) Product Metrics (Focus on the Software Itself)

These metrics evaluate the characteristics of the software product.

| Metric | Definition | Example & Interpretation |
|---|---|---|
| Defect Density | The number of defects (bugs) found in a specific amount of code or documentation. | Calculated as Total Defects / Size (e.g., KLOC or Function Points). A lower number indicates better code quality. |
| Mean Time to Failure (MTTF) | The average time the system operates without any failure. | Measured in time units (e.g., hours). A higher MTTF indicates better reliability. |
| Mean Time to Repair (MTTR) | The average time required to diagnose and fix a failure and restore the system to full operation. | Measured in time units (e.g., minutes). A lower MTTR indicates better maintainability and response efficiency. |
| Test Coverage | The percentage of the source code that is executed by test cases (e.g., line, branch, or function coverage). | Calculated as Tested Code Units / Total Code Units × 100. A higher percentage suggests more thorough testing, but doesn't guarantee quality. |
| Code Complexity | Measures the complexity of the code's control flow, often using Cyclomatic Complexity. | A lower score (e.g., less than 10 for a function) suggests code that is easier to read, understand, and maintain. |
| Security Vulnerability Density | The number of security flaws or vulnerabilities per unit of code size. | A lower density indicates a more secure product. |
 

ii) Process Metrics (Focus on Development/QA Activities)

These metrics evaluate the effectiveness and efficiency of the processes used to develop and maintain the software.

| Metric | Definition | Example & Interpretation |
|---|---|---|
| Defect Removal Efficiency (DRE) | A measure of the effectiveness of the defect filtering process (like testing and reviews). | Calculated as Defects found before release / Total defects (found before and after release). A higher DRE (closer to 1 or 100%) indicates effective internal quality assurance. |
| Defect Leakage | The number of defects that "leak" or escape the testing process and are found by the customer/end-user after deployment. | A lower number indicates better testing effectiveness. |
| Change Failure Rate | The percentage of deployments to production that result in a degraded service or require immediate remediation (e.g., a hotfix or rollback). | A lower rate indicates safer and more stable deployment and release practices. |
| Bug Reopen Rate | The percentage of defects that are logged as "fixed" but are later reopened because the fix was inadequate or incomplete. | A lower rate indicates better quality of defect resolution. |
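The process metrics above can likewise be sketched as ratios; the defect counts below are illustrative values, not real measurements:

```python
def dre(pre_release_defects, post_release_defects):
    """Defect Removal Efficiency: share of defects caught before release."""
    total = pre_release_defects + post_release_defects
    return pre_release_defects / total

def defect_leakage(testing_defects, production_defects):
    """Share of defects that escaped testing and reached production."""
    return production_defects / (testing_defects + production_defects)

def change_failure_rate(failed_deployments, total_deployments):
    """Share of production deployments that required remediation."""
    return failed_deployments / total_deployments

print(dre(95, 5))                  # 95 of 100 defects caught internally -> 0.95
print(defect_leakage(95, 5))       # 5 of 100 defects escaped -> 0.05
print(change_failure_rate(3, 60))  # 3 failed of 60 deployments -> 0.05
```

Note that DRE and defect leakage are complementary views of the same counts: a DRE of 0.95 implies a leakage of 0.05.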


iii) Project Metrics (Focus on Project Management)

These metrics relate to the project characteristics, resource allocation, and overall success. 

| Metric | Definition | Example & Interpretation |
|---|---|---|
| Customer Satisfaction (CSAT/NPS) | Measures how satisfied users are with the software product and their overall experience. | Derived from surveys (e.g., Net Promoter Score - NPS, or Customer Satisfaction Score - CSAT). Higher scores directly reflect the perceived external quality. |
| Lead Time for Changes | The time it takes for a code change to go from initial commit to successfully deployed and running in production. | A shorter time indicates high agility, efficient Continuous Integration/Continuous Delivery (CI/CD), and fast feedback loops. |
| Cost of Quality (CoQ) | The total cost associated with maintaining or achieving quality, often broken into Prevention, Appraisal (testing/reviews), Internal Failure (bugs found pre-release), and External Failure (bugs found post-release). | A desirable trend is to shift costs from Failure to Prevention and Appraisal. |
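The Cost of Quality breakdown can be made concrete with a small sketch; the four cost figures are hypothetical, in arbitrary currency units:

```python
# Hypothetical CoQ breakdown for one release cycle (illustration only).
coq = {
    "prevention": 20_000,        # training, standards, tooling
    "appraisal": 35_000,         # testing, reviews, audits
    "internal_failure": 15_000,  # bugs fixed before release
    "external_failure": 30_000,  # bugs fixed after release
}

total = sum(coq.values())
print(total)  # -> 100000

# Share spent on conformance (prevention + appraisal); the desirable trend
# is for this share to grow while the failure costs shrink.
conformance_share = (coq["prevention"] + coq["appraisal"]) / total
print(conformance_share)  # -> 0.55
```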


Thursday, 23 October 2025

Metrics and Measurement


Metrics and Measurement 
(Unit IV)


  • A key element of any engineering process is measurement. 
  • Using measures, we can assess the quality of the engineered products or systems that we build.
  • Within the software engineering context, a measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of product or a process. 
  • Measurement is the act of determining a measure. It is the process by which numbers or symbols are assigned to the attributes of entities in the real world in such a way as to describe them according to clearly defined rules.
Product Metrics:
  • IEEE defines metrics as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute" 
  • By its nature, software engineering is a quantifiable approach. Product metrics help software engineers gain insight into the design and construction of the software they build by focusing on specific, measurable attributes of software engineering work products.
  • Product metrics provide a systematic way to assess the quality of software based on a set of clearly defined rules.

Indicators:
  • An indicator is a metric or combination of metrics that provides insight into the software process, a software project or the product itself. 
  • It enables the project manager or software engineer to adjust the process, the project, or the product to make things better.

Quality and Efficiency:
  • Quality and efficiency are considered indirect measures because they are external attributes of a product and cannot be measured directly, unlike direct measures such as lines of code or cost.
  • Quality is an indirect measure that encompasses many other attributes, such as reliability, usability, and security.
  • Efficiency is an indirect measure that assesses how well a software product performs its function, such as its response time or use of resources.

 

Software Metrics

  • Software quality metrics are quantifiable measures used to assess the characteristics of software products, the effectiveness of the development process, and the progress of a software project.
  • These metrics are often categorized into three main types: Product, Process, and Project metrics.

Product Metrics:

These metrics evaluate the inherent quality characteristics of the software itself, focusing on aspects visible to the end-user or related to the codebase's structural integrity.

| Metric | Description | Example/Measurement |
|---|---|---|
| Defect Density | The number of confirmed defects (bugs) found in a specific size of the software. | Defects per thousand lines of code (KLOC) or per Function Point. |
| Reliability Metrics | Measures the software's ability to perform its required functions under stated conditions for a specified period of time. | Mean Time Between Failures (MTBF): average time the system operates without failure. Mean Time To Failure (MTTF): average time until the first failure. |
| Maintainability | The ease with which the software can be modified to correct defects, improve performance, or adapt to a changing environment. | Mean Time To Change (MTTC): average time taken to implement a change. |
| Test Coverage | The percentage of the source code executed by test cases. | Percentage of code lines, branches, or paths covered by tests. |
| Performance | Measures the system's efficiency in terms of responsiveness, throughput, and resource utilization. | Response time, transaction rate, resource usage (CPU/memory). |
| Security | Measures the software's ability to protect against unauthorized access, modification, or destruction. | Number of known security vulnerabilities per module. |
| Code Quality/Complexity | Measures how well-written, readable, and non-redundant the code is. | Cyclomatic Complexity: the number of independent paths through a program's source code. |
| Customer Satisfaction | Gauges how well the product meets user needs and expectations. | Net Promoter Score (NPS), Customer Satisfaction Score (CSAT), or crash rate. |
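Cyclomatic complexity, mentioned in the table above, can be computed from a routine's control-flow graph using McCabe's formula V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single routine). A minimal sketch, with an illustrative graph size:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

# A hypothetical routine whose control-flow graph has 9 edges and 7 nodes:
print(cyclomatic_complexity(9, 7))  # -> 4 independent paths
```

The result also gives the number of test cases needed for basis-path coverage of the routine.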

Process Metrics:

These metrics focus on the effectiveness and efficiency of the development, testing, and maintenance processes.

| Metric | Description | Example/Measurement |
|---|---|---|
| Defect Removal Efficiency (DRE) | A measure of the development team's ability to remove defects before the software reaches the end-user. | DRE = Defects Found Before Release / Total Defects Found |
| Defect Leakage | The percentage of defects that escape the testing process and are found by customers in production. | Leakage Rate = Defects Found in Production / (Defects Found in Testing + Production) |
| Mean Time to Recovery (MTTR) | The average time required to restore the system to full functionality after a failure or incident. | Time from incident start to full service restoration. |
| Change Failure Rate | The percentage of changes (e.g., software releases) to production that result in a degraded service or require remediation. | Number of failed deployments / Total number of deployments. |
| Test Case Pass Rate | The percentage of executed test cases that pass successfully. | Number of Passed Test Cases / Total Number of Test Cases Executed |

Project Metrics:

These metrics track project characteristics and execution, often related to resources, timelines, and costs, which indirectly affect quality.

| Metric | Description | Example/Measurement |
|---|---|---|
| Lead Time for Changes | The time it takes for a code change to go from initial commit to successful deployment in a production environment. | Time elapsed from code commit to deployment. |
| Deployment Frequency | How often a team successfully releases to production. | Deployments per day, week, or month. |
| Cost of Quality (CoQ) | The total investment in achieving and maintaining product quality, including prevention, appraisal, and failure costs. | Sum of costs for testing, quality assurance, bug fixing, and downtime. |
| Schedule/Effort Variance | The difference between the planned and actual schedule or effort for project tasks. | Percentage deviation from the planned schedule or budget. |
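Lead time for changes is simply the elapsed time between two timestamps; a minimal sketch using Python's standard `datetime` module, with hypothetical commit and deployment times:

```python
from datetime import datetime

def lead_time_for_change(commit_time, deploy_time):
    """Elapsed time from code commit to production deployment."""
    return deploy_time - commit_time

# Illustrative timestamps: committed one morning, deployed the next afternoon.
commit = datetime(2025, 10, 20, 9, 0)
deploy = datetime(2025, 10, 21, 15, 0)
print(lead_time_for_change(commit, deploy))  # -> 1 day, 6:00:00
```

In practice these timestamps would come from the version-control and deployment systems; averaging the result over many changes gives the team's typical lead time.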
  • The choice of which metrics to track depends heavily on the project goals, the development methodology (like Agile or Waterfall), and the critical nature of the software. 
  • A balanced approach using a combination of these metrics provides the most comprehensive view of software quality.
