Tutorial-1 on Finite Automata
Q1. Given the language L = {ab, aa, baa}, which of the following strings are in L*?
1) abaabaaabaa
2) aaaabaaaa
3) baaaaabaaaab
4) baaaaabaa
Possible Answers:
(A) 1, 2 and 3
(B) 2, 3 and 4
(C) 1, 2 and 4
(D) 1, 3 and 4
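A quick way to verify such membership questions is a small dynamic program: w ∈ L* exactly when w can be split into a sequence of words from L. A minimal sketch in Python (the function name in_kleene_star is ours, not part of the tutorial):

```python
def in_kleene_star(w: str, L: set[str]) -> bool:
    """Return True if w splits into a concatenation of words from L."""
    n = len(w)
    # reachable[i] is True when the prefix w[:i] is in L*
    reachable = [False] * (n + 1)
    reachable[0] = True  # the empty string is in L* by definition
    for i in range(n):
        if reachable[i]:
            for word in L:
                if w.startswith(word, i):
                    reachable[i + len(word)] = True
    return reachable[n]

L = {"ab", "aa", "baa"}
for s in ["abaabaaabaa", "aaaabaaaa", "baaaaabaaaab", "baaaaabaa"]:
    print(s, in_kleene_star(s, L))
# Strings 1, 2 and 4 decompose; string 3 does not, matching option (C).
```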
Q2. What is the Regular Expression (RE) over the alphabet Σ = {a, b} for the language of strings that contain "ab" as a substring?
(A) (a+b)ab(a+b)
(B) (a+b)*ab(a+b)*
(C) (ab)*ab(ab)*
(D) ab(a+b)*
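The candidate expressions can be sanity-checked with Python's re module, reading the class (a+b) as the character class [ab]. A minimal sketch for option (B):

```python
import re

# (a+b)*ab(a+b)*  ->  any string over {a, b} containing "ab" as a substring
pattern = re.compile(r"^[ab]*ab[ab]*$")

for s in ["ab", "aab", "bab", "ba", "bbb"]:
    print(s, bool(pattern.match(s)))
# ab, aab and bab match; ba and bbb are rejected because they lack "ab"
```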
Q3. Consider the FA shown in the figure given below, where ‘-’ is the start state and ‘+’ is the final state. The language accepted by the FA is
Q4. Which regular expression best describes the language accepted by the non-deterministic automaton below?
Q1. Construct an NFA and a DFA for recognizing the language denoted by the regular expression aa* + bb*
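One way to double-check a hand-drawn automaton for aa* + bb* is to encode its transition table and run it on sample strings. A minimal sketch of one possible DFA (the state names q0, qa, qb and dead are our own labels):

```python
# DFA for aa* + bb*: one or more a's, or one or more b's
ACCEPTING = {"qa", "qb"}
DELTA = {
    ("q0", "a"): "qa",   ("q0", "b"): "qb",
    ("qa", "a"): "qa",   ("qa", "b"): "dead",
    ("qb", "b"): "qb",   ("qb", "a"): "dead",
    ("dead", "a"): "dead", ("dead", "b"): "dead",
}

def accepts(w: str) -> bool:
    state = "q0"
    for ch in w:
        state = DELTA[(state, ch)]
    return state in ACCEPTING

for s in ["a", "aaa", "bb", "ab", ""]:
    print(repr(s), accepts(s))
# a, aaa and bb are accepted; ab and the empty string are rejected
```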
Q1. Consider the following NFA and answer the following:
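The NFA figure and its sub-questions are not reproduced here, but the standard tool behind such questions is the subset construction (Conversion of NFA to DFA in the syllabus below). A minimal sketch on a small NFA of our own; the states p, q, r and their transitions are illustrative assumptions, not the figure's:

```python
from collections import deque

# Illustrative NFA over {0, 1} accepting strings that end in "01"
NFA = {
    ("p", "0"): {"p", "q"}, ("p", "1"): {"p"},
    ("q", "1"): {"r"},
}
START, FINALS = "p", {"r"}

def nfa_to_dfa(nfa, start, finals, alphabet=("0", "1")):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start_set = frozenset({start})
    dfa, seen, queue = {}, {start_set}, deque([start_set])
    while queue:
        S = queue.popleft()
        for ch in alphabet:
            T = frozenset(t for s in S for t in nfa.get((s, ch), ()))
            dfa[(S, ch)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    accepting = {S for S in seen if S & finals}
    return dfa, start_set, accepting

dfa, q0, acc = nfa_to_dfa(NFA, START, FINALS)
print(len({S for (S, _) in dfa}), "reachable DFA states")  # 3 here
```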
Introduction to Finite Automata: Structural Representations, Automata and Complexity, the Central Concepts of Automata Theory – Alphabets, Strings, Languages, Problems.
Nondeterministic Finite Automata: Formal Definition, An Application: Text Search, Finite Automata with Epsilon-Transitions.
Deterministic Finite Automata: Definition of DFA, How a DFA Processes Strings, The Language of a DFA, Conversion of NFA with ε-transitions to NFA without ε-transitions, Conversion of NFA to DFA
Regular Expressions: Finite Automata and Regular Expressions, Applications of Regular Expressions, Algebraic Laws for Regular Expressions, Conversion of Finite Automata to Regular Expressions.
Pumping Lemma for Regular Languages: Statement of the pumping lemma, Applications of the Pumping Lemma.
Context-Free Grammars: Definition of Context-Free Grammars, Derivations Using a Grammar, Leftmost and Rightmost Derivations, the Language of a Grammar, Parse Trees, Ambiguity in Grammars and Languages.
Push Down Automata: Definition of the Pushdown Automaton, the Languages of a PDA, Equivalence of PDAs and CFGs, Acceptance by final state
Turing Machines: Introduction to Turing Machine, Formal Description, Instantaneous description, The language of a Turing machine
Undecidability: Undecidability, A Language that is Not Recursively Enumerable, An Undecidable Problem That is RE, Undecidable Problems about Turing Machines
Introduction: The structure of a compiler,
Lexical Analysis: The Role of the Lexical Analyzer, Input Buffering, Recognition of Tokens, The Lexical-Analyzer Generator Lex,
Syntax Analysis: Introduction, Context-Free Grammars, Writing a Grammar, Top-Down Parsing, Bottom-Up Parsing, Introduction to LR Parsing: Simple LR, More Powerful LR Parsers
Syntax-Directed Translation: Syntax-Directed Definitions, Evaluation Orders for SDDs, Syntax-Directed Translation Schemes, Implementing L-Attributed SDDs.
Intermediate-Code Generation: Variants of Syntax Trees, Three-Address Code
Run-Time Environments: Stack Allocation of Space, Access to Nonlocal Data on the Stack, Heap Management
Speed vs. Quality:
Compressing the schedule to deliver faster forces shortcuts such as skipped reviews and reduced testing, which directly degrade quality.
Cost vs. Quality:
Cutting the budget usually means fewer or less experienced people and less time for quality activities, which again degrades quality.
Scope Creep vs. Stability:
Trying to pack too many features into a release (expanding scope) inevitably puts pressure on the schedule and budget, leading to the same shortcuts that degrade quality.
The "Good, Fast, Cheap" Triad:
In this triad of good quality, fast delivery, and low cost, a project can realistically pick only two of the three. For instance: good and fast will not be cheap; good and cheap will not be fast; and fast and cheap will not be good.
SQA encompasses testing, but it is not only testing. Its primary goals are to prevent defects by improving the development process, to verify conformance to standards and requirements, and to give management visibility into product quality. SQA is therefore broader than testing alone, spanning reviews, audits, measurement, and process improvement across the entire life cycle.
SQA Vs. Software Testing:
While related, SQA and Software Testing are distinct concepts:
| Feature | Software Quality Assurance (SQA) | Software Testing |
| --- | --- | --- |
| Focus | Process management and the prevention of defects | Examination of the product and the identification of defects |
| Goal | Improving the development methodology | Ensuring that the product meets the user's requirements |
| Scope | Encompasses the entire SDLC (requirements, design, coding, testing, etc.). | Primarily a phase within the SDLC (systematically exercising the product). |
| Question | How can we build the software better? | Does the software being developed work as expected? |
Software Reviews & Formal Technical Reviews
Software Reviews are a general, umbrella term for quality control activities used throughout the software development life cycle (SDLC) to uncover errors and ensure compliance with requirements and standards before they become defects.
Formal Technical Reviews (FTRs) are a highly structured and systematic type of software review, often considered the most rigorous, that focuses on a detailed, peer-driven examination of a specific software work product (like requirements, design documents, or source code) to find defects.
Common Types of Software Reviews:
Given below are some of the common types of software reviews:
Informal Reviews (e.g., Desk Checks, Pair Programming): Quick, ad-hoc, and often undocumented. They provide fast feedback to catch obvious errors.
Walkthroughs: A semi-formal process where the author presents the work product to the review team, walking them through it, and the team asks questions and makes suggestions. The focus is often on sharing understanding and finding defects.
Technical Reviews: A general term for peer reviews focused on technical content; FTRs are the most formal version of this.
Inspections: The most formal type of review, often considered synonymous with FTR. It emphasizes individual preparation using checklists before a structured meeting, is strictly focused on defect finding, and collects metrics for process improvement.
Audits: Conducted by personnel external to the project team, focused on compliance with standards, regulations, or contractual agreements.
Software Reviews Vs. Formal Technical Reviews:
| Feature | Software Review (General Term) | Formal Technical Review (A Specific Type) |
| --- | --- | --- |
| Formality | Varies widely, ranging from informal (e.g., "buddy checks," quick code reviews) to formal (like FTRs or Inspections). | Highly formal and structured, following a predefined process (often based on standards like IEEE 1028). |
| Structure & Process | Can be unstructured (ad-hoc discussion) or semi-structured (walkthroughs). | Systematic process with defined steps: planning, preparation, a structured meeting led by a moderator, rework, and follow-up. |
| Key Objective | To serve as a quality control filter to uncover errors and ensure general quality throughout the SDLC. | Primarily to uncover defects (errors) in function, logic, and implementation, and to verify conformance to standards and requirements. |
| Documentation | Minimal to moderate, depending on the formality (e.g., informal notes). | Extensive documentation, including a Formal Technical Review Summary Report and a detailed list of issues. |
| Participants | Can involve just two people (informal) or a larger team. | A small, designated team (typically 3-5) with defined roles: Moderator, Producer (Author), and Reviewers/Recorder. |
| Focus | Can be technical, managerial (status, resources), or compliance-based (audit). | Strictly technical content of a specific work product. |
Multiple Choice Questions (MCQ)
1) Which of the following are indirect measures of software?
a) Quality
b) Efficiency
c) Accuracy
d) All the above
2) A tester needs which of the following skills?
a) Programming skills
b) Analytical skills
c) Designing skills
d) All the above
3) Verification is an activity of ________________________.
a) checking the product with respect to customer's expectations
b) checking the product with respect to specifications
c) checking the product with respect to the constraints of the project
d) All the above
4) Validation is an activity of ________________________.
a) checking the product with respect to customer's expectations
b) checking the product with respect to specifications
c) checking the product with respect to the constraints of the project
d) All the above
5) Unit testing is ___________________.
a) running the system with live data by the actual user
b) making sure that the new programs do in fact process certain transactions according to specifications
c) checking the logic of one or more programs in the candidate system
d) testing changes made in an existing or a new program
6) Path testing is a type of _____________________.
a) Black-box testing
b) White-box testing
c) Acceptance testing
d) Regression testing
7) Regression testing is done to ensure _____________________.
a) that new features work as expected
b) that the product doesn't cause compatibility problems
c) that the existing features continue to work correctly
d) better performance of the product
8) Which type of testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects?
a) Unit Testing
b) Regression Testing
c) Integration Testing
d) Thread-based Testing
9) Boundary Value Analysis should be based on _____________________.
a) Programming style
b) Software design
c) Requirements Specification
d) None of the above
(Note: Boundary Value Analysis is a software testing technique derived from the specifications or requirements of the system, focusing on testing the extreme ends of valid and invalid input ranges; a short sketch appears after this question list.)
10) Acceptance testing is done by _____________________.
a) Developer
b) Tester
c) Client/Customer
d) Designer
11) Acceptance testing involves ________________________.
a) running the system with live data by the actual user
b) making sure that the new programs do in fact process certain transactions according to specifications
c) checking the logic of one or more programs in the candidate system
d) testing changes made in an existing or a new program
12) Alpha and Beta Testing are forms of __________
a) Acceptance Testing
b) Integration Testing
c) System Testing
d) Unit Testing
13) Site for Alpha testing is _______________.
a) development company
b) client place
c) anywhere
d) None of the above
14) Site for Beta testing is _______________.
a) development company
b) client place
c) anywhere
d) None of the above
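As the note under Q9 says, Boundary Value Analysis derives test inputs from the requirements specification by probing the edges of each input range. A minimal sketch (the helper boundary_values and the sample age range are our own illustration):

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic BVA candidates for an integer range [lo, hi]:
    just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Example: an input field specified to accept ages 18..60 inclusive
print(boundary_values(18, 60))  # [17, 18, 19, 59, 60, 61]
```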
Descriptive Questions
Question 1:
Explain how Object Oriented software testing is different from conventional software testing.
| Feature | Conventional Testing | Object-Oriented Testing |
| --- | --- | --- |
| Primary Focus | Algorithms, functions, procedures, and data separation. | Objects, classes, their states, behaviors, and interactions. |
| Unit of Test | A module, function, procedure, or subroutine. | A class (which bundles data and operations) or a single method within a class. |
| Approach | Primarily algorithmic-centric (focuses on sequential execution and data flow). | Primarily data-centric (focuses on object state and manipulation). |
| Decomposition | Focuses on functional decomposition (breaking the system into sequential steps/functions). | Focuses on composition (assembling objects/classes to form the system). |
The distinct structure of OO programs changes the nature of testing at different levels:
Unit Testing:
Conventional Software:
A unit (a function or procedure) is typically tested in isolation; its output generally depends only on its input data.
Object-Oriented Software:
The smallest testable unit is the class. Because a method's behavior depends on the state of the object it belongs to, methods cannot be meaningfully tested in isolation from the class that encapsulates them.
Integration Testing:
Conventional Software:
Integration follows the program's hierarchical control structure, using top-down or bottom-up strategies.
Object-Oriented Software:
There is often no obvious hierarchical control structure, so integration is instead thread-based (integrating the set of classes that respond to one input or event) or use-based (integrating independent classes first, then the classes that depend on them).
i) Product Metrics (Focus on the Product)
These metrics evaluate the characteristics of the software product.
| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Defect Density | The number of defects (bugs) found in a specific amount of code or documentation. | Calculated as: Total Defects / Size (e.g., KLOC or Function Points). A lower number indicates better code quality. |
| Mean Time to Failure (MTTF) | The average time the system operates without any failure. | Measured in time units (e.g., hours). A higher MTTF indicates better reliability. |
| Mean Time to Repair (MTTR) | The average time required to diagnose and fix a failure and restore the system to full operation. | Measured in time units (e.g., minutes). A lower MTTR indicates better maintainability and response efficiency. |
| Test Coverage | The percentage of the source code that is executed by test cases (e.g., Line, Branch, or Function coverage). | Calculated as: (Tested Code Units / Total Code Units) × 100. A higher percentage suggests more thorough testing, but doesn't guarantee quality. |
| Code Complexity | Measures the complexity of the code's control flow, often using Cyclomatic Complexity. | A lower score (e.g., less than 10 for a function) suggests code that is easier to read, understand, and maintain. |
| Security Vulnerability Density | The number of security flaws or vulnerabilities per unit of code size. | A lower density indicates a more secure product. |
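The formulas in this table can be computed directly; a minimal sketch with made-up sample figures (every number below is illustrative, not from a real project):

```python
# Defect density: defects per KLOC
defect_density = 45 / 30.0                 # 1.5 defects/KLOC

# MTTF: average operating time between failures (hours)
uptimes = [120.0, 200.0, 95.0]
mttf = sum(uptimes) / len(uptimes)         # ~138.3 hours

# MTTR: average time to diagnose and fix a failure (minutes)
repairs = [30.0, 45.0, 20.0]
mttr = sum(repairs) / len(repairs)         # ~31.7 minutes

# Test coverage: tested code units over total code units, as a percentage
coverage = 820 / 1000 * 100                # 82.0 %

print(defect_density, round(mttf, 1), round(mttr, 1), coverage)
```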
ii) Process Metrics (Focus on the Process)
These metrics evaluate the effectiveness and efficiency of the processes used to develop and maintain the software.
| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Defect Removal Efficiency (DRE) | A measure of the effectiveness of the defect filtering process (like testing and reviews). | Calculated as: Defects found before release / Total defects (found before and after release). A higher DRE (closer to 1 or 100%) indicates effective internal quality assurance. |
| Defect Leakage | The number of defects that "leak" or escape the testing process and are found by the customer/end-user after deployment. | A lower number indicates better testing effectiveness. |
| Change Failure Rate | The percentage of deployments to production that result in a degraded service or require immediate remediation (e.g., a hotfix or rollback). | A lower rate indicates safer and more stable deployment and release practices. |
| Bug Reopen Rate | The percentage of defects that are logged as "fixed" but are later reopened because the fix was inadequate or incomplete. | A lower rate indicates better quality of defect resolution. |
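Likewise for the process metrics; a minimal sketch with illustrative counts:

```python
# DRE: defects caught before release over all defects ever found
pre_release, post_release = 180, 20
dre = pre_release / (pre_release + post_release)        # 0.90 -> 90%

# Change failure rate: failed deployments over total deployments
change_failure_rate = 4 / 50 * 100                      # 8.0 %

# Bug reopen rate: reopened fixes over all logged fixes
bug_reopen_rate = 14 / 200 * 100                        # 7.0 %

print(f"DRE={dre:.0%}, CFR={change_failure_rate}%, reopen={bug_reopen_rate}%")
```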
iii) Project Metrics (Focus on Project Management)
These metrics relate to the project characteristics, resource allocation, and overall success.
| Metric | Definition | Example & Interpretation |
| --- | --- | --- |
| Customer Satisfaction (CSAT/NPS) | Measures how satisfied users are with the software product and their overall experience. | Derived from surveys (e.g., Net Promoter Score - NPS, or Customer Satisfaction Score - CSAT). Higher scores directly reflect the perceived external quality. |
| Lead Time for Changes | The time it takes for a code change to go from initial commit to successfully deployed and running in production. | A shorter time indicates high agility, efficient Continuous Integration/Continuous Delivery (CI/CD), and fast feedback loops. |
| Cost of Quality (CoQ) | The total cost associated with maintaining or achieving quality, often broken into Prevention, Appraisal (testing/reviews), Internal Failure (bugs found pre-release), and External Failure (bugs found post-release). | A desirable trend is to shift costs from Failure to Prevention and Appraisal. |