Software Engineering Testing (Concepts and Principles)

Objectives

  • To introduce the concepts and principles of testing
  • To summarize the debugging process
  • To consider a variety of testing and debugging methods

Testing sits alongside the other core activities of software engineering: analysis, design, code, test.

Software Testing

  • Narrow view: testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
      • A good test case is one that has a high probability of finding an as-yet-undiscovered error.
      • A successful test is one that uncovers an as-yet-undiscovered error.
  • Broad view: testing is the process used to ensure that the software conforms to its specification and meets the user requirements.
      • Validation: are we building the right product?
      • Verification: are we building the product right?
      • Testing takes place at all stages of software engineering.

Testing Principles

  • All tests should be traceable to customer requirements.
  • Tests should be planned long before testing begins.
  • 80% of errors occur in 20% of classes.
  • Testing should begin "in the small" and progress toward testing "in the large".
  • Exhaustive testing is not possible.
  • To be most effective, testing should be conducted by an independent third party.

Who Tests the Software?

  • The developer understands the system, but will test it gently and is driven by delivery.
  • An independent tester must learn about the system, but will attempt to break it and is driven by quality.

Software Testability

Software that is easy to test exhibits:

  • Operability: the better it works, the more efficiently it can be tested. Bugs are easier to find in software that at least executes.
  • Observability: what you see is what you test. The results of each test case are readily observed.
  • Controllability: the better we can control the software, the more testing can be automated and optimized, and the easier it is to set up test cases.
  • Decomposability: by controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting. Testing can be targeted.
  • Simplicity: the less there is to test, the more quickly we can test it.
    Reducing complex architecture and logic simplifies tests.
  • Stability: the fewer the changes, the fewer the disruptions to testing. Changes disrupt test cases.
  • Understandability: the more information we have, the smarter we will test.

Test Case Design

  • A test case is a controlled experiment that tests the system.
  • Process:
      • Objectives: to uncover errors
      • Criteria: in a complete manner
      • Constraints: with a minimum of effort and time
  • Test cases are often badly designed, in an ad hoc fashion.
  • "Bugs lurk in corners and congregate at boundaries." Good test case design applies this maxim.

Exhaustive Testing (infeasible)

  • Consider two nested loops containing four if..then..else statements, where each loop can execute up to 20 times: there are about 10^14 possible paths.
  • If we execute one test per millisecond, it would take 3,170 years to test this program (10^14 ms = 10^11 s, and 10^11 s / 31,536,000 s per year ≈ 3,170 years).

Selective Testing (feasible)

  • Test a carefully selected set of execution paths. This cannot be comprehensive.

Testing Methods

  • Black box: examines fundamental interface aspects without regard to internal structure.
  • White (glass) box: closely examines the internal procedural detail of system components.
  • Debugging: fixing the errors identified during testing.

[1] White-Box Testing

Goal: ensure that all statements and conditions have been executed at least once.

Derive test cases that:

  • exercise all independent execution paths;
  • exercise all logical decisions on both their true and false sides;
  • execute all loops at their boundaries and within their operational bounds;
  • exercise internal data structures to ensure their validity.

Why Cover All Paths?

  • Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed.
  • We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis.
  • Typographical errors are random; it is likely that untested paths will contain some.

Basis Path Testing

  • Provides a measure of the logical complexity of a method and a guide for defining a basis set of execution paths.
  • Represent control flow using flow graph notation: nodes represent processing, arrows represent control flow.

Cyclomatic Complexity

Compute the cyclomatic complexity V(G) of a flow graph G in any of three ways:

  • the number of simple predicates (decisions) + 1, or
  • V(G) = E - N + 2, where E is the number of edges and N the number of nodes, or
  • the number of enclosed areas + 1.

For the example flow graph with nodes 1-8, V(G) = 4.

Cyclomatic Complexity and Errors

  • A number of industry studies have indicated that the higher V(G), the higher the probability of errors.

Basis Path Testing (continued)

  • V(G) is the number of linearly independent paths through the program (each path has at least one edge not covered by any other path).
  • Derive a basis set of V(G) independent paths. For the example graph with nodes 1-8:
      • Path 1: 1-2-3-8
      • Path 2: 1-2-3-8-1-2-3
      • Path 3: 1-2-4-5-7-8
      • Path 4: 1-2-4-6-7-8
  • Prepare test cases that force the execution of each path in the basis set.

Basis Path Tips

  • You don't need a flow graph, but it helps in tracing program paths.
  • Count each simple logical test; compound tests (e.g. switch statements) count as 2 or more.
  • Basis path testing should be applied to critical modules only.
  • When preparing test cases, use boundary values for the conditions.

Other White-Box Methods

  • Condition testing: exercises the logical (Boolean) conditions in a program.
  • Data flow testing: selects test paths according to the locations of the definitions and uses of variables in a program.
  • Loop testing: focuses on the validity of loop constructs.

Loop Testing

There are four classes of loops: simple loops, nested loops, concatenated loops, and unstructured loops.

Simple Loops

Test cases for simple loops, where n is the maximum number of allowable passes:

  • skip the loop entirely;
  • only one pass through the loop;
  • two passes through the loop;
  • m passes through the loop (m < n);
  • (n-1), n and (n+1) passes through the loop.

Nested Loops

Test cases for nested loops:

  1. Start at the innermost loop. Set all the outer loops to their minimum iteration parameter values.
  2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
  3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.

Concatenated Loops

  • If the loops are independent of one another, treat each as a simple loop; otherwise, treat them as nested loops.

[2] Black-Box Testing

Complementary to white-box testing. Derive external conditions (requirements, events, inputs, outputs) that fully exercise all functional requirements.

Black Box Strengths

Attempts to find errors in the following categories:

  • incorrect or missing functions;
  • interface errors;
  • errors in data structures or external database access;
  • behaviour or performance errors;
  • initialization or termination errors.

Black-box testing is performed during the later stages of testing. There are a variety of black-box techniques, e.g. comparison testing (develop independent versions of the system) and orthogonal array testing (sampling of an input domain that has several variables).

Black Box Methods

  • Equivalence partitioning: divide the input domain into classes of data; each test case then uncovers whole classes of errors.
      • Valid data: user-supplied commands, file names, graphical data (e.g. mouse picks).
      • Invalid data: data outside the bounds of the program, physically impossible data, a proper value supplied in the wrong place.
  • Boundary value analysis: more errors tend to occur at the boundaries of the input domain, so select test cases that exercise bounding values.
      • Example: an input condition specifies a range bounded by values a and b.
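The two black-box methods above can be sketched in code. This is a minimal illustration, assuming a hypothetical accept function that takes integers in a range [a, b] (here 1..100); neither the function nor the range comes from the notes:

```python
def accept(x: int, a: int = 1, b: int = 100) -> bool:
    """Hypothetical system under test: accepts integers in the range [a, b]."""
    return a <= x <= b

def boundary_values(a: int, b: int) -> list[int]:
    """Boundary value analysis: a and b themselves, plus the values
    just below and just above each bound."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def partition_representatives(a: int, b: int) -> dict[str, int]:
    """Equivalence partitioning: one representative input per class,
    so each test case stands in for a whole class of errors."""
    return {
        "below range (invalid)": a - 10,
        "within range (valid)": (a + b) // 2,
        "above range (invalid)": b + 10,
    }

a, b = 1, 100
for label, x in partition_representatives(a, b).items():
    print(label, x, accept(x))
for x in boundary_values(a, b):
    print(x, accept(x))
```

Each partition representative stands in for a whole class of inputs, while the boundary list concentrates tests where errors congregate.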
Test cases should be designed with the values a and b, and with values just above and just below a and b.

[3] Debugging

  • Testing is a structured process that identifies an error's symptoms.
  • Debugging is a diagnostic process that identifies an error's source.
  • The debugging cycle: executing test cases yields results; unexpected results suggest suspected causes; further tests narrow these to identified causes; corrections are applied and checked with regression tests, and new test cases are added.

Debugging Effort

Debugging effort divides into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.

Definition (regression tests): the re-execution of a subset of test cases to ensure that changes do not have unintended side effects.

Symptoms and Causes

The symptom of an error may be far removed from its cause:

  • the symptom and the cause may be geographically separated;
  • the symptom may disappear when another problem is fixed;
  • the cause may be due to a combination of non-errors;
  • the cause may be due to a system or compiler error;
  • the cause may be due to assumptions that everyone believes;
  • the symptom may be intermittent.

Not All Bugs Are Equal

  • Damage ranges from mild, through annoying, disturbing, serious, extreme and catastrophic, to infectious.
  • Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

Debugging Techniques

  • Brute force: use when all else fails. Memory dumps and run-time traces yield a mass of information amongst which the error may be found.
  • Backtracking: trace the source code backwards from the error to its source. Works in small programs where there are few backward paths.
  • Cause elimination: create a set of cause hypotheses, then use error data (or further tests) to prove or disprove each hypothesis.
  • But debugging is an art: some people have innate prowess and others don't.

Debugging Tips

  • Don't immediately dive into the code; think about the symptom you are seeing.
  • Use tools (e.g. dynamic debuggers) to gain further insight.
  • If you are stuck, get help from someone else.
  • Ask these questions before fixing the bug:
      • Is the cause of the bug reproduced in another part of the program?
      • What bug might be introduced by the fix?
      • What could have been done to prevent the bug in the first place?
  • Be absolutely sure to conduct regression tests when you do fix the bug.
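The advice above can be illustrated with a small regression suite. This is a hedged sketch: the days_in_month function and its leap-year bug are hypothetical examples, not taken from the notes:

```python
def is_leap(year: int) -> bool:
    # Hypothetical original bug: `year % 4 == 0` alone, which
    # misclassifies 1900 as a leap year. The fix adds the century rules.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_month(month: int, year: int) -> int:
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and is_leap(year):
        return 29
    return days[month - 1]

# Regression suite: re-execute a subset of existing test cases to check
# that the fix has no unintended side effects, plus a new test case
# that reproduces the original symptom.
regression_cases = [
    ((1, 2021), 31),   # existing case: path unaffected by the fix
    ((2, 2020), 29),   # existing case: ordinary leap year
    ((2, 2021), 28),   # existing case: non-leap year
    ((2, 1900), 28),   # new case: the input that exposed the bug
]
for (month, year), expected in regression_cases:
    assert days_in_month(month, year) == expected
print("all regression cases pass")
```

The suite re-executes existing cases alongside a new case that reproduces the original symptom, so any unintended side effect of the fix shows up immediately.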