
This lesson describes in more detail why complete testing is virtually impossible. Find the document about the topic mentioned in the lesson in the Resources section.

Learning Objectives

This section discusses why complete software testing is impossible given the number of test cases it would require. It also explores different testing strategies and the challenges of testing variables, user input, and invalid input, and highlights the need to prioritize and optimize testing efforts to achieve adequate thoroughness.

Core Topics and Takeaways

In this video you will learn about:

  • Distinct tests

  • What would it mean to test everything

  • Human inputs

  • Timeouts

  • Input variables

  • Extreme inputs
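
To make the scale of "testing everything" concrete, here is a quick back-of-the-envelope sketch (my illustration, not from the lesson; the test rate is an assumed figure):

```python
# Back-of-the-envelope: how many tests would "testing everything" take?
# Assumed figure: a test harness that runs one million tests per second.

TESTS_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def exhaustive_tests(num_inputs: int, bits: int = 32) -> int:
    """Number of distinct input combinations for num_inputs variables of the given width."""
    return (2 ** bits) ** num_inputs

for n in (1, 2):
    total = exhaustive_tests(n)
    seconds = total / TESTS_PER_SECOND
    print(f"{n} 32-bit input(s): {total:,} tests (~{seconds:,.0f} seconds)")
```

One 32-bit input is about 4.3 billion tests, roughly 1.2 hours at the assumed rate; adding a second 32-bit input raises that to 2^64 tests, on the order of 585,000 years.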

Video: https://www.youtube.com/watch?v=HVLPCTNbj3Q


Video Highlights

Each topic below is summarized with its key concepts and its location in the video.

This section discusses the impossibility of complete testing and introduces test estimation. (00:01)

  • Complete structural coverage does not mean complete testing.

  • To achieve complete testing, all distinct tests would have to be run.

  • Testing all individual variables, combinations of values, and hardware/software configurations would require an enormous number of tests.

  • The usual approach is to test each variable with a few values, covering normal operation and the handling of values that are too big, too small, or otherwise strange.
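
The "few values per variable" approach can be sketched as a small boundary-value generator. This is a hypothetical helper for illustration, not code from the lesson:

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """A typical 'few values' sample for an integer field with valid range [lo, hi]:
    normal values plus too-small, too-large, and edge cases."""
    candidates = [
        lo - 1,          # just below the valid range (too small)
        lo,              # smallest valid value
        lo + 1,          # just inside the lower edge
        (lo + hi) // 2,  # a typical mid-range value
        hi - 1,          # just inside the upper edge
        hi,              # largest valid value
        hi + 1,          # just above the valid range (too big)
    ]
    # Preserve order while dropping duplicates (tiny ranges produce repeats).
    seen: set[int] = set()
    return [v for v in candidates if not (v in seen or seen.add(v))]

print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```

Seven tests per variable instead of billions; the trade-off, as the video stresses, is that this sample is representative rather than complete.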

Doug Hoffman illustrated the incompleteness of testing by running all four billion possible inputs through a function on a very fast computer; the run took only six minutes. (02:09)

  • Experienced testers may use domain testing techniques, but these do not produce a complete set of tests.

  • Most people would test the smallest and largest numbers, powers of 2, and some random or favorite numbers.

  • Hoffman instead ran all four billion inputs through the function on a MasPar computer, and it took only six minutes.

  • Out of four billion tests, two failed.
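
Hoffman's exhaustive run can be imitated at toy scale. The sketch below (my construction, not Hoffman's actual experiment) checks an integer square-root function against its defining property for every 16-bit input, which takes well under a second:

```python
import math

def isqrt16(x: int) -> int:
    """Integer square root: the kind of function Hoffman tested exhaustively."""
    return math.isqrt(x)

# Exhaustive oracle check over all 2**16 inputs:
failures = []
for x in range(2 ** 16):
    r = isqrt16(x)
    if not (r * r <= x < (r + 1) * (r + 1)):  # defining property of an integer sqrt
        failures.append(x)

print(f"ran {2 ** 16} tests, {len(failures)} failures")
```

At 32 bits the same loop runs 65,536 times longer, which is why Hoffman needed a massively parallel machine to finish in minutes.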

Finding every bug in software through exhaustive testing is impossible because of the enormous number of possible tests. (04:19)

  • Exhaustive testing requires an impractical amount of time and resources.

  • Black-box testing has limitations, as it cannot consider all possible inputs and scenarios.

  • Input problems, such as editing while typing or timing out, can lead to bugs.

  • Result variables, such as an overflow in multiplication, also pose risks in testing.
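
The result-variable risk can be demonstrated directly. The sketch below simulates a 32-bit unsigned result variable in Python (whose own integers never overflow), showing two modest, individually valid inputs whose product wraps around:

```python
MASK32 = 0xFFFF_FFFF  # simulate a 32-bit unsigned result variable

def mul32(a: int, b: int) -> int:
    """Multiplication whose result is stored in a 32-bit variable."""
    return (a * b) & MASK32

# Each input is an ordinary, in-range value, but the product exceeds 2**32:
print(mul32(70_000, 70_000))   # wraps: 4_900_000_000 mod 2**32 = 605_032_704
print(70_000 * 70_000)         # true mathematical result: 4_900_000_000
```

Testing each input variable alone would never reveal this failure; only certain combinations drive the result variable out of range.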

Underflows, easter eggs, and invalid inputs can all cause program failures, but testing for every possibility is impossible. (06:31)

  • Underflows can cause failures when no data is entered in a data entry dialog.

  • Easter eggs are hidden surprises in programs that can be triggered by typing a specific sequence of characters.

  • Testing for easter eggs at the user interface is challenging due to the vast number of possible keystroke sequences.

  • Clearly invalid inputs, such as entering a zero into a data entry field, can have unexpected consequences.

  • It is impossible to test for all the unexpected inputs that users might provide.
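
The keystroke-sequence explosion behind the easter-egg problem can be quantified with a rough count (the keyboard size and sequence lengths are assumed figures, not from the video):

```python
# Rough count of distinct keystroke sequences a tester would have to try
# to rule out hidden triggers. Assumed figure: ~100 distinct keys.

KEYS = 100

def sequences_up_to(n: int) -> int:
    """Number of keystroke sequences of length 1 through n."""
    return sum(KEYS ** k for k in range(1, n + 1))

for n in (4, 8, 12):
    print(f"up to {n} keystrokes: {sequences_up_to(n):,} sequences")
```

Even short triggers are effectively unfindable by brute force at the user interface: a 12-keystroke easter egg hides among roughly 10^24 possible sequences.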

Learning Highlights

This lecture covers software testing foundations, addressing the impossibility of achieving complete testing due to the vast number of variables, combinations, and user interactions. The example of testing a function that reads a 32-bit word illustrates the limits of exhaustive testing. Doug Hoffman's case study, in which he ran every 32-bit input through a function on a MasPar computer, shows that even billions of tests may not be exhaustive, and that overlooked inputs, unexpected user behaviors, or easter eggs can lead to critical issues. The lecture stresses the need for strategic, representative testing and acknowledges the impracticality of testing every possible scenario.

Read more about the impossibility of complete testing here.