This lesson describes ways to decide whether a behavior is a bug and how to explain that decision to the team.

Learning Objectives

This section introduces the concept of oracles in software testing. It highlights the importance of comparing actual results to expected results, and the difficulty of judging how significant a bug is, a decision that depends on context and human judgment. It also shows how consistency heuristics can support the argument that a program is not functioning correctly, and why testers need to do their own research to justify their perceptions.

Core Topics and Takeaways

  • No specification is complete

  • How to judge how serious a bug is

  • How to explain to others that it’s a bug

  • Where to look for information and expectations about software

Video: https://www.youtube.com/watch?v=8guqhIddoyQ

Video Highlights

Each highlight below lists the topic and its key concepts, prefixed with its location in the video.

[00:01] Example of issues to be tested: OpenOffice and WordPad display font sizes inconsistently, which can be a serious problem for users.

  • OpenOffice displays some different font sizes as the same (see the sketch after this entry).

  • WordPad also doesn't handle differences in size consistently.

  • Side-by-side comparisons with a reference program can help in making more thorough inspections.

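One way to make that kind of font-size inspection repeatable is to measure rendered character heights directly. The sketch below is not part of the course material; it assumes the Pillow imaging library and a locally available TrueType font file (the DejaVuSans.ttf path is a placeholder), and it simply flags point sizes that happen to render at identical pixel heights so a tester knows where to look more closely.

```python
from PIL import ImageFont

FONT_PATH = "DejaVuSans.ttf"  # hypothetical path: point this at a real .ttf file

def pixel_height(point_size: int, char: str = "A") -> int:
    """Pixel height of `char` rendered at the given point size."""
    font = ImageFont.truetype(FONT_PATH, point_size)
    left, top, right, bottom = font.getbbox(char)
    return bottom - top

def indistinguishable_sizes(sizes):
    """Return (smaller, larger, height) for point sizes that render identically."""
    heights = {size: pixel_height(size) for size in sizes}
    return [(a, b, heights[a])
            for a in sizes for b in sizes
            if a < b and heights[a] == heights[b]]

if __name__ == "__main__":
    for a, b, h in indistinguishable_sizes(range(8, 16)):
        print(f"{a}pt and {b}pt both render {h}px tall -- worth a closer look")
```

Adjacent point sizes can legitimately round to the same pixel height at small sizes, so the output is a list of candidates for human inspection, not a list of confirmed bugs.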

[02:27] The video discusses the challenges of testing fonts and formatting in different programs.

  • OpenOffice, Word, and professional desktop publishing programs are designed to provide professional-quality formatting.

  • Testing fonts and formatting requires checking with every font and character, as well as interactions with other variables.

  • Measuring character height in points is complicated, so simpler methods like comparing character sizes within the same typeface are often used.

  • The oracle idea is to decide whether a program passes or fails a test by comparing the actual result to an expected result, but setting accurate expectations is itself a challenge (a minimal sketch follows this entry).

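To make the oracle idea concrete, here is a minimal, hypothetical sketch; the word_count example and the check helper are invented for illustration and are not taken from the video. A test compares the actual result of the program under test with an expected result, and the expectation itself can be wrong or ambiguous.

```python
def word_count(text: str) -> int:
    """Toy 'program under test' -- invented for this illustration."""
    return len(text.split())

def check(actual, expected, note=""):
    """The oracle comparison: pass/fail by matching actual against expected."""
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{verdict}: actual={actual!r} expected={expected!r} {note}")

# The expected values are hand-computed; they can be wrong or ambiguous,
# which is exactly the 'setting accurate expectations' challenge.
check(word_count("plain text here"), 3)
check(word_count("two  spaces"), 2, note="(is a double space one separator or two?)")
check(word_count(""), 0)
```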

[04:54] The section discusses the challenges of relying on human judgment, credibility, and consistency when determining if a program is working correctly.

  • An oracle rarely helps in determining how serious a bug is.

  • The credibility problem arises when reporting a program's incorrect behavior.

  • Consistency heuristics can be used to compare a program's results with expectations (a sketch follows this entry).

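One common consistency heuristic is consistency with a comparable product. The sketch below is an invented illustration of that idea, not the course's own tooling: run the same inputs through the product under test and a comparable implementation, and report disagreements as candidate bugs to investigate rather than as automatic failures, since either program (or both) could be wrong.

```python
def title_case_under_test(text: str) -> str:
    """Stand-in for the product under test (invented example)."""
    return text.title()

def title_case_reference(text: str) -> str:
    """Stand-in for a comparable product used as a reference."""
    return " ".join(word.capitalize() for word in text.split(" "))

def report_inconsistencies(inputs):
    """Flag disagreements as candidate bugs, not automatic failures."""
    for text in inputs:
        ours = title_case_under_test(text)
        theirs = title_case_reference(text)
        if ours != theirs:
            print(f"Inconsistent for {text!r}: {ours!r} vs {theirs!r} "
                  "-- investigate which expectation, if either, is right")

report_inconsistencies(["hello world", "it's a test", "ten o'clock news"])
```

Inputs containing apostrophes show the two programs disagreeing; the disagreement itself does not say which behavior is correct, which is why the heuristic produces a report rather than a verdict.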

[07:22] Programmers should not be held solely responsible for defects in a program; responsibility also lies with those who provide the specifications.

  • Programmers may claim they did what they were told to do, but the program can still be defective.

  • To challenge this perception, testers must conduct their own research and find sources of information that either support or refute the impression.

  • Some companies may discourage testers from evaluating product design, focusing solely on implementation.

  • It is important to choose arguments carefully and ensure the validity of sources before speaking up.

  • Sharing valuable insights can benefit both the company and the individual in the long run.

Learning Highlights

This lesson focuses on the importance of comparing results to expectations and to reference programs, treating inconsistencies as potential bugs. It introduces the idea of oracle heuristics and acknowledges the role of human judgment in evaluating test results. The font-size example demonstrates the need for thorough testing across fonts, characters, and their interactions with other variables. The discussion also covers the credibility problem in reporting bugs and the value of consistency heuristics in guiding test design, concluding that understanding a product's purpose is crucial for effective testing and bug reporting.