Basics of Computing

This lesson covers the basics of how numbers are stored in computers and the fundamental problems that arise from those representations.

Learning Objectives

In this section you will learn how computers store and use data and how they execute instructions. It covers topics such as fixed-point and floating-point arithmetic, explaining how numbers are represented and the limitations of floating-point operations. The lecture emphasizes the importance of understanding these concepts in order to assess the coverage of tests in software testing.

 

Core Topics and Takeaways

  • Decimal numbers

  • Overflow

  • Fixed point representation

  • Floating point

  • Rounding

Video Highlights

Topic

Key Concepts

Video Location

This section provides a refresher on the basics of computing and emphasizes the importance of understanding how computers store and use data in order to comprehend structural code coverage measures.

  • Two primary references on structural code coverage measures are mentioned.

  • The lecture encourages further reading on the foundations of computing, with Charles Petzold's book being recommended.

  • The section explains how computers store data and do arithmetic using the familiar base-10 (decimal) number system.

  • The concept of overflow is introduced: when adding numbers produces a carry that no longer fits in the available digits, the extra digit is lost (see the sketch below).

00:01
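The following is a minimal Python sketch of the overflow idea, not taken from the lecture: it assumes a store that is four decimal digits wide, so any carry out of the top digit is lost.

DIGITS = 4  # illustrative width: four decimal digits, so values 0..9999

def add_fixed_width(a: int, b: int) -> int:
    """Add two non-negative integers, keeping only the lowest DIGITS digits."""
    total = a + b
    return total % (10 ** DIGITS)  # any carry above the top digit is silently lost

print(add_fixed_width(1234, 765))  # 1999 -- fits, no overflow
print(add_fixed_width(9999, 1))    # 0    -- the carry overflows and is lost

Real hardware overflows in binary rather than decimal, but the effect is the same: results silently wrap around once they no longer fit.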

Fixed-point arithmetic allows for representing fractions with a fixed decimal point position for every number.

  • In fixed-point arithmetic, there is a fixed maximum number of digits.

  • Fixed-point arithmetic allows for representing fractions, such as dollars and cents.

  • The decimal point sits in the same position for every number in fixed-point arithmetic (illustrated in the sketch after this entry).

02:23
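Below is a small Python sketch of fixed-point money arithmetic, assuming amounts are stored as whole cents; the helper names are illustrative, not from the lecture.

def to_cents(dollars: int, cents: int) -> int:
    """Store an amount as an integer number of cents (two implied decimal places)."""
    return dollars * 100 + cents

def format_amount(total_cents: int) -> str:
    return f"${total_cents // 100}.{total_cents % 100:02d}"

price = to_cents(19, 99)           # $19.99 stored as 1999
tax = to_cents(1, 65)              # $1.65  stored as 165
print(format_amount(price + tax))  # $21.64 -- exact, because the point never moves

Because every value shares the same implied decimal position, addition is plain integer addition and introduces no rounding error.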

Floating point arithmetic represents numbers with two parts: the mantissa (significant digits) and the exponent (order of magnitude).

  • The mantissa includes all the significant digits of a number.

  • To get the actual number, the mantissa is multiplied by 10 to the power of the exponent.

  • Floating-point numbers can represent a wide range of values, from numbers in the billions down to tiny fractions on the order of quadrillionths.

  • To simplify reading floating point numbers, the mantissa is written with a decimal point after the most significant digit.

  • The exponent is stored separately from the mantissa to accommodate larger numbers.

04:48
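A short Python sketch of the mantissa/exponent split, using base-10 scientific notation as in the lecture's explanation (actual hardware works in base 2); the helper split_scientific is an illustrative assumption.

def split_scientific(x: float) -> tuple[str, int]:
    """Split a value into a mantissa string (decimal point after the first
    significant digit) and the power of ten that reconstructs it."""
    mantissa, exponent = f"{x:e}".split("e")  # e.g. "3.141590e+09"
    return mantissa, int(exponent)

for value in (3_141_590_000.0, 0.000001234):
    m, e = split_scientific(value)
    print(f"{value} = {m} x 10^{e}")
# 3141590000.0 = 3.141590 x 10^9
# 1.234e-06 = 1.234000 x 10^-6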

Floating-point arithmetic in programming introduces rounding error, which makes it difficult to distinguish between numbers that differ only slightly.

  • The limited number of significant digits in floating-point arithmetic can lead to rounding errors.

  • The magnitude of the error depends on whether a number is rounded up or down.

  • Rounding 1.99975 up to 2 introduces an error of 0.00025, while storing 2 as 1.9999 introduces an error of only 0.0001, so the former is the bigger mistake in terms of magnitude (computed in the sketch below).

07:12
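The two errors above can be checked directly; the short Python sketch below also shows the same limitation in ordinary binary floating point (the 0.1 + 0.2 case is an added illustration, not from the lecture).

error_up = abs(2.0 - 1.99975)    # rounding 1.99975 up to 2 loses about 0.00025
error_down = abs(2.0 - 1.9999)   # storing 2 as 1.9999 loses about 0.0001
print(error_up > error_down)     # True -- the first is the bigger mistake

# The same limited precision shows up in everyday binary floating point:
print(0.1 + 0.2 == 0.3)          # False -- both sides carry rounding error
print(0.1 + 0.2)                 # 0.30000000000000004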

Learning Highlights

The section emphasizes understanding computing basics, such as data storage and arithmetic execution, in order to comprehend structural code coverage measures. The lecture covers arithmetic fundamentals, fixed-point arithmetic, and floating-point arithmetic. It explores the limitations of floating-point arithmetic, in particular rounding errors, highlighting how insufficient digit storage can obscure real differences between numbers and why grasping these concepts matters for effective testing.

Core Learning Concepts

Mantissa

The mantissa in a floating-point number is crucial for representing the precision of the number. It allows for the accurate representation of a wide range of values, especially very large or very small numbers, in scientific and engineering calculations.

In software testing, the mantissa of a floating-point number may matter when testing calculations or algorithms that involve precise numerical values, such as financial calculations, scientific simulations, or engineering applications. For example, if a software system involves complex calculations where the precision of floating-point numbers is critical, errors related to the mantissa could lead to incorrect results and impact the overall accuracy of the system.
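As a hypothetical illustration of how this affects tests, the Python sketch below compares a computed price against an expected value with a tolerance instead of exact equality; the apply_discount function and the tolerance are assumptions made up for the example.

import math

def apply_discount(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    return price * (1.0 - rate)

result = apply_discount(19.99, 0.1)
print(result == 17.991)                            # may be False because of rounding
print(math.isclose(result, 17.991, rel_tol=1e-9))  # True -- comparison with a tolerance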

Floating-Point Number

A floating-point number, in programming, is a numerical representation that allows for a wide range of values, including very large or very small numbers, by using a fixed number of digits to represent the significant figures and the position of the decimal point. This method enables the representation of real numbers in a computer's memory and is commonly used for tasks requiring precision and a wide range of values.
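A quick way to see these limits in practice is Python's sys.float_info, which describes the IEEE 754 double type used for Python floats; the 1e16 example at the end is an illustrative assumption.

import sys

print(sys.float_info.dig)   # 15 -- decimal digits that are always preserved
print(sys.float_info.max)   # about 1.8e+308
print(sys.float_info.min)   # about 2.2e-308 (smallest normal positive value)

# The digit budget is shared across all magnitudes:
print(1e16 + 1 == 1e16)     # True -- at this scale, adding 1 is below the precision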
