Basics of Computing
This lesson covers the basics of how numbers are stored in computers and the fundamental problems that arise from these representations.
Learning Objectives
In this section you will learn how computers store and use data and how they execute instructions. It covers fixed-point and floating-point arithmetic, explaining how numbers are represented and where floating-point operations have limitations. The lecture emphasizes that understanding these concepts is necessary for assessing structural code coverage in software testing.
Core Topics and Takeaways
Decimal numbers
Overflow
Fixed point representation
Floating point
Rounding
Video Highlights
Topic | Key Concepts | Video Location |
---|---|---|
Introduction | This section provides a refresher on the basics of computing and emphasizes that understanding how computers store and use data is needed to comprehend structural code coverage measures. | 00:01 |
Fixed-point arithmetic | Fixed-point arithmetic represents fractions with the decimal point in the same fixed position for every number (a fixed-point sketch follows below the table). | 02:23 |
Floating-point arithmetic | Floating-point arithmetic represents a number in two parts: the mantissa (the significant digits) and the exponent (the order of magnitude). | 04:48 |
Rounding | Floating-point arithmetic introduces rounding errors, so numbers whose difference is smaller than the stored precision cannot be distinguished. | 07:12 |
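To make the fixed-point row above concrete, here is a minimal Python sketch (not taken from the lecture; the SCALE constant and the helper names are illustrative) that keeps the decimal point fixed two places from the right by storing amounts as integer cents.

```python
# Fixed-point sketch: store amounts as integer cents, i.e. the decimal
# point is fixed two places from the right for every number.
# SCALE, to_cents and to_euros are illustrative names, not from the lecture.

SCALE = 100  # two digits after the fixed decimal point

def to_cents(amount: float) -> int:
    """Convert a decimal amount into the fixed-point integer representation."""
    return round(amount * SCALE)

def to_euros(cents: int) -> float:
    """Convert back for display purposes."""
    return cents / SCALE

price = to_cents(19.99)   # 1999
tax = to_cents(3.80)      # 380
total = price + tax       # integer addition, therefore exact
print(to_euros(total))    # 23.79
```

Because only integers are added, the result is exact as long as the values stay within the integer range; the trade-off is that the precision is fixed in advance by the chosen scale.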
Learning Highlights
The section emphasises understanding computing basics, such as data storage and arithmetic execution, in order to comprehend structural code coverage measures. The lecture covers arithmetic fundamentals, fixed-point arithmetic, and floating-point arithmetic. It then explores the limitations of floating-point arithmetic, specifically rounding errors: because only a limited number of digits is stored, small real differences between numbers may not be discernible, which is why grasping these concepts is important for effective testing.
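As a quick illustration of the rounding problem described above, the following Python snippet shows that two decimal fractions without an exact binary representation cannot be compared exactly; the tolerance used with math.isclose is only an example value, not one given in the lecture.

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their sum is
# only approximately 0.3.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# Tests therefore compare floating-point results with a tolerance
# instead of exact equality.
print(math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9))  # True
```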
Core Learning Concepts
Mantissa
The mantissa of a floating-point number holds its significant digits and therefore determines its precision. Together with the exponent, it allows a wide range of values, including very large and very small numbers, to be represented accurately enough for scientific and engineering calculations.
In software testing, the mantissa of a floating-point number may matter when testing calculations or algorithms that involve precise numerical values, such as financial calculations, scientific simulations, or engineering applications. For example, if a software system involves complex calculations where the precision of floating-point numbers is critical, errors related to the mantissa could lead to incorrect results and impact the overall accuracy of the system.
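A hedged sketch of why this matters in practice: the snippet below sums one cent ten thousand times, assuming the usual IEEE 754 double with a 53-bit mantissa; the exact float output may vary slightly by platform.

```python
from decimal import Decimal

# With a 53-bit mantissa, adding 1 to 1e16 cannot be represented:
print(1e16 + 1 == 1e16)     # True

# Summing one cent ten thousand times: every addition is rounded to the
# mantissa's precision, and the small errors accumulate.
total_float = sum(0.01 for _ in range(10_000))
print(total_float)          # roughly 100.00000000001425, not exactly 100

# A decimal type with explicit precision avoids this particular problem.
total_decimal = sum(Decimal("0.01") for _ in range(10_000))
print(total_decimal)        # 100.00
```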
Floating-Point Number
A floating-point number, in programming, is a numerical representation that covers a wide range of values, including very large and very small numbers, by storing a fixed number of significant digits (the mantissa) together with the position of the decimal point (the exponent). This scheme makes it possible to represent real numbers in a computer's memory and is commonly used for tasks that need both precision and a wide range of values.
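To relate this definition to the mantissa/exponent split, the following sketch uses Python's standard math.frexp and sys.float_info; the printed figures assume the common IEEE 754 double-precision format.

```python
import math
import sys

# math.frexp splits x into a normalized mantissa m (0.5 <= |m| < 1) and an
# integer exponent e such that x == m * 2**e.
x = 6.25
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)            # 0.78125 3, because 0.78125 * 2**3 == 6.25
print(mantissa * 2**exponent == x)   # True

# The number of mantissa bits and the exponent range determine the precision
# and the representable range of a float.
print(sys.float_info.mant_dig)       # 53 significant bits on typical platforms
print(sys.float_info.max)            # about 1.8e308, the largest representable double
```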