"Normal" math uses the number 10 as its base. Each digit of a number counts from zero to nine, and when you exceed 9 you add one to the number in the next place and start over at zero. Pretty basic and straightforward, and you probably never really think about it.
You may know or have heard that computers work in binary. Computer geeks refer to base-10 math as "decimal", and give the name "binary" to base-2 math. In base-2, each place in the number has only two values, either zero or one. You count in binary like this: 0, 1, 10, 11, 100, 101, 110, 111, 1000. These aren’t base-10 numbers, so you wouldn’t say "zero, one, ten, eleven, one hundred, one hundred one". You read them aloud like this: "zero, one, one-zero, one-one, one-zero-zero", etc. I don’t often read binary numbers out loud, but when the situation comes up, you have to make clear which base you’re using. Ten, eleven, and one hundred have no meaning in base-2.
In base-10 a number has these places: ones, tens, hundreds, thousands, ten thousands, etc. In base-2 the places are: ones, twos, fours, eights, sixteens, etc. In computer parlance, each binary place value is called a bit. 8 bits make one byte.
In binary, the string of digits can get long quite quickly. To represent the base-10 number 20,000 takes 15 places in binary: 0b100111000100000. To alleviate that, it is convenient to use base-16, also known as hexadecimal (or hex). In base-16, each place holds one of 16 values. Values zero to nine use the same symbols as base-10, but for the remaining six values (ten through fifteen), the convention is to use the first six letters of the alphabet, A-F.
Counting in base-16 goes like this: 0 1 2 3 4 5 6 7 8 9 A B C D E F 10 11 12, etc. The place values in hex are: ones, sixteens, two-hundred-fifty-sixes, etc. Again, in hex, "ten" and "hundred" don’t mean the same thing as they do in base-10, so people read the digits separately. For instance, $A57 would be read aloud as "A-five-seven". For clarity I might also say hex, like "A-five-seven hex". In hex, $4E20 is the equivalent of decimal's 20,000 - a much more compact representation than the 15 digits binary uses.
I think hexadecimal was chosen because it is very natural to convert from binary to hex and back. Each hex digit corresponds to 4 places (4 bits) in the equivalent binary number. 2 hex digits make one byte (8 bits). One hex digit can be called a nibble, and although this term is not really in use any more, it is cute. Some people even spell it with a y, like "nybble".
[Table: each hex digit corresponds to 4 binary digits]
When writing C code, a number is presumed to be decimal (base-10) unless you mark it otherwise. To tell the C compiler a number is binary, you precede it with a zero and a lowercase b, like this: 0b1101101. Hex is written in C code by preceding the number with a zero and a lowercase x: 0xA57. Some assembly languages instead use a dollar sign $ to denote a hex number, like $A57.
The connection between binary, hex, and decimal is very obvious once you think about it, but must have been a eureka moment for the first engineer to put it all together, way before the first computer was invented.
Understand all this? Cool.