For most of us, the value of digital technology is simply beyond question. Even my coffee pot includes “digital programming for maximum brewed flavor.”
The pot does make great coffee. But I sometimes find the public’s enchantment with all things digital ironic. After all, we are analog beings in what is, primarily, an analog world.
Light, sound, touch and smell: these are all analog senses. And yet no one doubts that anything digital is simply better than anything analog.
In the world of broadcasting, this is, in fact, true. Who, after all, wants to go back to the bad old days of 45 RPM records and needles in grooves? Broadcasters have been converting studio equipment to digital for years; lately transmitter equipment has made the switch as well.
Still, replacing all of that legacy equipment has not been without significant effort and expense.
It is technically very complex to transform sound waves into electrical signals via a microphone, to encode those analog signals into digital pulses, transmit them, recapture the transmitted stream and finally decode them (complete with error correction) back to analog electrical signals, where they excite speaker cones and headphone transducers in consumers’ cars and homes.
The basis for doing all of that is the subject of this new series of articles.
Scottish physicist James Clerk Maxwell, expanding upon ideas first put forth by Michael Faraday, showed in 1865 that electrical, magnetic and light energy all travel as waves, just as sound does.
These waves, from the longest to the incredibly short, vibrate like waves in a pond, in a continuous motion; one wave after the next, after the next.
Long, slow vibrations, from about 20 waves (or cycles) per second to 20,000 cycles per second, are received by our ears as sound. Electromagnetic waves, for their part, became known as Hertzian waves, in honor of Heinrich Hertz, the German physicist who in 1886 validated Maxwell and Faraday’s propositions by actually generating the waves, rather than just theorizing their existence.
Around 1960, the term “hertz” was adopted to replace “cycles per second” in honor of Hertz’s efforts. In any event, as the waves get shorter (and therefore more tightly packed together) and the frequencies higher, we move up into the electromagnetic spectrum.
The “waves-in-a-pond” idea became so embedded in the thinking of the day that scientists felt that light waves (and later, when they were discovered, radio waves) traveling through the vacuum of space must be assisted by some substance through which the waves could travel. After all, sound waves traveled from speaker to listener through the medium of air, so obviously light and radio waves needed a medium as well. To solve the conundrum, they postulated the existence of “ether.”
This, they thought, was a unique and otherwise undetectable substance that existed everywhere but which had no known physical properties other than to facilitate the propagation of both radio and light waves.
This theory was debunked by American scientists Albert Michelson and Edward Morley in 1887, when they measured the speed of light both along the direction of Earth’s motion through space and again at a 90-degree angle to it, and found the two speeds to be identical: a clear indication that light waves propagate unaided by any medium, ether included.
Still, the conventional wisdom persisted in many minds, and even today there are occasional references to radio waves radiating out into the ether.
With or without the ethereal aid, electrical energy at frequencies above roughly 50,000 Hz (50 kilohertz; “kilo” being the prefix for thousands) radiates readily from the wires into which it is applied, becoming radio waves. This goes on all the way up to 900,000,000 Hz (900 megahertz, where “mega” is the prefix for millions).
Above that frequency, the waves get so short and fast that they become microwaves, then even shorter infrared waves and ultimately light waves and beyond, to X-rays, gamma rays and so forth. This is the analog world in which we live and around which all of our technology was built, until the advent of the digital age.
ON AND OFF
Digital signals are not wave-like at all. Rather, they are simply a series of pulses, either present or absent. For convenience, we call a pulse that is present a “one” and an absent pulse a “zero.” The pulses may be sent slowly (as they were in the first digital transmission system, the telegraph) or very fast, but they remain nothing more than pulses just the same. The real black magic that gave us CDs and hard-drive music is in the conversion, or encoding, of a continuous analog sound wave picked up by a microphone into those ones and zeros, and the decoding back to an exact copy of the original wave so we can hear it as it was originally spoken or played.
(Note that an “exact” digital copy of an analog wave is impossible for reasons that will become apparent, but we can get so close that the difference essentially is meaningless).
To understand how this encoding and decoding (the origin of the term “codec”) from analog to digital (“A to D”) and back again (“D to A”) takes place, we need to back up a bit to the beginnings of modern computing.
THE ARC OF THE THING
Digital computing got its start because the first “modern” analog computers, used for bombsights and artillery trajectory calculations in World War II, did not work well.
The reason for this is called “repeatability” and is one of the main advantages of digital technology over analog. To grasp why repeatability is such an advantage, imagine a thousand controls, each marked with a scale from 1 to 100. Turning a control to maximum would equal “100” on that control and turning it off would equal “1.” Now imagine trying to set each control independently to “50,” time after time.
You could get close, probably, and even very close if you had some sort of instrumentation and a very gentle touch, but the ability to set all 1,000 controls quickly, reliably and precisely to “50” even once, let alone numerous times, would be limited, to say the least.
Since “close” is a decided disadvantage when lobbing artillery shells around, analog computers soon gave way to digital devices.
THE DIGITAL DOMAIN
The first practical digital computers, ENIAC and UNIVAC (acronyms for Electronic Numerical Integrator and Computer, and Universal Automatic Computer), overcame the problem of analog inaccuracy by replacing continuously variable analog input controls with a series of electronic switches.
These switches are either on or off, so there is no ambiguity. Having only those two states, they are binary digital devices. In ENIAC/UNIVAC the switches were vacuum tubes that were turned on and off electronically. The following generation of computers used transistors and current computers use integrated circuits. Even so, we can consider all of those devices to be simple on and off switches.
Back to our imaginary analog computer: if you used 100 UNIVAC-type tube/switches to correspond to the 100 positions on one of the 1,000 control knobs, you could then turn on switch number “50” and be assured that, time after time, that would be the exact result for that one control. If you wanted to set this up in place of all 1,000 controls, you would end up with an enormous number of switches (100,000, to be exact), but in the end you would achieve absolute consistency across all 1,000 controls time and time again.
Earlier I said that it would become apparent why an exact digital copy of an analog signal was impossible and the above scenario explains why this is true.
We have turned on switch number 50 in each of the 1,000 control paths and that was exactly what we wanted to do. However, since the analog input signal is continuously variable, what if we wanted to replicate a control setting of 50.5? We could of course use twice as many switches and thereby create half-steps all the way from 1 to 100, but then again we might also have the need to replicate 50.25, or 76.333 or some other esoteric value, and in those cases we would need ever more switches.
Slicing and dicing a whole number into ever smaller increments is an infinite exercise, so it is easy to see that it is impossible to replicate an analog signal exactly using digital encoding. There is, as computer designers say, a limit to the resolution of A-D and D-A conversions.
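The shrinking-but-never-vanishing error is easy to demonstrate. The short sketch below (Python, used here purely as illustration; the `quantize` helper is my own naming, not anything from a real converter) snaps an awkward control setting like 76.333 to the nearest available “switch position” at several resolutions:

```python
def quantize(value, steps_per_unit):
    """Snap a continuously variable value to the nearest switch position."""
    return round(value * steps_per_unit) / steps_per_unit

# Doubling the number of switches halves the worst-case error,
# but a value like 76.333 is never captured exactly.
for steps in (1, 2, 4, 8):
    approx = quantize(76.333, steps)
    print(steps, approx, round(abs(76.333 - approx), 4))
```

Each doubling of the switch count cuts the leftover error roughly in half, yet some error always remains; that is the resolution limit in action.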
IT’S AS EASY AS 1, 10, 11
All of this takes us directly to the next issue regarding digital technology. In our example, it is easy to see that even a simple computer using a discrete switch for every desired control setting would quickly become impractical, as more demanding A/D conversions would take billions and then trillions of discrete switches.
Not only would this make for impossibly large computers, but it would slow even the fastest computer down to a crawl. Instead, a way was found to encode the switch values so as to represent rather large numbers with a relatively small number of switches. The solution is straight binary (base-two) encoding.
In the early days of digital computing, even this efficient way of counting couldn’t totally fix the size problem created by cascading tens of thousands of vacuum tubes together. The largest computer I ever saw using these vacuum tube circuits was an IBM monster created for the Air Force in the late 1950s. Called the AN/FSQ-7, it had roughly 50,000 tubes, sat in a blockhouse the size of a Boeing 747 hangar and drew enough power to light up a small town. All that, and the average smartphone today still has more calculating power.
In any event, binary encoding works as follows: Start with a chessboard. To identify each square correctly and easily, you could rack up 64 switches, label each one to correspond to a square, and call it a day. Switch 1 would correspond to square 1; switch 2 would correspond to square 2, and so on. If, though, you arranged the switches sequentially, so that each succeeding switch had a decimal value twice that of the switch before it, you could still uniquely identify each square but would need only six switches instead of 64! Here is how that works:
Switch A = decimal value 1
Switch B = decimal value 2
Switch C = decimal value 4
Switch D = decimal value 8
Switch E = decimal value 16
Switch F = decimal value 32
Remember that each switch listed above represents one “bit” of information and can be in only one of two possible positions, either “on” or “off.” Switch A has the lowest decimal value of our six switches, so we call that the Least Significant Bit, or LSB. Switch F, which has a decimal value of 32, is our Most Significant Bit, or MSB.
IT’S ALL IN THE COUNT
The rule for counting is simple: When a switch is turned on, its decimal value is counted. When it is off, the value is ignored. If all the switches are off, then, the resulting decimal value is zero.
Since “all switches off” is the only condition under which we can get a count of zero, we are free to assign that value to the first square on our chessboard.
Turning on only switch A gives us a decimal value of one, or in binary, 000001 (the LSB is always represented as being on the right). Again, this condition is unique, so we will assign that value to the second square on the board.
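The counting rule can be sketched in a few lines of Python (my illustration, not something from the article; the `count` helper is hypothetical):

```python
# Switch A is the LSB (decimal value 1); each switch doubles: A=1 ... F=32.
VALUES = {name: 2 ** i for i, name in enumerate("ABCDEF")}

def count(switches_on):
    """Add up the decimal values of the switches that are turned on."""
    return sum(VALUES[name] for name in switches_on)

print(count(""))        # all switches off -> 0, the first chessboard square
print(count("A"))       # only the LSB on (000001) -> 1, the second square
print(count("ABCDEF"))  # every switch on (111111) -> 63
```

Any combination of the six switches produces one unique total between 0 and 63, which is exactly why 64 squares need only six switches.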
We can continue flipping switches on and off in unique combinations all the way up to a count of 63, when all switches are on, as shown here. Adding the initial state of zero to the maximum count of 63 gives us 64 unique combinations, which fills out our chessboard. Table 1 shows each possible combination of the six switches.
A pattern is discernible. I’ve added colored lines in each row every time the switch in that row changes states. It’s easy to see that each switch toggles on and off at exactly one-half the rate of the switch before it and therefore has twice the numerical value. This method of counting reflects the “power of twos.” For six switches, it is mathematically represented as 2⁶ (spoken as “two to the sixth power,” which simply means six 2s multiplied together: 2×2×2×2×2×2=64). Digital computers (and sound cards, and processors, and all the rest) count this same way internally; modern PCs are often described using hexadecimal shorthand, but even though the notation differs, the example is nonetheless valid.
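That toggling pattern can be verified directly. In this sketch (Python again, for illustration only), switch A is the rightmost character of each six-bit string, just as it is in the table:

```python
rows = [format(n, "06b") for n in range(64)]  # "000000" through "111111"

# Count how many times each switch changes state over the full table.
# Switch A flips on every single count; each switch to its left flips
# about half as often, which is why its decimal value is twice as large.
for i, name in enumerate("ABCDEF"):
    col = 5 - i  # switch A sits in the rightmost column
    toggles = sum(rows[n][col] != rows[n - 1][col] for n in range(1, 64))
    print(name, 2 ** i, toggles)
```

Running it shows the toggle counts falling 63, 31, 15, 7, 3, 1 from switch A to switch F: each switch changes state roughly half as often as its neighbor to the right.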
So far, so good, but to operate, computers need to be told what to do. In the example above, we can count from zero to 63 and can further identify any individual square on the chessboard, but we have no way of telling our counter how to do those things, other than by manually flipping the switches ourselves (which would sort of defeat the whole purpose, right?). We need some software. Software is called that because the instructions in it can be changed without having to change the actual hardwired circuits — the hardware — inside the computer.
Furthermore, to make the computer understand our instructions, we have to apply some rules. Rather than just send bit after bit after bit of instruction, which to the machine would look like “dothisanddothisanddothisanddothisanddothis,” the bits are combined into computer words called bytes.
The term byte was not coined until 1956 by an IBM engineer. Before that, computer words were made of various numbers of bits; some 4 bits long, others 6, and still others, 8. The byte eventually was standardized to 8 bits, and this has worked out quite well.
It turns out that we need more than 100 different codes to cover all of the characters we typically use in English (both upper- and lowercase letters, numerals and punctuation), plus several combinations to identify special functions on a computer keyboard (like the “delete” key). To assign a unique binary value to each character and function, then, we need a count well beyond our chessboard’s 64. (The familiar “101-key” PC keyboards, by the way, take their name from their number of physical keys.)
Using the binary method of encoding, we know from our example that six bits are not enough, since that allows for only 64 unique combinations. Instead, we need a minimum of seven bits (2×2×2×2×2×2×2=128) to be able to identify all of the different characters, with a few codes left over. This arrangement and the specific bit combination for each character were adopted as the American Standard Code for Information Interchange (ASCII, pronounced “ask-ee”), and that term is also used to describe keyboards.
By adding an eighth bit (sometimes used as an error-checking bit, called a “parity” bit), we get to the standard computer “word” of 8 bits. Note that a computer “word” is not a word in the sense that the letters on this page combine to make words in the English language. Instead, a computer word, or byte, is a group of bits arranged in a certain way to define a numeric quantity in the computer.
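As a sketch of how that error checking works, here is one common scheme, even parity, in Python (an assumption for illustration; the article doesn’t specify which parity scheme is used):

```python
def add_even_parity(code7):
    """Prepend an eighth bit so the byte carries an even number of 1s."""
    parity = bin(code7).count("1") % 2   # 1 if the seven bits hold an odd number of 1s
    return (parity << 7) | code7

byte = add_even_parity(0b1100001)        # seven-bit ASCII for "a" (decimal 97)
print(format(byte, "08b"))               # the parity bit lands on the left
```

A receiver simply recounts the 1s in each byte it gets; an odd total means at least one bit was flipped in transmission.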
FROM A TO Z AND THEN SOME
If we added two more switches to our six-switch arrangement, we could generate any alphabetical or numerical character we wanted. For example, the decimal count for a lowercase “a” in ASCII is 97. Setting our eight switches to the pattern 01100001 gets us there: the LSB switch on the right, with a value of decimal 1, is on, as are the sixth bit, with a value of 32, and the seventh bit, with a value of 64; 1+32+64=97.
For readers who want to get deeply immersed in all things ASCII (and confirm that a lowercase “a” is indeed represented as decimal 97), www.ascii-code.com is a good place to start.
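A few lines of Python will also confirm the arithmetic (a quick check of my own, not part of the standard):

```python
pattern = "01100001"  # our eight switches, LSB on the right

# Add the decimal value of every switch that is on: 64 + 32 + 1 = 97.
value = sum(2 ** i for i, bit in enumerate(reversed(pattern)) if bit == "1")
print(value, chr(value))  # 97 is the ASCII code for a lowercase "a"
```

The built-in `chr` function performs the same code-to-character lookup the tables at that site spell out by hand.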
As I strike the keys of my laptop keyboard to write this article, I am sending unique 8-bit bytes to the central processor, which in turn tells the screen to display the correct character. I have typed 16,214 characters for this article (subject to editing!). The very first word of the headline, “Digital,” is in my computer’s memory right now as:
01000100 01101001 01100111 01101001 01110100 01100001 01101100
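Those bytes are easy to reproduce (Python once more, for readers who want to check the conversion themselves):

```python
word = "Digital"
bytes_out = [format(ord(ch), "08b") for ch in word]  # one 8-bit byte per character
print(" ".join(bytes_out))
print(8 * len(word), "bits")  # 7 characters x 8 bits = 56 bits
```

Note the capital “D” (01000100, decimal 68) versus the lowercase letters that follow, all of which begin 011.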
Fifty-six bits just for that first word; 129,712 bits for the whole article. Sounds like a lot, but compared to digital audio storage and processing, it’s not enough to record even one second of audio. The complexity of doing that is something we’ll take up next time.
Jim Withers is owner of KYRK(FM) in Corpus Christi, Texas, and a longtime RW contributor. He has four decades of broadcast engineering experience at radio and television stations around the country.