digital > analog; ? - Programmers Heaven


# digital > analog; ?

Posts: 15Member
If analog can have a whole range of values, as opposed to digital's two values, how come the digital approach gives clearer, crisper output than analog?

• Posts: 464Member
: If analog can have a whole range of values, as opposed to digital's two values, how come the digital approach gives clearer, crisper output than analog?
:

Ehmm ... well, a digital value isn't just a 1 or a 0; this is where all those bytes come in ;-)

Let's take a voltmeter as an example:

Say you want to measure a voltage between 0 and 5 V. On an analog voltmeter, you get a scale from 0 to 5 and a needle that points somewhere in between; if you're lucky, you can read one decimal of precision.
With a digital voltmeter, the voltage is converted to a bit pattern (consisting of one or more bytes), which gives you much higher accuracy. Using 1 byte you can distinguish 256 values across that 0 to 5 V range, whereas with the analog meter you'll be lucky to resolve 50.
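The byte-per-reading idea above can be sketched in a few lines. This is just an illustration of the principle (the function name and ranges are made up for the example, not from any real meter's API): an analog voltage is snapped to the nearest of 256 discrete levels, and the byte value is what gets stored or transmitted.

```python
def quantize(voltage, v_min=0.0, v_max=5.0, bits=8):
    """Map an analog voltage onto one of 2**bits discrete levels."""
    levels = 2 ** bits                      # 256 levels for one byte
    step = (v_max - v_min) / (levels - 1)   # voltage difference between adjacent codes
    code = round((voltage - v_min) / step)  # nearest level: an integer 0..255
    return code, code * step + v_min        # (stored byte, voltage it represents)

code, approx = quantize(3.3)
print(code, round(approx, 3))   # → 168 3.294
```

The reconstructed value (3.294 V) differs from the true 3.3 V by less than one step (5/255 ≈ 0.02 V); that small rounding difference is exactly the quantization error discussed in the next reply.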

• Posts: 30Member
Hi!
In addition to iDaZe's answer, I would like to say that analog would actually be clearer, crisper etc. than digital if it weren't for this thing called noise. Converting something to digital also adds a little noise, called quantization noise, but after that you can send it across the world without adding any more.

I will give you an example:
One day you hear a boat whistle that you really like, so you want to call a friend and tell him about it. Now there are two ways to tell him what frequency that whistle had:

-The analog way: You hold your microphone up to the whistle, and your friend hears it almost perfectly.

-The digital way: You tell him it was about a C#. Of course the whistle was not a perfect C#, so you have added a small error (quantization noise).

So far analog has been superior, but now your friend wants to tell another friend, so he hangs up and dials another number. Now he has two options:

-The analog way: He whistles, and like most humans, he whistles worse than he thinks he does, which adds an error.

-The digital way: He says "it was a C#", and doing so he adds no additional error.

Now imagine this friend also calls a friend, who in turn calls a friend, and so on. Saying it was a C# wasn't absolutely correct to begin with, but it never gets any worse than that first error, while whistling adds a fresh error with every transmission.
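The whistle story can be turned into a toy simulation. The numbers here are assumptions chosen for illustration (C#5 at roughly 554.37 Hz, a whistler who is off by about 5 Hz each time): the digital message is rounded once and then passed along unchanged, while the analog version picks up new noise at every hop.

```python
import random

random.seed(1)
true_freq = 561.2    # the boat whistle, a bit sharp of C#5 (554.37 Hz)

# Digital: round once to the nearest named note -- one fixed quantization error.
digital = 554.37     # "it was a C#"; this value is simply repeated at each hop

# Analog: every friend re-whistles it, adding a little fresh error each time.
analog = true_freq
for hop in range(10):
    analog += random.gauss(0, 5)   # each whistler is off by ~5 Hz

print(abs(digital - true_freq))    # stays at the first error: 6.83 Hz
print(abs(analog - true_freq))     # random-walk drift that grows with each hop
```

The digital error is locked in at 6.83 Hz no matter how many friends relay the message; the analog error is a random walk whose spread keeps growing with the number of hops.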

/Tomas

• Posts: 96Member

Where did you hear that?

: If analog can have a whole range of values, as opposed to digital's two values, how come the digital approach gives clearer, crisper output than analog?
:

• Posts: 1,666Member
: If analog can have a whole range of values, as opposed to digital's two values, how come the digital approach gives clearer, crisper output than analog?
:

This is probably never gonna get read, but...

Analog is better than digital. This is why vinyl ALBUMS are still the highest-quality form of music storage available. However, as we all know, albums get scratched and their quality drops pretty quickly.

CDs are, in many ways, more convenient than albums, but they are digital and cannot have the same quality as an analog form of storage. A CD stores the data in a digital format, so it's not a PERFECT reflection of the original song unless the original was digital itself. Of course, none of this matters in practice, because CDs typically use a sampling rate beyond the human ear's capacity to differentiate the sounds.

The quantization error Tomas was talking about comes from the discrete sampling of the source signal. A pure note looks like a perfectly smooth sine wave; when you sample it, you get something that looks like stair steps. A sine wave on a computer monitor looks smooth too, but only because its sampling is finer than your eye can resolve. That sampling comes from the program that calculated the sine wave and from the pixelation of the monitor.

Digital is better for a lot of things because it's easier to tell whether something is a one or a zero than whether it's .152323636. For example, a telephone call to someone's cell phone has to be transmitted through the air. Along the way the signal is likely to attenuate, and noise is likely to get mixed in with it. If I just send someone's voice directly, any noise makes it static-y by pushing the signal at that point above or below what it should be.

If instead I encode the voice digitally and send the data one bit at a time, it's far less susceptible to noise. Why? Sending the voice directly means transmitting the sound as-is: if the pitch was 50 Hz, you converted it to some number, say .05, and sent that. But if you encode the 50 in binary, you can send each bit separately, with a positive level meaning 1 and a negative level meaning 0. That gives you a whole lot more room for error and noise. If the noise does flip the sign at some point you get the wrong bit, but error-detecting and error-correcting protocols can be implemented on top of digital data.
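The sign-based bit decision described above can be sketched as follows. This is a deliberately simplified model (the function name, levels, and noise figure are assumptions for the example): each 1 bit is sent as +1.0 and each 0 bit as -1.0, channel noise is added, and the receiver recovers the bit by checking the sign.

```python
import random

def transmit(bits, noise_sigma):
    """Send bits as +/-1.0 levels through a noisy channel, decode by sign."""
    received = [(1.0 if b else -1.0) + random.gauss(0, noise_sigma)
                for b in bits]
    return [1 if v > 0 else 0 for v in received]   # threshold at zero

random.seed(7)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = transmit(bits, 0.3)
print(decoded)   # with modest noise, the sign survives and bits decode cleanly
```

As long as the noise stays smaller than the +/-1.0 spacing, the sign (and therefore the bit) comes through intact; an analog level like .05 would have been corrupted by far smaller disturbances.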