About "quantizing noise"

Doug Kerr

Well-known member
In a post in another thread there was some discussion of "quantizing noise" in connection with the digital representation of a photographic image.

I discuss this at length in this article:


but I thought I would give a brief review of the topic.

The concept originated in the field of digital audio transmission for telephone networks, and would have best been left there.

If we go back before digital transmission in telephone networks, we find that one preoccupation with transmission quality was the presence of noise in the delivered audio signal. There were various sources of such noise, with the noise itself being of different character. Two prominent kinds were:

Random (thermal) noise caused by the random nature of electron flow in amplifiers.

Impulse noise, "spikes" caused, typically, by induction into the transmission circuits from voltage transients in electromechanical switching equipment.

Of course, there was interest in quantifying the impact of noise, at first thermal noise. Only if we could quantify that impact could we, for example, determine if we were "gaining on the problem" in a particular context.

Let me point out that we can consider noise as something present in the delivered audio waveform that was not present in the original waveform.

Because of the linear nature of analog transmission systems, if we just send a "zero" signal, the delivered signal comprises only the noise introduced by the transmission channel.

It was determined that if we took that "free-standing noise", weighted its different frequency components in a way that reflected the differing sensitivity of human hearing to sound at different frequencies, and summed the results, we got a metric that consistently represented the impact of the noise on a human listener.

So "noise measuring sets" were developed. They did this "weighting" by way of a single filter whose frequency response matched the variation of the sensitivity of human hearing with frequency. Then, by passing the noise from a test situation through that filter and measuring the total power in the filter output, we directly had our noise metric. How very handy!

Now please put aside for a while the notion of "noise".

In digital transmission of audio signals in the telephone network (introduced in the late 1950s), we capture the instantaneous value of the audio waveform periodically (we "sample" it) and we then measure each of those values and capture the result in digital form. Of course, the actual value of each sample can take on an infinite number of values (over some range), but our digital representation is discrete - it can only have a certain number of discrete values.

Thus when "at the distant end", we prepare to reconstruct the waveform from its values at these many instants, the values we have can only have those discrete values - the quantity had been "quantized". Looked at another way, the reconstructed values are, in general, not the same as the original values - they are subject to quantizing error.

As a result, when we reconstruct the waveform, it is not the same as the original waveform. This phenomenon is called quantizing distortion.
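Here is a minimal numerical sketch of that (my own illustration, with arbitrary parameters): sample a tone, force each sample onto one of a finite set of levels, and look at the difference between the reconstructed values and the originals.

```python
import numpy as np

fs = 8000                                        # samples per second
t = np.arange(fs) / fs                           # one second of sample instants
original = 0.8 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone

# Quantize each sample to one of 256 discrete levels spanning -1 .. +1
levels = 256
step = 2.0 / levels
quantized = np.round(original / step) * step     # the only values we can carry

# The reconstructed waveform differs from the original: quantizing distortion
distortion = quantized - original
print("max quantizing error:", np.max(np.abs(distortion)))    # about step / 2
print("RMS quantizing error:", np.sqrt(np.mean(distortion ** 2)))
```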

This is again a "transmission impairment", the impact of which we need to be able to quantify (for example, to be able to assess the benefit of a design change that will reduce the magnitude of the phenomenon).

We can consider the impact of quantizing distortion as a "component in the delivered waveform that was not present in the original waveform" (beginning to sound familiar?).

The question was asked, "If we isolated this 'undesired component' in the delivered waveform and subjected it to the same 'weighting by frequency' treatment we did when quantifying noise, would the resulting metric track well with the human perception of the impairment from this distortion?"

And in fact, the answer was, "Yeah, pretty much." In fact, the same weighting by frequency function used in quantifying noise (and implemented by way of the existing "weighting filter") turned out to be well suited for this scheme. (The details of execution of this are clever, but not necessary to the discussion here.)
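A sketch of that scheme, reusing the same invented weighting curve as in the earlier snippet (again, not the actual telephone weighting): isolate the quantizing-distortion component and compute its frequency-weighted power just as if it were noise.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
original = 0.8 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz test tone

# Quantize to 256 levels and isolate the "undesired component"
step = 2.0 / 256
distortion = np.round(original / step) * step - original

# Weight the distortion by frequency exactly as we would weight noise
spectrum = np.fft.rfft(distortion)
freqs = np.fft.rfftfreq(len(distortion), d=1.0 / fs)
weight = np.exp(-((freqs - 1000.0) / 800.0) ** 2)   # illustrative curve only
weighted_power = np.sum((np.abs(spectrum) * weight) ** 2) / len(distortion) ** 2
print(f"weighted 'quantizing noise' metric: {weighted_power:.3e}")
```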

So, as a result, it became the practice (ill-advised, actually) to consider quantizing distortion to be a special kind of noise ("If we measure it like a duck, then . . ."). And in fact, it came to be called "quantizing noise".

Now, let's move to digital imaging. We sample the color of the image at regular intervals (in two dimensions, in this case), measure the value of each sample, and represent that in digital form.

As in the case of the measurement of the audio waveform samples, the samples could have any value (over a certain range), but the digital representations could only have a finite repertoire of discrete values.

Thus, after we reconstruct the image from samples regenerated from these digital descriptions, the "color waveform" along a certain path across the reconstructed image is not the same as the original "color waveform" along that path. It has been afflicted by "quantizing distortion".

But we note that the impact to the viewer is not at all like the impact of random noise in the image. Rather, it typically manifests in such phenomena as "banding".
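A small sketch of that manifestation (parameters chosen only for illustration): quantizing a smooth gray ramp to a handful of levels yields a staircase of flat bands, not anything resembling random speckle.

```python
import numpy as np

# A smooth horizontal gray ramp: values run from 0.0 to 1.0 across the image
width, height = 512, 64
ramp = np.tile(np.linspace(0.0, 1.0, width), (height, 1))

# Quantize the "color" to only 8 discrete values
levels = 8
banded = np.round(ramp * (levels - 1)) / (levels - 1)

# The result is not noise-like speckle; it is a staircase of flat bands
print("distinct values in the original row:", len(np.unique(ramp[0])))
print("distinct values after quantizing:   ", len(np.unique(banded[0])))
```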

Now, if we wanted to develop a metric for describing the degree of this affliction, can we do that by separating the discrepancy from the "ideal copy" of the original waveform and then determining our usual "noise" metric (the RMS value of the "discrepancy")? Well, we can do that, but the resulting value does not track well with any recognized "score" for the seriousness of the perceptual impact of quantizing distortion on a photographic image.
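For what that calculation looks like, here is a sketch continuing the ramp example above: we can certainly compute the RMS of the discrepancy, but that single number carries no information about whether the error appears as visible bands or as innocuous scatter.

```python
import numpy as np

# Same ramp-and-quantize example as above
width, height = 512, 64
ramp = np.tile(np.linspace(0.0, 1.0, width), (height, 1))
banded = np.round(ramp * 7) / 7                  # quantized to 8 levels

# The "discrepancy from the ideal copy" and its RMS value
discrepancy = banded - ramp
rms = np.sqrt(np.mean(discrepancy ** 2))
print(f"RMS of the quantizing discrepancy: {rms:.4f}")
# The same RMS figure could equally describe an error that is scattered and
# noise-like rather than banded, which is why this number alone does not
# track the perceived impact on the image.
```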

So there is no good reason, in this context, to think of the discrepancy caused by quantizing error as "quantizing noise".

Still, apparently, those who were proud to have come to understand the concept of "quantizing noise" in the context of the digital representation of audio felt compelled to display that knowledge where it really did not fit. And so we, from place to place, hear "quantizing noise" spoken of in connection with quantizing error in the digital representation of an image.

Now, does the existence of quantizing error in the representation of a photographic image interact at all with the actual matter of noise (such as thermal noise in the sensor, or "shot noise")? Well, that is a tricky matter. And for that, I have to refer you to the discussion in the article cited above.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug,

Thanks for the amazingly easy-to-follow narrative of a subject that I feared would intimidate and flummox me. We take for granted digital pictures as representing something real, but unlike the analog creation of silver grains in location and proportion to the incident light energy flux, the digital image results we see have to be built up from scratch as a set of marks that closely approximate real photographic drawing.

So there are errors. Finding ways to measure any improvements we can devise has considerable value, and so the science of measuring such deviations from an original photon flux pattern is worthy of our attention and understanding. I wonder how the wordsmiths came to "quantizing" and not "quantitizing" or "quantifying"?

I do believe that as we progress to having each sensel well be an individual but swarm-linked camera, we will discard the concept of "ISO" for the sensor, and dynamic range will be related to the specific parameters of allowable shutter speed: sensels receiving photoelectrons relatively slowly would be kept counting long after sensels receiving much more light have stopped counting. Counting would continue as long as possible, until the rate of photoelectron arrival is adequately determined with vanishingly small error. The latter will help define quality and noise measurements in these newer sensors.

Asher