Topband: ADC Overload

Stephen Hicks, N5AC steve at flexradio.com
Sun Oct 11 11:23:41 EDT 2015


Rick,

I hope it's not an issue for me to post here directly.  I am posting here
because I believe that amateur radio has a huge educational component and
ultimately incorrect information serves no one.  I got started when I was
12 and really knew very little about the hobby.  My journey, like most
hams, has been a long, exciting educational process.  Still, there are so
many that know so much more than I do.  I am constantly amazed at the
breadth and depth of the hobby and the people in it.  My points below are
to clarify what I have observed, calculated and believe to be true and are
presented in the interest of mutual education:

On Sun, Oct 11, 2015 at 6:17 AM, wrote:

>
> I have no experience with Flex Radio equipment, (it might be great stuff
> for
> all I know), so I will confine my comments to the theory discussed in the
> "ADC overload myths debunked"
> paper.  A lot of what I read didn't make a lot of sense to me, or seemed
> irrelevant.
>
> To begin with, I'm not sure as to the exact nature of the "myth".


Recently, a post was made to a reflector that definitively stated that
direct sampling receivers simply did not function -- that they would
overload with a minimal number of signals and/or signals of relatively
small magnitude.


> Initially,
> the myth is supposed to be that hams think average power of an ensemble of
> uncorrelated signals is the sum of the power of the components.  This is
> not
> a myth, it is true.  Then it is suggested that hams believe peak voltages
> add up, as in a 6 dB increase for two signals.  Supposedly, hams don't
> realize that the high peaks only occur rarely.  I'm not aware of any ham
> lore exhibiting this misunderstanding.


> The discussion of crest factor obscures the fact that average power still
> adds.  100 signals at S9 still has a power of 20 dB over S9, on the
> average.
> Once in a while it looks like 40 dB over S9.  The rest of the time, the
> combined power of all the signals still tests the dynamic range of the
> receiver.  It's not like a bunch of S9 signals is no worse than a single
> S9 signal.
>

The misunderstanding centers around a belief that an ADC reacts negatively
to a large average power.  There are two primary beliefs rolled into this
one: (1) that by taking the sum of any number of known signals in the power
domain we can reach a total that, when compared with the overload point of
the ADC, will definitively predict an overload of the ADC, and (2) that the
overload of an ADC is a singular and complete event -- when it occurs, the
ADC no longer functions.

Addressing each of these individually and getting more specific, the first
belief (1) is that if we have an ADC that overloads at +10dBm, I can take
one hundred -10dBm signals and completely overload the ADC to the point of
non-functioning.  This really seems like common sense to most.  We all
fully expect to be able to take 100 disparate signal generators, feed them
through a lossless combiner, read a power meter and see +10dBm and then
stick that in the ADC and overload it.  But this is not how it works.

To understand what actually happens, we have to look at how a discrete
sampled system works.  The ADCs we use are oversampled and run at somewhere
between 100 and 300MHz.  Each sample period, the ADC essentially takes a
voltage reading on the antenna and records this value, transmitting it
to the computing element in the SDR.  The instantaneous voltage of any
RF signal varies with the sine wave that defines the RF carrier: each
cycle, it swings from the bias point of the ADC up to the bias plus the
voltage amplitude of the signal, back through the bias point, down to
the bias minus the amplitude of the signal, and back to the bias point.
It is a sine wave of a given voltage amplitude centered around the bias
point of the ADC.
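
If you want to see that picture in code, here is a rough sketch in
Python/numpy -- the sample rate, frequency, and amplitudes are made-up
numbers for illustration, not the actual FLEX-6000 signal chain:

  import numpy as np

  fs = 245.76e6        # assumed ADC sample rate, samples/sec
  f0 = 1.83e6          # a topband carrier, Hz
  a = 0.25             # carrier amplitude, fraction of full scale
  bias = 0.5           # ADC bias (mid-scale), fraction of full scale

  n = np.arange(256)   # 256 consecutive sample periods (~2 RF cycles)
  v = bias + a * np.sin(2 * np.pi * f0 * n / fs)

  # Each sample is just the instantaneous voltage: it swings from
  # bias+a down through bias to bias-a and back, once per RF cycle.
  print(v.min(), v.max())    # ~0.25 and ~0.75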

If I add a second signal of equal amplitude, the second signal will add to
the first and I will get an instantaneous voltage that is the sum of the
two signals. But the voltage is not simply 2x the voltage of the first --
this is only the case if the two signals are on exactly the same frequency,
phase, and amplitude.  What actually happens is a beat-note between the
two signals, whose envelope oscillates in time at the frequency of the
beat note (the difference in frequency of the two signals).
Periodically, the peaks of the two signals will be exactly aligned and we
will get a doubling of the voltage.  For two signals this happens fairly
frequently.  Just as the two signals add, they will also subtract,
resulting in a voltage magnitude (absolute value) lower than either of
the two signals would have by themselves.  For example, one signal might
be at +1V while the other is at -0.66V.  The resulting voltage measured
in the converter, due to linear superposition, is +0.34V.
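
You can watch this superposition happen numerically.  A small sketch
(arbitrary frequencies and amplitudes, chosen only for illustration):

  import numpy as np

  fs = 245.76e6                    # assumed sample rate
  t = np.arange(500_000) / fs      # ~2 ms of samples
  v1 = 1.0 * np.sin(2 * np.pi * 1.830e6 * t)   # signal 1: 1V peak
  v2 = 1.0 * np.sin(2 * np.pi * 1.832e6 * t)   # signal 2: 1V peak, 2kHz away

  v = v1 + v2                      # linear superposition at the ADC input

  # The sum only approaches 2V when the two sine waves happen to align,
  # which occurs at the 2kHz beat rate; most samples are well below that.
  print(np.abs(v).max())           # ~2.0V
  print(np.mean(np.abs(v) > 1.9))  # only a small fraction of samples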

Suppose the two signals are large compared to the overload point of the
ADC -- say at +7dBm against the +10dBm overload of the converter.  They
will overload the converter, but only periodically.
If we assume for a moment that the two signals are 2kHz apart, there will
be a beat-note that runs at a 2kHz rate.  At the "nulls" of this envelope,
the signals will consistently add to very close to zero because the
signals are precisely 180-degrees out of phase: when one signal is at its
peak voltage, the other will be at its peak negative voltage.  At the
"peaks" of the 2kHz beat-note, both signals will be in-phase and they will
look like two signals added in a power combiner -- a peak envelope 6dB
above either signal alone for a brief period.  THIS will overload the
converter, but only periodically, as the combined signal exceeds its
average power by 3dB.  In fact, to avoid any point where the two signals
will add to exceed the voltage limits of the ADC, we must run each signal
6dB below the overload point, at +4dBm (again assuming a +10dBm overload).
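
Running the numbers on that (assuming a 50-ohm system; the impedance
doesn't change the dB relationships, it just sets the voltages):

  import numpy as np

  def dbm_to_vpeak(p_dbm, z=50.0):
      # peak voltage of a sine wave carrying p_dbm average power into z ohms
      return np.sqrt(2 * z * 1e-3 * 10 ** (p_dbm / 10))

  clip = dbm_to_vpeak(10.0)           # +10dBm overload point: ~1.00V peak
  two_at_7 = 2 * dbm_to_vpeak(7.0)    # two +7dBm tones, peaks aligned: ~1.42V
  two_at_4 = 2 * dbm_to_vpeak(4.0)    # two +4dBm tones, peaks aligned: ~1.00V

  print(clip, two_at_7, two_at_4)

Two +7dBm tones overshoot the converter on the envelope peaks; backing
each tone down to +4dBm -- 6dB below the overload point -- just grazes it.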

This case, two very strong signals, just below the overload of the ADC, is
the worst case situation.  If I now go from two signals to 100 signals,
each at -10dBm (20dB below the converter overload), the probability of the
signals adding like the two-signal case is significantly decreased.  This
is simply the reduced probability that all the signals will add together
at their peaks.  Even with the signals at S9+63dB, the overloads are
infrequent (something like one sample in every few hundred).  If I reduce
the level to 100 S9+53 signals, an overload becomes all but a statistical
impossibility.
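
A quick Monte Carlo makes the statistics concrete.  This sketch (all
values normalized so a full-scale sine is 1.0V) puts 100 equal tones,
each 30dB below a full-scale sine -- the S9+53 case against a +10dBm
overload -- at random frequencies and phases, and counts clipped samples:

  import numpy as np

  rng = np.random.default_rng(1)
  fs = 245.76e6
  t = np.arange(1_000_000) / fs       # ~4 ms of consecutive samples
  clip = 1.0                          # full-scale voltage (a +10dBm sine)

  a = 10 ** (-30 / 20)                # each tone 30dB below full scale
  v = np.zeros_like(t)
  for f0, ph in zip(rng.uniform(1.8e6, 2.0e6, 100),
                    rng.uniform(0, 2 * np.pi, 100)):
      v += a * np.sin(2 * np.pi * f0 * t + ph)

  # Expect on the order of 1e-5: a handful of samples per million exceed
  # full scale, consistent with a Gaussian (central limit) estimate.
  print(np.mean(np.abs(v) > clip))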

This gets me to point number two (2), the belief that an overload is an
enduring event.  In fact, as the number of signals is increased (real
world) and the amplitudes become realistic -- most folks will not see 100
S9+50 signals -- the overload events as a percentage of time grow
exceedingly small.  Each such event "corrupts" a single data point in the
converter.  But for any given receiver in the FLEX-6000 radios, over 100
million samples are processed per second.  Losing a few of these samples
will not cause ANY perceptible degradation in the signal.  I used the noise
blanker as an example because most modern receivers just "toss" samples
that are perceived to be noise when the NB is on and this has the
same impact on the resulting output signal.

The net-net here is that a large number of mid-level signals (say S9+30 to
S9+50) is not going to cause an overload in a direct sampling receiver.
The belief that it will is a myth -- one based on reasonable experiences
and perceptions, but a myth nonetheless.


>
> Then there is this statement:
>
> "The individual data points that make up a signal
>   you are listening to are almost never going
>   to fall in the same time as the overload, statistically."
>
> I have no idea what this means in terms of Nyquist sampling theory.


The instantaneous voltage read in the converter for any given time period
is the superposition of the instantaneous voltage of all signals passing
through the Nyquist filter of the radio and into the ADC.  Statistically
speaking, the corruption of a small number of samples will have little to
no impact on the end voltage produced in a narrow-band receiver derived
from those samples.  Said another way, each sample fed through a speaker in
your receiver will be composed of 10,240 samples received off the air.  The
voltage change represented by a sample read in the converter as +1V
instead of +1.05V will have such a minuscule impact on the receiver as to
be completely imperceptible.  The decimation process in the direct sampling
SDR converts 10,240 samples at 245.76Msps to one sample at 24ksps,
extending the number of bits of precision in the process.
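
For reference, the processing gain from that decimation works out as
follows -- a first-order estimate that counts only the ratio of rates:

  import numpy as np

  fs_adc = 245.76e6                 # ADC sample rate
  fs_out = 24e3                     # receiver output rate
  r = fs_adc / fs_out               # decimation ratio: 10,240

  gain_db = 10 * np.log10(r)        # noise spread over the full rate is
                                    # reduced by this much in the channel
  extra_bits = gain_db / 6.02       # ~6dB per bit

  print(r, gain_db, extra_bits)     # 10240, ~40.1dB, ~6.7 bits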

> The paper goes on to say:
>
> "With a noise blanker, we remove thousands of samples
>   with no negative effects to the signal being
>   monitored and a momentary overload from the
>   addition of many signals summing up will have a
>   much lower effect"
>
> I don't know whether this means Flex (IE "we") has invented some sort of
> magic digital noise blanker that removes samples corrupted by overload (I'm
> skeptical) or whether it means that a noise blanking effect just happens as
> part of the sampling process (in which case, I'm still skeptical).
>

Noise that is removed by a noise blanker essentially corrupts the samples
we want to use for our receiver because the characteristics of the noise
contain frequency components that overlap with our signal of interest.
Once you've "peed in the pool" it's hard to separate out the pee.  So
modern noise blankers rely on the fact that removing samples -- substituting
a 0V value or some other value -- will often not impact the receiver
negatively, because there are so many samples containing the information we
need to play the audio for your receiver.  They do just this -- they remove
the samples.  This is generally a safe thing to do, although
there can be some side effects that are detrimental, especially when the NB
runs at a relatively low sampling rate.  The most common issue is when the
periodicity of the noise causes the blanker to act like a mixer at
the repetition frequency of the noise.  The random nature of an overload
due to the addition of a large number of signals will not exhibit this
problem.
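
To be clear about the mechanism (this is a deliberately crude sketch of
sample substitution in general, not FlexRadio's actual noise blanker):

  import numpy as np

  def blank(samples, threshold):
      # replace any sample whose magnitude exceeds the threshold with 0V
      out = samples.copy()
      out[np.abs(out) > threshold] = 0.0
      return out

  fs = 245.76e6
  t = np.arange(100_000) / fs
  sig = 0.1 * np.sin(2 * np.pi * 1.83e6 * t)   # desired narrowband signal
  noisy = sig.copy()
  noisy[::5000] += 5.0                         # sparse impulse noise

  cleaned = blank(noisy, threshold=0.5)
  print(np.mean(cleaned != noisy))   # only 0.02% of samples were touched

The narrowband signal recovered from the remaining 99.98% of the samples
is essentially unchanged.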

My comment is about all noise blankers and not about a specific one we
created.  I figured a certain percentage of people reading what I wrote
would know how noise blankers worked and would hear this explanation and say
to themselves "oh yes, this makes complete sense -- the overload removes
far fewer samples than a NB would and so the impact will be less."


> Then the subject shifts to decimation and "processing gain", which are
> simply references to digital filters.
> These techniques are all based on linearity.  Adding digital filtering
> after
> a nonlinear front end cannot repair the damage caused by nonlinearity.
> Just
> like adding crystal filters to the IF in an analog receiver won't overcome
> front end overload caused by enabling the receiver's built in preamp.
>

Absolutely true.  Once we have an overload event, nothing down the line can
fix it because the true value of that sample is lost forever.  But we still
have people who believe that a 20-bit converter at audio is superior to a
16-bit converter at RF because they do not understand processing gain.
Again, this is a myth that we have to stamp out through education.  It is not
directly related to the overload problem, but I figured while I was
stomping out myths, I would be an equal-opportunity stomper ;-)
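
For the curious, here are the first-order numbers -- ideal converters and
quantization noise only; real parts have other noise sources:

  import numpy as np

  def ideal_snr_db(bits):
      # ideal quantization-limited SNR of a full-scale sine in a
      # bits-wide converter
      return 6.02 * bits + 1.76

  pg = 10 * np.log10(245.76e6 / 24e3)   # ~40.1dB of processing gain

  print(ideal_snr_db(16) + pg)   # 16 bits at RF, in a 24kHz channel: ~138dB
  print(ideal_snr_db(20))        # 20 bits at audio, no oversampling: ~122dB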

>
> There is an assertion that the large amount of "noise" added by hundreds of
> signals results in "linearization", which I believe is referring to what is
> usually called "dithering".  This is a complete misunderstanding of
> dithering, which uses small amounts of noise and does not involve clipping
> in the ADC.  High quality ADC's have dithering and similar randomization
> processes built in and don't need help from external noise anyway.
>

The periodicity of alignment of a signal with the voltage bins of the
converter causes quantization noise due to what are effectively rounding
errors.  The fewer signals there are to randomize which bins of the
converter any one signal falls into, the more quantization noise will be
evident in the resulting derived signal.  Anything that adds randomization
to this process lessens quantization noise.  Dither is used to do this, but
on-air signals have the same effect -- in fact large on-air signals are
best because they cross more bins.  A direct sampling converter should
always perform better on-air than in a lab environment because of this.
This is unrelated to clipping, as you say.
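
Here's a toy demonstration of the effect -- an 8-bit quantizer on a tone
barely larger than one step, nothing like a real 16-bit RF converter, but
the mechanism is the same:

  import numpy as np

  rng = np.random.default_rng(2)
  n = 1 << 16
  t = np.arange(n)
  lsb = 1.0 / (1 << 8)                         # 8-bit quantizer step
  x = 1.2 * lsb * np.sin(2 * np.pi * 0.0501 * t)   # tone just over 1 LSB

  q_plain = np.round(x / lsb) * lsb            # quantize the bare tone
  q_dith = np.round((x + rng.uniform(-lsb / 2, lsb / 2, n)) / lsb) * lsb

  def sfdr_db(q):
      # dB gap between the tone and the largest remaining spectral line
      s = np.abs(np.fft.rfft(q * np.hanning(n)))
      k = int(s.argmax())
      peak = s[k]
      s[max(0, k - 3):k + 4] = 0.0             # mask the tone itself
      return 20 * np.log10(peak / s.max())

  print(sfdr_db(q_plain))   # coherent quantizer harmonics stick up
  print(sfdr_db(q_dith))    # dither smears them into broadband noise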


>
> The paper then changes the subject to phase noise.
> This has nothing to do with ADC overload.  I will note that digital radios
> are much more sensitive to clock jitter (IE phase noise) than analog
> radios.
> If anything, the phase noise issue is an argument against digital.
>

You are correct -- a low phase noise oscillator is much more important in
a direct sampling receiver because the phase noise is imparted to the
signal at the point of sampling.  Because this generally happens at a
higher frequency (oversampling) in a direct sampling system, it means the
designer must have a better oscillator than one that would be used in
a superhet radio.  The LO in a superhet radio is generally divided down,
producing better phase noise on the low bands (good news for top
band aficionados) and worse phase noise on the high bands.  For the
oscillator designer, it's easier to get better phase noise at a fixed
frequency (sampling clock) than in a synthesizer, generally.
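
For reference, the rule of thumb for an ideal divider (real dividers add
their own noise floor, and the ratio here is just an illustration):

  import numpy as np

  # dividing an oscillator by N improves its phase noise by 20*log10(N) dB
  print(20 * np.log10(32))    # divide-by-32: ~30dB better on the low bands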

Of course, a superhet has more LOs and care must be taken with each
oscillator to prevent phase noise.  In the end, the receiver's RMDR is a
tell-all on how the designer did, so the radio purchaser just needs to look
at RMDR for the answer, not at the phase noise of individual LOs.


>
> There are various distractions such as the Central Limit Theorem and the
> Jupiter effect that don't add much to the discussion.
>

It's hard to explain complex material in a way that will resonate and
impart an intuition in the reader's head.  The CLT is definitely at play,
as can be seen in the random addition of a lot of carriers.  The Jupiter
discussion was to show that there is a common misconception about the ease
of alignment of a number of random objects that are oscillating and the
impact of that alignment.  This speaks to both parts of the myth and,
again, I was looking for ways to give the reader an intuitive feel for the
problem in the physical world.


>
> The dubious argument is made that the
> existence of 1000's of receivers in the field without complaints from their
> owners "proves" that overload problems do not exist.  Until last month, we
> could make a similar statement about the millions of satisfied Diesel
> Volkswagen owners.
>

Haha, yes, good point.  It is not proof, of course.  But I can tell you from
personal experience that when things go wrong, hams call us to tell us
about them.  I do feel it is fair to say that the lack of these types of
complaints speaks to the lack of a problem, but it is not proof!


>
> The concluding statement is quite a stretch:
>
> " it is simply mathematically true.  FlexRadio Systems
>   makes the best amateur transceivers available."
>
> Mathematically true?  Maybe it's that new Common Core math.
>

I apologize for not showing more math.  I tend not to go there first when
explaining to large groups because a large percentage of the folks will
tune out the discussion quickly.


>
> Rick N6RK
>
>
>
All of your arguments and concerns are fair and I appreciate you
continuing the conversation.  Direct sampling is relatively new in the
amateur world.  I think it is here to stay because of the tremendous
benefits it offers.  But no one knows fully all of the benefits and
pitfalls that a new technology like this will bring.  It's my wish that we
will all learn them together for the benefit of ham radio.


Vy 73,
Steve


Stephen Hicks, N5AC
VP Engineering
FlexRadio Systems™
4616 W Howard Ln Ste 1-150
Austin, TX 78728
Phone: 512-535-4713 x205
Email: steve at flexradio.com
Web: www.flexradio.com
PGP Public Key: <https://sites.google.com/a/flex-radio.com/pgp-public-keys/n5ac>



Tune In Excitement™
PowerSDR™ is a trademark of FlexRadio Systems

