Hi, Jim,
The article in question is available for download at:
http://www.arrl.org/tis/info/pdf/020708qex046.pdf
I suggest that people read it, especially my sidebar, before reading this post.
> 1. Referencing measurements to the MDS level is not
> best, because establishing the MDS level is a matter
> of the perception of the person doing the test! MDS should
> be dropped from the process, per Doug.
The measurement of the noise floor of a receiver cannot be dropped from the
process, because it is an important receiver operating parameter. It could be
reported as a noise figure, although neither noise floor nor noise figure is a
complete number without reporting the bandwidth. And a noise floor
measurement is NOT a perception thing at all; it is made with an RMS-reading
voltmeter and can be done accurately by an experienced test engineer who knows
how to interpret a somewhat noisy meter reading.
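For anyone who wants to see how noise floor, noise figure and bandwidth tie
together, here is a quick sketch; the 10 dB noise figure and 500 Hz bandwidth
are only example numbers, not measurements of any particular radio:

    # Relationship between noise floor (MDS), noise figure and bandwidth.
    # Thermal noise in a 1 Hz bandwidth at room temperature is about -174 dBm.
    import math

    def noise_floor_dbm(noise_figure_db, bandwidth_hz):
        # MDS (dBm) = -174 dBm/Hz + NF (dB) + 10*log10(BW in Hz)
        return -174.0 + noise_figure_db + 10.0 * math.log10(bandwidth_hz)

    def noise_figure_db(noise_floor, bandwidth_hz):
        # The same relationship solved for noise figure.
        return noise_floor + 174.0 - 10.0 * math.log10(bandwidth_hz)

    print(noise_floor_dbm(10.0, 500.0))    # about -137 dBm
    print(noise_figure_db(-137.0, 500.0))  # about 10 dB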
> Also, Doug recommends the use of the Audio Spectrum analyzer as
> far superior to locating spurious signals (IMD's) near/in
> the noise floor or below it; not using an audio meter
> as is now used in the ARRL Lab with who knows what
> accuracy.
Actually, the accuracy of the ARRL Lab's metrology is good. The HP-339 is one
of our older pieces of equipment, but it is a true RMS reading meter and it is
calibrated by an external cal lab as recommended by the manufacturer. If a
particular reading is not influenced by receiver reciprocal mixing, and a
noise-floor measurement is not, the accuracy of the result is quite good.
Seeing as the noise-floor measurement is made by measuring the receiver input
noise level, noise is an integral part of the measurement. I would be hard
pressed to use a spectrum analyzer to make a noise floor measurement on a
receiver. It could be done with some analyzers, but it would require either
using an analyzer capable of reporting the total power of the displayed
spectrum, or doing an FFT on the receiver output noise to obtain the same
thing, then adding a signal-generator signal and determining when the total
power rose by 3 dB. It is much easier to make that measurement with an analog
meter.
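The arithmetic behind that 3 dB criterion is simple; this little sketch (with
a made-up noise floor) shows why the generator level that raises the total
output power by 3 dB is, by definition, the noise floor:

    # When an added signal has the same power as the receiver's own noise,
    # the total power is double the noise alone -- a 3 dB rise.  So the
    # generator level that produces a 3 dB rise IS the noise floor (MDS).
    import math

    def total_power_dbm(noise_dbm, signal_dbm):
        # Powers add in milliwatts, not in dB.
        return 10.0 * math.log10(10 ** (noise_dbm / 10.0) + 10 ** (signal_dbm / 10.0))

    noise_floor = -137.0    # hypothetical noise floor, dBm
    rise = total_power_dbm(noise_floor, -137.0) - noise_floor
    print(round(rise, 2))   # 3.01 dB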
In making measurements of receiver dynamic range, the ARRL Lab does indeed make
those measurements at the noise floor. If you look at the graphs I generated
for the sidebar in the above article, you will correctly note that receivers do
NOT always follow the third-order laws; in fact, from my 15+ years of
experience testing receivers for ARRL Product Review (either as test engineer,
or supervising one), I have found that most do not. For a number of years, ARRL
made dynamic range measurements at the noise floor, then calculated an IP3 from
those measurements. Doug Smith and I have had a number of interesting
discussions about this, and when we first started, that is exactly how he felt
ARRL should be making IP3 measurements, so that the dynamic range and IP3
results always added up correctly.
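For reference, that calculation is the textbook relationship, which assumes
perfect 1:1 and 3:1 slopes; the numbers here are only an example, not data
from any review:

    # Ideal two-tone, third-order relationships.  With perfect slopes,
    # IP3 = MDS + 1.5 * DR, and DR = (2/3) * (IP3 - MDS).
    def ip3_from_dynamic_range(mds_dbm, dr_db):
        return mds_dbm + 1.5 * dr_db

    def dynamic_range_from_ip3(mds_dbm, ip3_dbm):
        return (2.0 / 3.0) * (ip3_dbm - mds_dbm)

    # Example: a -137 dBm noise floor and a 95 dB measured dynamic range
    print(ip3_from_dynamic_range(-137.0, 95.0))   # +5.5 dBm
    print(dynamic_range_from_ip3(-137.0, 5.5))    # 95.0 dB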
A few years back, though, Ulrich Rohde worked with ARRL and helped us
understand why the noise floor was not an ideal place to make IP3 measurements.
He suggested to ARRL that we use an S5 receiver output level as the standard
for making IP3 measurements. At the time, I took a look at as many of the
current spate of radios as I could lay hands on, and did some measurements of
IP3 at various levels, as Mike Tracy did for the sidebar I wrote. When I
looked at the graphs of those tests, at first I was quite puzzled about how to
best decide what the "real" IP3 of the radio might be. After all, I could get
a different IP3 at the noise floor, at S5, at S9, etc. But when I did what I
suggest readers do and drew a "best fit" line with the correct 1:1 and 3:1
slopes onto the real graphs at levels below AGC compression, I concluded then,
and still conclude now, that the measurements made at an S5 level are a good
representation of the IP3 of that radio at the levels at which amateurs are
apt to encounter
IMD distortion on the bands. To correct any misunderstanding, ARRL has been
making its IP3 measurements well above the noise floor -- at an S5 receiver
output level -- for over 10 years running now.
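For anyone following the arithmetic, the IP3 that comes out of a two-tone
measurement made well above the noise floor works out this way; the tone level
and product suppression below are only illustrative:

    # Input-referred IP3 from a two-tone test made well above the noise floor.
    # If each input tone is at p_tone_dbm and the third-order product,
    # referred back to the input, is delta_db below the tones, then
    # IP3 = p_tone_dbm + delta_db / 2.
    def ip3_from_two_tone(p_tone_dbm, delta_db):
        return p_tone_dbm + delta_db / 2.0

    # Example: two -30 dBm tones with products 70 dB down, input referred
    print(ip3_from_two_tone(-30.0, 70.0))   # +5 dBm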
Frankly, for most HF installations, an intermod product that is equal to the
noise floor of a receiver will not be heard once that receiver is connected to
an antenna and band noise that is tens of dB higher than the receiver input
noise. Even an intermod product whose output was S1 or so would generally not
cause harmful interference. By selecting an S5 receiver output level as the
point at which the IP3 measurement is made, we have both a good fit for the
way real receivers
have performed over decades of testing and an intermod level that is reasonably
representative of the level that will probably cause harmful interference in
actual use.
But you have hit upon one flaw in this system. S5 could be -109 dBm in one
receiver, -129 dBm in another with a very generous S meter and -89 dBm in yet
another receiver. I believe that correcting this is a good step for ARRL to
take and we intend to take it, but only after doing a thorough investigation on
any change, coordinated well with manufacturers. In general, it should not
make a tremendous difference in the actual IP3 calculation. If you look at the
graphs in my sidebar, I had made IP3 measurements at S1, S5 and S9 (plus a few
other levels for some of the receivers). I can't imagine that even a stingy S
meter would fail to read at least S5 for a signal level of -73 dBm, the "Collins"
S9 standard. So the net effect of being at different points along that
intersecting first-order and third-order set of lines is, between S1 and S7,
typically not more than a few dB. But the potential for hanky panky is still
there with the present test levels, and I want to change it. Look at Fig C in
my sidebar. If a manufacturer were to note that his radio has a significantly
higher IP3 at S9+, and made his S meter the most stingy in the world, the
reported IP3 of that radio would be inflated over its "real" IP3 by about 8 dB
-- not a good thing by any stretch.
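For reference, the "Collins" S-meter convention I keep referring to is
S9 = -73 dBm with 6 dB per S unit -- any given radio's meter may of course
read quite differently:

    # "Collins" S-meter convention: S9 = -73 dBm, 6 dB per S unit below S9.
    def s_unit_to_dbm(s_unit):
        return -73.0 - 6.0 * (9 - s_unit)

    for s in range(1, 10):
        print("S%d = %d dBm" % (s, s_unit_to_dbm(s)))
    # S1 = -121 dBm ... S5 = -97 dBm ... S9 = -73 dBm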
The other factor in S meter sensitivity is receiver sensitivity. Mike Tracy and
I just had a nice chat and our thinking is that if we were to take the
"Collins" S5 level of -109 dBm and add the receiver's noise figure to it, we
would have about as close to a standard receiver output level for IP3
measurements as one could get. There is a minor downside to that, though --
the eyeball factor. It is relatively easy for a test engineer to eyeball a
whole S unit number -- S5 for example. But if we standardize on -97 dBm +
noise figure, that will result in an S meter reading that is usually not going
to be a whole number. When the "on channel" reading is taken, the engineer will
then have to readjust the signal generators to create an intermodulation
product, then eyeball that product to the same S meter reference level -- not
an easy task with a few minutes' time between the two measurements. It may be
better in the long run to allow a +/- 4 dB variation from the standard level
so that the test engineer can select a whole-unit level and get a more
accurate reading. In a receiver with true 1:1 and 3:1 slopes, there would be
no difference in IP3 measured at any level. In real-world receivers, this 4 dB
variation would result in an IP3 reading that varied by the amount that the
curves deviate from straight over a 4-dB range. From my experience with
receivers, this generally would result in only a fractional dB difference in
IP3, and I might expect at least that much from the eyeball
factor.
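Just to sketch the idea Mike and I were kicking around in concrete terms --
none of this is settled procedure, the function names are mine, and the 10 dB
noise figure is only an example:

    # Sketch of the proposed reference level: Collins S5 plus the receiver's
    # noise figure, with the engineer allowed to round to the nearest whole
    # S unit (shown here on the Collins scale, for illustration only).
    COLLINS_S5 = -97.0   # dBm: S9 = -73 dBm minus four 6 dB S units

    def reference_output_level(noise_figure_db):
        return COLLINS_S5 + noise_figure_db

    def nearest_whole_s_unit(level_dbm, tolerance_db=4.0):
        # Round to a whole S unit if that stays within the allowed tolerance.
        s = round((level_dbm + 73.0) / 6.0 + 9)
        rounded = -73.0 - 6.0 * (9 - s)
        return rounded if abs(rounded - level_dbm) <= tolerance_db else level_dbm

    level = reference_output_level(10.0)   # 10 dB NF, for example
    print(level)                           # -87 dBm
    print(nearest_whole_s_unit(level))     # -85 dBm, i.e. S7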
Now, one might say to skip the S meter and look only at receiver output, but
the only way to do that is with signals that are weak enough to be in the
linear range of the receiver. We have seen receivers whose AGC is so tight
that we can never get more than 9 dB change in receiver output no matter what
we do. In most radios, the receiver output will be the same at S5 as it would be
at S9. We could, for most receivers, do IP3 measurements at a receiver output
that is 10 dB below the 1 dB compression point, but that is making an IP3
measurement at a level that is much less than the signals typically encountered
during actual receiver use.
All in all, I think we are on the right track with an S5ish output level and it
now is a matter of improving the standardization.
Actually, Doug and I agree pretty closely up to this point. Our differences of
opinion stem from "dynamic range" measurements. ARRL continues to make dynamic
range measurements at the noise floor, even though they are a bit difficult to
make. We do this because the very definition of two-tone, third-order dynamic
range (TTTODR) is the difference between the noise floor and an unwanted
receiver response at the noise floor level. The definition of blocking dynamic
range (BDR) is the difference between the noise floor and the level of signal
that causes 1 dB of degradation in the receiver's performance.
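In terms of the raw numbers, both dynamic range figures come straight from
measured levels; the example levels below are made up, not data for any
particular radio:

    # Dynamic range figures as differences between measured input levels.
    def tttodr_db(two_tone_level_dbm, noise_floor_dbm):
        # Two-tone, third-order dynamic range: input level of two equal tones
        # whose third-order product equals the noise floor, minus the noise floor.
        return two_tone_level_dbm - noise_floor_dbm

    def bdr_db(blocking_level_dbm, noise_floor_dbm):
        # Blocking dynamic range: level of the single signal that degrades the
        # receiver by 1 dB, minus the noise floor.
        return blocking_level_dbm - noise_floor_dbm

    print(tttodr_db(-40.0, -137.0))   # 97 dB
    print(bdr_db(-17.0, -137.0))      # 120 dB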
When you look at the graphs, especially 1C, you can see that the relationship
between that actual measurement and IP3 and other points along those curves is
not as clear cut as theory might suggest. Doug and Ulrich both state that
dynamic range can be calculated from an IP3 measurement made at a higher level.
I disagree that this is the best way to determine dynamic range. First, one
could get a different "dynamic range" at any point along the curve, and if the
calculation is intended to work backwards to the noise-floor level by assuming
that lines that are not straight are actually straight, I see no reason not to
do what one already has to do to get the noise-floor reference in the first
place and make the actual measurement at the noise floor. It is not, IMHO,
accurate to make a "dynamic range" measurement at a high level and then
calculate backwards to what the measurement would be if the receiver performed
differently than it actually performs. If an actual
measurement can be made -- impossible for IP3 -- then doing so is superior to
any calculation that assumes ideal performance where ideal performance is not
apt to exist.
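To put a rough number on why I do not like calculating backwards, here is a
toy model of my own -- made-up values, not ARRL data -- showing how much the
implied dynamic range moves if the IMD products actually follow a slope other
than 3:1 on the way down to the noise floor:

    # Toy model: dynamic range implied by an IP3 number if the IMD products
    # follow a constant slope n (ideal n = 3) down to the noise floor:
    # DR = (n - 1)/n * (IP3 - MDS).  Illustration only.
    def implied_dr_db(ip3_dbm, mds_dbm, slope):
        return (slope - 1.0) / slope * (ip3_dbm - mds_dbm)

    ip3 = 20.0     # hypothetical IP3 from a measurement near S5, dBm
    mds = -127.0   # hypothetical noise floor, dBm
    for n in (3.0, 2.8, 3.2):
        print(n, round(implied_dr_db(ip3, mds, n), 1))
    # 3.0 -> 98.0 dB, 2.8 -> 94.5 dB, 3.2 -> 101.1 dB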
What is lost doing it this way? Well, an IP3 calculation made from dynamic
range could and probably will be different than an IP3 calculation made at
higher levels. Those that want to see everything add up in real-world receivers
as if those receivers followed theory perfectly will have to do a bit of
thinking about how real-world receivers actually perform. I think having
measurements that reflect what a receiver is actually doing outweighs any lost
sense of aesthetics.
What can also get lost is that many dynamic range measurements cannot be made
because receiver noise, notably reciprocal mixing, masks the measurement being
made. In these cases, ARRL reports the measurements as "noise limited" at the
level that caused a 3 dB increase in noise for a TTTODR measurement or a 1 dB
increase in noise for a BDR measurement.
Doug has suggested that an audio spectrum analyzer be used to make receiver
measurements. In the case of a measurement that is not affected by receiver
noise, there will be no difference between the measurement made with an
RMS-reading audio voltmeter and a spectrum analyzer. Actually, I take it back,
because the HP-339 has a spec of +/- 2% accuracy, if memory serves, and the
HP-8563E has an accuracy of 2 dB, or about +/- 60%, although for relative
measurements made at a close frequency separation, the accuracy is going to be
better than spec for both instruments. In the case of a measurement that is
completely noise limited, the analyzer will buy nothing, because if the
receiver is swamped by its own phase noise in the presence of strong signals,
the analyzer can't be used to dig a distortion product out of that noise. In
between, though, is an area where the analyzer may be able to do some things
that the RMS-reading voltmeter cannot. Do we want to, though? I am not sure we
do.
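For perspective on the HP-339 and HP-8563E specs I quoted above, here is the
conversion arithmetic between percent and dB (the specs themselves are quoted
from memory):

    # Converting instrument accuracy specs between percent and dB.
    import math

    def voltage_percent_to_db(pct):
        # A voltage error of pct percent, expressed in dB.
        return 20.0 * math.log10(1.0 + pct / 100.0)

    def db_to_power_percent(db):
        # A dB error, expressed as a percentage change in power.
        return (10 ** (db / 10.0) - 1.0) * 100.0

    print(round(voltage_percent_to_db(2.0), 2))   # 2% of voltage is about 0.17 dB
    print(round(db_to_power_percent(2.0), 0))     # 2 dB is about +58% in power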
Let's look at one extreme. Assume that a receiver is noisy, but that an
intermodulation product can be detected 15 dB below the receiver noise present
during the test. Right now, with the RMS voltmeter method, we would report that
reading as being noise limited at N dB, with N being related to the signal
levels that caused the increase in noise instead of a distortion product or
blocking. Would it really be more useful to QST readers to dig that distortion
product out of the noise so that we could report that if the receiver were not
as noisy as it is, the dynamic range would have been N dB? I really don't think
so, and IMHO, the noise limited value is of more use to QST readers than a
"dynamic range" measurement that in that case cannot be achieved by the
receiver in practice.
Where could a spectrum analyzer be used more effectively than an RMS-reading
voltmeter? I have seen a few receivers where phase noise/reciprocal mixing and
the intermod or blocking seem to run neck and neck. In that case, the good
judgement of the test engineer has to sort out the intermod or blocking from
the noise and get a reading anyway. I think in these relatively rare cases, the
analyzer will do a better job than the RMS-reading voltmeter, in spite of the
specification inferiority of the analyzer. Mike Tracy intends to look into that
for an upcoming review. But do not discount the value of an RMS-reading
voltmeter: an audio spectrum analyzer does nothing different from the meter
when measuring the desired signals, and when measuring the input noise of the
receiver, the voltmeter is a LOT easier to use.
> 3. How the introduction of noise (from reciprocal mixing, from
> inaccuracies about knowledge of the rcvr's noise figure,
> imperfection about "knowing" the MDS of the particular
> rcvr, etc.) is accounted for can cause inaccurate measurement
> results; how these are accounted for ought to be part of
> the report of results for the measurement presented of IMD,
> IP's, etc. Also to be included is how the actual BW used
> in the measurement was determined and that number.
> (Note: ARRL procedure is to just select whatever the
> rcvr has stated to be at or close to 500 Hz BW; Doug
> notes that in various rcvrs tested the actual BW's
> ranged from 300 to 700 Hz for filters with 500 Hz labels).
I agree that some changes should be made in the way that ARRL reports
bandwidth. A reported noise floor can only be interpreted with complete
accuracy if the equivalent rectangular bandwidth of the receiver passband is
also known. Of course, although the noise figure of a receiver is independent
of bandwidth, one can only really relate it to what one will hear from a
receiver in real use if one also knows the bandwidth of the receiver. So any
method of reporting sensitivity is tied into bandwidth -- not surprising
because the real-world performance of a receiver is also related to bandwidth.
Right now, ARRL reports the -6 dB bandwidth points on the receiver. This is
probably pretty close to the equivalent rectangular bandwidth in most cases.
The League is also including some bandwidth measurements in the expanded
test-result reports that are done for most rigs. Tidying up this loose
end would be a good improvement to make. But, as in the S5 reference level for
IP3, we really are not talking more than a few dB difference, if that, in
almost all cases.
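If we do tidy up that loose end, the bandwidth correction itself is small and
easy to apply; as a rough sketch, here is a noise floor normalized from an
actual measured bandwidth to a 500 Hz reference (the 560 Hz figure is just an
example of a "500 Hz" filter that measures a bit wide):

    # Normalizing a noise-floor measurement from the bandwidth actually
    # measured to a 500 Hz reference bandwidth.
    import math

    def normalize_noise_floor(mds_dbm, measured_bw_hz, reference_bw_hz=500.0):
        return mds_dbm - 10.0 * math.log10(measured_bw_hz / reference_bw_hz)

    print(round(normalize_noise_floor(-136.5, 560.0), 1))   # about -137.0 dBm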
> 4. Doug discusses the ARRL IMD and IP measurement
> equipment set up and use explicitly. He suggests the use
> of a better hybrid combiner for "summing" the outputs of
> the two input test signals; he mentions his new design for
> same, notes the ARRL is/was evaluating it, but no mention
> of whether the ARRL labs will be using it now/in the future.
> He points to several isolation/filtering issues which need
> more attention within the present set up.
Doug has made a few assumptions that are not correct. In addition to the
simple, two-port coupler method, ARRL also has a set of high-linearity 1-W
instrumentation amplifiers donated to us by Ulrich Rohde. We have used these
to verify that with the old HP-8640B generators, the ARRL Lab can accurately
measure up to about +33 dBm IP3 and with the Marconi generators, the
test-fixture IMD is low enough to measure about +40 dBm IP3, if memory serves.
All receivers measured to date are below that level, so the two-port coupler is
quite sufficient. I believe that Mike Tracy has confirmed that the simple test
setup is good to +75 dBm IP2. I do know that although the amplifier setup is a
lot more cumbersome, Mike always confirms any IP3 readings above +25 dBm, just
to ensure that ARRL is measuring the unit under test, not the ARRL test
fixture. We are also aware of the other limitations of our test equipment and
would NEVER knowingly report a number that we were not convinced was a real
measurement of the unit under test.
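To give a feel for why that verification matters, here is a rough illustration
-- my own numbers, not a description of the actual fixture math -- of how much
a test fixture's own IMD can shift an apparent reading:

    # How fixture-generated IMD can inflate an apparent third-order product.
    # Illustration only: the two product powers are simply added.
    import math

    def apparent_imd_dbm(dut_imd_dbm, fixture_imd_dbm):
        return 10.0 * math.log10(10 ** (dut_imd_dbm / 10.0) +
                                 10 ** (fixture_imd_dbm / 10.0))

    dut = -110.0   # hypothetical DUT third-order product level, dBm
    for margin_db in (20.0, 10.0, 3.0):
        shift = apparent_imd_dbm(dut, dut - margin_db) - dut
        print(margin_db, round(shift, 2))
    # fixture products 20 dB down: 0.04 dB shift; 10 dB down: 0.41 dB;
    # 3 dB down: 1.76 dB (the reported IP3 shifts by about half of that)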
> 8. Ed's conclusion in his sidebar piece seems a good summary
> comment to all of this: " In the case of the receivers (tested here)
> what is the "true" intercept point of each receiver? There really
> is no true number........ the tests took considerable time. QST
> readers want to see Product Reviews as soon as possible,
> and the ARRL Lab can't take time to do much extra testing
> for radios being reviewed. Measurements made at the noise
> floor are difficult to make, and the influence of the measured
> noise on an IP3 calculation made from receiver response at
> the noise floor is not a very accurate way to make
> measurements." (!)
>
> And yet, that is exactly what is done by the ARRL and reported
> in their published rig test reports! Not only that, but note that
> every rig tested is going to be tested with different power levels
> since not every rig's S meter is going to read S = 5 with an
> exact -97 dBm input signal power pair. And ARRL Labs take
> no account for the differing BW's of the selected filter "closest
> to a 500 Hz bandwidth, nor even if the rig has one!
That is not exactly what is done by the ARRL and reported in their published
test reports. The measurements made at the noise floor are, well -- the noise
floor measurement and the dynamic range measurements. There is no other place
to measure the noise floor and, as I outlined above, I believe that a dynamic
range measurement made at the noise floor is more accurate than a calculation
based on ideal receiver responses extrapolated from a higher level. My words as
written describe exactly what ARRL does: the noise floor is not an accurate
place to make an IP3 measurement, as can be readily seen in 2 of the 3 graphs
in my sidebar.
> It seems that ARRL has neither the time, the equipment nor
> the interest in publishing more accurate test data. Hope folks
> will agree that this throws the accuracy responsibility back
> to the manufactures and ourselves, their customers and our
> own "on the air" judgments about the performance of our
> radios.
That is below the belt and an absolutely incorrect statement. Over the years I
have been involved in product testing for ARRL, I have made, and will continue
to make, improvements in the way ARRL tests and reports on equipment. Our
equipment is quite up to the tasks at hand here, and after I have spent
literally hundreds of staff hours investigating test methods, I have to wonder
why you would try to tell the world that ARRL has no interest in publishing
more accurate test data. The time that went into the sidebar alone should have
told you why saying that was a cheap shot that has no place in a technical
discussion. I agree that improvements should be made, but they are going to be
made carefully, because if ARRL changed its methods every time someone told me
what it is doing "wrong," such as making dynamic range measurements at the
noise floor, the way ARRL tests equipment would change every month. That would
not serve amateur
radio. If we are going to make a change, it will be in full communication with
manufacturers, who need to understand fully up front how ARRL will test their
equipment. In the case at hand, Ten Tec's own IP2 and IP3 specs stipulate "ARRL
method," although the way we test equipment is really NOT different than that
done by industry as a whole.
Some of the European societies do some comprehensive testing, but no one that I
know of has documented their test methods as thoroughly as ARRL. None routinely
offer 40+ page test result reports. If you believe that ARRL's test methods
are inaccurate, take a receiver and test it using them and then using the
valid method of your choice; you should get the same answer, within a dB or
so. If not, I want to know about it, and you would be more than welcome to
visit us here in the ARRL Lab and see how we are doing our testing.
In this post, I have outlined some of the reasons behind the testing choices
ARRL has made. I believe them to be the correct choices, offering
a reasonable level of standardization in testing and reporting on receivers
with a wide range of capabilities and "real-world" receiver performance. There
are improvements in the works, but they are not going to make a night and day
difference in results, because the test methods used give good results for the
test conditions employed, and most improvements I can think of will serve only
to tighten up a bit on the test conditions.
73,
Ed Hare, W1RFI
ARRL Lab
225 Main St
Newington, CT 06111
Tel: 860-594-0318
Internet: w1rfi@arrl.org
Web: http://www.arrl.org/tis