
Re: [TowerTalk] antenna FS measurements

To: "Tom Rauch" <w8ji@contesting.com>, <TexasRF@aol.com>,<jimjarvis@ieee.org>, <towertalk@contesting.com>
Subject: Re: [TowerTalk] antenna FS measurements
From: "Jim Lux" <jimlux@earthlink.net>
Date: Tue, 29 Jun 2004 21:03:46 -0700
List-post: <mailto:towertalk@contesting.com>
----- Original Message -----
From: "Tom Rauch" <w8ji@contesting.com>
To: "Jim Lux" <jimlux@earthlink.net>; <TexasRF@aol.com>;
<jimjarvis@ieee.org>; <towertalk@contesting.com>
Sent: Tuesday, June 29, 2004 6:12 PM
Subject: Re: [TowerTalk] antenna FS measurements


> > Why not?  Presumably the gain of the radio will be stable (in the short
> > run).  The sound card gain likewise.  The sampling rate of the sound card
>
> The word "presuming" about sums it all up. It's a HUGE chain
> of gain stages including everything from RF through IF and
> audio stages and right up to the A/D conversion. This method
> would depend on that entire system to be LINEAR, not just
> gain stable. Gain stable is probably one of the less glaring
> problems.
>
> An expensive spectrum analyzer intentionally designed to be
> accurate is working pretty well if it is within 2dB over a
> wide range of input levels. We can safely bet a receiver
> designed to operate with AGC won't be near that good.

I don't know about that... The linearity can't be all that bad, or you'd
have terrible IMD kinds of problems.
I wouldn't expect the receiver to have, say, 50-60 dB of instantaneous
dynamic range, either (which a decent spectrum analyzer DOES have, at some
expense).  For the purposes of measuring antenna patterns, you really only
need 30 dB of range, and I'll bet you can find a received signal level where
the linearity is good enough for the measurement.
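
If anyone wants to check how good "good enough" is on their own receiver, it
only takes a step attenuator and a few minutes. Something along these lines
(a rough Python sketch; the attenuator steps are just examples, and you type
in whatever level your sound card software reports at each step):

    # Quick-and-dirty linearity check: put a calibrated step attenuator ahead
    # of the receiver, step it over the ~30 dB range you care about, and record
    # the level the receiver/sound card reports at each step.  If the chain is
    # linear, measured level vs. attenuation is a straight line with slope -1.
    # (The attenuator steps below are just an example.)

    import numpy as np

    atten_db = [0, 5, 10, 15, 20, 25, 30]          # attenuator settings, dB
    measured_db = []
    for a in atten_db:
        val = float(input("Attenuator at %2d dB -- enter measured level (dB): " % a))
        measured_db.append(val)

    atten = np.array(atten_db, dtype=float)
    meas = np.array(measured_db)

    slope, intercept = np.polyfit(atten, meas, 1)   # expect slope close to -1.000
    resid = meas - (slope * atten + intercept)
    print("slope = %.3f" % slope)
    print("worst deviation from straight line = %.2f dB" % np.abs(resid).max())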

The "detection linearity" of a spectrum analyzer is determined by the analog
power detector (usually a diode) and the a/d that follows it. Most decent
spectrum analyzers these days linearize the response using table lookups,
etc.  It's a long way from a 141T using a log-amp; however, there are some
issues with the accuracy of that linearization, particularly in terms of
absolute accuracy. For reference, the bottom-of-the-line Agilent 4411B
claims 1.1 dB absolute accuracy over the entire range (and that includes the
uncertainties from the input step attenuator, etc.).  Spectrum analyzers
also have to meet that spec while sweeping over a potentially wide frequency
range, which places pretty tough demands on the wideband response linearity.
That $8000 spectrum analyzer also has a -124 to +30 dBm amplitude range, and
an 80-90 dB instantaneous dynamic range. It also has to have carefully
designed filters so that the time domain response isn't screwed up (because
it's a swept device) with many selectable bandwidths.  The spectrum analyzer
has an absolute accuracy of 0.4 dB (3 sigma, I think) at the reference
setting (-25 dBm, 50 MHz, 3 kHz BW, span 2 kHz).  The frequency response
across the entire band from 9 kHz to 3 GHz is flat to 0.5 dB.  That's a heck
of a lot better than most receivers.

The detection linearity of the ham receiver feeding a sound card at a fixed
frequency, on the other hand, can be quite good.  The receiver itself has to
be quite linear (from an instantaneous transfer function standpoint), or you'd
be complaining about distortion products.  The absolute gain might vary as
you tune, but it's going to be reasonably stable at a given frequency.  The
linearity of the sound card has to be fairly good too, or all those wannabe
audiophile sound card junkies would complain.  Fact of the matter is, it's
not that tough to build a sound card type device with linearity such that
the error is less than 1 lsb out of 14 bits (that's around 80-85 dB,
depending on the precise error distribution).  Getting to 16 or 24 bits is a
bit more challenging, but I suspect that almost all sound cards could do this.
Add in the fact that the inevitable random noise actually helps, once you do
any sort of signal processing to increase the integration time (like doing
an FFT on a 1 second capture for instance).
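
To make that concrete, the whole measurement is only a few lines of code.
Here's a rough sketch (Python, using the python-sounddevice package; the
sample rate, audio tone frequency, and search window are placeholders you'd
adjust for your own setup):

    # Minimal sketch: measure the power of a received tone at the receiver's
    # audio output using a sound card and a 1 second FFT.  Uses the
    # python-sounddevice package; FS and TONE_HZ are assumptions.

    import numpy as np
    import sounddevice as sd

    FS       = 48000      # sound card sample rate, Hz
    DURATION = 1.0        # integration time, seconds
    TONE_HZ  = 700        # audio frequency of the received beat note (assumed)

    def measure_tone_db():
        """Return the level of the strongest bin near TONE_HZ, in dB (uncalibrated)."""
        x = sd.rec(int(FS * DURATION), samplerate=FS, channels=1, dtype='float32')
        sd.wait()
        x = x[:, 0]
        x = x - np.mean(x)                        # remove any DC offset

        # A 1 second FFT gives ~1 Hz bins, so most of the broadband noise falls
        # outside the bin holding the tone -- that's the integration gain.
        win = np.hanning(len(x))
        spec = np.abs(np.fft.rfft(x * win))
        freqs = np.fft.rfftfreq(len(x), 1.0 / FS)

        # look only within +/-50 Hz of where we expect the tone
        sel = (freqs > TONE_HZ - 50) & (freqs < TONE_HZ + 50)
        peak = spec[sel].max()
        # relative dB only; relative is all you need for pattern measurements
        return 20 * np.log10(peak / len(x))

    if __name__ == "__main__":
        print("tone level: %.2f dB (relative)" % measure_tone_db())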

This is, of course, why they don't use spectrum analyzers on antenna ranges!
They're not designed for it.  You use a narrowband measurement receiver,
which has a much more limited instantaneous dynamic range, but which has
very stable detection accuracy so you can calibrate it. It's unfair to claim
that just because a spectrum analyzer costs many kilobucks and barely makes
1 dB accuracy (although they do better than that), you can't do better with a
consumer ham receiver and a sound card.  Different measurement problem,
different issues.


>
> > > MFJ sells a surface mount step attenuator that is accurate
> > > within a small fraction of a dB per step. Of course I'd
> > > check it first. I have three or four, and they are within
> > > .05dB per step.
> >
> > That makes it hard to do an automated measurement.
>
> I doubt anyone will do an automated measurement anyway. It
> would be tough to obtain a stable source (it has to have a
> pattern to the RX antenna direction that is steady, and that
> includes polarization). Worse yet, it is subject to ground
> effects on its pattern.

This discussion was in the context of using HF beacons at some distance, not
doing measurements on an antenna range.  In that application, you'd be
making hundreds of measurements over time and azimuth angles, and automation
would be essential.
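
The automation itself is not much: point the antenna (or wait for the next
heading or schedule slot), measure, and log a timestamped record.  Something
like this sketch (Python; the rotator control and the measurement routine are
stand-ins for whatever hardware and code you actually have):

    # Sketch of the sort of automation I mean for beacon measurements: point
    # the antenna, measure the received level, log it with timestamp and
    # azimuth.  point_rotator() and measure_tone_db() are placeholders.

    import csv
    import time

    def point_rotator(az_deg):
        # placeholder: send the azimuth command to your rotator controller here
        time.sleep(5)               # give it time to turn

    def measure_tone_db():
        # placeholder: return the beacon level in dB (see the FFT sketch above)
        return 0.0

    azimuths = range(0, 360, 10)    # 36 headings, 10 degree steps

    with open("beacon_log.csv", "a", newline="") as f:
        log = csv.writer(f)
        for az in azimuths:
            point_rotator(az)
            level = measure_tone_db()
            log.writerow([time.time(), az, "%.2f" % level])
            print("az %3d deg: %.2f dB" % (az, level))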

Even on a range, I'd think you're going to do automated measurements.  Why
would using a manual attenuator relax the requirement for a stable source?
>
> > procedure which depends on lots of manual operations is going to tend to
> > reduce the total number of measurements made, so you lose the good
> > statistics.  Off hand, I'd trust the measurements from a sound card, or from
> > a DVM measuring the audio or IF output more than manually entered switch
> > flipping.
>
> Having worked with receivers and even DVM's most of my life,
> I wouldn't.

Unless your labor is free, I'd find this hard to believe.
Doesn't take many hours of flipping switches to pay for a really, really
accurate DVM, and if you're measuring audio signals, a PC is a pretty good
AC voltmeter, with very minimal calibration required to get part-per-thousand
accuracy.
Even if the labor is free, you pretty quickly get to a situation where the
uncertainty due to making the measurements at different times will start to
dominate.
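
On the "PC as AC voltmeter" point, the whole thing boils down to computing the
RMS of a capture and scaling by a single calibration constant measured against
a known source.  A minimal sketch (Python with the python-sounddevice package;
the full-scale voltage below is made up, you'd measure your own):

    # The "PC as AC voltmeter" idea in a nutshell: capture a block of samples,
    # compute RMS, and scale by one calibration constant determined with a
    # known source.  VOLTS_PER_FS is a placeholder.

    import numpy as np
    import sounddevice as sd

    FS = 48000                 # sound card sample rate, Hz
    VOLTS_PER_FS = 1.41        # placeholder: full-scale input in volts, from calibration

    def ac_volts_rms(duration=0.5):
        x = sd.rec(int(FS * duration), samplerate=FS, channels=1, dtype='float32')
        sd.wait()
        x = x[:, 0]
        x = x - np.mean(x)                     # remove DC
        return VOLTS_PER_FS * np.sqrt(np.mean(x * x))

    if __name__ == "__main__":
        print("%.4f V rms" % ac_volts_rms())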

The one exception would be if you are making a single point measurement,
where it's possible that it would be cheaper and faster to make an
"attenuation replacement" type measurement than to use a suitable piece of
test equipment.


>
> > (If you're measuring the IF level, I would have a question about
> > the linearity of the detector).  Sure, manual methods can make good
>
> If you are measuring the very same IF level through an
> unknown detector and several additional AF stages including
> a sound card, I'd have a question about linearity also.

I wouldn't use a sound card if measuring IF levels. The IF measurement
approach would use a diode detector and a DVM.

The audio linearity just isn't that bad on sound cards.  It HAS to be
decent... they are multidecade bandwidth devices where people have a basic
expectation of sub-0.1% THD and "CD quality" sound.  There are millions of
dollars devoted to getting there, cheaply.  If a manufacturer produced a
sound card that was noticeably substandard, they'd get shredded in the "check
box PC reviews", so there's a strong incentive to make it good enough.  The
quasi-standard AC'97 spec calls for in-band integrated noise and distortion
artifacts below -90 dBFS and THD of -74 dB (0.02%).  Intermod is at least
85 dB down on the standard test method. The dominant error source in the
sound card interface is probably noise, which, conveniently, would be
reduced by averaging.
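
To illustrate the averaging point, here's a toy simulation (not a measurement,
just Python with invented numbers): the scatter of a power estimate made from
one capture, versus the average of 100 captures, shrinks by roughly sqrt(100):

    # Toy demonstration that the noise contribution averages down: simulate a
    # unit tone in noise, estimate its power from single captures and from
    # 100-capture averages, and compare the spread of the two estimators.

    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_power():
        # one "capture": a unit tone plus noise, power estimated over 4800 samples
        n = 4800
        t = np.arange(n) / 48000.0
        x = np.sin(2 * np.pi * 700 * t) + rng.normal(scale=0.5, size=n)
        return np.mean(x * x) - 0.25           # subtract the known noise power (0.5^2)

    def to_db(p):
        return 10 * np.log10(p)

    singles = np.array([to_db(noisy_power()) for _ in range(500)])
    avgs = np.array([to_db(np.mean([noisy_power() for _ in range(100)]))
                     for _ in range(30)])

    print("spread, single capture        : %.3f dB rms" % singles.std())
    print("spread, 100 captures averaged : %.3f dB rms" % avgs.std())   # ~10x smaller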

The audio interfaces aimed at the semi-pro and pro audio business are
substantially better than that, even.  A linearity problem would manifest
itself as an inability to get that 24-bit accuracy and all that 24 bits
implies.
>
> > measurements, but over the long run, for instance, I'd trust that automatic
> > network analyzer more than the slotted line and voltmeter. Likewise for
>
> The typical amateur receiver is not designed, constructed,
> or corrected as well as a $40,000 network analyzer. It's
> more like the slotted line and voltmeter.
>
> > power measurements. Carefully done attenuator substitution measurements to
> > the same detected level are metrologically good, but tedious, and probably
>
> That about sums it up also. Resetting level to the same point
> removes all errors except attenuator calibration errors and
> gain drift errors. It's a good method.
>

But slow, and not suited to repetitive measurements (or automation).  As a
result, you don't  see much use of this power measurement technique on
antenna ranges. You'd use it to calibrate the power meter or measurement
receiver, perhaps, and then use the calibrated measurement receiver to do
your antenna measurement.  Of course, you'd have to measure and calibrate
the attenuator.
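
In practice the calibration just becomes a lookup table: the attenuator
substitution run gives you pairs of (true level, receiver reading), and you
interpolate in that table to correct later readings.  A sketch with invented
numbers:

    # Sketch of how the calibration gets used: an attenuator-substitution run
    # gives pairs of (true level, receiver reading); later readings are
    # corrected by interpolating in that table.  Numbers below are invented.

    import numpy as np

    # from the calibration run: known input level (dB) vs. what the receiver read
    true_db    = np.array([  0.0,  -5.0, -10.0, -15.0, -20.0, -25.0, -30.0])
    reading_db = np.array([  0.0,  -5.1, -10.1, -15.3, -20.4, -25.6, -30.8])

    def corrected(reading):
        """Map a receiver reading back to a true level via the calibration table."""
        # np.interp wants increasing x, so flip the (descending) arrays
        return np.interp(reading, reading_db[::-1], true_db[::-1])

    print("receiver says -20.4 dB -> corrected %.2f dB" % corrected(-20.4))
    print("receiver says -12.0 dB -> corrected %.2f dB" % corrected(-12.0))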

I don't recall what the specs are on the Scientific Atlanta measurement
receivers we use on the range at work, but I think they are in the area of
0.2 dB for the first 10 dB, then get about 0.2 dB worse for every additional
10 dB down.  (So if you measured a null at 20 dB down, it could really be
20.6 or 19.4 dB down, 3 sigma.)

It would be interesting to see just how stable the gain of a consumer (ham)
receiver is.  I can't see any reason why it would be particularly bad, if
held at a reasonably constant temperature.  There would be a slow drift due
to aging, but temperature would be the dominant effect. If held at a really
constant temperature (say, within 1 deg C) the stability should be quite
good.

As an example*, a precision measurement receiver I built back in 1998-99 had
a 0.4 dB gain variation over a 60 deg C temperature change. There was
nothing special about the design or components (Mini-Circuits packaged amps
and mixers for instance) to enhance temperature stability (it was mounted on
a large thermal heatsink, and we calibrated the gain vs temperature to back
out the effects of temperature).  We measured pulsed microwave signals with
a dynamic range of some 30 dB with an uncertainty (due to the receiver only)
of 0.05 dB over a time span of 4 days.  (The overall system had other
uncertainties, but 0.05 dB was the budget for the receiver uncertainty.)  This
was with a 12-bit a/d, and very short integration times on the pulses (about
2 milliseconds).  Achieving 30 dB dynamic range, with 0.1 dB accuracy, using
a sound card and 0.1 second or 1 second integration should be no problem.
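
The temperature correction itself was nothing exotic; conceptually it amounts
to fitting gain versus temperature and subtracting the fit from each reading.
A sketch along those lines (Python, with made-up calibration numbers, not the
actual flight data):

    # Gain-vs-temperature correction in a nutshell: measure the receiver gain
    # at a handful of temperatures, fit a smooth curve, and remove the fitted
    # gain change from each raw measurement.  All numbers are illustrative.

    import numpy as np

    # calibration data: temperature (deg C) and measured gain change (dB) rel. to 25 C
    temp_c  = np.array([ 0.0, 10.0, 20.0, 25.0, 30.0, 40.0, 50.0, 60.0])
    gain_db = np.array([ 0.18, 0.12, 0.04, 0.00, -0.05, -0.14, -0.24, -0.36])

    coeffs = np.polyfit(temp_c, gain_db, 2)        # a quadratic fit is plenty

    def corrected_level(raw_db, temp_now_c):
        """Remove the temperature-dependent gain from a raw receiver reading."""
        return raw_db - np.polyval(coeffs, temp_now_c)

    print("%.2f dB" % corrected_level(-23.40, 47.0))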


Jim, W6RMK

*You can read all about it at:
Adams, Jon T., Lux, James P., "Ground Calibration of an Orbiting Spacecraft
Transmitter", IEEE 2000 Aerospace Conference, March 19-24, 2000.

Adams, Jon T., Lux, James P., "Ground Calibration of an Orbiting Spacecraft
Transmitter", IEEE 2000 International Geoscience and Remote Sensing
Symposium, July 24-28, 2000, pp. 37-49, v. 5.

Yoho, Peter, et al., "SeaWinds on QuikSCAT Calibration using a Calibration
Ground Station", IEEE 2000 International Geoscience and Remote Sensing
Symposium, July 24-28, 2000, pp. 1039-1041, v. 3.

