> > > suggestion, implied or otherwise, that boresight performance of the
> > > antennas under test can be extrapolated from field strength
> > > measurement made deep inside a pattern null is dubious.
> I would agree if one attempted to do this on a single antenna. But this is
> a COMPARISON, and all are equally subject to the same attenuations.
That's a common assumption, but unfortunately it simply is not true.
It is true ONLY if the antennas have exactly identical patterns when
the test site is subject to scattering or multipath. The larger the
difference in individual patterns, the greater the potential error.
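As a rough illustration of that point (all numbers hypothetical, not from the actual tests), here is how two antennas with IDENTICAL boresight gain can still measure differently on a site with one reflected ray, simply because their patterns toward the reflecting object differ:

```python
import math

# Two hypothetical antennas with IDENTICAL boresight gain (10 dBi) but
# different pattern rejection toward a reflecting object -- made-up numbers.
boresight_db = 10.0
rejection_db = {"ant1": 20.0, "ant2": 6.0}  # pattern level toward the reflection

reflect_path_db = -10.0  # reflected ray arrives 10 dB below the direct ray

def measured_db(rej_db):
    direct = 10 ** (boresight_db / 20)
    # reflected ray, attenuated by the path and by the antenna's own
    # pattern, adding in phase with the direct ray (worst case)
    refl = 10 ** ((boresight_db - rej_db + reflect_path_db) / 20)
    return 20 * math.log10(direct + refl)

err = measured_db(rejection_db["ant2"]) - measured_db(rejection_db["ant1"])
print(f"apparent gain difference between identical antennas: {err:.2f} dB")
```

Even this mild case produces about a 1 dB apparent "gain difference" between two antennas that are, by construction, identical at boresight.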
> remarkable is it not, that they all really came up with relatively equal
> gains, save one? And where modelling is available, the gains specified are
> a bit shy of predicted.
Maybe the dipole had gain from scattering or re-radiation?
Also, every measurement method and every instrument has
tolerances. No one seems to factor that into the equation.
The effect of writing information down and presenting it in formal
form is that most people accept the data without understanding or
considering that it always contains errors or tolerances.
Does anyone have any idea what the potential measurement error is?
None of this is meant to imply the results are incorrect, but it is a
fact that the methods were far from ideal. The results, with almost 100%
certainty, have a few dB of tolerance. While that won't change the
general order of results, it could greatly change the "gain spread"
or the order of close-to-the-same-gain antennas.
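That effect is easy to demonstrate with a toy simulation (the gain figures and the 1.5 dB tolerance below are assumptions for illustration only, not the actual test data): antennas separated by only a few tenths of a dB reorder constantly under measurement error, while a clearly worse antenna stays at the bottom.

```python
import random

# Hypothetical "true" gains (dBi) for five antennas -- illustrative only
true_gain = {"A": 8.1, "B": 7.9, "C": 7.6, "D": 7.4, "E": 4.0}

random.seed(1)
tolerance_db = 1.5   # assume each measurement carries +/-1.5 dB of error
trials = 1000

reorders = 0         # trials where the four close antennas change order
e_not_last = 0       # trials where the clearly-worst antenna escapes last place
for _ in range(trials):
    measured = {n: g + random.uniform(-tolerance_db, tolerance_db)
                for n, g in true_gain.items()}
    ranking = sorted(measured, key=measured.get, reverse=True)
    if ranking[:4] != ["A", "B", "C", "D"]:
        reorders += 1
    if ranking[-1] != "E":
        e_not_last += 1

print(f"{reorders}/{trials} trials scrambled the close antennas")
print(f"{e_not_last}/{trials} trials moved the worst antenna off the bottom")
```

The general order survives the tolerance, but the fine-grained ranking of the close antennas is essentially noise.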
It amazes me that so many of us disregard potential errors and
consider the results in a test like this as absolute. The damage this
can do is tremendous; it could easily put a company out of
business, or at least hurt it financially, based on data that carries
unstated measurement errors.
I got into a similar problem with the ARRL in 1984 or so. They
reviewed an AL-1200 amplifier and tested it on a defective power
source at the lab. As a result, the HV sagged far beyond what it
would on a good power source. The ARRL measured the supply-
voltage sag using an incorrect method (despite my warnings that
they were measuring it wrong, using an RMS meter instead of a
peak-reading meter or a scope), and concluded the power-line
supply they used was good.
They published the results, and then when they took the same AL-
1200 over to W1AW it behaved TOTALLY differently. The sag was
gone, and the power and efficiency were way up. They never
published a correction for the error, but they did quit using that
regulated supply in future tests. Had they kept using that supply, some
amplifiers would have done better than others, and the spread from
best to worst would have expanded.
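The RMS-versus-peak distinction above can be sketched with a toy waveform (the voltages and duty cycle are made-up numbers, not the actual review data): an RMS reading averages over the whole interval and hides a momentary dip that a scope or peak-reading meter catches directly.

```python
import math

# Hypothetical supply-voltage samples over one key-down event (volts).
# The line sags to 210 V while the amplifier draws peak current, but
# sits near 240 V the rest of the time -- numbers are illustrative only.
samples = [240.0] * 80 + [210.0] * 20   # 20% duty-cycle load pulse

rms = math.sqrt(sum(v * v for v in samples) / len(samples))
worst_sag = min(samples)

print(f"RMS meter reads about {rms:.1f} V")      # hides most of the sag
print(f"scope / peak-reading meter shows {worst_sag:.1f} V at the dip")
```

The RMS reading comes out near 234 V, which looks healthy, while the scope shows the supply actually dropping to 210 V exactly when the amplifier needs it most.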
Having been in that position at least three times now in reviews, I
feel some sympathy for others who are thrust into the same position.
These types of "reviews" are just one person's (or one small
group's) idea of how something should be measured. They are
never perfect, and more often than we imagine they are not even close.
That's the way the system will always work when people don't
verify data with a totally different measurement method that involves
different people. Without a blind cross-check, you have no idea
what you are getting.
73, Tom W8JI
FAQ on WWW: http://www.contesting.com/towertalkfaq.html
Administrative requests: towertalk-REQUEST@contesting.com