When these yagi comparison test results were first announced (I think I
remember the talk at Dayton several years ago), it really got me thinking: IS
IT POSSIBLE that there could be an antenna model that for years actually had
far less gain than its owners thought? The experiment certainly seemed
proper. The testers had solid engineering knowledge, the right
instrumentation, reference antennas that performed as expected, and an
antenna range that they had characterized. Everything seemed to be in order.
For the sake of argument, let's say I had one of the underperforming
antennas. IF I had a comparison dipole mounted nearby (hardly anyone ever
does), with an antenna switch to let me instantaneously switch between the
dipole and the beam, presumably I would see the effect the testers measured.
But, lacking that, IS IT POSSIBLE that I could go on for years working DX and
never know any better? I could rotate the antenna and see the pattern change.
I might get good F/B; I might THINK I had gain and a well-performing antenna.
Would I really know if I was not getting, say, the 6 dB of gain that the
antenna was supposed to deliver?
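It's worth putting numbers on why a missing 6 dB is so easy to overlook by ear: 6 dB is a factor of about 4 in power, which is only about one S-unit on a receiver's meter (S-units are nominally 6 dB apart, and real S-meters are rarely calibrated anyway). A minimal sketch of that arithmetic, with the 6-dB-per-S-unit convention taken as the assumption:

```python
# Convert a dB figure to a power ratio and to a nominal S-meter change.
# Assumption: one S-unit = 6 dB (the nominal convention; real meters vary).

def db_to_power_ratio(db: float) -> float:
    """Power ratio corresponding to a gain or loss expressed in dB."""
    return 10 ** (db / 10)

def db_to_s_units(db: float, db_per_s_unit: float = 6.0) -> float:
    """Approximate S-meter change for a given dB change."""
    return db / db_per_s_unit

if __name__ == "__main__":
    claimed_gain_db = 6.0
    print(f"{claimed_gain_db} dB is a {db_to_power_ratio(claimed_gain_db):.2f}x power ratio")
    print(f"about {db_to_s_units(claimed_gain_db):.1f} S-unit(s) on receive")
```

One S-unit of difference, on a band with fading, is well within what an operator could chalk up to propagation for years.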
Furthermore, say the antenna in question really was mis-designed, something
wrong somewhere. There's somewhere around 6 dB to be accounted for. Where did
it go? (Remember, the assumption I made is that this antenna had a pattern
with a peak off the front and some F/B, so it was ACTING like a beam.) If
there is 6 dB of loss, that's roughly 3/4 of the RF being lost, about 1125
watts at the legal limit being dissipated. Certainly not in the traps;
they'd melt in 30 seconds.
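The power bookkeeping above can be checked directly: a 6 dB deficit means only about a quarter of the power shows up where the claimed gain said it would. A short sketch, assuming the 1500 W US legal limit the figure in the text implies:

```python
# If an antenna is 6 dB short of its claimed gain, how much power
# is unaccounted for at the legal limit?
legal_limit_w = 1500.0   # assumed US legal power limit
deficit_db = 6.0         # gain shortfall under discussion

radiated_fraction = 10 ** (-deficit_db / 10)  # ~0.251
lost_fraction = 1 - radiated_fraction         # ~0.749, i.e. about 3/4
lost_watts = legal_limit_w * lost_fraction    # ~1123 W (the post rounds to 1125)

print(f"Fraction unaccounted for: {lost_fraction:.3f}")
print(f"Power unaccounted for at legal limit: {lost_watts:.0f} W")
```

Of course "lost" here need not mean dissipated as heat: with a misbehaving pattern, that power could simply be going out in directions other than the forward lobe, which is consistent with the traps not melting.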
The guys who did these tests are to be commended for raising an extremely
interesting question about the Mosley antenna, a question that has gone
unanswered for many years now. Does anyone even have a theory about it?
TowerTalk mailing list