> > As evidenced by the published test figures on, for example, the
> > TS2000. Interesting, since modern MOSFET PA's are supposedly very
> > linear.
> The TS2000 is not a good example. It is exceptionally poor. It is
> one step removed from class C performance.
Has anyone disconnected the exciter stages of a TS2000 from the main
PA and measured them separately? Is it a case of a good amplifier raising
the levels of cr*p drive, the other way round, or both?
There is also the real possibility that the basic RF design is considerably
better than is ever realised, due to a total lack of proper setting up in the
factory... it reminds me of the first generation Philips CD players (CD-104)
which used a 14-bit Burr-Brown DAC. The designer had included high-quality
20-turn potentiometers in the design, as per the Burr-Brown application
notes - these allowed for adjustment of the DAC and setting (minimising)
offsets and jitter/errors - the only problem was the factory stuffed them into
the board as they came (set halfway), and there was no alignment procedure
for them :-(
> We have to remember a two-tone test does not show IMD caused
> by poor dynamic power supply regulation.
> Also AB1 or 2 tetrodes get nasty FAST compared to a GG triode
> when overdriven or misadjusted.
> > If the linear at full rated power is capable of -30dB rel PEP IMD
> > products, then the total IMD power is likely to be (neglecting phasing
> > and compensating distortion effects) 27dB down on PEP.
> That's right, IMD can add. So the PA must be many times cleaner
> than the exciter to not make things worse.
> > case the transceiver) needs to be a lot better. That suggests that
> > either it needs to run at a lower power output, demanding more gain
> > from the amplifier, or needs to be 10 to 20dB better on IMD at the
> > rated output. Many transceivers probably need to run around the 25
> > watt level rather than the 100 watt level for acceptable overall IMD.
> > However, Part 97.317 (a) 3 of the FCC rules gives us a problem, as
> > it requires a 50 watt (mean) drive minimum for an amplifier.
Can someone explain the FCC Rules in this respect... why do they specify
the driving power of an amplifier at all? Does this mean that I cannot build
a really good amp using, say, a pair of MRF154/MRF157s because it will
require too little RF drive?
> Reducing drive power from an exciter does not always make things
> better. Quite often it actually makes things worse.
Guess it depends where on the transfer curve you are for the device. For
example, I have a "so called" Microwave Modules 144MHz 100W linear
amplifier - it uses a single SD1477 (and over-simple diode biasing).
12W of drive with this amplifier would give 100W output, but there was
no way it was "linear" - it was too efficient. I moved a couple of the
caps and re-tuned it; it's (slightly) less efficient, much more linear, and
only produces a maximum of 85W output.
I have plotted its transfer characteristic with plain, unmodulated RF
and it's very good (well, a straight line on my graph paper) from a couple of
watts to around 75W out, then it starts to go into gain compression. I use it
in contests to drive two 3CX800 amplifiers via a Wilkinson splitter and
it performs very well. I'm probably using it at something like 50W output
with splitter and cable losses, and that's right in the middle of the straight
part of its line.
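A quick way to see where that straight portion ends is to compute the gain at each measured drive/output point and watch for it falling away from the small-signal value. The drive/output pairs below are hypothetical numbers loosely matching the amplifier described, not actual measurements.

```python
import math

# Hypothetical drive (W) / output (W) pairs for an amplifier that is
# roughly linear up to ~75 W out and then compresses, as described above.
points = [(0.25, 2.0), (1.0, 8.0), (4.0, 32.0), (9.0, 70.0), (12.0, 85.0)]

# Take the lowest-drive point as the small-signal gain reference (~9 dB).
small_signal_gain = 10 * math.log10(points[0][1] / points[0][0])

for p_in, p_out in points:
    gain = 10 * math.log10(p_out / p_in)
    compression = small_signal_gain - gain  # dB of gain compression
    print(f"{p_in:5.2f} W in -> {p_out:5.1f} W out, "
          f"gain {gain:4.2f} dB, compression {compression:4.2f} dB")
```

On numbers like these the gain stays flat into the 70 W region and then drops by roughly half a dB at full output, which is the sort of knee the graph paper shows.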
As far as tuning GG amplifiers goes, with the 3CX800 amps on 144MHz (Command
Technologies Commander-IIs in my case) I tune for maximum output,
keeping an eye on the grid current, then increase the loading by perhaps
1% (or less), which causes a substantial drop in grid current (say from 20mA
to 6-7mA) and perhaps a 10-15W drop in output power; then I reduce the
drive into the transverter by 10%. OK, so I don't get the last 80-90W to
air, but I have a clean station and my neighbours appreciate it.
FAQ on WWW: http://www.contesting.com/FAQ/amps
Administrative requests: amps-REQUEST@contesting.com