[Amps] Low pass filters and saturated operation

Manfred Mornhinweg manfred at ludens.cl
Mon Jan 1 09:39:25 EST 2018


Hi all,

after my rather pointless post moments ago about the ICAS-CCS debate, 
here is a somewhat more meaty year-starter for those of you who do enjoy 
hands-on amplifier building.

I'm currently developing an SDR that uses an MRF1K50N in the final 
stage. My intention is to run this in Envelope Tracking (ET) mode, 
with a linear class-AB driver stage and the final stage running in 
saturated class-AB. I intend to use a bank of relay-switched 
series-input low-pass filters, because such filters offer a high 
impedance to harmonics. The LDMOSFET, running in saturation, behaves 
very nearly like a voltage source, producing a voltage waveform 
approaching a square wave, while the series-input low-pass filter 
draws a roughly sine-shaped current. So this could be considered a 
form of class-F operation.
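
To put numbers on that, here is a quick Python sketch of the input 
impedance such a filter presents. Everything in it is illustrative 
rather than taken from my actual design: a 5th-order Butterworth with 
the series element at the input and an assumed 10 MHz cutoff, standing 
in for a 40m filter:

import numpy as np

R = 50.0     # system impedance, ohms
fc = 10e6    # assumed cutoff, Hz (illustrative value for a 40m filter)
n = 5        # filter order, series element first

# Butterworth lowpass prototype values: g_k = 2*sin((2k-1)*pi/(2n))
g = [2*np.sin((2*k - 1)*np.pi/(2*n)) for k in range(1, n + 1)]

# Denormalize: odd elements are series inductors, even are shunt caps
elems = []
for k, gk in enumerate(g, start=1):
    if k % 2:
        elems.append(('L', gk * R / (2*np.pi*fc)))   # henry
    else:
        elems.append(('C', gk / (R * 2*np.pi*fc)))   # farad

def z_in(f):
    """Ladder the network back from the 50-ohm load to the input."""
    w = 2*np.pi*f
    z = complex(R)
    for kind, val in reversed(elems):
        if kind == 'L':
            z += 1j*w*val             # series inductor
        else:
            z = 1/(1/z + 1j*w*val)    # shunt capacitor
    return z

f0 = 7.1e6
for h in (1, 2, 3, 4):
    z = z_in(h*f0)
    print(f"{h} x f0 = {h*f0/1e6:5.1f} MHz:  "
          f"Zin = {z.real:6.1f} {z.imag:+6.1f}j ohm")

At the fundamental the input is a reasonable match; at the harmonics 
the resistive part collapses to well under an ohm, so the filter looks 
nearly purely reactive there and passes essentially no harmonic power 
on to the antenna.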

At this time I have a test transmitter up and running, using just the 
driver FETs (2x RD16HHF1) as an experimental ET final stage. It works 
very well, producing roughly 70% efficiency over the complete SSB 
envelope, compared to 20% or so when running in linear class-AB. At the 
same time the IMD performance is way better than in linear class-AB 
mode! Something like -45 dB for IMD3, and dropping from there.

The RD16HHF1 has a comparatively high RDS(on). The MRF1K50N is better in 
this regard, relative to its voltage and current ratings, so I expect 
something closer to 80% efficiency from it - and that's consistent with 
information given in its datasheet.
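
A very crude way to see the RDS(on) effect: treat the saturated FET 
as an ideal switch with RDS(on) in series with the drain load 
resistance, so the conduction efficiency is just Rload/(Rload + RDSon). 
All the numbers below are assumed ballpark figures for the two device 
classes, not datasheet values:

# Crude conduction-loss model: a saturated FET as an ideal switch with
# RDS(on) in series with the drain load resistance. Every number here
# is an assumed ballpark figure, not a datasheet value.

def eta_conduction(r_load, r_on):
    """Fraction of power that reaches the load instead of heating R_on."""
    return r_load / (r_load + r_on)

# RD16HHF1-class device: ~12.5 V supply -> a few ohms of drain load,
# with an RDS(on) that is a sizable fraction of it.
print(eta_conduction(r_load=5.0, r_on=2.0))    # ~0.71

# MRF1K50N-class device: ~50 V supply, RDS(on) far smaller vs. load.
print(eta_conduction(r_load=1.7, r_on=0.2))    # ~0.89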

But I foresee one problem, about which I would like to hear any good ideas:

While a series-input low-pass filter has a high impedance at all 
harmonics, any line length between the MOSFETs and the filter will 
introduce impedance transformations, so that the MOSFET drains will 
see a different load impedance at each harmonic of each band. With bad 
luck, on some harmonic of some band the load impedance might even be a 
dead short. And that worries me.
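
The mechanism, in numbers: a lossless line of characteristic impedance 
Z0 and electrical length theta transforms the filter's stopband 
impedance ZL into Zin = Z0*(ZL + j*Z0*tan(theta))/(Z0 + j*ZL*tan(theta)). 
A short sketch, assuming the filter presents 1000 ohms at some 
harmonic, shows how that rotates toward a short as the line approaches 
a quarter wave:

import numpy as np

# Lossless-line transformation of the load seen through a line of
# electrical length theta:
#   Zin = Z0 * (ZL + j*Z0*tan(theta)) / (Z0 + j*ZL*tan(theta))

def z_seen(z_load, z0, theta_deg):
    t = np.tan(np.radians(theta_deg))
    return z0 * (z_load + 1j*z0*t) / (z0 + 1j*z_load*t)

z_stop = 1000 + 0j    # assumed filter stopband impedance at a harmonic
for theta in (10, 45, 80, 90):
    z = z_seen(z_stop, z0=50, theta_deg=theta)
    print(f"{theta:3d} deg of line: drains see |Z| = {abs(z):7.1f} ohm")

At exactly a quarter wave the line inverts the load: the assumed 
1 kilohm becomes Z0^2/ZL = 2.5 ohms right at the drains - effectively 
that dead short.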

My RD16 test amplifier is quite immune to this, because it has a very 
small conventional output transformer. The total wire length there is 
so short that it introduces no problematic impedance transformation at 
harmonic frequencies. But the legal-limit LDMOSFET version will 
necessarily have much larger transmission-line transformers, and the 
undesirable impedance transformation there will be highly significant. 
Assuming a total cable length of 70cm from the LDMOSFET drains to the 
low-pass filter, that's an electrical length of roughly 1 meter, which 
will introduce significant impedance transformations at all 
frequencies from about 30MHz up. That means all harmonics of all bands 
from 30m up will be affected, while the lower bands will at least have 
a few of their most powerful, low-order harmonics exempt from this 
problem.
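
Here is a quick scan of where the trouble spots land, assuming 70cm of 
physical line with a velocity factor of 0.7 (hence roughly 1 m 
electrical), flagging the harmonics that fall near a quarter wave, 
where the filter's high stopband impedance rotates toward a short:

c = 3e8            # free-space speed of light, m/s
length = 0.70      # physical line length, metres (assumed)
v_f = 0.7          # assumed cable velocity factor -> ~1 m electrical

bands = {'80m': 3.6e6, '40m': 7.1e6, '30m': 10.1e6, '20m': 14.2e6,
         '17m': 18.1e6, '15m': 21.2e6, '12m': 24.9e6, '10m': 28.5e6}

for band, f0 in bands.items():
    for h in (2, 3, 4, 5):
        f = h * f0
        theta = 360 * length * f / (c * v_f)   # electrical length, deg
        near_quarter = 70 <= theta % 180 <= 110
        flag = '  <-- near quarter wave!' if near_quarter else ''
        print(f"{band:>4} H{h}: {f/1e6:6.1f} MHz, "
              f"line = {theta:6.1f} deg{flag}")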

So, do any of you know how bad the effect of this will be? How much 
efficiency reduction will it cause? How much risk is there of blowing 
up the expensive LDMOSFET? (I bought just one, and don't want to 
destroy it while experimenting!)

And does anybody have a good idea of how to manage this? Maybe by 
empirically tuning each low-pass filter? That would absolutely require 
one filter per band, rather than sharing filters between neighboring 
bands.

Or maybe somebody has a good idea about how to match the LDMOSFET's 
drain impedance to 50 ohms while using the least possible wire length? 
It must be a broadband solution, of course, because bandswitching 
directly at the drains isn't feasible, given their low impedance.

Manfred


========================
Visit my hobby homepage!
http://ludens.cl
========================

