[TowerTalk] Active phased arrays.

Jim Lux jimlux at earthlink.net
Mon Mar 7 00:22:46 EST 2005


  ----- Original Message ----- 
  From: Dudley Chapman 
  To: 'Tom Rauch' ; towertalk at contesting.com 
  Cc: richard at karlquist.com ; jimlux at earthlink.com 
  Sent: Sunday, March 06, 2005 8:32 PM
  Subject: RE: [TowerTalk] Active phased arrays.


   

  Tom wrote....

   

  All you'd have is a fancy MFJ-1025, and all the same limitations that apply to an MFJ-1025 would apply to the DSP system.

  You can't separate them and process them separately with a phasing system. You can't null or subtract noise without affecting desired signals from the same direction in the same way. You can't null strong signals from a given direction without also nulling desired signals from the same direction. You can't null a signal without creating a response change in other directions, and you can't create a response peak without creating a null.

  You are stuck with whatever patterns you can create with the element spacing and element locations you have.

  73 Tom

   

  Tom,

     Actually, there is a difference between this and a single MFJ-1025.  Each element's signal is phase- and amplitude-adjusted in the DSP as a complex signal before the combining is done.  The key point is that all the computing is done on both the I and Q parts of the signal from each element, which gives you more options when beamforming nulls.
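The complex-weight idea can be sketched numerically. The following is a minimal illustration (not anything from the original posts): a hypothetical 4-element linear array where each element's I/Q signal gets one complex weight, and a null is steered toward an interferer by projecting that direction out of the weight vector. Element count, spacing, frequency, and angles are all illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
f = 7.1e6                  # assumed operating frequency (40 m band)
lam = C / f
positions = np.arange(4) * lam / 4   # 4 elements, quarter-wave spacing

def steering_vector(theta_deg):
    """Complex (I/Q) response of each element to a plane wave from theta."""
    theta = np.radians(theta_deg)
    return np.exp(2j * np.pi * positions * np.sin(theta) / lam)

def array_gain(w, theta_deg):
    """|w^H a(theta)|: magnitude of the combined (beamformed) response."""
    return abs(np.vdot(w, steering_vector(theta_deg)))

def null_toward(w, theta_deg):
    """Subtract the projection of w onto the interferer's steering vector,
    forcing the combined response at theta_deg to exactly zero."""
    a = steering_vector(theta_deg)
    return w - (np.vdot(a, w) / np.vdot(a, a)) * a

w = steering_vector(0.0) / 4    # unity-gain beam toward broadside (0 deg)
w_null = null_toward(w, 40.0)   # same beam, with a hard null at 40 deg
```

Because each weight acts on both I and Q, the null is placed by adjusting phase and amplitude together, which is exactly the extra freedom a single delay-and-combine box doesn't give you (the desired-direction gain drops somewhat, but does not vanish).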

   

   

  Jim,

     I think you see what I was getting at with the current-source TX amps and the RX buffer both at the feedpoint of each element.  That way the phasing is completely determined by the DSP and the feedline lengths, for both TX and RX.
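The feedline contribution to the phasing is deterministic and easy to compute. A small sketch (assuming a velocity factor of 0.66, typical of solid-dielectric coax such as RG-213; the specific numbers are illustrative, not from the posts):

```python
C = 299_792_458.0  # speed of light, m/s

def feedline_phase_deg(length_m, freq_hz, velocity_factor=0.66):
    """Phase delay (degrees, mod 360) a signal accumulates along a feedline.

    A velocity factor near 0.66 is typical of solid-dielectric coax; the
    DSP would fold this per-element delay into its beamforming weights
    (or the lines would simply be cut to equal length).
    """
    electrical_length_m = length_m / velocity_factor
    return (360.0 * freq_hz * electrical_length_m / C) % 360.0

# A physical quarter-wave line (vf * free-space wavelength / 4) delays by 90 degrees:
quarter_wave_m = 0.66 * (C / 7.1e6) / 4
```

Either the lines are made equal so the term cancels, or each line's computed phase is absorbed into that element's complex weight.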

   

     I am probably way off in thinking that I can reduce mutual coupling very much with non-resonant elements.  It does make the system simpler, though, as long as the amps can handle the reactive loads.

   

     As for efficiency, there is still the problem of copper and ground losses.  In my economy, I would still consider those losses after the output, not before.  In other words, I would consider it a QRP system if I had 5 watts going into Rrad and Rloss, instead of running more power so that I have 5 watts into Rrad alone.  I think the latter is what you were suggesting in your original post.  That's the way I would run it, but I can see the other argument, too.  Beyond that, I don't care if it takes 25 watts of power-supply power to get a clean 5 watts distributed into the reactive elements.

   

     Anyway, it's easy to see that this is the dream of a ham turned software guy, isn't it?  There are about 10 analog components and the rest is software.



Well.. that's precisely the value of the approach.  Moore's law means that the computation part (the DSP) will always be getting cheaper, lower power, etc.  The RF parts tend to stay pretty much constant in cost (at a given RF performance level), and what variation there is tends to be slow (years or decades).  This pushes toward increasing digital processing in any system.



If you think about a sort of generic system, you have some signal processing that needs to be done. You can do it with all analog components, or with part analog and part digital. Today, the optimum split might be mostly analog, but once you bite the bullet and go more digital, your system improves much more quickly.



Some 15 years ago, I was working with a variety of systems that essentially were real time spectrum analyzers that looked at all frequencies (in the band) all the time (as opposed to sweeping, like usual spectrum analyzers do).  There were three basic approaches:

- optical using a Bragg Cell and a laser to do a Fourier Transform on a CCD line sensor

- all analog, using a dispersive delay line in a "microscan compressive receiver"

- mostly digital, feeding the digitized IF into a pipelined FFT processor.

To put things in perspective, we're looking at doing a power spectrum somewhat faster than a millisecond with 10MHz bandwidth and sub-10 kHz resolution. All approaches had about the same signal processing performance, in terms of instantaneous dynamic range, etc.
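Those numbers are modest by today's standards: covering 10 MHz with sub-10 kHz bins in under a millisecond only needs a ~2048-point FFT per frame. A rough sketch of the digital approach's arithmetic (sample rate and FFT length are illustrative assumptions, not the actual system's parameters):

```python
import numpy as np

fs = 10e6   # assumed complex sample rate covering the 10 MHz band
N = 2048    # FFT length

resolution_hz = fs / N   # ~4.9 kHz per bin, under the 10 kHz target
frame_time_s = N / fs    # ~205 microseconds per spectrum, well under 1 ms

# One frame of a test tone at 2.5 MHz; the power spectrum peaks in the
# bin corresponding to that frequency (2.5 MHz / resolution = bin 512).
t = np.arange(N) / fs
tone = np.exp(2j * np.pi * 2.5e6 * t)
power_spectrum = np.abs(np.fft.fft(tone)) ** 2 / N**2
peak_bin = int(np.argmax(power_spectrum))
```

The pipelined-FFT hardware of the era computed exactly this transform continuously in dedicated logic; the point of the comparison is that the same computation now fits trivially in one cheap chip.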



At that time, the all digital approach was the biggest, highest power, and most complex, by a significant margin.  However, within a very few years, what required a stack of BIG boards full of ICs (Analog Devices digital multipliers, RAMs, Counters, etc.) turned into a few smaller ICs, and now, you could probably do it all (and then some) in a single inexpensive chip.  



The really interesting thing is that the optical and compressive-receiver approaches were essentially limited in performance by the physics of the devices - no matter how much you spent, you weren't going to get hugely better performance: maybe 3 dB, or, if you were really lucky and spent a huge amount of money, maybe 10 dB.  On the other hand, the digital approach could keep getting better. A factor of 10 improvement in speed is nothing (a couple of years). A factor of 10 improvement in A/Ds, perhaps 5 years, and so it goes.  Not only that, but with the digital approach, you could use the digital output stream for other purposes (like feeding a demodulator, if you wanted to demodulate a signal you found in the spectrum analysis stage).  And if you wanted to store the signals for some small amount of time, RAM was getting ever cheaper, but bulk delay lines (the competing technology) weren't.

 

Today, there'd be absolutely no question which way to go. The digital approach is orders of magnitude higher performance, cheaper, smaller, lower power, etc.

   

  Dudley - WA1X


