Improving Contesting

tekbspa!tavan at uunet.UU.NET
Tue May 25 09:59:21 EDT 1993


Lots of good ideas are floating around about contest equalization
(a bad idea), handicapping, ranking, station classification,
skill vs. geography, etc.  Let's try to define what we want to
do:

  Measure contester skill
  Predict outcome of future competitions
  Recognize superior performance
  Recognize hard work and cumulative achievement
  Make everyone win or feel like a winner
  Attract more contesters
  Make it "fair" 
  Equalize all competitors

Measure contester skill
  The best suggestion I have seen is to normalize scores of
  similar stations in a given locale and award points based on
  percentage of the highest score.  This is similar to the
  NASTAR method of measuring amateur ski racers against a
  pro who sets a baseline for a given course on a given day.
  NASTAR events stand alone - you get a medal based on your
  comparison to the pacesetter.  We could do likewise in radio.
  For example, Hi/Lo/QRP for each ARRL Division would be a
  reasonable breakdown for SS.  The winner could get 100 points
  and everyone else in that power and division gets a percentage
  less than 100.  The rating could be a percentile among
  competitors or a percentage of the top score.  In either case, a Low
  Power Rating of 87 in New England would be vaguely similar
  to an 87 in Pacific.  If we averaged these performance
  measures over many contests, or many runs of the same
  contest, we might get a useful metric.  e.g. "I average
  about 83 in SS but only 65 in CQWW."  (But we will continue
  to hear "It's not fair - my division is highly competitive in SS 
  but one big gun skews the scores in CQWW....")  
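
The normalization idea above is easy to sketch in code.  Here is a
minimal Python illustration; the calls and scores are invented for
the example, not taken from any actual results:

```python
# Minimal sketch of the percentage-of-pacesetter idea described
# above.  Calls and scores are invented for illustration.

def percent_ratings(scores):
    """Map each score to a 0-100 rating relative to the top score
    in the same division and power class."""
    top = max(scores.values())
    return {call: round(100 * s / top, 1) for call, s in scores.items()}

# Hypothetical Low Power results for one ARRL Division:
new_england_lp = {"K1AA": 152_000, "W1BB": 132_240, "N1CC": 76_000}
print(percent_ratings(new_england_lp))
# {'K1AA': 100.0, 'W1BB': 87.0, 'N1CC': 50.0}
```

Averaging such ratings over many contests gives the per-entrant
metric described above ("I average about 83 in SS...").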

Predict outcome of future competitions
  There are many ways to do this.  It would be nice to have a ranking
  system such as that used in chess (the "Elo System" invented
  by Prof. Arpad Elo).  That system is designed for one-on-one
  competitions.  Each player has a rating.  When two players
  compete, the difference between their ratings determines the
  number of points at risk.  If you beat a much stronger player,
  your rating goes up by the maximum and his or hers goes down by
  the same amount; a draw exchanges fewer points.  The exchange
  for an upset shrinks as the rating gap narrows, so equals trade
  only a moderate number of points for a decisive result and none
  for a draw.  After an initial period of settling in for a new player
  in the field, ratings tend to move slowly unless a player is
  really improving or declining (both happen).  This system
  has the advantage of adapting to the level of the competition.
  It is an extraordinary predictor of results over extended
  competitions (many games, many players in a tournament).
  To adapt it to radio, with all of its inherent variables and
  multiple concurrent competitors, would be difficult.  I suspect that
  the percentage/percentile of pacesetter score described above
  would be a fair predictor of results in subsequent events entered from
  similar stations, but far less precise than Elo in chess.
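
For the curious, the core Elo update is only a few lines.  This is
the standard formula; the K-factor of 32 is a common choice of mine,
not something from the text:

```python
# Minimal sketch of a standard Elo update for one game.

def elo_update(r_a, r_b, score_a, k=32):
    """Return new ratings for A and B after A scores
    1 (win), 0.5 (draw), or 0 (loss)."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# An 1850 beats a 2200: the underdog gains nearly the full K.
print(elo_update(1850, 2200, 1))
# Equals exchange nothing for a draw:
print(elo_update(1600, 1600, 0.5))  # (1600.0, 1600.0)
```

Adapting this pairwise exchange to a pileup of hundreds of
simultaneous competitors is exactly the hard part noted above.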

Recognize superior performance
  Today's system works pretty well - just publish the results
  with top ten boxes and regional listings.  I'd like to see
  a top ten in each division and power category in ARRL contests 
  or each US call area in CQWW.  Since magazine space is at a
  premium, maybe NCJ could publish such supplementary lists.  Or
  the contest reflector.  The chess people have a supplement to
  the Elo system with which they compute a "performance rating"
  for an individual tournament.  Thus a player with a rating
  of 1850 might score 2200 in a particular event.  He has done
  very well indeed, even though his personal rating may have
  risen only to 1880 or so. 
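
A rough sketch of how such a single-event performance rating can be
computed: the widely used linear approximation below is my addition
(the official Elo tables differ slightly), with invented numbers:

```python
# Common linear approximation of a chess performance rating for
# one tournament: average opponent rating plus 400 times the
# win-loss margin per game.  Not taken from the text.

def performance_rating(opponent_ratings, wins, losses):
    games = len(opponent_ratings)
    avg_opp = sum(opponent_ratings) / games
    return avg_opp + 400 * (wins - losses) / games

# Sweeping four 2100-rated opponents is a 2500 performance:
print(performance_rating([2100] * 4, wins=4, losses=0))  # 2500.0
```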

Recognize hard work and cumulative achievement
  The bridge world does very well with this one:  Each tournament
  event has a certain number of Master Points to award based on
  its size and entry criteria.  The points are distributed to the
  winners.  I think you accumulate points indefinitely, so this
  system rewards active players much more than inactive but 
  stronger players.  To compensate somewhat, they have special
  color points which you can win only against tough competition.
  We could come up with a similar metric by summing percentage/
  percentile scores instead of averaging them.
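
The distinction is simple but worth making concrete.  With invented
percentage ratings for one entrant:

```python
# Summing vs. averaging per-event ratings: averaging measures
# skill per event, summing rewards cumulative activity (like
# bridge Master Points).  Ratings below are invented.

event_ratings = [87.0, 91.5, 78.0, 95.0]  # one entrant, four contests

skill_metric = sum(event_ratings) / len(event_ratings)  # rewards quality
activity_metric = sum(event_ratings)                    # rewards quantity

print(round(skill_metric, 1), activity_metric)  # 87.9 351.5
```

An inactive big gun keeps a high average but a casual regular can
out-accumulate him, which is the point of this kind of award.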

Make everyone win or feel like a winner
  The best suggestions I've heard on the reflector have been 
  to invent more categories to create more winners or to
  award various "most improved" citations.  Someone observed
  (correctly, I believe) that some of us are motivated by
  winning and others by self-improvement.  Perhaps better
  journalism would encourage both types.  Let's hear some
  comments in the text about special efforts, improvements,
  particularly competitive situations, etc.  Not the current
  pablum of prose renditions of top ten boxes and pages of
  nearly identical, zero-information fine-print comments.

Attract more contesters
  More articles on how to do it, how much fun it is, operating
  tactics, station design, interviews with famous contesters,
  stories about successful newcomers, participation awards, etc.
  Contest sponsors need to do more marketing.

Make it "fair"
  Mainline contesting is not fair, never will be, and is particularly
  interesting because of the very variables that make it unfair.
  Such is life.  Long live unfair contesting.  Measurement of
  results in creative ways can overcome some of the inequalities
  and should be pursued as a marketing strategy, not as a holy
  quest. 

Equalize all competitors
  Good heavens, no!  Handicaps in golf, bowling, and Go are there
  to make it POSSIBLE for unequals to enjoy a game together.  Without
  them, pros and amateurs can't possibly participate in the same
  event.  But the "real" events are among peers and are the only
  ones that prove anything about individual ability.  In radiosport,
  K1DG and ZZ9ZZZ can QSO regardless of their respective experience
  and ability and both profit from the experience.  The only kind of
  equalizing I'd support would be in a long-term ranking system
  whose purpose is measurement, not equalization.

73,  

/Rick Tavan  N6XI
tavan at tss.com


