Oops - I hit the wrong button and sent an early
version of this instead of saving it away for more
work later on. Here's the full message. Sorry.
.....
OK, lots of people agree that a way to rank contesters
would be exciting. There are no good ideas yet on how to
make it predictive of future performance the way a chess
rating is. I sense well-considered disdain for a
handicapping system that pretends to level a very uneven
playing field. Some good ideas are floating around for a
tiered system where you work your way up a ladder through
successive performances. I think we could combine several
objectives with an integrated hierarchy of perks: Event
Performance, Annual Achievement, Lifetime Rank and
Cumulative Achievement. We could implement any or all of
them, depending mainly on publication space, since
compute power is free.
Event Performance Rank
----------------------
For each contest of substance, have a way of awarding
ranking points. Points earned are in no way connected
to contest awards which remain the sole responsibility
of the contest sponsor.
Have a ranking administrator, preferably an organization
or publication, determine the points available for each
event. I would recommend that two or three of the
following factors be used to set available points: number
of submitted logs, number of contacts reported in those
logs, number of hours that the contest runs. Thus, the
points available reflect the "importance" of each
contest in some crude way. They change from year to
year as contest sponsors vie to attract activity - good
competition for them, too.
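To show how little code this really takes, here is a rough
Python sketch of the points-available calculation. The
weights and the example numbers are placeholders only, not
a proposal for specific values:

    # Sketch: points available for an event, derived from a few
    # "importance" factors. The weights here are arbitrary assumptions.
    def points_available(num_logs, total_qsos, contest_hours):
        return round(0.5 * num_logs + 0.01 * total_qsos + 10 * contest_hours)

    # Example: 3000 logs, 1.2 million reported QSOs, 48-hour contest
    # -> 1500 + 12000 + 480 = 13980 points available
    print(points_available(3000, 1_200_000, 48))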
Divide the available points among the high scorers in each
geographic region. Although the suggestion to use grid
locators is intriguing, most people in the world feel a
stronger sense of camaraderie (or competitiveness) toward
their neighbors and co-nationals. So I would suggest that each
country be a region and that "large" countries be subdivided
in some reasonable way. In the US, I'd suggest call areas.
Set up regions so that there is a significant number of entries
from most of them. (Sure, a small, emerging nation may
result in zero or one entrant. But it will only receive
a small number of points, too.) Consider reserving some
of the points for continental scores in WW tests or call
areas in national tests. The algorithm for dividing up
the points among regions is critical and can't be perfect: It
could consider number of entries, total reported QSOs,
total score, highest score, average of the ten highest
scores, etc. Whatever is chosen, entrants from some
countries won't be happy. Just like regular contest scoring.
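Whatever activity measure is chosen, the mechanics of the
split are trivial. A sketch, assuming the measure is the
number of submitted entries per region (any of the measures
above would plug in the same way):

    # Sketch: split an event's points across regions in proportion
    # to an activity measure (entries, QSOs, top-ten average, ...).
    def split_by_region(event_points, activity_by_region):
        total = sum(activity_by_region.values())
        return {region: event_points * activity / total
                for region, activity in activity_by_region.items()}

    # Example with made-up entry counts per region:
    print(split_by_region(1000, {"W1": 400, "F": 250, "VY1": 3}))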
Now divide the available points among the entry categories:
high power, low power, QRP, etc. (I'd love to distinguish
dipoles from stacked monobanders, but the contest sponsors
don't have the data and I'd suggest getting something like
this going without the overhead of separate reporting,
memberships, etc.) Pick a reasonable way to divide the
points: Maybe proportional to the number of entrants or
entrant-hours in each category/region? A low power station
in France could do as well as a high power station in the
Carib.
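The same proportional idea applies one level down. A sketch,
assuming entrant counts per category (entrant-hours would
work the same way):

    # Sketch: divide one region's points among entry categories
    # in proportion to the number of entrants in each.
    def split_by_category(region_points, entrants_by_category):
        total = sum(entrants_by_category.values())
        return {cat: region_points * n / total
                for cat, n in entrants_by_category.items()}

    # Example: 250 points assigned to one region, split among
    # high power, low power and QRP by entrant count.
    print(split_by_category(250, {"HP": 40, "LP": 85, "QRP": 12}))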
Finally, allocate the available points within a category using
a reasonable algorithm. The number of point winners would be
proportional to the available number of points, so coming
in third in the Yukon may be worth zero points, or at least not
be as rewarding as fifth in New England. My favorite way of awarding
points would consider each point winner's percentage of the
winning score in his or her region and category, but there
are plenty of others.
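As one illustration of the percentage-of-winner idea, here
is a sketch. The number of point-winning places and the use
of the raw score ratio as the weight are both assumptions:

    # Sketch: allocate a category's points among the top finishers
    # in proportion to their percentage of the winning score.
    def allocate_within_category(category_points, scores, places=5):
        # scores: {callsign: score}; the highest score wins.
        top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:places]
        winning_score = top[0][1]
        weights = {call: score / winning_score for call, score in top}
        total_weight = sum(weights.values())
        return {call: category_points * w / total_weight
                for call, w in weights.items()}

    # Example with made-up scores in one region/category:
    print(allocate_within_category(100,
          {"AA1A": 1_700_000, "BB2B": 1_650_000, "CC3C": 900_000}))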
(Problem: What about hotly contested regions where there
aren't enough point-winning places to separate 1.7M from
1.65M? An alternative might be to give all the available
original points to the winner in each region/category and
then award additional points (in proportion) for ALL scores
within, say, 20% of the winner.)
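That alternative is just as easy to compute. A sketch,
assuming (arbitrarily) that the bonus pool equals the
original allocation:

    # Sketch: the regional/category winner gets the full original
    # allocation; everyone within 20% of the winning score (winner
    # included) shares a bonus pool in proportion to score.
    def allocate_with_bonus(base_points, scores, window=0.20):
        winner, winning_score = max(scores.items(), key=lambda kv: kv[1])
        close = {c: s for c, s in scores.items()
                 if s >= (1 - window) * winning_score}
        total = sum(close.values())
        result = {c: base_points * s / total for c, s in close.items()}
        result[winner] += base_points   # full original points to the winner
        return result

    # 1.7M and 1.65M now both earn meaningful points:
    print(allocate_with_bonus(100,
          {"AA1A": 1_700_000, "BB2B": 1_650_000, "CC3C": 900_000}))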
Now each participant who does well enough has earned some
points. This is the Event Performance. The algorithm
sounds complex, but it's only a few lines of code.
Annual Achievement: Sort of a Grand Prix - add up event
performance scores for a year, regardless of category and
location, and see who earns the most points.
Lifetime Rank: Establish permanent rankings based
on the sum of an entrant's five or ten top event performances.
If participation and scores inflate, you will have to stay
active to stay at the top of this ladder. But a few years
out of the running won't erase the memory of years of good
scores.
Cumulative Achievement: Just keep a running total of event
performance forever. Only the veterans who participate
actively for a long time will be at the top of this ladder.
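All three ladders are just different aggregations of the same
per-event points. A sketch, assuming results are kept as
(callsign, year, points) records:

    # Sketch: the three ladders as aggregations of per-event points.
    from collections import defaultdict

    def annual(results, year):
        totals = defaultdict(float)
        for call, yr, pts in results:
            if yr == year:
                totals[call] += pts
        return dict(totals)

    def lifetime(results, best_n=10):
        by_call = defaultdict(list)
        for call, _, pts in results:
            by_call[call].append(pts)
        return {call: sum(sorted(pts, reverse=True)[:best_n])
                for call, pts in by_call.items()}

    def cumulative(results):
        totals = defaultdict(float)
        for call, _, pts in results:
            totals[call] += pts
        return dict(totals)

    # Example with made-up records:
    demo = [("AA1A", 1998, 40), ("AA1A", 1999, 55), ("BB2B", 1999, 70)]
    print(annual(demo, 1999), lifetime(demo), cumulative(demo))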
What I like about this:
* You can COMPARE scores from unequal locations.
* You can combine scores from different locations and
categories.
* We could back-test it against historical data. Even
establish it retroactively based on published results.
* It recognizes good results at the event, annual and lifetime
levels.
* It recognizes sustained results over a long period of time.
* The various lists are all based on a single metric that is
computed objectively and driven by participation.
73,
/Rick Tavan N6XI
tavan@tss.com