CQ-Contest

Re: [CQ-Contest] Log coordination server for distributed operations.

To: "Martin , LU5DX" <lu5dx@lucg.com.ar>
Subject: Re: [CQ-Contest] Log coordination server for distributed operations.
From: Jukka Klemola <jpklemola@gmail.com>
Date: Mon, 11 Jul 2016 21:44:02 +0300
List-post: <mailto:cq-contest@contesting.com>
Trying to suggest re-using an already invented wheel:
how about sending a spot for each QSO, with a commonly agreed comment
such as "QSO:" or something similar?

Minimum effort, maximum result.
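
For example (a rough sketch only; the cluster node, port, login
handshake and comment layout below are placeholders, nothing agreed
anywhere), a logger could push each QSO to the shared node like this:

    # Sketch: push one QSO to a DX cluster node as a spot whose
    # comment carries the QSO data. Host, port, login and comment
    # layout are invented here purely for illustration.
    import socket

    def send_qso_spot(host, port, mycall, freq_khz, dx_call, comment):
        with socket.create_connection((host, port), timeout=10) as s:
            s.sendall((mycall + "\n").encode())   # simple cluster login
            spot = "DX %.1f %s %s\n" % (freq_khz, dx_call, comment)
            s.sendall(spot.encode())

    # the comment "QSO: 1203Z 599 27" marks time and exchange of the contact
    send_qso_spot("cluster.example.net", 7300, "OH6LI",
                  14025.0, "K1ABC", "QSO: 1203Z 599 27")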


73,
Jukka OH6LI


2016-07-11 17:47 GMT+03:00 Martin, LU5DX <lu5dx@lucg.com.ar>:

> Hi guys,
> I've heard several comments about HQ stations having log syncing
> problems over the weekend during the IARU HF Championship.
>
> I wonder if a standard Log Coordination Protocol would be a good idea.
>
> This way all clients would point to just one (high availability) server via
> TCP or HTTP (I don't think it would hurt to use http).
>
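(For illustration only: a client could report each QSO to such a server
with a plain HTTP POST, something like the sketch below. The endpoint
and JSON field names are invented, not part of any existing protocol.)

    # Sketch: report one QSO to a hypothetical coordination server
    # over HTTP. URL path and field names are placeholders.
    import json
    import urllib.request

    def post_qso(server_url, qso):
        data = json.dumps(qso).encode("utf-8")
        req = urllib.request.Request(
            server_url + "/qso", data=data,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status   # 2xx = accepted by the coordinating log

    post_qso("http://coord.example.org", {
        "station": "HQ-40CW", "ts": "2016-07-09T12:03:00Z",
        "freq": 7025.0, "mode": "CW", "call": "K1ABC",
        "sent": "599 27", "rcvd": "599 8"})
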
> The Coordinating Log would be just that: a log, with some additional
> features to facilitate such a task.
>
> Either a tiny portable DB (SQLite, for instance) or even a flat file
> plus some additional syncing software could be used for that purpose.
>
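(As a sketch of what such a tiny DB could look like: a single table
plus a uniqueness constraint, so a QSO that gets re-sent after a
network hiccup is simply ignored. Table and column names are invented.)

    # Sketch: a minimal SQLite "coordinating log". The UNIQUE
    # constraint makes re-sending QSOs safe; duplicates are ignored.
    import sqlite3

    db = sqlite3.connect("coordination_log.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS qsos (
            station TEXT,   -- which operating position logged it
            ts      TEXT,   -- UTC timestamp of the QSO
            freq    REAL,
            mode    TEXT,
            call    TEXT,
            exch_s  TEXT,   -- exchange sent
            exch_r  TEXT,   -- exchange received
            UNIQUE (station, ts, call)
        )""")
    db.execute("INSERT OR IGNORE INTO qsos VALUES (?,?,?,?,?,?,?)",
               ("HQ-40CW", "2016-07-09T12:03:00Z", 7025.0, "CW",
                "K1ABC", "599 27", "599 8"))
    db.commit()
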
> This would reduce traffic between the stations and make it more efficient.
>
> To eliminate a single point of failure (SPOF), the coordinating server
> could have real-time replicas in different availability zones. This is
> easy to do with most cloud service providers.
>
> What would be really nice is to come to an agreement about the coordination
> log format so that it wouldn't matter if you were running different logging
> programs during a contest.
>
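(One candidate, purely as an example: reuse Cabrillo-style QSO: lines,
which every contest logger can already write, and prefix each line with
a tag for the sending station, e.g.

    HQ-40CW  QSO:  7025 CW 2016-07-09 1203 OH2HQ 599 SRAL K1ABC 599 8

with the exact tag and field order being whatever gets agreed on.)
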
> Just an idea.
>
> 73,
>
> Martin
_______________________________________________
CQ-Contest mailing list
CQ-Contest@contesting.com
http://lists.contesting.com/mailman/listinfo/cq-contest
