[aprssig] APRS CSMA Settings

Robert Bruninga bruninga at usna.edu
Mon May 19 20:20:10 EDT 2014

The biggest difference between what you read about the original ALOHA
network and APRS is that the original ALOHA assumes a "connected" network
with RETRIES for failed packets.

This does not apply to APRS at all.  APRS never retries (other than simply
waiting for the next beacon repeat... 10 minutes for local direct packets,
and 30 minutes for 2 hop fixed beacons).

So, my extrapolation of the 18% optimum ALOHA channel loading is just
that... an extrapolation of apples to oranges.  The 36% (slotted ALOHA)
figure is closer for digipeaters, since they approximate CSMA by being
able to hear most of the local users.  But when they transmit, they use a
D-wait of 0 so that they beat all the other users to the channel.   But I
am very glad to see that you are undertaking an academic study.  I
welcome the results.
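For reference, the 18% and 36% figures come from the classic ALOHA throughput curves, S = G*e^(-2G) for pure ALOHA and S = G*e^(-G) for slotted ALOHA, both assuming Poisson packet arrivals. A quick sanity check of the peaks:

```python
import math

def pure_aloha_throughput(g):
    """Throughput S of a pure ALOHA channel at offered load G (Poisson arrivals)."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    """Throughput S of a slotted ALOHA channel at offered load G."""
    return g * math.exp(-g)

# Pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1.0
print(pure_aloha_throughput(0.5))    # 1/(2e) ≈ 0.184
print(slotted_aloha_throughput(1.0)) # 1/e   ≈ 0.368
```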

> Reading the archives, it seems that Bob et al. usually promote no
> CSMA (algorithm 1) for digipeaters as well, due to:...

True, but only for a very high digi that can never hear silence (digis on
balloons and aircraft, or ones overlooking the Los Angeles basin).
Otherwise, not true.  The digis should do CSMA, but their DWAIT should be
zero.
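As a sketch of that rule (the function and parameter names here are mine, not from any TNC firmware):

```python
def may_transmit(carrier_detected: bool, quiet_slots_seen: int,
                 dwait_slots: int = 0) -> bool:
    """Sketch of DWAIT-style channel access: a station may key up only
    after the channel has been quiet for `dwait_slots` consecutive slot
    times.  With dwait_slots = 0 (the digipeater setting described above),
    the digi still does carrier sense but transmits the instant the
    carrier drops, beating user stations still counting down their DWAIT."""
    return (not carrier_detected) and quiet_slots_seen >= dwait_slots
```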

-----Original Message-----
From: aprssig-bounces at tapr.org [mailto:aprssig-bounces at tapr.org] On Behalf
Of Kenneth Finnegan
Sent: Monday, May 19, 2014 3:53 PM
To: TAPR APRS Mailing List
Cc: Bridget Gwenith M. Benson
Subject: [aprssig] APRS CSMA Settings

My apologies for digging up all of our skeletons today.

Reading the APRS specs, I haven't seen any mention of the layer 1
channel access behavior other than that APRS is based on the ALOHAnet
work from the 1970s. As far as I can tell, the competing CSMA
behaviors are:

1. None. Transmit as soon as you have traffic
1a. Time slotting. Transmit during your allocated time slot (seconds
since top of hour modulus beacon interval).
2. DWait. Wait for a specific period of quiet time before transmitting
any pending traffic
3. P-persistent. Randomly transmit during a quiet time slot.

Deaf trackers obviously must follow the first algorithm.
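A minimal sketch of algorithms 1a and 3, assuming UTC-aligned hours; the slot offset, interval, and p value are illustrative assumptions, not values from any spec:

```python
import random
import time

BEACON_INTERVAL = 600  # seconds; example interval, not a spec value
SLOT_OFFSET = 42       # this station's assumed slot within the interval
P_PERSIST = 0.25       # illustrative transmit probability per quiet slot

def in_my_time_slot(now=None):
    """Algorithm 1a: transmit only during the allotted second of each
    interval, counted from the top of the (UTC) hour."""
    t = time.gmtime(time.time() if now is None else now)
    seconds_past_hour = t.tm_min * 60 + t.tm_sec
    return seconds_past_hour % BEACON_INTERVAL == SLOT_OFFSET

def should_transmit_p_persistent(channel_quiet):
    """Algorithm 3: during a quiet slot, transmit with probability p;
    otherwise defer to the next slot."""
    return channel_quiet and random.random() < P_PERSIST
```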

Reading the archives, it seems that Bob et al. usually promote no
CSMA (algorithm 1) for digipeaters as well, due to:
* Effective channel access multiplying bandwidth consumption, since each
local digi that defers will repeat the packet at a different, later time
instead of all doubling at once.
* The theory that FM capture effect allows all the digis to double
each other. I haven't seen any experimental evidence that this works,
and doubt it'd work for moving receivers.
* The argument that APRS networks have so many split horizons that
it's a waste of time.

I've been trying to reconcile this with the ALOHA channel statistical
models and Bob's assertion that we need to support 60 stations on the
local network.

I'm not sure where the original 60 number came from, but my modeling
for APRS is coming up with a maximum station count oddly close to 60,
but this is using all the optimistic assumptions like Poisson packet
arrival, p-pers channel access, etc. that Abramson used in his packet
broadcast channel work in the 1970s. Taking away p-pers hurts the
channel throughput, as does the much lower level of entropy in the
actual arrival times of packets on RF. (I haven't figured out how to
quantify the actual arrival time entropy, but since many trackers
don't insert any randomness in their fixed X second interval the
Poisson models seem optimistic...)
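For what it's worth, here is one back-of-envelope that lands near 60 under pure-ALOHA assumptions. Every constant below is an assumption for illustration (airtime per packet, copies per packet, beacon rate), not a measurement:

```python
# Back-of-envelope station capacity under pure-ALOHA assumptions.
PACKET_AIRTIME = 1.0     # seconds of airtime per packet, incl. ~300 ms TXDelay
ALOHA_LIMIT = 0.18       # pure-ALOHA peak utilization, ~1/(2e)
BEACON_INTERVAL = 600.0  # one beacon per station per 10 minutes
COPIES_PER_PACKET = 2    # original transmission plus one digipeated copy

usable_packets_per_second = ALOHA_LIMIT / PACKET_AIRTIME
station_capacity = usable_packets_per_second * BEACON_INTERVAL / COPIES_PER_PACKET
print(round(station_capacity))  # 54
```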

I understand that part of the concern is that super-high digis will
never hear any free channel time, but that doesn't mean we can't use
p-pers to better effect at the lower levels of the network. Here's the
channel occupancy distribution from a low level digi in San Luis Obispo,
CA, which is a grossly under-loaded network:
http://i.imgur.com/jSphYCY.png (the first bar is up at 5,000)

The strongest counter-argument I see against using p-pers is that,
unlike every other application of AX.25, we have a 30 second deadline
on the useful life of a specific packet, so any sort of queuing in
digipeaters is dangerous. Even computer-based digis with KISS modems
can't overcome the limitation that KISS doesn't support tagging
packets with a deadline after which they should be dropped instead of
transmitted late. This is unfortunate, since channel acquisition is
such a large fraction of transmit time in APRS (my tests show we can't
depend on anything less than 300 ms for TXDelay), so having digis
queue packets and dump them in a single burst every N seconds would be
an interesting improvement (unless you're trying to use a Kenwood in
KISS mode).
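A sketch of what deadline-aware queuing in digi software might look like, given that KISS itself has no field to carry the deadline (the class and constant names here are mine, purely illustrative):

```python
import time
from collections import deque

PACKET_LIFETIME = 30.0  # seconds; the APRS usefulness deadline discussed above

class DeadlineQueue:
    """Sketch of a digipeater transmit queue that drops packets whose
    deadline has passed instead of transmitting them late.  Since KISS
    can't tag frames with deadlines, this has to live in the digi software,
    ahead of the modem."""

    def __init__(self):
        self._queue = deque()

    def enqueue(self, frame, now=None):
        """Stamp the frame with its drop-dead time and queue it."""
        now = time.time() if now is None else now
        self._queue.append((now + PACKET_LIFETIME, frame))

    def next_frame(self, now=None):
        """Return the next unexpired frame, silently discarding stale ones;
        returns None when nothing transmittable remains."""
        now = time.time() if now is None else now
        while self._queue:
            deadline, frame = self._queue.popleft()
            if deadline > now:
                return frame
        return None
```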

It would seem that a reasonable recommendation would be DWait=0 for
high level digis (unless we could implement hard deadlines) and
statistical channel access for low level digis to enjoy better
throughput in the smaller cells. Thoughts?

Unfortunately, this is one of the places where we probably can't give
definitive recommendations in the spec, since exact L1 behavior is
going to depend on the local network design.

This is all ignoring the interesting "viscous delay" layer 3 channel
access behavior, but that argument was had with the Helsinki guys back
in 2009.

Kenneth Finnegan
