[nos-bbs] TCP/IP over AX.25 on VC - a study

(Skip) K8RRA k8rra at ameritech.net
Tue Jun 5 16:23:37 EDT 2007

JNOS is awesome.
It moved the data flawlessly station-to-station with ftp.
It took a while, but jnos was dependable in the final analysis.

My jnos is on a linux workstation (server) with one 2-M RF link to two
local "hamgates".  Hamgates present an Internet link (that I do not
utilize even though DSL is present on the site) to the remainder of the
AMPR Net 44... worldwide.  Yes - I intentionally have not encapped to
the rest of the AMPR world directly.  

***My objective is to test the desirability of two paths to 44..., one
datagram (UI) and one connected (I) for all my AMPR purposes.***

To configure has been a challenge, and the result may be worth passing
on.  The way to avoid the "uncomfortable" parts of my experience is to
avoid the use of IP over (I).  

I have concluded that the connected mode (I) is problematic for
encapsulation of IP traffic, especially when mixed with datagram (UI)
mode.  I invite critical review of my work, and I will have carefully
documented this for the wiki when a few additional tests are fully
complete.

For discussion purposes, start with the text originating from the JNOS40
Config Manual written by Johan Reinalda and William Thompson c. 1994, in
the section "Of PACLEN, MTU, MSS, and More".  From that work some of the
configuration may be formulated; beyond that, the parameters:
 a) ip rtimer        reassembly restart timer
 b) tcp timertype    retransmit delay formula
 c) tcp blimit       retransmit maximum count
 d) tcp maxwait      retransmit maximum wait time
 e) ax25 timertype   retransmit delay formula
 f) ax25 blimit      retransmit maximum count
 g) ax25 maxwait     retransmit maximum wait time
 h) ax25 t2          reply pace time
 i) ax25 t3          channel keep-alive timer
 j) ax25 t4          channel idle closure timer
require setting to complete the job.  Additionally, both ends of the
link need compatible parameters and cannot be set independently.
Finally I listed the above in sequence to demonstrate:
 I) a) is a pacing timer that must be satisfied to demonstrate the link
continues to be viable for use.
 II) b), c), d) and e), f), g) demonstrate similar approaches to data
retransmission in the TCP/IP and AX.25 protocols respectively.
 III) h), i), and j), set end-to-end relationships that effectively
avoid unwarranted overhead.
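As a concrete illustration, an autoexec.nos fragment touching the
parameters above might read as follows.  The values and units shown are
purely illustrative placeholders of my own, not recommendations from the
manual, so check them against your own JNOS build before use:

```
# hypothetical autoexec.nos fragment - illustrative values only
ip rtimer 30           # a) reassembly restart timer
tcp timertype linear   # b) retransmit delay formula
tcp blimit 8           # c) retransmit maximum count
tcp maxwait 30000      # d) retransmit maximum wait time
ax25 timertype linear  # e) retransmit delay formula
ax25 blimit 10         # f) retransmit maximum count
ax25 maxwait 15000     # g) retransmit maximum wait time
ax25 t2 1000           # h) reply pace time
ax25 t3 0              # i) channel keep-alive timer (0 = off)
ax25 t4 300            # j) channel idle closure timer
```

Remember the point above: whatever values you pick, the station at the
other end of the link needs compatible ones.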

In my research before testing I found the concept of TCP/IP over AX.25
on a Virtual Channel (I) compelling for the benefits of quicker error
detection and repair, plus the redundancy of data validation in both
protocols.  While working thru configuration, I discovered the similar
approaches to retransmission of data in both protocols do not play well
together.  More disappointing is the aspect that in a mixed environment
of both (UI) and (I) links there is not a good solution - one mode must
suffer if the other is to succeed.
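The interaction can be sketched numerically.  The snippet below is a
simplified model of my own, not JNOS code: it uses the standard
smoothed-RTT/RTO update (RFC 6298 constants) to show that a delay spike
caused by link-layer retries blows past a TCP timeout computed from
quiet-channel samples, triggering a needless resend:

```python
# Simplified model of stacked retransmission timers (not JNOS code).
# TCP tracks a smoothed round-trip time (srtt) and retransmits when no
# ack arrives within its retransmission timeout (RTO).  If the AX.25
# layer underneath is itself retrying frames, the observed delay can
# spike far past an RTO computed on a quiet channel, so TCP resends
# data the link layer was already going to deliver.

def update(srtt, rttvar, sample, alpha=0.125, beta=0.25):
    """Smoothed-RTT/RTO update with RFC 6298 constants."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)
    srtt = (1 - alpha) * srtt + alpha * sample
    return srtt, rttvar, srtt + 4 * rttvar   # new RTO

srtt, rttvar = 5.0, 1.0            # quiet-channel values (seconds)
rto = srtt + 4 * rttvar            # 9 s timeout
spike = 30.0                       # delay once AX.25 starts retrying

print(spike > rto)                 # True: TCP fires a needless resend

# Only after absorbing the spike does the timeout adapt:
srtt, rttvar, rto = update(srtt, rttvar, spike)
print(round(rto, 3))               # 36.125
```

The numbers are arbitrary, but the shape matches what I measured: the
timeout eventually adapts, yet every fresh burst of link-layer retries
starts another round of duplicate TCP traffic first.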

Here is the test bed...  I have a couple 100K ascii files that require
around 1/2hr to transfer under reasonable conditions.  For the no-stress
test, just start one ftp "get" or "put" and wait until the transfer is
complete.  This is a single-threaded example and works really well.  For
the stress test, start two ftp sessions, one get and one put, then add a
telnet BBS session attempting a NET/ROM connect, and also add "ping",
"SMTP", and "finger", access from remote sites, all in the time window
required for ftp to complete.  To watch progress I created a script to
"source" from the F10 console including "ax25 status", "arp", "tcp
view", and "tcp irtt" commands that could be run frequently on demand.
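For reference, the sourced status script was simply a list of console
commands of roughly this shape (the file name and layout here are my
reconstruction, not the original script):

```
# status.src - sourced from the F10 console on demand
ax25 status
arp
tcp view
tcp irtt
```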

For simplicity, keep all this activity on the VC (I) channel and allow
the (UI) channel to remain idle.  What I found in the no-stress test is
that the AX25 srtt ran 5 sec and the IP srtt ran 35 sec (srtt is the
smoothed round-trip time).  It was also true that the transport of data
required no retransmission of packets.

What I found in the stress test was quite disappointing - the AX25 srtt
rose from 5 to 7 sec but the IP srtt rose nearly tenfold to 326 sec.
The retransmission rate rose from nil to values exceeding the good data
values.  It seems clear to me that the TCP/IP protocol's retransmission
logic conflicted with the AX.25 protocol's retransmission logic.  The
IP data was being needlessly retransmitted so rapidly that the
underlying AX channel was overloaded by the excess.

In my experience so far, some conflict is unavoidable; the level of
conflict may only be reduced, not eliminated.  Some retransmission
will be required by multi-threaded or independent-but-concurrent use of
the IP-over-AX VC channel.  The worst news is that as the VC channel's
parameters are tuned to reduce retransmission, the datagram channel's
performance gets worse through protracted periods of inactivity.

The "proof" of the above is that once one of the two ftp sessions
completed, the remaining session returned to normal and required no
further retransmission.  Further, the srtt values for both protocols
began to return to the smaller (better) values.

The best news is that even under stress, the data got thru without
error.  Jnos software was bulletproof.  It seems that use of datagram
(UI) mode for IP over AX is more suitable than connected (I) mode.
However the logic presented in 1994 still seems compelling to me.

For the betterment of jnos, I'd like to float a concept based on this
work.  "When IP is over AX on VC - arbitrarily defeat the pacing timers
shown above as b), c), and d), but maintain a) and packet checksumming
if demanded by the IP protocol."  This proposal maintains the ability
for IP to cause retransmission when it finds an error, but it removes
the duality in pacing provided by the underlying AX protocol.  The IP
pacing timers correctly apply to datagram (UI) mode.

What should we do with my concept?  Just talk about it at this point.
If it turns out to be valuable consider implementation later...

Again - I invite any "holes" that can be shot into this story.

de [George (Skip) VerDuin] K8RRA k
