[aprssig] an off-topic query about statistical analysis in digital modes

Scott Miller scott at opentrac.org
Fri Sep 21 13:15:47 EDT 2007


That sort of thing relies on redundancy and inefficiency in the 
language - it works because the probability of each symbol is NOT 
uniform.  An efficient representation of the information has as little 
redundancy as possible, so once a message is well compressed there's 
nothing left to guess from.  Convolutional coding puts redundancy back 
in a systematic, uniform way that gives you the best chance of working 
out what the original message was in the presence of errors.
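
As a quick illustration (a made-up Python sketch; the sample string is 
just a stand-in corpus), you can measure that non-uniformity directly - 
the per-letter entropy of English text comes out well under the 4.7 
bits a uniform 26-letter alphabet would give, and the gap is exactly 
the redundancy in question:

    from collections import Counter
    from math import log2

    text = ("this is a short sample of ordinary english text and even a "
            "sample this small shows that some letters turn up far more "
            "often than others")
    letters = [c for c in text if c.isalpha()]
    total = len(letters)
    # Shannon entropy of the observed single-letter distribution
    entropy = -sum(n / total * log2(n / total)
                   for n in Counter(letters).values())
    print(f"{entropy:.2f} bits/letter measured, {log2(26):.2f} if uniform")

With a real corpus the single-letter figure settles around roughly 4.1 
bits, and models that use context push it far lower still.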

Your dictionary approach is the same sort of idea, but convolutional 
coding can be applied to any bit stream without needing to know what 
exactly is being encoded.
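
Here's a minimal Python sketch of what an encoder like that does, using 
the classic textbook rate-1/2, constraint-length-3 code with generator 
polynomials 7 and 5 (octal) - standard in the literature, though not 
necessarily what any particular mode uses:

    def conv_encode(bits):
        # Rate-1/2, K=3 encoder: g0 = 1+D+D^2 (111), g1 = 1+D^2 (101).
        state = 0                      # the two previous input bits
        out = []
        for b in list(bits) + [0, 0]:  # two zero tail bits flush the register
            reg = (b << 2) | state     # newest bit in the high position
            out.append(bin(reg & 0b111).count("1") & 1)  # tap set for g0
            out.append(bin(reg & 0b101).count("1") & 1)  # tap set for g1
            state = reg >> 1
        return out

    print(conv_encode([1, 0, 1]))  # -> [1,1, 1,0, 0,0, 1,0, 1,1]

Every input bit influences three successive output pairs, so an 
isolated channel error leaves enough surviving structure for a Viterbi 
decoder to work out the original sequence - that's the "systematic, 
uniform" redundancy.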

Scott
N1VG

Chris Howard wrote:
> On Fri, 21 Sep 2007 09:31:14 +0100
> "Dave Baxter" <dave at emv.co.uk> wrote:
> 
>> As of course does Morse code!
>>
>> Thanks for the book link btw Scott.
>>
>> Dave G0WBX.
> 
> I knew about low-level things like Huffman encoding.
> What I was asking about is a higher-level dictionary
> of words and phrases with probability ratings.
> 
> For example, if the string "tje" appears
> in a keyboard-typed message, it is likely to really
> mean "the"; it might instead mean "tye", but that
> is much less likely.  You can do that kind of guessing
> at the bit level and again at the semantic level.
> 
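That word-level guessing is easy to sketch.  Here's a toy Python 
version in the style of Peter Norvig's well-known spell corrector - the 
dictionary and the frequency counts below are invented, but the 
principle is the one you describe:

    FREQ = {"the": 50000, "tie": 800, "toe": 400, "tye": 3}  # made-up counts

    def edits1(word):
        # every string one delete, swap, substitution, or insertion away
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        subs = [a + c + b[1:] for a, b in splits if b for c in letters]
        inserts = [a + c + b for a, b in splits for c in letters]
        return set(deletes + swaps + subs + inserts)

    def correct(word):
        known = [w for w in edits1(word) if w in FREQ]
        return max(known, key=FREQ.get) if known else word

    print(correct("tje"))  # -> "the"; "tye" is also one edit away but loses
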
> My BlackBerry cellphone/PDA has a keypad with
> two letters assigned to every key.  Usually I can type
> my message just as if each key carried a single letter,
> and the BlackBerry algorithm figures out that I meant
> "the", not "ygr".  So it is effectively a 2-to-1 compression
> of the communication between my fingers and the device.
> 
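And that disambiguation is the same machinery driven by the keys 
instead of by typos.  A rough Python sketch - the key pairing here is 
assumed rather than the exact SureType layout, and the word list is a 
toy:

    from itertools import product

    PAIRS = ["qw", "er", "ty", "ui", "op", "as", "df",
             "gh", "jk", "zx", "cv", "bn"]   # assumed two-letter keys
    KEY = {c: pair for pair in PAIRS for c in pair}
    FREQ = {"the": 50000, "tie": 800}        # toy frequency table

    def disambiguate(keystrokes):
        # expand each keystroke to its letter pair, keep the likeliest word
        choices = [KEY.get(c, c) for c in keystrokes]
        words = ("".join(p) for p in product(*choices))
        return max(words, key=lambda w: FREQ.get(w, 0))

    print(disambiguate("ygr"))  # the same keys as "the" -> "the"
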
> I know just enough about the whole issue to ask dumb
> questions.  So I think that book will be very helpful.
> 
> Chris Howard
> w0ep
