 Post subject: SerBus Protocol Issues
PostPosted: Sun Jul 22, 2007 5:24 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
Over time, I've waffled between using a GPIB-inspired protocol and an HDLC-inspired protocol for letting the Kestrel (and other 65xx-based computers) talk to intelligent peripherals. I decided that the GPIB-inspired protocol would perhaps be simpler. I'm now finding this to be questionable.

I've spent the better part of six hours implementing some software to put the protocol specified by http://www.falvotech.com/content/kestre ... -protocol/ to the test. What I've found is that it can be made to work, but the state machine needed to implement the protocol is somewhat sophisticated and complex. The actual mechanics of the bus are pretty simple, but managing I/O buffer state is quite a pain! All this misery on the PC platform -- just imagine if I had to implement this in a microcontroller form factor, where I have limited instruction and memory resources!

Another problem is that I've not found any simple method of re-using the code I've written so far in other SerBus peripherals. Re-use is possible, but it does involve tailoring the bus interface code to accommodate the different kinds of channels you'll support.

Although implementing an HDLC-derived protocol may represent a slightly larger up-front investment in code (you need to frame and byte-stuff the data), I believe that overall system complexity decreases.

Most of my HDLC-related experience comes from AX.25. AX.25 is a specific variation of HDLC intended for use over the amateur radio spectrum (though it finds far greater use in government and commercial applications!!). Therefore, I'll be basing my design specifically on AX.25.

Features from AX.25 that I do not need and will not include:

* Six-byte layer-2 addresses. One-byte addresses give access to 64 addressable channels or functions. The address is expandable as needed, since bit 0 of every address byte is the "this is the last byte of the address" flag, but I doubt you'll need it. If used, each address field extension tacks on 7 bits (not 6) of additional address space.

* Source addresses. Both parties know who sent the packet by virtue of which I/O port it arrived on, so they already know who sent the data.

* Separate command and response bits. Since the new protocol will lack secondary station ID bytes (an AX.25 exclusive), but will instead use real HDLC-style addresses, there is only one command/response bit (namely, bit 1 of byte 0 of the address).

* I have no intention of supporting I-frames, and consequently, any of the connected-mode signaling associated with them. However, I will not forbid their use on the bus either. The serial bus is local to the PC, and is strictly point to point -- the odds of high error rates on the link are going to be pretty small.

* Frame Check Sequence bytes -- for the same reason as above. The fields for FCS bytes will remain, however. Their interpretation is therefore dependent on the layer-3 protocol in use. This allows you to use ECC bits to correct errors if you want.
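To make the extensible addressing above concrete, here's a sketch in C. All the names and the struct layout are my own illustration, not part of any SerBus spec: bit 0 of every address byte is the "last byte" flag, bit 1 of the first byte is the command/response bit, and the remaining bits carry the address -- 6 bits in the first byte (hence 64 channels), 7 in each extension byte.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical decoded-address record (names are mine). */
typedef struct {
    uint32_t address;    /* accumulated address value        */
    int      is_command; /* command/response bit from byte 0 */
    size_t   length;     /* number of address bytes consumed */
} serbus_addr;

/* Returns 0 on success, -1 if the buffer ends before a byte with
 * the "last byte" flag (bit 0 set) is seen. */
int parse_address(const uint8_t *buf, size_t len, serbus_addr *out)
{
    if (len == 0)
        return -1;

    out->is_command = (buf[0] >> 1) & 1;
    out->address    = buf[0] >> 2;        /* 6 address bits in byte 0 */
    out->length     = 1;

    while (!(buf[out->length - 1] & 1)) { /* bit 0 clear: more bytes */
        if (out->length == len)
            return -1;                    /* truncated address field */
        out->address = (out->address << 7) | (buf[out->length] >> 1);
        out->length++;
    }
    return 0;
}
```

Note how a single-byte address (bit 0 set) decodes in one pass, while each extension byte shifts in 7 more bits -- matching the "7 bits, not 6" point above, since extension bytes don't carry a command/response bit.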

I realize that this all sounds more complex. In its most minimal implementation, the data framing and byte-stuffing steps will dominate the system complexity, which is saying a lot.
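For reference, here's roughly what that byte-stuffing step looks like. I'm borrowing the async-HDLC convention from RFC 1662 (0x7E flag, 0x7D escape, escaped bytes XORed with 0x20) purely as an illustration -- nothing here fixes the actual SerBus framing constants:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FLAG 0x7E /* frame delimiter (RFC 1662-style, illustrative)  */
#define ESC  0x7D /* escape byte; escaped octets are XORed with 0x20 */

/* Byte-stuff `in` into `out`, adding opening and closing flags.
 * Returns the number of bytes written, or 0 if `out` is too small.
 * Worst case output is 2*n + 2 bytes. */
size_t stuff_frame(const uint8_t *in, size_t n, uint8_t *out, size_t cap)
{
    size_t w = 0, i;
    if (cap < 2)
        return 0;
    out[w++] = FLAG;
    for (i = 0; i < n; i++) {
        if (in[i] == FLAG || in[i] == ESC) {
            if (w + 2 >= cap)           /* room for escape pair + flag */
                return 0;
            out[w++] = ESC;
            out[w++] = in[i] ^ 0x20;
        } else {
            if (w + 1 >= cap)           /* room for byte + closing flag */
                return 0;
            out[w++] = in[i];
        }
    }
    out[w++] = FLAG;
    return w;
}
```

The receive side is the mirror image: hunt for a flag, un-XOR anything that follows an escape, and a bare flag ends the frame. That's the whole framing layer -- simple, but it does have to touch every byte.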

"What about buffer memory inside microcontrollers?" This is a valid concern -- it is true that this may consume slightly more controller memory. Remember that I will primarily be designing my own peripherals using microcontrollers, so obviously this is a very high priority concern for me. However, if I were to make my MMC card device using the GPIB-derived protocol, I'd still need it to support at least 1KiB of RAM to be able to receive 512-byte blocks (since I'm going to need temporary buffers to write to the MMC card from). ATmega chips with kilobytes of RAM can be had for less than $6 in quantities of one, and you get tons of I/O pins and chip resources to boot. (With surface-mount variations, you don't even need to worry about space on the PC board.) Additional memory overhead for supporting HDLC versus GPIB state machines is expected to be minimal.

"What about packet scheduling?" It is true that HDLC offers the potential for asynchronous data transfer capabilities. This is the holy rule: if you have data that needs to be transmitted, assert IRQ. You do not negate IRQ until the last byte has been sent out. The IRQ pin then becomes the "data pending" signal.

You should size your microcontroller so that you can queue up three frames at a minimum: your largest data frame, one interrupt status frame (just in case you do have a real interrupt or exceptional condition), and one RNR frame, to tell the host computer that you are not ready to receive more data at this time. This allows for interrupts, flow control, and any response from a previous request to be adequately handled without concerns. If you can dynamically regenerate the data frame, then you can get by with just the RNR and interrupt status frames. This is how the uIP stack manages its buffer memory.

And if Contiki + uIP can get a Commodore 64 not only on the web, but serving web pages, then I don't think I'm asking too much of others in supporting HDLC. :) And, believe me, the state machine for TCP/IP is quite a bit more sophisticated than a UI-only HDLC implementation. :)
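A minimal sketch of that three-slot transmit queue, with the IRQ pin mirrored as "any slot occupied." All names and the 512-byte slot size are illustrative assumptions on my part, not a spec:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One slot per frame class, as described above: the largest data
 * frame, one interrupt status frame, and one RNR frame. */
enum slot { SLOT_DATA, SLOT_INT_STATUS, SLOT_RNR, SLOT_COUNT };

typedef struct {
    uint8_t buf[SLOT_COUNT][512]; /* 512 matches the MMC block size */
    size_t  len[SLOT_COUNT];      /* 0 means the slot is empty      */
} tx_queue;

/* Queue a frame; a frame already occupying the slot is overwritten. */
void tx_post(tx_queue *q, enum slot s, const uint8_t *data, size_t n)
{
    size_t i;
    if (n > sizeof q->buf[s])
        n = sizeof q->buf[s];
    for (i = 0; i < n; i++)
        q->buf[s][i] = data[i];
    q->len[s] = n;
}

/* Mirrors the IRQ pin: asserted for as long as any frame is pending,
 * negated only once every queued byte has gone out. */
int tx_irq_asserted(const tx_queue *q)
{
    int s;
    for (s = 0; s < SLOT_COUNT; s++)
        if (q->len[s] != 0)
            return 1;
    return 0;
}
```

The point of fixed slots rather than a general ring buffer is exactly the uIP-style argument above: the worst-case memory footprint is known at compile time, which is what you want on a small microcontroller.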

