6502.org Forum
PostPosted: Sat Aug 14, 2010 3:28 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
Let's talk hypothetically here. Garth and I are both of the opinion that high-performance expansion architectures are awesome, but extremely hard to get right, so we favor slower, simpler interfaces. But sometimes, you just plain need megabytes per second of throughput.

65SIB can provide this, provided you have a capable hardware implementation. Throwing an SPI port into a CPLD or FPGA can achieve this nicely. Problem is, to sustain that transfer rate, you need DMA.

For example, suppose you want to build a computer with a 640x480 16-color display operating at VGA frequencies. You're going to need a 25.2MHz dot clock. Packing two nybbles per byte because it's only 16 colors, that means you need 12.6MBps access to the frame buffer memory. Now, suppose the application you're writing requires 30fps animation. That means you'll need to deliver data to the frame buffer somewhere between 6.3MBps (with double-buffering) and 12.6MBps (single-buffering) to keep up.

The fastest memory moves on the 65816 come from its block move instructions, which move one byte every seven clock cycles. You'd need a 63MHz CPU to provide raw CPU-driven animation performance, which is clearly not practical.
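
For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch (purely illustrative; note the 63MHz figure quoted above falls between the double- and single-buffered endpoints computed here):

```python
# Back-of-the-envelope check of the VGA frame-buffer figures above.
DOT_CLOCK = 25_200_000          # ~25.2MHz VGA dot clock, pixels per second
BITS_PER_PIXEL = 4              # 16 colors, so two pixels pack into a byte

# Sustained rate at which the display consumes frame-buffer bytes:
scan_rate = DOT_CLOCK * BITS_PER_PIXEL // 8
print(scan_rate / 1e6)          # 12.6 MB/s

# One 640x480 frame at 4bpp:
frame_bytes = 640 * 480 * BITS_PER_PIXEL // 8
print(frame_bytes)              # 153600 bytes, i.e. about 150KB

# Raw delivery rate for 30fps animation:
print(frame_bytes * 30 / 1e6)   # ~4.6 MB/s

# The 65816's MVN/MVP block moves cost 7 cycles per byte, so the CPU
# clock needed to sustain a given byte rate is rate * 7:
for rate in (scan_rate // 2, scan_rate):   # double- vs single-buffered
    print(rate * 7 / 1e6, "MHz")           # 44.1 MHz and 88.2 MHz
```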

65SIB can provide 6.3MBps if your hardware provides 50Mbps capacity. But, at these rates, you need transmission lines to your equipment -- a 20-pin ribbon cable leading off the motherboard to an external peripheral will almost certainly cause TV interference for some number of blocks in your area. The only reason IDE hard drives get away with this is because they're tucked away inside a metal case, making a Faraday cage. So, we're quickly leaving the realm of hobby-friendly, high-speed interconnects.

Perhaps you can use Cat-5 cables? With these, you have 3 usable pairs for ferrying data to and fro: one for MISO, one for MOSI, and one for the clock. The remaining "pair" could then be dedicated to providing the ground reference and device select. Hence, you lose the ability to daisy-chain multiple devices off the port, and you absolutely lose the ability to power peripherals off the link. You also lose the ability to auto-configure. In fact, it's pretty much not a 65SIB link at that point -- it's just plain old dumb SPI.

So, this leaves only parallel expansion buses. At one point, I proposed throwing GPIB on a backplane, and with the more controlled impedances and reflections this provides, you should be able to go way beyond GPIB's 1MBps without needing to significantly alter the protocol. Use 3-state transceivers instead of open-drain for data lines, and totem-pole outputs for handshake lines (use backplane-mounted logic to replace wire-OR with actual OR gates and redistribute the ORed result to all slots). If you do add some lines to the bus, you can probably safely support 16- or 32-bit transfers easily. I can easily see this bus supporting 132MBps if widened to 32 bits, keeping the bus to 3 slots. Unfortunately, this bus requires smarts on the peripheral itself, and out of the box at least, has no means of auto-configuring. (Though, having thought through a better approach to auto-configuration on the 65SIB, I think I can adapt it to GPIB fairly easily.) Actually using the peripherals on this bus will resemble a mainframe communicating with a remote terminal; while it's quite possible to push graphics through such an interface, it'll feel weird.

Then, of course, there are the STE, VME, and other "CPU-independent" buses. These look to be the cheapest to implement, since they re-use as much of the CPU's internal bus logic as practicable. STE bus appears more refined, and is asynchronous, meaning you should be able to isolate the CPU's clock speed from the peripheral's speed using some external logic. Unfortunately, many of these buses are way over-documented (VME) or not documented at all in easily acquired resources (STE), and so far none of them support any kind of auto-config or device identification.

Then, moving still closer to the CPU, we get STD and Fachat's Gecko bus, which require peripherals to be hard-wired to the CPU's bus clock speed. This requires the least possible number of components (especially for the 6502!), and places the burden of rate-matching on the peripheral itself. Alternatively, you can go the route of the Apple II, and specify de facto that the clock on compatible expansion slots shall always be 1MHz (or whatever). So, even an 8MHz-clocked Apple IIgs slows to 1MHz to utilize devices in the expansion slots. As with Ethernet, you need multiple families of specifications to accommodate higher throughputs. Then you run into the problem of accidentally plugging a 1MHz device into a 10MHz slot, for example. Also, if your CPU is slower than the next highest supported clock rate, you'll still need bus rate synchronizer logic.

I must admit, this is a pretty sticky problem for us hobbyists. It's no wonder commercial vendors are pushing gigabit serial links over parallel buses nowadays. Indeed, our own 65SIB will go a long way towards achieving acceptable I/O throughputs for a moderately large class of devices, but it will become a bottleneck as folks want to explore technology demanding higher bandwidths.


PostPosted: Sat Aug 14, 2010 4:02 pm 
Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1043
Location: near Heidelberg, Germany
Good food for thought; only a short reply for now (being in a hurry).

I've also come to the conclusion that using an expansion bus at CPU speed limits the speed of the system, especially with today's faster possibilities.

Thus I have already built 65816 boards that come with fast memory, and slow down to use the expansion bus only for peripheral access.

André


PostPosted: Sat Aug 14, 2010 5:32 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8505
Location: Midwestern USA
kc5tja wrote:
Let's talk hypothetically here. Garth and I are both of the opinion that high-performance expansion architectures are awesome, but extremely hard to get right, so we favor slower, simpler interfaces. But sometimes, you just plain need megabytes per second of throughput.

How about SCSI? SCSI can run anywhere from 2.5 MB/sec to 320 MB/sec (Ultra-320) and can reliably work with cables as long as 12 meters (more in some cases). The challenge is in coming up with suitable silicon to drive the bus. While it would be possible to bit-bang an interface using, say, a 65C21 or similar, something along the lines of NCR's 53C90A or AMD's 53C94, both intelligent SCSI ASICs that execute the bus protocol in hardware, would be far more efficient in obtaining good throughput. I don't know, however, if either of those parts is still readily available.

Another possibility might be to adapt the SATA interface, which is also capable of high transfer rates. Not sure how one would go about doing so, however. I haven't seen any SATA ASICs mentioned anywhere.

Yet another idea is to develop a parallel bus architecture that runs at a fixed clock rate that is an exact submultiple of Ø2, a la the old ISA bus in PCs. The MPU would be wait-stated to give devices on the bus time to respond to selects, read/write, etc.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sat Aug 14, 2010 7:07 pm 
Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1043
Location: near Heidelberg, Germany
kc5tja wrote:
Then, moving still closer to the CPU, we get STD and Fachat's Gecko bus, which requires peripherals to be hard-wired to the CPU's bus clock speed. This requires the least possible number of components (especially for the 6502!), and places the burden of rate-matching on the peripheral itself.


What I already did was to use the RDY pin of the 6502 - which I conveniently put onto the bus - to wait-state the bus (not slow the clock, but use multiple clock cycles on the bus for a single access) for devices that need it. As my bus optionally runs at 2MHz as well, I found two chips that wouldn't work at that speed, so I had to assert RDY for them. IIRC one was the WD1772 floppy disk controller and the other one was an I2C controller.
Quote:
Alternatively, you can go the route of the Apple II, and specify de facto that the clock on compatible expansion slots shall always be 1MHz (or whatever). So, even an 8MHz-clocked Apple IIgs slows to 1MHz to utilize devices in the expansion slots.

That's in fact the way I am currently going with my bus, as CPUs get faster and FPGAs offer great possibilities for higher integration onto a single board - one that uses the bus only for I/O expansion.
Quote:
As with Ethernet, you need multiple families of specifications to accomodate higher throughputs. Then you run into the problem of accidentally plugging a 1MHz device into a 10MHz slot, for example. Also, if your CPU is slower than the next highest supported clock rate, you'll still need bus rate synchronizer logic.

IMHO different-speed interfaces should either be physically impossible to connect (using different types of connectors, for example), or be compatible (for instance, using separate signals for high speed that are invisible to slow devices).

But indeed, you need to come up with such a, as you say, set of specifications.

OTOH: If I want to do a really fast bus, I'd not let it go off the main board if possible, to avoid any problems that come with antennas ... ahem, meaning long signal traces with high-frequency signals on them, reflections, etc. This is a whole different type of design; you'd need series and termination resistors, etc.

So I'd rather make a single board with all the fast devices I need on the board, and plug it into the slow bus ...
BigDumbDinosaur wrote:
How about SCSI?

SCSI I would actually qualify as a high-speed interface. I have done a SCSI board for my bus - bit-banging the signals in SCSI async mode :-) But these days, do you really still want to use parallel SCSI, even if you can do fast FPGA-programmed SCSI controllers? Can you actually still get devices like parallel SCSI hard disks for decent prices? It's relatively easy to implement, though, compared to other buses.
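
For the curious, the asynchronous handshake being bit-banged here is just one REQ/ACK exchange per byte. The sketch below models the initiator's side in software; `FakeScsiPort` is an imaginary stand-in for a real VIA/PIA port (everything here is hypothetical, not a real driver):

```python
# Toy model of the SCSI asynchronous-mode REQ/ACK handshake, data-out phase.
class FakeScsiPort:
    """Imaginary target: asserts REQ for each byte and latches what we write."""
    def __init__(self, nbytes):
        self.remaining = nbytes
        self.req = nbytes > 0       # target requests the first byte
        self.ack = False
        self.data = 0
        self.received = []

    def write_data(self, byte):
        self.data = byte            # "drive the data lines"

    def set_ack(self, level):
        self.ack = level
        if level and self.req:      # target sees ACK: latch byte, drop REQ
            self.received.append(self.data)
            self.remaining -= 1
            self.req = False
        elif not level and self.remaining:
            self.req = True         # target ready for the next byte

def send_bytes(bus, payload):
    """Bit-banged initiator: one REQ/ACK handshake per byte."""
    for byte in payload:
        while not bus.req:          # wait for target to assert REQ
            pass
        bus.write_data(byte)        # put the byte on the bus
        bus.set_ack(True)           # assert ACK
        while bus.req:              # wait for target to release REQ
            pass
        bus.set_ack(False)          # release ACK; handshake complete

bus = FakeScsiPort(3)
send_bytes(bus, [0x12, 0x00, 0x25])
print(bus.received)                 # [18, 0, 37]
```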

If I had the time, I'd rather tap into the world of the ubiquitous USB interface, as a controller. USB is fast enough for a 6502, and USB 3.0, if you really need it, is faster than any 6502 I've heard of requires. There is an abundance of USB devices, even fast ones :-)
And it provides some standard protocols, for example for storage devices from different vendors or with different backing technologies (SD card readers look like SATA hard disks, etc.).

The only problem is getting a decent controller chip that's interfaceable with the 6502 (although I do have an unfinished board with a Cypress SL811 controller...)

André


PostPosted: Sat Aug 14, 2010 7:47 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8505
Location: Midwestern USA
fachat wrote:
SCSI I would actually qualify as a high-speed device. I have done an SCSI board for my bus - bit banging the signals in SCSI async mode :-) But these days do you really still want to use parallel SCSI, even if you do fast FPGA-programmed SCSI controllers? Do you actually still get any devices like parallel SCSI harddisks for decent prices? It's relatively easy to implement though compared to other buses.

Parallel SCSI hardware is readily available; cost varies depending on source. We still use it in some of our Linux servers. However, my comment about SCSI had nothing to do with mass storage per se. I was thinking of SCSI as a general-purpose interface bus, due to the nature of the protocol. Because any SCSI device can be an initiator or a target, SCSI is bi-directional. With GPIB, you have a more narrowly defined (much older) protocol. It's all conjecture, of course.



PostPosted: Sat Aug 14, 2010 7:47 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
Conceptually, SCSI seems to share a lot with GPIB. However, to get those 320MBps speeds over 12m long cables, you need some exotic hardware.

The reasons I prefer GPIB-based backplanes over SCSI:

* Although rarely used, GPIB allows you to talk to multiple devices at once.

* GPIB, like SCSI, allows a talker to send data to one or more listeners without bus master intervention.

* GPIB isn't restricted to seven devices per bus. Probably not an issue when you're constraining yourself to only 3 slots for signal cleanliness reasons. Additionally, each of the 31 supported devices can have 31 sub-channels.

* GPIB is a pure message-passing bus, like SCSI.

* GPIB makes it much easier to define new message formats than SCSI, where you run the risk of command-set collisions.

* A 4-slot GPIB backplane capable of supporting 50MBps on an 8-bit wide bus can be built for not much more than $10 in discrete silicon parts by any hobbyist, if my recollection of 74ACT-series logic prices holds.

* The biggest reason of them all: specifications for how the bus works are readily available on-line. GPIB operation is virtually public domain now, and can be explained in about 3 timing diagrams, of which only one is really important (the NRFD, DAV, and NDAC signal handshake). I just spent about four hours trying to find openly available SCSI(-1) timing diagrams, let alone specs, to no avail.
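
To illustrate that last point, the NRFD/DAV/NDAC handshake really is small enough to model in a few lines. This is a hypothetical software sketch, with the wired-OR, active-low lines reduced to `all()` checks over the listeners:

```python
# Toy model of the GPIB three-wire handshake (DAV, NRFD, NDAC).
# On the real bus these lines are open-collector wired-OR: a line releases
# only when EVERY listener releases it, which is what all() models here.
class Listener:
    def __init__(self):
        self.ready = True           # releases NRFD when ready for data
        self.accepted = False       # releases NDAC once the byte is captured
        self.received = []

    def capture(self, byte):
        self.received.append(byte)
        self.accepted = True

def send_byte(listeners, byte):
    # NRFD goes high only when every listener is ready for data.
    while not all(l.ready for l in listeners):
        pass
    # Talker puts the byte on DIO1-8 and asserts DAV (data valid);
    # each listener reasserts NRFD and captures the byte.
    for l in listeners:
        l.ready = False
        l.capture(byte)
    # NDAC releases only once ALL listeners have accepted the byte.
    assert all(l.accepted for l in listeners)
    # Talker releases DAV; listeners rearm for the next byte.
    for l in listeners:
        l.accepted = False
        l.ready = True

# One talker addressing two listeners at once, as GPIB permits:
group = [Listener(), Listener()]
for b in b"PLOT":
    send_byte(group, b)
print([bytes(l.received) for l in group])   # [b'PLOT', b'PLOT']
```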

I will admit that SCSI is pretty appealing, when you consider it's still in production and GPIB isn't. :-) The way I see things, though, with the cheap availability of microcontrollers and programmable logic, this isn't a limiting factor for me anymore.

Good suggestion though -- I hadn't thought of SCSI when I derived the initial list.

Fachat wrote:
Quote:
Do you actually still get any devices like parallel SCSI harddisks for decent prices?


You're missing the point. Hard drives aren't necessarily the target for this -- rather, any device that needs high throughput. My particular use-case is a graphical (versus character-oriented) terminal where I desire (for better or worse) 30fps or faster update rates at full resolution.


PostPosted: Sat Aug 14, 2010 7:53 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
My beef with USB is that its simplicity is only skin-deep. The protocol requires constant host intervention to maintain, and at least for 12Mbps devices and slower, you cannot attain faster than 1ms turn-around time. Faster than that, you're looking at 125us turn-around times, period. Hence, back-to-back I/O transactions are discouraged.

And, the host controllers I've researched so far are insanely hard to code for. You'll have greater success in shorter time getting Ethernet working.


PostPosted: Sat Aug 14, 2010 8:04 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8505
Location: Midwestern USA
kc5tja wrote:
* ...I just spent about four hours trying to find openly available SCSI(-1) timing diagrams, let alone specs, to no avail.

Gee, I have all of that here in books, one of which was published by Seagate around 1990. I also have a manual for an Archive QIC-150 tape drive which documents SASI in surprising detail. :shock:

The real challenge with bit-banging SCSI, whether via a 65C21-style arrangement or within a CPLD or FPGA, is staying within the bus timing constraints. The aforementioned NCR 53C90A and AMD 53C94 ASICs use a clock input signal (not necessarily Ø2) to internally derive the bus timing. While the timing is not critical, it needs to be reasonably consistent from one transaction to the next.

Otherwise, the rest of it is essentially changing line levels as required, also possible with a PLD of some kind. Of course, a driver needs to be written to handle each of the bus states.

I suppose GPIB could be implemented as well. GPIB's 31-device limit versus SCSI's 7 or 15 isn't really a plus, as I don't think a practical 65xx bus would support more than 4-6 devices in toto.



PostPosted: Sat Aug 14, 2010 8:37 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8505
Location: Midwestern USA
Almost forgot! If a SCSI or GPIB bus is implemented then it is implied that peripherals attached to the bus are intelligent. I can see that being a practical arrangement with a host adapter driving mass storage or a network. However, a simple PIO device such as an ACIA or parallel port interface (e.g., to drive a Centronics printer) wouldn't need anything more than basic bit-banging, so the intelligence angle is probably an unwanted complication.



PostPosted: Sat Aug 14, 2010 9:00 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
BigDumbDinosaur wrote:
Almost forgot! If a SCSI or GPIB bus is implemented then it is implied that peripherals attached to the bus are intelligent. I can see that being a practical arrangement with a host adapter driving mass storage or a network.


Correct; and, this is as it should be, I think. You don't generally DMA data to a completely dumb peripheral -- the sole exception I can think of is a DAC.

My graphical terminal concept employs another 65816 CPU implementing an adaptation of Remote Frame Buffer protocol (the protocol behind VNC), granting the main computer a single (baseline, at least) interface for graphical output and keyboard/mouse input. RFB mandates the use of X11 keysyms for all keyboard events and mouse position reports are pretty well cooked. Therefore, it isolates application software from having to worry about USB vs PS/2 vs whatever device codes for these peripherals.

Both SCSI and GPIB would work well for this application.

Actually, so would ATM (Asynchronous Transfer Mode) over a parallel bus -- RFB was originally an ATM application. :)
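
For a sense of how lightweight RFB is on the wire, here's a sketch of the server-side FramebufferUpdate message (raw encoding, as specified in RFC 6143), assuming an 8-bit-per-pixel format for brevity:

```python
# Sketch of an RFB FramebufferUpdate message (server-to-client type 0).
# Per RFC 6143: u8 type, u8 padding, u16 rectangle count, then for each
# rectangle u16 x, y, w, h and an s32 encoding type (0 = Raw), all
# big-endian, followed by the pixel data. 1 byte/pixel assumed here.
import struct

RAW_ENCODING = 0

def framebuffer_update(rects):
    """rects: list of (x, y, w, h, pixel_bytes) tuples."""
    msg = struct.pack(">BxH", 0, len(rects))     # type 0, padding, count
    for x, y, w, h, pixels in rects:
        assert len(pixels) == w * h              # raw 8bpp pixel payload
        msg += struct.pack(">HHHHi", x, y, w, h, RAW_ENCODING)
        msg += pixels
    return msg

# A 4x2 patch of changed pixels at (10, 20):
update = framebuffer_update([(10, 20, 4, 2, bytes(range(8)))])
print(len(update))   # 4-byte header + 12-byte rect header + 8 pixels = 24
```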


PostPosted: Sun Aug 15, 2010 3:00 am 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
Since I can't think of a good way to make a presentation and wrap it up with a nice conclusion or proposal, I'll just ramble.

I think of three different kinds of buses:
  • a local bus that does not leave the SBC, and thus has no connector specified. This is the typical use for I²C, SPI, Microwire, and similar, although they are serial. Connection lengths are typically no more than a few inches.
  • a bus that goes off the SBC or motherboard but does not normally leave the card cage or case. These are normally parallel. VME, STD, ISA, etc. qualify.
  • a bus to interface to external equipment, usually nearby as separate pieces of instrumentation on a workbench or mounted in a rack

What we did with 65SIB is to take the first one above and make it external like the last one above. I don't have any exposure or awareness of an actual bus that would be used to interface equipment much farther than across a small room. I don't envision that kind of connection as being a bus at all, although it could be, probably at a high price in speed reduction.

Then there's the Hewlett-Packard Interface Loop, HPIL, which was way ahead of its time; if the upper management at HP had been wiser (I'm trying to be nice-- can you tell?), they would have used advancing technology to make it a high-performance standard instead of canning it along with other products aimed at technical professionals. It's basically a serial implementation of HPIB, or IEEE-488, with some improvements that come from the fact that everything is connected in a loop: fan-out is never an issue, because each device's output goes only to the next device's input; every message sent gets all the way around the loop, so the sender receives its own message back and can make sure it did not get corrupted; and there's auto-addressing. The idea could even be carried out with fiber optics, allowing dazzling-fast, miles-long interfaces. It has tons of attractions, but it requires that every device have a lot of intelligence.
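
The loop topology described above is easy to model: each device forwards frames to its neighbour, and the controller verifies a transfer by checking that its own frame comes back around intact. A toy sketch, with devices reduced to plain callables (purely illustrative, not real HPIL framing):

```python
# Toy model of a ring interface like HPIL: frames travel device-to-device
# around the loop, and the sender sees its own frame return for checking.
def send_around_loop(devices, frame):
    """Pass `frame` around the ring; each device may act on it and forward."""
    for dev in devices:
        frame = dev(frame)          # a real device could consume/modify here
    return frame                    # what arrives back at the controller

def make_printer(log):
    """A listener that keeps a copy and forwards the frame unchanged."""
    def printer(frame):
        log.append(frame)
        return frame
    return printer

log = []
loop = [make_printer(log), make_printer(log)]
sent = b"HELLO"
echoed = send_around_loop(loop, sent)
assert echoed == sent               # loop integrity check: frame survived
print(log)                          # [b'HELLO', b'HELLO']
```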

One of my many beefs with USB is that it's not a bus at all. Without external hubs, one port is good for only one device. Another one of my beefs is that a hand-held battery-powered computer cannot ever be a controller, due to power-supply limitations. At least the cable is small, but so is HPIL's.

To back up a bit, how many outside things need to be so fast that the bus needs to be able to handle streaming-video speeds? Are there really enough to need a bus? Even our son's 1U rack-mount server with two 2TB serial ATA hard discs has only the two, plus 1Gbps Ethernet connections with RJ-45 jacks, all coming from the motherboard. These are not bus connections. 65SIB is fast enough for CD-quality stereo streaming audio though.

I like the Cat-5 cable and RJ-45 connector combination, but we just couldn't find any way to meet all the goals for 65SIB with it. These cables and connectors are way cheaper than the ones for HPIB / GPIB / IEEE-488, and easier to mount.

For most of us the time required to build something is an even greater limiting factor than money; so if it's not simple, it won't get built. I²C is about as simple as they come, unless you want to get down to 1-Wire which eliminates a single additional wire (whoopdeedoo) at the price of being much, much slower and imposing timing requirements that aren't there in I²C, let alone SPI. There will never be a do-everything bus. My workbench computer has an I²C port, a 65SIB port, and others; and for those things you can't plug into one of the ports, you can access a lot of the raw I/O at the board-edge connector. I'm using that board-edge connector less and less though, as experience has grouped projects' connections into the various other more-organized ports.

Running a 6502's own bus off the SBC brings a terrible performance hit. I don't even want to say, "I'll slow the clock down only for off-board transfers," because I use the VIAs' timers counting phase-2 pulses to keep time, and if the phase-2 rate is not constant, the accuracy is lost. I guess wait states are usually ok if the VIAs are oblivious to other things using wait states, as long as they get a consistent phase-2 rate.


PostPosted: Sun Aug 15, 2010 4:37 am 
Joined: Mon Mar 08, 2010 1:01 am
Posts: 142
kc5tja wrote:
For example, suppose you want to build a computer with a 640x480 16-color display operating at VGA frequencies. You're going to need a 25.2MHz dot clock.


Is that not "overkill" for your bus? Just consider that you don't need to render or refresh the screen with the bus data, just the data on the other end. Your CPU in the computer you're posting this from doesn't transfer all the data from it to the Northbridge and then to the graphics card. It simply transfers the required "new" information, and allows the Graphics Processing Unit to keep refreshing the screen continuously.

So for example, if you want to display an image, all you have to do is send it once; then the graphics chip keeps it on the screen. If you want to alternate images, to save bus bandwidth and processing time, you can send the multiple images up front and have your GPU commanded to change the images at "X" rates.

My SB"C" will use a similar method to do graphics as I described above. Commands are sent to the graphics chip and it deals with them, without any processor intervention other than passing the original data, of course.

Dimitri


PostPosted: Sun Aug 15, 2010 10:09 am 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10985
Location: England
How about this: if the bandwidth is adequate, use UDP packets over Ethernet over CAT5 cable. (Or even raw Ethernet frames?)

For the Ethernet interface, note that Daryl has had success with interfacing the ENC28J60 onto his SBC-3.

I've had a good experience using the CLUNC tool which emulates a serial connection over UDP, to configure a network storage device. One nice thing is that the very same Ethernet connection can then be used to tftp-boot the device and then to carry application data over the usual internet protocols.

UDP itself is stateless, so you might get away without a complex stack. (Over a point-to-point cable, or an ordinary local Ethernet switch, you could perhaps assume UDP will be reliable - I know it isn't defined to be reliable, but implementing acknowledgements, timeouts and retries is obviously more complicated. You might want to use a full TCP/IP stack if you were tempted to do that.)
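
As a sketch of the acknowledgement/retry layer alluded to above, here's a minimal stop-and-wait scheme over UDP on loopback. The 2-byte sequence-number packet format is an arbitrary invention for illustration, not any standard:

```python
# Minimal stop-and-wait reliability over UDP: number each datagram,
# resend until the peer echoes the sequence number back as an ack.
import socket
import struct
import threading

def reliable_send(sock, dest, seq, payload, retries=5, timeout=0.2):
    """Send one numbered datagram, resending until the seq is acked."""
    packet = struct.pack(">H", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(16)
            if struct.unpack(">H", ack[:2])[0] == seq:
                return True          # peer acknowledged this sequence number
        except socket.timeout:
            continue                 # lost packet or lost ack: resend
    return False

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))           # any free loopback port

def echo_ack():
    """Toy peer: ack one datagram by echoing its 2-byte sequence number."""
    data, addr = srv.recvfrom(2048)
    srv.sendto(data[:2], addr)

threading.Thread(target=echo_ack, daemon=True).start()
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = reliable_send(client, srv.getsockname(), 1, b"framebuffer chunk")
client.close()
srv.close()
print(ok)                            # True on a healthy loopback
```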

Ethernet devices already have a globally unique MAC address, and Ethernet supports broadcast if you want to invent your own addressing at a higher level. Or use IP addresses (fixed ones would be easiest but DHCP is defined if that makes sense.)


PostPosted: Sun Aug 15, 2010 3:55 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
Dimitri wrote:
Is that not "overkill" for your bus?


Are you confusing dot-clock (the clock used on the video device's bus) with the communications clocking? I quoted 25.2MHz for the dot-clock shift register, but only 12.6MBps data rate worst-case, and 6.3MBps best-case (requires double-buffering).

Numerous arcade games work by redrawing the entire screen 30 times a second. And, of course, video players work by doing the same (assuming they're in full-screen mode).

But, let's suppose you do optimize the display of animated objects. You also have to read the state of the framebuffer to do this. This means, under even the most ideal conditions, you can read and modify no more than half of the screen's pixels per frame.

Quote:
Your CPU in the computer you're posting this from doesn't transfer all the data from it to the Northbridge and then to the graphics card. It simply transfers the required "new" information, and allows the Graphics Processing Unit to keep refreshing the screen continuously.


I'm well aware of this, and if you read my description, this is precisely what I proposed doing. The Remote Frame Buffer protocol works by sending video deltas as needed, not continuous raw frames. But, when animating, "when needed" means at least 30 times per second.

A 16-color 640x480 display consumes about 150KB of memory space. At 30 frames per second, you'll need at least 4.5MBps bus transfer rate. Additionally, if you are single-buffering, you'll want to be synchronized with the electron beam in a CRT (otherwise, you'll run into tearing problems), which is why I advocated higher bandwidths.

This is a clear-cut case where you may not often need the bandwidth, but when you do, you really need it.

Quote:
you can send the multiple images, and have your GPU commanded to change the images at "X" rates.


Change the image to what though? At 150KB per screen, you can only pack so many frame buffers into the video card's memory. Eventually, the CPU will need to load frames into the card, and it must do so at least at frame rate.

Remember back in the mid-80s and early-90s when the MPC standard came out? The reason you had 160x120 animations on screen sizes of 800x600 or more is precisely that the PC couldn't push pixels fast enough into the video card. Even if you relied on the video card to page-flip, you couldn't pull off full-screen, smooth animation. The buses at the time just couldn't deal with the load.

Today, CPU buses are so much faster than video refresh rates that it's actually easier to just redraw the screen in its entirety, from a blank frame buffer, 30 to 60 frames per second, than it is to optimize your animation.

Now, for me, I have no false pretenses that I'll pull off 60fps animation with a 65816. It's doable (the Amiga did it with a 7MHz 68000) with some support hardware, but not worth my time. But, if I ever do build my Kestrel-2, I would like a pleasant GUI, and watching windows flip onto the screen like sheets of toilet paper falling from a roll just annoys the ever-loving crap out of me. It really does -- it drives me so batty that I have to leave the workstation. I expect that sort of behavior from GEOS on a Commodore 64, but not on something 12MHz or faster.

So, to conclude, I really do desire having a bus with enough bandwidth to support full-screen animation. From the point of view of usability, it really does make a difference.


PostPosted: Sun Aug 15, 2010 4:05 pm 
Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
GARTHWILSON wrote:
One of my many beefs with USB is that it's not a bus at all. Without external hubs, one port is good for only one device.


Actually, this isn't true. You are allowed to hang multiple devices off the same D+/D- pair and the bus will work just fine. The original use-case for USB was, in fact, to daisy-chain peripherals.

Nobody does this because it's not as convenient as having hubs, since many USB devices draw their power off the bus. Additionally, supporting hot-swapping is made more difficult if you daisy-chain.

Hubs are, however, by definition, buses (cf. Ethernet hubs versus switches). So, while hubs do maintain some characteristics of switches (particularly per-port maintenance of hot-swap status), everything else is distributed through the network verbatim.

Quote:
Another one of my beefs is that a hand-held battery-powered computer cannot ever be a controller, due to power-supply limitations. At least the cable is small, but so is HPIL's.


HPIL doesn't run at 12Mbps though, and it certainly doesn't run at 480Mbps. The faster the bus, the more power you'll draw.

Still, Firewire demonstrated multi-bus-master capability contemporaneously with the introduction of USB 1.0. Unlike USB, Firewire is a real network protocol.

