6502.org Forum

 Post subject:
PostPosted: Wed Mar 05, 2003 11:09 pm 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
mdpenny wrote:
There's always _one_ way to do it - anyone know of/remember the Microtan?


No, I hadn't heard of it before; I only found out about it in the past few months. Even so, it brings nothing of particular interest to the table.

Quote:
The basic module could be designed as a potential SBC - a single 160-by-100mm Eurocard-sized PCB, with 65(C)02 CPU, clock & reset circuitry, along with space/sockets for (say) 8KB EPROM, 8KB SRAM, 6522 and 6850, along with connectors - some sort of connector for the 6522 I/O, and a 9 or 25 connector for a serial port.

These latter components could all be optional; instead of the onboard EPROM/SRAM/VIA/ACIA, set a link/jumper differently, and this onboard stuff is disabled.


Quite effectively wasting a large amount of the user's money in the process. No thanks. If I were to pay $250 for a new computer product, I'd want all of my investment to go towards the product as a whole. If I had a jumper to disable all the internal hardware on the PC, then I'd have just spent $225 on a paperweight, while the remaining CPU and ancillary circuitry is used as a glorified bus master arbiter for the expansion bus.

No thanks.

Quote:
...with all the 6502 lines going via an (optional) Eurocard connector. The board could then be plugged into a suitable expansion cage, to which could be added a "support" board (address decoding, some I/O, OS/boot EPROM), RAM board, ROM board (for BASIC, C, FORTH, ...), I/O board (serial/parallel/FDD/IDE or IrDA/USB/Flash ROM).


What's better is if we do this from the start. Indeed, the original spec for the 65-series PC was to use a consistent back-plane bus. This way, people only pay for what they want.

However, I also grew up with the Commodore 64 and Amiga 500. I've seen people who have used all-in-one machines, and people who have used the piecemeal approach that current PCs offer. All in all, in my experience (and user-interface research by Jef Raskin appears to support this), the all-in-one units are at once easier for people to use and more cost-effective. They're easier because far less can go wrong when you add peripheral devices: 99% of what users will need is integrated into the unit -- no configuration necessary. Even if an added device fails miserably, the user can usually still boot the system to a level where diagnosis and repair can happen. In addition, the all-in-one solutions almost universally have some means of auto-configuration (e.g., MacOS had it, AmigaOS had an even better one, and the IBM PS/2 with Microchannel was somewhere in between). They're more cost-effective because the hardware that ships stock with the machine is used right away.

Consider the Commodore Amiga series of computers. That was one gorgeous machine, whether in all-in-one or expansion-backplane configurations. The video and audio hardware were adequate for the overwhelming majority of its market segments. Yes, towards the mid-90s, the Paula chip needed 16-bit audio capability. But the video was solid, and the AGA chipset could easily have lasted another 10 years had Commodore remained in business. Want additional graphics performance for high-powered 3-D games? Don't change the whole dang video architecture; instead, add coprocessors to the existing bus. In fact, before Commodore went under, the Amiga was starting to increase in sales big-time here in the States, as it was finally viewed as a REAL business-class machine. Given the trends of the time, in another year or two it would have matched or even exceeded Macintosh sales, which would have royally embarrassed Apple.

The point is, the Amiga shipped with an operating system that used its included hardware to its fullest potential. The hardware and software were co-developed, in fact -- designed as a unit. The money you spent on the machine went towards a complete system. As a result, the usability of the Amiga was head and shoulders above that of a Macintosh or PC with similar capabilities. In the Mac's case, the OS wasn't (yet) optimized for color or real-time multimedia; this is an especially potent consideration given that Macs back then simply had no idea what multitasking was about. The PC's case was far more abysmal -- quite often things just plain wouldn't work at all, and when they did, they made a Mac look like a Ferrari in comparison.

Quote:
Also, if someone wants a 65(C)816 CPU, such a board could be designed


The price/performance ratio of the 65C02 versus the 65C816 is such that you'd be crazy not to go with the 65816. For only a dollar more in relatively small quantities, you get double the performance, ALL of the backward compatibility, and 16MB of addressable RAM. You even get support for hardware memory management with the aid of external hardware, which you simply cannot get with the 6502. Note that by hardware memory management, I'm talking about paging and segmentation sophisticated enough to support transparent virtual memory, multiple protection domains, etc.
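For the curious, the '816's 16MB comes straight from its 24-bit addressing: an 8-bit bank register concatenated with a 16-bit offset. A quick sketch of the arithmetic (Python, purely illustrative):

```python
def addr24(bank: int, offset: int) -> int:
    """Form a 65816-style 24-bit address from an 8-bit bank
    and a 16-bit offset: bank * 65536 + offset."""
    assert 0 <= bank <= 0xFF and 0 <= offset <= 0xFFFF
    return (bank << 16) | offset

# The full address space spans banks $00-$FF:
top = addr24(0xFF, 0xFFFF)          # highest addressable byte
print(hex(top))                     # 0xffffff
print((top + 1) // (1024 * 1024))   # 16 (MB)
```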

Quote:
As long as there's some sort of agreed standards (memory maps, ROM-content-format, Eurocard connector pinouts, etc.), different boards could be designed for different purposes and uploaded to 6502.org, leaving people to choose which bits they actually want.

Any good?


I think your intentions are fair and just, and I thank you for them. But I think it's important that we re-evaluate what really is important for producing a product of this type. For example, in a recent poll (doggone it, I can't locate the reference anymore :( ), people were asked whether they wanted things to be user-configurable or not. The overwhelming majority voted YES, I do want user configurability. However, the study went further and tested productivity: all those people who wanted their software or hardware to be user-configurable spent OVER 60% of their time configuring the software and hardware instead of getting useful work/play done.

This kind of research suggests that too much configurability is a liability and is antithetical to progress. Witness the utter devastation of the home computer market by the IBM PC-compatibles. Witness, at the same time, how frustration with computers has hit an all-time high and software quality an all-time low. Also notice that operating system research for desktop machines has not only halted but regressed -- Linux is a Unix-class OS, and Unix is one of the oldest operating systems in existence. Windows NT, and by extension 2000 and XP, derive from VMS, which is similarly old. Both of these operating systems use technologies that date from before CP/M was first introduced, and CP/M is what DOS itself was based on. This literally makes DOS one of the most modern operating systems you can use on the PC. (Actually, the most modern OS for the PC today is the QNX Real Time Platform.) These events are not coincidences.

(I'll also be the first to admit that AmigaOS's kernel is itself based loosely on VMS, though it was original in its own right. Its DOS implementation was HORRID -- using BCPL to implement it was a huge mistake, and BCPL dates to a time even before VMS OR Unix. And it showed. While the command line of AmigaDOS was nice, programming for it was, to put it politely, definitely not a pleasant experience.)


Top
 Profile  
Reply with quote  
 Post subject:
PostPosted: Thu Mar 06, 2003 12:07 am 
Offline
User avatar

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
Quote:
The price/performance ratio of the 65C02 versus the 65C816 is such that you'd be crazy not to go with the 65816. For only a dollar more in relatively small quantities, you get double the performance with ALL of the backward compatibility, 16MB of addressable RAM.

Using a few jumper selections, you can make the board accommodate either processor.  In writing my optimized '816 Forth kernel, I found the '816 to give two to three times the performance of the '02 at a given clock speed.  As with everything else though, your actual results will depend on what you're doing with it.

The '816 is not quite 100% backward compatible, because it doesn't have the BBS, BBR, SMB, and RMB instructions.  I expect few on this forum have used those instructions though, since not all of the earlier 65c02 versions had them (and none of the NMOS ones did).
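For anyone who did use them: BBS/BBR branch on a single bit, and SMB/RMB set or clear a single bit in a zero-page byte. On the '816 the latter two become an ordinary read-modify-write with ORA/AND. A toy sketch of the equivalent bit math (Python, illustrative only):

```python
def smb(value: int, bit: int) -> int:
    """Set Memory Bit: what SMBn does on a Rockwell 65C02.
    On the '816 you'd do LDA addr / ORA #mask / STA addr."""
    return (value | (1 << bit)) & 0xFF

def rmb(value: int, bit: int) -> int:
    """Reset Memory Bit: LDA addr / AND #~mask / STA addr."""
    return value & ~(1 << bit) & 0xFF

print(bin(smb(0b00000001, 7)))   # 0b10000001
print(bin(rmb(0b11111111, 0)))   # 0b11111110
```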


Quote:
This literally makes DOS one of the most modern operating systems you can use for the PC.

I still do virtually all my work in DOS because it's more efficient and trouble-free.  (Even in DOS though, it's still all point-and-click, and I seldom see the DOS prompt.)  I only use Windoze for E-mail and web access.  Since my interactive 65xx software development on the target computer depends on the host PC being able to output plain text as if to a printer, I wonder if others will start having difficulty with that, now that nothing that runs under Windoze seems to remember how to do something that simple -- output one word, one line, or one paragraph at a time -- without resorting to graphics mode and throwing in a bunch of unwanted escape sequences.

Garth

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject:
PostPosted: Thu Mar 06, 2003 1:01 am 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
GARTHWILSON wrote:
I still do virtually all my work in DOS because it's more efficient and trouble-free. (Even in DOS though, it's still all point-and-click, and I seldom see the DOS prompt.)


I'm willing to bet that it's trouble-free because there's only one way to do something in DOS. With Windows or MacOS, you're immediately presented with a user interface that offers multiple methods of input: menus, command-key shortcuts, buttons you click on in the toolbar, etc. I've always wondered why toolbars became all the rage, considering menus are usually almost as quick. Yes, toolbars can be faster in some cases. For those cases, why bother providing the equivalent menus? For that matter, why not just provide textual command input to replace all of these? The HP-Apollo series of computers demonstrated very clearly that pseudo-command-line input in a GUI environment (via DomainOS) is bar none the single most productive method used to date. More modern research supports this; Jef Raskin's THE project uses the same concept, which he also used back in the early '80s in his SwyftCard and Canon Cat products.

Quote:
I only use Windoze for E-mail and web access. Since my interactive 65xx software development on the target computer depends on the host PC being able to output plain text as if to a printer, I wonder if others will start having difficulty with that now that it seems nothing that runs under Windoze remembers anymore how to do simply that, one word, one line, or one paragraph at a time, without resorting to graphics mode and throwing in a bunch of unwanted escape sequences.


I think you're confusing the communications protocol with the screen handler's implementation. AmigaOS demonstrates that full VT-100 and ANSI compatibility can be had in a graphical environment with very little overhead. In fact, the Amiga's command-line environment is built on top of the console.device library, which is the core of its VT-100 and ANSI emulation.

It's important to remember that traditional text-mode devices operate on the principle of a stream -- you send a stream of bytes to a device, and it prints them. Somehow. You don't really know how, when you think about it, but it does. Maybe it doesn't even print them. Maybe it transforms them in some manner, or saves them to a file. It doesn't matter -- your program is fulfilling its output contract.
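That output contract is easy to demonstrate: the same routine can write to the console, to a memory buffer, or to a file, and it neither knows nor cares which. A minimal sketch (Python; the `report` function is just a made-up example):

```python
import io
import sys

def report(out) -> None:
    """Fulfills its output contract against ANY stream --
    the program has no idea what the sink does with the bytes."""
    out.write("READY.\n")

# Same code, different sinks:
buf = io.StringIO()
report(buf)            # captured in memory instead of printed
report(sys.stdout)     # printed to the console
print(repr(buf.getvalue()))
```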

A GUI is the final step in this. A one-dimensional stream of bytes cannot fully encompass the requirements for things like boldfaced text, selection of different fonts or sizes, fine-grained placement of text, etc. Escape codes can of course be defined to cover these, but that only pushes the problem around. The point is, somehow, the application needs to establish the visual setting in which output is going to take place.

Instead, GUIs take an equally applicable world-view, or at least, they're finally starting to. Each item you see on the screen is a layer, or a gadget, or more often, a widget. These layers can be composited on the screen, just as layers of celluloid are composited by an animator in cartoon production. The back of the screen is assumed to be pure white light, which shines through these layers and ultimately produces the display you see. To put an object on the screen, you define a new layer with various attributes. To clear the screen, you remove all the layers.

This is a more object-oriented approach towards managing the screen, and believe it or not, I find my software ends up doing this more often than not even in text mode! It's rather burdensome, in fact, to do it in text mode, because then I have to provide a conversion layer of software which interprets the "display tree" and renders the screen as ANSI character codes. But the result is quite worth it -- unless the application is trivial, the resulting software usually ends up smaller, faster (even at displaying stuff, as the back-end display handler has full semantic information about which parts of the screen have changed, and usually updates only those parts that are actually visible), and more consistent to use.
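The "display tree" idea can be boiled down to a few lines. This is my own toy sketch, not any real windowing API: layers carry a dirty flag, and rendering touches only the layers whose content actually changed.

```python
class Layer:
    def __init__(self, name, text):
        self.name, self.text = name, text
        self.dirty = True            # needs an initial draw

class Screen:
    """Toy display tree: redraw only layers marked dirty."""
    def __init__(self):
        self.layers = []
        self.drawn = []              # record of actual draw work

    def add(self, layer):
        self.layers.append(layer)

    def update(self, name, text):
        for layer in self.layers:
            if layer.name == name and layer.text != text:
                layer.text, layer.dirty = text, True

    def render(self):
        for layer in self.layers:
            if layer.dirty:          # semantic damage tracking
                self.drawn.append((layer.name, layer.text))
                layer.dirty = False

s = Screen()
s.add(Layer("status", "OK"))
s.add(Layer("clock", "12:00"))
s.render()                   # both layers drawn once
s.update("clock", "12:01")
s.render()                   # only the clock is redrawn
print(s.drawn)
```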

Of late, my user environment on my Linux box consists of running X11 with a window manager that forces all windows to the maximum possible size. That is, each window is the full screen size. I change applications either with the mouse or with keyboard commands. If I need to see two apps concurrently, I can define another pane in which to place windows, and thereby control screen layout that way. I've found this handy when I've needed it, which is fairly rarely. I've never had a complaint with this setup; I've been using it for a year and a half now.

This suggests that windows aren't as useful as people make them out to be. My personal experience watching new users is painful: trying to explain what a window title bar is, how to move windows around on the screen, how to resize them, depth arrangement, etc. Jef Raskin, in describing his THE project, even goes so far as to say that windows and applications aren't necessary. What you're working on should occupy the whole screen at all times. The interface is completely modeless (the Oberon System, for example, uses a totally modeless UI). Navigating between parts of a document, or even between whole documents, is handled with a zooming user interface and a very simple, pervasive, easy-to-use search mechanism. It all just makes sense.

So, in short, the GUI doesn't have to be as complicated as you suggest. It can be as simple and orthogonal as a text mode interface, but it necessarily does more than a simple text-mode interface. Therefore, it follows that software must be programmed accordingly to take advantage of its unique features.


 Post subject:
PostPosted: Thu Mar 06, 2003 2:39 am 
Offline
User avatar

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
Quote:
I'm willing to bet you that it's trouble-free because there's only one way to do something in DOS.

The programmer's text editor I use a lot offers hot keys and F keys, mouse menus, and tool bars.  I always keep the tool bars turned off.  It also offers infinite combinations of windowing and tiling, repositioning and resizing.  My PC-board CAD does not offer the tool bars, which is fine with me.  Neither one has the dumb little pictures (icons) that are meaningless without text below them to tell you what they are.

Discussing my endless Windoze frustrations here might be off-topic.  My point for any designers of 65K hardware or OS would be to encourage the objectives I've mentioned earlier, to avoid putting the user in a cage, regardless of how cute or flowery the cage is.  Of course Uncle Bill wants us in cages so he can keep coming back to help himself to our wallets often.  He'll make the cage as cute and flowery as he can (you know--  increase the gee-whiz factor) to lure his prey in.

One of the many cartoons in Leo Brodie's book "Thinking Forth" illustrates two methods of addressing the problem of security.  In the frame representing many programming languages and OSs, a cage keeps a ferocious dog from reaching a man.  The trouble is, it's the man who's in the cage.  In the one representing Forth, it's the dog that's in the cage.  If I were to draw one for Windows, I might put both inside, or put the man inside and make the openings between the bars such that the dog can get in but the man can't get out.

Of course we can learn plenty from previous efforts, whether commercial or not, to pattern a new design around.  Talk of how this or that was done on Amiga, C64, Mac, Apple II, PCs, HPs, or anything else, can be very helpful.  I'm definitely open to hear it, as long as we don't forget where we're going with it.  Even if no one here actually gets a 6502 PC to market, our own projects will still benefit.



 Post subject:
PostPosted: Thu Mar 06, 2003 3:20 am 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
GARTHWILSON wrote:
> The programmer's text editor I use a lot offers hot keys and F keys, mouse menus, and tool bars.


But when you need to do something with the editor, you always do it the same way, every time. That's the point.

Quote:
I always keep the tool bars turned off. It also offers infinite combinations of windowing and tiling, repositioning and resizing.


How often do you actually use this though? By the sounds of your response, not very often. Yeah, it can do it, but is it useful to you? If so, how useful?

I used to be all for windowing interfaces. I currently find them a hindrance of the highest order. Newbies do too; just watch anyone trying to learn how to use MacOS or Windows for the first time.

Quote:
My PC-board CAD does not offer the tool bars, which is fine with me. Neither one has the dumb little pictures (icons) that are meaningless without text below them to tell you what they are.


I'll agree that icons have been violently abused over the years; however, "dumb little pictures" can come in handy when used appropriately, and they don't even need descriptive text underneath them.

Quote:
Discussing my endless Windoze frustrations here might be off-topic. My point for any designers of 65K hardware or OS would be to encourage the objectives I've mentioned earlier, to avoid putting the user in a cage, regardless of how cute or flowery the cage is.


Read Jef Raskin's book, "The Humane Interface." You'll come to realize that the inventor of the MacOS GUI itself is spot-on in his assessment that GUIs today (including MacOS') are not the simple, user-friendly environments they used to be. He acknowledges that the WIMP concept was right in the short term but wrong in the long term, because it does not foster habituation (or worse, fosters the development of bad habits), and programs which offer a whole multitude of ways to accomplish goal X are demonstrably harder for new users to grok. He offers solutions that new UIs should implement, whether textual OR graphical.

To bring this discussion somewhat back on topic, consider the whole idea of user configurability. It sounds really nice to be able to arrange your settings according to your preferences. Screen colors, text fonts, etc. are all candidates for configuration. But that peachy-keen, homey feeling you get from user configuration very quickly disappears as soon as you get a number of technical support phone calls. I've been on the receiving end of technical support calls when I worked for two ISPs. While our software wasn't configurable (it was Dial-Up Networking -- not much you can do about that), the fact that each user had a different desktop arrangement, different color settings, etc. made technical support at least twice as difficult (and thus twice as expensive for us).

"Double click on My Computer, then go to Dial-Up Networking." "What do you mean by My Computer?" "You should have an icon on your desktop that looks like a computer and reads `My Computer' underneath it." "No, I don't have that." "Sir? Are you sure? All Windows installations come with it." "Nope. Don't have it." (It turned out this user was running an alternative shell which didn't use the normal desktop icon system, so nothing was standard from a technical support point of view. This person literally had to bring his computer in before we knew what the heck was going on.)

Even users who DO have the stock Windows 95 desktop would very often rename My Computer, Network Neighborhood, or Dial-Up Networking to something "Cute"(tm) because they thought... well, it was cute. Imagine my frustration when I found out that one of our customers' My Computer was renamed to "Fido's Home" (I'm not kidding), because she had named the computer after her dog. It only took three tech support phone calls from her before she remembered that she had renamed it.

The 65-based PC must, in my opinion, take these things into consideration. One of the overwhelming reasons for the popularity of both the Apple II and the Commodore 64 was their sheer simplicity. Plug it in, turn it on, and go. No fussing about with DIP switches, no fussing about with plug-in cards, no fussing about with anything. People even put up with the abysmal speeds of Commodore's serial peripheral bus because it was so utterly convenient, despite the wide availability of IEEE-488 interfaces for the 64 and cheap mods for the 1541 and 1571 drives.

I'm not saying that you should close off the whole system architecture of the PC, for fear that the user will somehow manage to end the world as we know it. But I am saying that we need more integration than even today's "integrated" PCs offer: integrated, sane video hardware (VGA just doesn't cut it, sorry), sane audio interfaces with reasonable capability (must our audio cards really support studio-grade DSP? We got along fine for decades without it!), sane I/O and auto-configuration systems, etc. These things can all be included lock, stock, and barrel without sacrificing the ability to add peripherals. It also establishes a minimum baseline system, and helps ensure a completely usable system right out of the box.

The PC, to this day, still can't make this claim, since you always have to populate it with some compatible video card (not all cards are compatible with Linux or BSD, for example, and even Windows drivers can get horrifically confused by the wide array of seemingly compatible hardware out there now). Audio compatibility is still a major issue. I have a SoundBlaster Live! in my box, and my roommate has a SoundBlaster Audigy with all the bells and whistles. You'd think we could run similar drivers, but the answer to that is a resounding no. I have a digital camera that uses USB, along with an 802.11b network adapter that also uses USB. I have to have two UHCI drivers installed in my Linux system because one doesn't like to talk with the 802.11b driver. Yet, ironically, the two UHCI drivers seem to live perfectly fine with each other. Go figure.

In short, I'm sick of the headaches of modern PC environments. I spend no less than an hour and a half, in total, per day just managing the computer, not producing useful, productive work. That's anywhere between 12.5% and 33% of my computer-use schedule, depending on the circumstances. 1% I could see; 5% is maybe tolerable; 12.5% is inexcusable. When my network goes down, that figure increases. When I have to reconfigure my audio settings because one program won't work with 16-bit sound and another won't work with 8-bit sound, that figure increases. It's annoying. Damn annoying. Infuriating, in fact.

If this 65-based PC is actually produced, it must ensure that these frustrations don't happen. Otherwise, it's just another non-x86 PC wanna-be. Consider: all the non-x86 PCs out there (even modern Macs) use AT or ATX motherboards and chipsets. There's something wrong with this picture...


 Post subject:
PostPosted: Thu Mar 06, 2003 12:08 pm 
Offline

Joined: Sat Sep 14, 2002 10:05 pm
Posts: 23
Location: France
Hi Daryl,
With what everyone wants to see on your (his/her) "SBC" (that stands for Single (or is it Simple?) Board Computer, right?), you will end up with what we call in French "une usine à gaz" (a gas-works)!!! :lol:
It seems to me that the real power behind such a board would be the "intelligence" put on it, and that everyone should be able to choose the kind of "intelligence" he/she wants.
So my suggestion would be to come up with a bus FIRST (the ASK family's one was not that bad), supported by some kind of universal (and cheap, and easy to find) connectors: the DIN 41612 2*32-pin seems a good candidate (it can be extended to 96 pins, 3*32, if necessary). The obvious choice for the board would then be the 160 by 100 mm so-called EuropaBoard, as protoboards are very easy to find.
I would like to find on that board, beside the 65816, sockets for 28-pin EPROMs (or EEPROMs, or flash ROMs, ...) and sockets for 28-pin SRAMs, with decoding as versatile as possible. A 6551, along with some 6522/6532/6526 chips, is a must too, so any DOS or Windoze terminal could be used. A minimal monitor permitting an RS-232 link, byte and bit manipulation, and file transfer is also mandatory.
With all that, everyone would be able to focus on the "intelligence" he/she wants to bring on board (and start programming...), and think of and build all those marvellous boards he/she would see hooked onto the bus.
Yes, the Microtan is a good example of what could be done: on a clear bus, one can expand forever...
Have fun. I will, anyhow....

René.


 Post subject:
PostPosted: Thu Mar 06, 2003 3:51 pm 
Offline
User avatar

Joined: Fri Aug 30, 2002 9:02 pm
Posts: 1748
Location: Sacramento, CA
Thanks Rene, and to all of you who have responded.

It is clear that we all have our own ideas on what would be the perfect computer.

For me, I would include a 65816 and a 16550 serial UART (or a dual UART) for faster serial speeds, and some kind of simple boot hardware to allow an OS (user-selectable) to be loaded into RAM. Yes, I want the entire memory map to be RAM, to allow the processor to run at full speed after the bootup sequence.

From there, I'd have an expansion bus capable of addressing the full memory map, with I/O decoding and interrupt handling for the I/O hardware.

From there, each user could add what he/she wants in terms of I/O devices and software.

For I/O, I would want to work on video, keyboard, sound, permanent data storage, and standard I/O interfaces such as serial, parallel, USB, Ethernet, IDE, PCMCIA, etc.
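The I/O decoding part of this can be pictured as a simple region lookup. The map below is entirely hypothetical (a small I/O window in bank $00, RAM everywhere else), just to illustrate the idea, not an agreed-upon layout:

```python
# Hypothetical decode for a 65816 board whose memory map is all
# RAM except a small memory-mapped I/O window. Regions are
# checked in order, so the I/O window shadows the RAM range.
REGIONS = [
    (0x00DF00, 0x00DFFF, "I/O"),    # illustrative I/O window
    (0x000000, 0xFFFFFF, "RAM"),    # everything else is RAM
]

def decode(addr: int) -> str:
    for lo, hi, name in REGIONS:
        if lo <= addr <= hi:
            return name
    return "open bus"

print(decode(0x00DF10))   # I/O
print(decode(0x012345))   # RAM
```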

Now, I just have to find the time, money, and more time to make it all come true.......

Daryl


PostPosted: Sat May 10, 2003 4:47 am 
Offline

Joined: Sat May 10, 2003 4:03 am
Posts: 25
Location: Haslett, Michigan, USA
Hi, Johnny-come-lately here,

Wow, and I thought the religious wars would be over whether the machine ought to be CBMish or Appleish!

Here are my two cents worth.

I think I'd prefer to see some sort of basic reference design for a modern "core" 65xx circuit. Along with this could be another reference design for the important basic higher-level I/O section (video, hard drive I/O, USB, Ethernet, what have you).

The SBC crowd could simply implement these on one board; the backplane crowd could come to an agreement on a backplane, then implement the core on one card and the I/O functions as a second card or several cards. The SBC may have to accept some design-efficiency tradeoffs to be 100% compatible with the multi-board computer's minimal configuration, yet be open enough to permit MBCs to add new I/O cards down the road.

The Memory-I/O map ought to be identical for both because we'd want to avoid fancy memory controllers or dynamically-loading OSs as work-arounds for address-space differences.

I feel that primary storage ought to be flash, myself -- preferably SmartMedia, but I could live with CF if I had to. I haven't scoped out the practicality of this, but here are my thoughts:

Boot ROM: just enough smarts to POST, then load and boot an OS kernel from (internal) removable flash written in existing flash formats (basically similar to MS-DOS formatted drives).

Kernel: Standard I/O supporting some small agreed-upon set of I/O, probably most of the reference basic I/O devices. Some way to natively add "driver" logic for additional devices after booting.

CLI: User interface(s) - here the fun begins. Anything you want at this point. Basic interpreter, DOS or Amiga style CLI, full GUI, whatever. Here is where uniqueness can begin. You can have yer BASIC, yer Forth, yer Oberon.

The software work above the Kernel is gonna go slow, and is very different in character from the hardware, boot, and Kernel work. I'll bet we see a tinyForth and a tinyBasic come as second horses to a fancy MLM. Probably a basic MLM in early boot ROMs anyway.
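The Kernel's "some way to natively add driver logic after booting" could be as little as a registration table. A toy sketch of that idea (Python; the device names and the string-transform "driver" are made up for illustration):

```python
class Kernel:
    """Toy kernel: a fixed set of built-in drivers, plus a hook
    to register driver logic for new devices after booting."""
    def __init__(self):
        # built-in drivers for the agreed-upon baseline devices
        self.drivers = {"console": lambda d: d, "flash": lambda d: d}

    def register(self, name, handler):
        """Add driver logic for an additional device post-boot."""
        self.drivers[name] = handler

    def write(self, device, data):
        return self.drivers[device](data)

k = Kernel()
k.register("usb", lambda d: d.upper())   # hypothetical post-boot driver
print(k.write("usb", "hello"))           # HELLO
```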

Using an internal flash drive as a personality module would not only be cool and flexible, these are readily available and inexpensive in some good sizes: 128MB for SmartMedia, larger for CF. The full set of goodies could be booted up off these things w/o any disk access at all. "Builds" could be done using PCs until things get mature.

I might go so far as to suggest that Hard Drives be optional add-ons, not even supported fully in the kernel (though the flash I/O routines could largely be recycled maybe - they're quite similar at an important level if we choose to support MS FAT style drive formats). I think USB and Ethernet are musts, and some people want video but I could live w/o it. I'd want to put the thing on my network and get to it from my PCs. Probably use an HTTP/HTML interface like routers and printers and other appliances do, though fully programmable.

If one CLI (plus whatever) gets popular and alternatives die out, we've lost nothing. Upgrades are cheap and easy, plus you can slap in an old version if need be 10 years from now, then put the 2013 version back in when done.


PostPosted: Sun May 11, 2003 10:52 pm 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
dilettante wrote:
Wow, and I thought the religious wars would be over whether the machine ought to be CBMish or Appleish!


The problem is that the CBM machines are built quite similarly to the Apple machines. It was, in fact, Atari who innovated different bus architectures with the 65xx series, and that is a model to study, if not aspire to, in my opinion. (Note: The Amiga was designed by none other than Jay Miner, the father of too many Atari platforms to list here. Anyone familiar with the Atari 800-series will, IIRC, immediately appreciate the Amiga's bus architecture and I/O chipset capabilities.)

Quote:
The Memory-I/O map ought to be identical for both because we'd want to avoid fancy memory controllers or dynamically-loading OSs as work-arounds for address-space differences.


Dynamic loading isn't hard to do. As usual, the Amiga demonstrates its viability with the use of dynamically loaded libraries, using a statically linked binary execution format, in a single-address-space environment. Even today, the people you mention something like this to will tell you, "That's not possible."

Still, supporting a binary format that takes dynamic linking into account would make things simpler and faster at run-time, due to the lack of run-time "stubs." But, alas, I have to admit, I do still prefer AmigaOS's method of load-time static linking over most DLL implementations. I feel I have greater control.

For that matter, why distinguish between an application and a library at all? The only difference I can think of is that an application has ONE entry point, while a library has multiple. But having only one entry point is merely a degenerate case of having many; hence, there is no concrete difference between the two.
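As a sketch of that idea (all names here are invented for illustration, not taken from any real loader), a loader could treat every module as a table of entry points, with an "application" simply being the degenerate one-entry table:

```c
#include <stddef.h>

/* Hypothetical sketch: a loadable module is just a table of entry
 * points.  A "library" exports several; an "application" is the
 * degenerate case exporting exactly one, so the loader can treat
 * both kinds of module identically. */
typedef int (*entry_t)(int);

struct module {
    const char *name;
    size_t      nentries;
    entry_t     entries[4];
};

/* Example entry points. */
int str_open(int x)  { return x + 1; }
int str_close(int x) { return x - 1; }
int app_main(int x)  { return x * 2; }

/* The loader's only job: resolve entry i and call it. */
int call_entry(const struct module *m, size_t i, int arg)
{
    return i < m->nentries ? m->entries[i](arg) : -1;
}
```

A "library" would then be `{ "string.library", 2, { str_open, str_close } }` and an "application" `{ "editor", 1, { app_main } }` — same structure, same dispatch path.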

Quote:
I feel that the primary storage ought to be flash myself, preferably SmartMedia but I could live with CF if I had to. I haven't scoped out the practicality of this, but here are my thoughts:


SmartMedia isn't going away, as a few here have feared. I found a USB-bus SM drive in Radio Shack a while ago for a mere $20. That's the same basic price point as *internal* floppy disk drives, suggesting an internal SM drive would be even cheaper than its floppy cousin. Also, while SmartMedia cards cost more per unit than floppy disks, their cost per MB of storage is still quite a bit lower.

The 65-series platform will need a USB host controller of some kind. This is the one missing piece. It should support, at a minimum, 1.5Mbps and 12Mbps throughputs (i.e., USB 1.1). Once this is taken care of, a large number of I/O-related problems will be solved.

Quote:
Boot ROM: just enough smarts to POST then load and boot an OS kernel from (internal) removable flash written in existing flash formats (basically similar to MS-DOS formatted drives).


Usability studies seem to suggest that POSTing takes too long for most users. People want instant-on access to their computers. Hence, while I agree that POST functionality should be in the boot ROM, it should not always run. The best way to handle this would be to run POST only if a certain key (combination) on the keyboard is pressed during booting.
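A minimal sketch of that boot policy (the names are invented, and the "previous boot failed" flag is my own addition for safety, not something proposed above):

```c
#include <stdbool.h>

/* Hypothetical boot-ROM policy: run the full POST only when the user
 * holds a designated key at reset, so normal boots feel instant-on.
 * As a safety net, also self-test automatically after a failed boot. */
enum boot_path { BOOT_FAST, BOOT_WITH_POST };

enum boot_path choose_boot_path(bool post_key_held, bool prev_boot_failed)
{
    if (post_key_held || prev_boot_failed)
        return BOOT_WITH_POST;
    return BOOT_FAST;    /* skip POST, load the kernel immediately */
}
```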

For example, it takes my PC over 2 minutes to boot into Linux; just under half that time is spent in BIOS doing RAM test and bus enumeration. It's silly, because PCI bus enumeration, USB bus enumeration, and detection of devices on the various I/O ports should take at most a few milliseconds.

By the way, an approach similar to what you proposed above was used on the Atari ST. It booted from a small, fixed ROM, but its GEM desktop environment was loaded from a ROM-disk. This kept the cost of the ROM system low, as 16-bit wide ROMs that were fast enough for the 68000 were not available back in the mid-80s. The kernel was loaded into a "write-protected" region of RAM, to make the RAM behave exactly like ROM.

The 16-bit ROM width isn't an issue for our purposes, but bus speed certainly is. Therefore, booting the operating system off a ROM-disk makes a lot of sense. A primitive form of memory management, while not strictly necessary, would be highly desirable to prevent rogue software from overwriting the kernel image once loaded into memory.

Quote:
Kernel: Standard I/O supporting some small agreed-upon set of I/O, probably most of the reference basic I/O devices. Some way to natively add "driver" logic for additional devices after booting.


I personally like the concept of an exokernel myself; the kernel provides zero resource policy, but provides only protection of resources. Applications are then free to implement whatever I/O or resource policies they desire. "Operating systems" as we currently know them are implemented as shared libraries (this isn't too far from the truth either; by definition, an OS is a shared base of software which all applications link to at load-time or run-time). The performance of the system is, unlike microkernels, equal or superior to traditional monolithic kernel designs, as applications ultimately have (near) register-level access to resources.

The nice thing about exokernels is they make good on the promise of running software from multiple "operating systems" (which microkernels have, to date, not been able to deliver transparently). Thus, "installing an OS" is as simple as copying the relevant libraries into the appropriate place in the filesystem. The only requirement is that the variety of operating systems share at least one common filesystem (so that they know how to access their respective libraries).

Quote:
Using an internal flash drive as a personality module would not only be cool and flexible, these are readily available and inexpensive in some good sizes: 128MB for SmartMedia, larger for CF. The full set of goodies could be booted up off these things w/o any disk access at all. "Builds" could be done using PCs until things get mature.


I recommend using USB devices, because they're available, they're cheap, and they all use a consistent device driver model. That is, a USB floppy driver will/should work with *any* USB storage device, CF and SM included.

Quote:
I might go so far as to suggest that Hard Drives be optional add-ons, not even supported fully in the kernel (though the flash I/O routines could largely be recycled maybe - they're quite similar at an important level if we choose to support MS FAT style drive formats).


Compact Flash is IDE; there is *zero* difference. If you support CF, you'll inherently also support hard drives. SmartMedia is different, however. But if you use USB storage devices, you have ONE driver for all of these, as they ALL use the SCSI command set.

Also, PLEASE remember that the storage device and the filesystem used on it are two completely different issues. There is nothing in FAT that is inherently hard-drive-specific (in fact, it was designed *specifically* for use on floppy drives); likewise, there is nothing in CompactFlash that mandates the use of FAT. I've used Linux's ext2fs filesystem on CompactFlash and SmartMedia several times, with complete success.

People seem to think that FAT is required because of the illusion that the PC BIOS won't boot with anything else. This is NOT TRUE. The BIOS boots the system by loading only the first 512-byte block on the media and branching to that code. If that code is designed to use, say, Amiga OFS instead of FAT, then it'll boot using OFS.
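The check the BIOS actually performs before branching is tiny — a sketch in C (the 0x55/0xAA signature bytes at offsets 510/511 are part of the real PC convention):

```c
#include <stdint.h>
#include <stdbool.h>

/* The PC BIOS boot check: read the first 512-byte sector into memory
 * and branch to it only if the last two bytes carry the boot
 * signature 0x55, 0xAA.  The filesystem itself is never inspected,
 * which is why the boot code may understand FAT, Amiga OFS, or
 * anything else. */
bool sector_is_bootable(const uint8_t sector[512])
{
    return sector[510] == 0x55 && sector[511] == 0xAA;
}
```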

Quote:
I think USB and Ethernet are musts, and some people want video but I could live w/o it. I'd want to put the thing on my network and get to it from my PCs. Probably use an HTTP/HTML interface like routers and printers and other appliances do, though fully programmable.


The original purpose of the 65-series computer is to serve as a decent replacement for the PC: low power, quiet, and user-friendly with a minimum of configuration muss and fuss. Consequently, if it is to replace the PC, it must have video support.

Trying to use a computer purely through an HTTP interface has failed on more than one attempt. Remote-access tools such as pcAnywhere and the open-source VNC will probably be more to your liking. If not, you might want to research distributed windowing systems, such as Berlin or X11.


 Post subject:
PostPosted: Mon May 12, 2003 2:36 am 
Offline

Joined: Sat May 10, 2003 4:03 am
Posts: 25
Location: Haslett, Michigan, USA
Excellent points, all well taken.

Quote:
The original purpose of the 65-series computer is to serve as a decent replacement for the PC: low power, quiet, and user-friendly with minimum of configuration muss and fuss. Consequently, to be a replacement of the PC, then it must have video support.


Given that premise, I've probably wasted everyone's time by posting in this thread. After seeing the hobby almost from its beginning (started drooling over hardware in magazines and at local club meetings in 1974) I came to the conclusion that the "wars" were over and sold my Amiga 2000 and bought a 486 PC around '93. Apple and Linux notwithstanding, the consumer/office PC market belongs to Wintel for now. I can't see a 65XX or 65XXX machine coming close to competing with even handheld devices in that market. The hardware/software arms race is tough to catch up with.

Feel free to carry on the good fight (glad to see it) but from where I stand life is too short.


 Post subject:
PostPosted: Mon May 12, 2003 5:33 am 
Offline
User avatar

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
kc5tja, or anyone familiar with USB's innards, maybe you could give us kind of a primer or tutorial on the subject.  Obviously several are interested.  Some questions I might have are:

1)  Will we be dependent on here-today-gone-tomorrow chip sets?  Or can it be done with parts that will hang around several times as long as it takes us to complete a project on hobbyist time (sometimes even years)?

2)  If we implement the original (slower) USB spec, will all future USB devices made for faster specs be downward compatible, or will hardware and software have to be updated to accommodate a new extension to the spec in a couple of years?

3)  How much code space and how much hardware might be involved in implementing it?  (I'm thinking about the investment in time, hardware space on a board, amount of memory required, and lastly, dollar cost.)

The editor of one of the trade magazines practically said recently in an editorial that "RS-232 is dead!" and then got bombarded with a ton of E-mail from readers saying this was not true at all, except maybe in portions of the consumer PC market.  One of them, Ivan Baggett, the president of Bagotronix, Inc., gave several reasons.  Here's part of his letter:

      "We also make custom OEM products that are not listed on our website.  These control products are used in industrial automation for high-pressure air compressors, photographic film processing, and traffic control.  In each product, there is at least one RS-232 port for interfacing to other control systems or PCs.  In each case, the added cost and complexity of a USB port would be prohibitive.  USB is not an easy thing to add to an embedded system, for these reasons:

      1) There are very few general-purpose embedded microcontrollers with built-in USB capability.  Therefore, USB is typically added as a separate chip, increasing the cost.

      2) USB protocol adds a lot of code to an embedded device.  In many cases, it would mean using a microcontroller with more memory, further increasing unit costs.  Also development time would be increased.

      3) USB nodes are either hosts or devices.  They are not peers.  This means that USB devices (embedded systems) can only talk to a USB host (PC), but not each other.  Therefore USB is useless as a peer-to-peer link.  There is a new industry effort to overcome this limitation of USB, it's called USB-On-The-Go.  But USBOTG is very complex to code, and requires a USBOTG controller, which is only available from one company so far.  Contrast this to RS-232, which can be used between any two units without regard for host/device issues.

      The consequence of (3) is that any embedded system that is required to interface to both embedded systems and some new PCs would require both RS-232 and USB ports.  This would add cost, complexity, and size.  Given the constraints that embedded designers like me face, it's obvious which port must be eliminated from the embedded device—it's USB.

      RS-232 is trivial for a PC manufacturer to support, but USB is quite a burden for embedded systems to support.  In the cutthroat world of PCs, they will eliminate anything they can to save money.  This is the real motivation for the disappearance of the RS-232 port from PCs.  The embedded computing world does not need to be dictated to by the bean counters of the PC world."

[ end of quote ]

Obviously he's coming at from a perspective other than that of the home- or office-type PC market, but he definitely mentions things that are of interest to me in my work.  That's not to say I think USB should get lost, but rather that both have their place and RS-232 isn't going away.

In any case, I am interested in how USB might be implemented in a 6502 system where 10K of code space for implementing it is not acceptable.  Without knowing, I doubt that it would be that much, but how much would it take?  Is it reasonable for a 6502 hobbyist to tackle?  Perhaps someone could write up such a primer and Mike could post it on this website.  I'm working on some other primers that will go up this year.

Quote:
the consumer/office PC market belongs to Wintel for now.

and it will remain that way if people let themselves be conquered.  Although I don't like it, I use two Pentium machines because my business demands it— the software selection for my work on other machines is generally weak, and in some cases, non-existent.  For example, a programmable logic manufacturer may provide development software only for Windoze.

Quote:
I can't see a 65XX or 65XXX machine coming close to competing with even handheld devices in that market. The hardware/software arms race is tough to catch up with.

Although we would like to see market-viable alternatives, I'm not sure any of us are expecting it to come from a 65-family machine.  It would be nice to be able to replace some of the PCs' functions however with a 65-family machine.  These functions might include programmers' text editors, assemblers, compilers, and even some internet use, but probably would not extend into things like very complex CAD.

Garth

[Edit, many years later:]
Quote:
SmartMedia isn't going away, as a few here have feared.

SmartMedia memory cards are no longer manufactured as of around 2006 (according to Wikipedia).

Quote:
The 65-series platform will need a USB host controller of some kind. This is the one missing piece.

The MAX3421 might do the job.  It's an SPI-interfaced USB peripheral/host controller IC.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject:
PostPosted: Mon May 12, 2003 7:14 am 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
GARTHWILSON wrote:
1) Will we be dependent on here-today-gone-tomorrow chip sets? Or can it be done with parts that will hang around several times as long as it takes us to complete a project on hobbyist time (sometimes even years)?


Currently, all are here-today-gone-tomorrow solutions because not a *single* USB host controller "chip" exists -- they're always implemented as a component on a motherboard chipset. Hence, if we were to support USB, we'd have to make our own (e.g., via an FPGA or some such).

Quote:
2) If we implement the original (slower) USB spec, will all future USB devices made for faster specs be downward compatible, or will hardware and software have to be updated to accommodate a new extension to the spec in a couple of years?


I can't answer this fully, but I know that, at least, USB 2.0 devices are partially supported under USB 1.1. Those devices which *require* the higher throughput of USB 2.0, of course, will not work on a USB 1.1 bus. Those that merely *support* 2.0 but can fall back to 1.1 will work.

Quote:
3) How much code space and how much hardware might be involved in implementing it? (I'm thinking about the investment in time, hardware space on a board, amount of memory required, and lastly, dollar cost.)


Again, I can't really answer this question.

As far as hardware, I'm not at all sure. However, for software, I'm guessing around 6 to 12KB of code would need to be dedicated to bandwidth allocation, and queueing of packets.

Remember that USB is a special-purpose networking protocol, and so it'll have requirements very similar to, say, Ethernet, as far as code space requirements.

But again, the interest in USB is not for embedded device control. It has never been, and it never will be. I would love to see the concept of a serial loop bus be re-introduced, but that'll never happen if we want cheap, off-the-shelf peripherals for this computer. Keyboards, mice, and flash/SmartMedia readers/writers make the investment worthwhile alone, to say nothing of wireless Ethernet adaptors.

Quote:
In each case, the added cost and complexity of a USB port would be prohibitive. USB is not an easy thing to add to an embedded system, for these reasons:

1) There are very few general-purpose embedded microcontrollers with built-in USB capability. Therefore, USB is typically added as a separate chip, increasing the cost.

2) USB protocol adds a lot of code to an embedded device. In many cases, it would mean using a microcontroller with more memory, further increasing unit costs. Also development time would be increased.

3) USB nodes are either hosts or devices. They are not peers. This means that USB devices (embedded systems) can only talk to a USB host (PC), but not each other. Therefore USB is useless as a peer-to-peer link. There is a new industry effort to overcome this limitation of USB, it's called USB-On-The-Go. But USBOTG is very complex to code, and requires a USBOTG controller, which is only available from one company so far. Contrast this to RS-232, which can be used between any two units without regard for host/device issues.


Well, again, USB isn't for embedded control. It was designed first and foremost for general purpose desktop users in mind.

Quote:
RS-232 is trivial for a PC manufacturer to support, but USB is quite a burden for embedded systems to support. In the cutthroat world of PCs, they will eliminate anything they can to save money. This is the real motivation for the disappearance of the RS-232 port from PCs. The embedded computing world does not need to be dictated to by the bean counters of the PC world."


This is a fallacy if I've ever heard one. The USB hardware is substantially more involved than the RS-232 hardware. The cost of an RS-232 port is mere pennies, even cheaper in volume production. USB implementations require bus mastering (since it's DMA driven), limited hardware support for the serial protocol (this can be avoided), and high-speed shift registers for (de)serialization. RS-232 can be implemented with a set of 10-bit shift registers and a down-counter (this is how the Amiga implemented it; it handled 1.536Mbps too!!).
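The 10-bit shift-register scheme just mentioned can be sketched in C. This hypothetical helper (not from any real UART implementation) builds the frame an RS-232 transmitter clocks out, LSB first:

```c
#include <stdint.h>

/* Sketch of the 10-bit frame an RS-232 transmit shift register clocks
 * out for 8N1: one start bit (0), eight data bits LSB first, one stop
 * bit (1).  Bit 0 of the result is the first bit on the wire. */
uint16_t rs232_frame(uint8_t byte)
{
    return (uint16_t)(byte << 1)  /* data bits land in positions 1..8 */
         | (1u << 9);             /* stop bit; start bit 0 is bit 0   */
}
```

Clocking these ten bits out at the baud-rate interval set by the down-counter is the whole transmitter — which is why the hardware cost is pennies.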

The reason for eliminating the RS-232 port is two-fold:

1. Eliminating the RS-232 ports from a PC gives the perception of "progress," and forces the user to have to utilize USB devices. This benefits the USB community by (artificially?) creating an economy of scale.

2. There just plain isn't enough room on many small-sized PCs (e.g., micro-ATX boards) for RS-232 ports. USB ports are smaller, better shielded, and hot-swappable.

What the user gets in exchange is ease of use and configuration, plus hot-swappability (a feature which I use quite often with my digital camera, BTW).

Quote:
Obviously he's coming at from a perspective other than that of the home- or office-type PC market, but he definitely mentions things that are of interest to me in my work. That's not to say I think USB should get lost, but rather that both have their place and RS-232 isn't going away.


However, you'll never, ever find a CompactFlash or SmartMedia drive for RS-232. For USB, however, they're everywhere, and darn cheap to boot.

Quote:
In any case, I am interested in how USB might be implemented in a 6502 system where 10K of code space for implementing it is not acceptable.


Unless you're one slick coder, it can't. Don't bother trying. :) I was going under the assumption that we were planning for the 65816-based PC.

Quote:
Without knowing, I doubt that it would be that much, but how much would it take? Is it reasonable for a 6502 hobbyist to tackle? Perhaps someone could write up such a primer and Mike could post it on this website. I'm working on some other primers that will go up this year.


I think it's quite possible for a hobbyist to tackle. The limiting factor is the host controller required by the protocol. You either need dedicated hardware, or a dedicated CPU handling the bit-banging requirements of USB that can sustain 12Mbps throughputs. That won't happen with a 65xx(x) series CPU.

I don't know anything about low-level USB implementation requirements, however; someday, I'll have to find out, as I'd like to use my digital camera from within my native Forth environment (if/when I ever get that done. :( ).


 Post subject:
PostPosted: Mon May 12, 2003 7:25 am 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
dilettante wrote:
Given that premise, I've probably wasted everyone's time by posting in this thread.


I don't think so. But remember that what we're describing is a hobby machine, although Garth is interested in using it as his workbench machine (he already has a custom-built 6502 box that he's looking to replace with an upgraded architecture).

Quote:
After seeing the hobby almost from its beginning (started drooling over hardware in magazines and at local club meetings in 1974) I came to the conclusion that the "wars" were over and sold my Amiga 2000 and bought a 486 PC around '93. Apple and Linux notwithstanding, the consumer/office PC market belongs to Wintel for now.


I agree that the 65xx(x) architecture won't conquer the desktop again anytime soon, if at all. But my "quiet" (compared to the other computers I have here) PC sounds like the inside of a 747 jet, and emits all sorts of RF noise, making my ham radio hobby less pleasing. Moreover, it draws the equivalent of three 100W lightbulbs in power. A 65x-based PC will not run Windows or Linux, which is fine because it's a hobby machine anyway. But it can run all the important software a user is willing to write or download from someone else, such as e-mail clients, web browsers, and text editors/word processors.

You might not realize it, but the Commodore 64/128 "industry" is alive and well, and is now entirely user-supported.

Quote:
I can't see a 65XX or 65XXX machine coming close to competing with even handheld devices in that market. The hardware/software arms race is tough to catch up with.


A 16MHz 65816 will give a 16MHz DragonBall CPU a serious run for its money. Beyond that, however, the ARM owns the '816, and the MIPS is better still.

Quote:
Feel free to carry on the good fight (glad to see it) but from where I stand life is too short.


Life is too short not to try new things. I, too, grew up with the early 8-bits, moved to the Amiga (although I still have mine, it isn't used often anymore), and now am with the x86-based PC architecture. Moving to the PC early on was nice, but it's since gotten very boring, and very monotonous what with Microsoft running my life and all. I'd like a refreshing change from that.


 Post subject:
PostPosted: Wed May 21, 2003 1:24 pm 
Offline

Joined: Wed May 21, 2003 1:08 pm
Posts: 27
Location: Germany
Hi,

we're a group of 6502 Atari hobbyists working on a USB device for the Atari 8-bit, so I'd like to share some of our USB experience so far.

Today, after six months of development (four on hardware and two on software so far), we're able to send packets between the Atari and a Linux PC over USB, but not yet full communication. The development environment is Forth; the drivers will be translated into optimised assembler once proven in the interactive Forth environment.

GARTHWILSON wrote:
kc5tja, or anyone familiar with USB's innards, maybe you could give us kind of a primer or tutorial on the subject. Obviously several are interested. Some questions I might have are:

1) Will we be dependent on here-today-gone-tomorrow chip sets? Or can it be done with parts that will hang around several times as long as it takes us to complete a project on hobbyist time (sometimes even years)?


Yes, it seems so. We use the National Semiconductor USBN9602 USB controller. This is a node controller, not a host controller, but we're trying to bend it to allow 1-to-1 USB node communication. That remains to be proven as development continues.

I know people who have the source for an FPGA USB controller, if needed.

Quote:
2) If we implement the original (slower) USB spec, will all future USB devices made for faster specs be downward compatible, or will hardware and software have to be updated to accommodate a new extension to the spec in a couple of years?


New USB hardware is free to support only the newer specs or the older ones as well. I think that devices that need throughput will stop supporting USB 1.1 in the future :( , so yes, USB is not a solution forever....

Quote:
3) How much code space and how much hardware might be involved in implementing it? (I'm thinking about the investment in time, hardware space on a board, amount of memory required, and lastly, dollar cost.)


I plan for 2-3KB of 6502 code for a generic driver (mouse, keyboard). For Ethernet or storage you also need to implement higher levels like a FAT filesystem or TCP/IP, which can bloat quickly.

So if you format the storage with a simple filesystem like Atari DOS 2.x, the drivers on the 6502 will be small. All we need then is an FS driver for the PC world, and given all the emulators, writing a new FS driver for, say, Linux is not a big issue. It's easier to get the PC to read the home-computer FS than the other way round.

Hardware costs so far are 50 euros per unit, so it's not really cheap.

And yes, I would like to see a new, affordable 6502/65816 computer!

Carsten Strotmann


 Post subject:
PostPosted: Thu May 22, 2003 5:32 am 
Offline

Joined: Sat Jan 04, 2003 10:03 pm
Posts: 1706
cas wrote:
Today, after six months of development (four on hardware and two on software so far), we're able to send packets between the Atari and a Linux PC over USB, but not yet full communication. The development environment is Forth; the drivers will be translated into optimised assembler once proven in the interactive Forth environment.


Would you be able to post the Forth code for this? I'd be interested in viewing the Forth sources when they're complete.

Quote:
I know people who have the source for FPGA USB Controller if needed.


I was envisioning that FPGAs would be used pretty much for all custom logic in the computer, including video and audio, SmartMedia interface, basic I/O ports, etc. I think it'd be best to also include the USB interface in the FPGA category as well.

The nice thing about this is, if we make the FPGAs load from EEPROM, it'd be possible to hot-fix "hardware bugs."

Quote:
I plan for 2-3 K 6502 Code for a generic driver (Mouse, Keyboard). For Ethernet or Storage you need also to implement higher levels like FAT Filesystem or TCP/IP which can get bloated big.


FAT is utterly trivial to implement. Remember that the original PCs shipped with only 64K to 128K of RAM installed; DOS 1.x had to be loaded into a portion of that RAM. VFAT32 is only marginally more sophisticated. I wouldn't bother with VFAT16 -- it's a waste.
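To illustrate how little machinery the core of FAT needs, here's a sketch using FAT16 conventions: a file is a chain of clusters, and the FAT itself is simply the array of "next" pointers. (This assumes a well-formed chain — no loops or bad-cluster marks — and the function name is invented.)

```c
#include <stdint.h>
#include <stddef.h>

/* Walk a FAT16 cluster chain and count its clusters.  In FAT16 the
 * table entry for cluster c holds the number of the next cluster in
 * the file; values >= 0xFFF8 mark end-of-chain.  Assumes the chain
 * is well-formed (no loops, no 0xFFF7 bad-cluster marks). */
size_t fat16_chain_length(const uint16_t *fat, uint16_t first_cluster)
{
    size_t n = 0;
    uint16_t c = first_cluster;
    while (c < 0xFFF8) {    /* not yet an end-of-chain marker */
        n++;
        c = fat[c];         /* follow the "next" pointer      */
    }
    return n;
}
```

Reading a file is just this loop plus a cluster-to-sector conversion and a read per cluster — which is why DOS 1.x fit it into a fraction of 64K.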

Quote:
So if you format the Storage with a low-level filesystem like ATARI DOS 2.x, the drivers on the 6502 will be small.


I'm willing to bet that FAT is as simple as AtariDOS, and all the while, you gain the benefits of supporting subdirectories. Are there any sites that document AtariDOS (seeing as how Commodore's filesystem is quite well documented on the Internet, there should be comparable docs for AtariDOS as well) that you recommend as a starting point?

