PostPosted: Wed Jan 24, 2018 8:58 pm 

Joined: Tue Oct 25, 2016 8:56 pm
Posts: 362
ARM processors have separate shadow (banked) registers for the stack pointer and the interrupt return address in each of their interrupt/exception modes; FIQ mode additionally shadows some of the general-purpose registers. The Z80 doesn't switch register sets automatically, but it does have a complete duplicate set of registers that can be swapped in with a single instruction. If super-fast interrupt response is what you want, then shadow registers are the way to go: then it's just a matter of flipping a single flip-flop in the CPU.

For the 6502, though, I think it's safe to say that pretty much any IRQ handler that isn't a stub is going to need to use A, so the IRQ/RTI sequence may as well save it too, saving cycles and code space versus having to do it explicitly with PHA/PLA. I seem to remember reading somewhere that the original intent was for the interrupt sequence to save A, but this was dropped because it would have increased the maximum possible instruction length in the decode PLA to 7 cycles instead of 6, and that required an unacceptably large amount of die space. Or some such.
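
For concreteness, a minimal sketch of what that explicit save/restore looks like on a stock 6502 (the handler label and body are hypothetical; P and the return address are already pushed by the interrupt sequence and restored by RTI):

Code:
IRQ:    PHA             ; save A by hand (3 cycles)
        TXA
        PHA             ; save X too, if the handler needs it
        ; ... service the interrupt source here ...
        PLA
        TAX             ; restore X
        PLA             ; restore A
        RTI             ; P and the return address come back automatically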

_________________
Want to design a PCB for your project? I strongly recommend KiCad. It's free, it's multiplatform, and it's easy to learn!
Also, I maintain KiCad libraries of Retro Computing and Arduino components you might find useful.


PostPosted: Wed Jan 24, 2018 9:05 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8541
Location: Southern California
There were stack (Forth) processors 30 years ago that had only a 4-cycle overhead for an interrupt and a 0-cycle return from interrupt; I'm thinking particularly of the Harris RTX2000 and the SC32. Everything is automatically saved because it's all on stacks; saving contexts and environments does not require any action. The 6502's interrupt sequence takes 7 cycles, and it saves the status register. There's no need to save registers the ISR doesn't use, although it seems like the net effect would have been positive if the designers had added 3 cycles to the interrupt sequence to save A, X, and Y as well, and 3 to the RTI to restore them. OTOH, it is my understanding that that would have pushed the entire PLA to require an additional bit in the cycle counter.

It's no secret that the '02 is poorly suited for compilers. However, I don't think it's necessarily valid to say it has to struggle with "generic" function calls because of the argument-passing overhead. I show why in my 6502 stacks treatise. The '816 is of course more efficient in handling arguments on stacks though, especially with its hardware-stack-relative addressing modes.
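
As a rough illustration of those stack-relative modes (my own sketch, not taken from the treatise: it assumes 16-bit accumulator mode, two 16-bit arguments pushed just before an in-bank JSR, and therefore the return address sitting at 1,S and 2,S):

Code:
; 65816, 16-bit accumulator (after REP #$20)
ADD2:   LDA 3,S         ; argument the caller pushed last
        CLC
        ADC 5,S         ; argument the caller pushed first
        RTS             ; 16-bit sum returned in A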

My own feeling is that too much emphasis is placed on portability, at the expense of efficiency and of hardware simplicity. Even in my PCB CAD, I'm using an old DOS-based version of Easy-PC Professional from Number One Systems in England, for several reasons. There was also a very inexpensive non-Pro version available at the same time which was limited to 8 layers and only viewing 2 at a time IIRC, but it was written in assembly and screen re-draws were nearly instant on a 16MHz '286. The Pro version we got was written in C and not nearly as fast (although it was fine when PCs reached several hundred MHz). Initially the Pro version written in C was also very buggy, unlike the non-Pro version written in assembly. I and one other very intensive user here in the States kept finding and documenting the bugs, and the programmers were very responsive in working with us to fix them all. The use of C was undoubtedly motivated by the complexity of the additional capabilities desired; but I would also say that assembly language has enjoyed advances since then too, with improved use of macros. The CAD has never been ported to non-x86 processors, so portability is not an issue.

Quote:
The ND 1/10/100 series of minicomputers had 16 interrupt levels, and a dedicated set of registers for each level.

The benefit is still lost if you have nested interrupts, although nested interrupts within a level are pretty unlikely (if I understand you correctly). The Z80 boasted that it could switch to its alternate register set for interrupts, making it unnecessary to save them; but I'm nesting interrupts all the time, so you'd still have to save them, and the benefit is lost.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed Jan 24, 2018 11:32 pm 

Joined: Tue Jul 24, 2012 2:27 am
Posts: 679
GARTHWILSON wrote:
My own feeling is that too much emphasis is placed on portability, at the expense of efficiency and of hardware simplicity.


I'm in 100% agreement on this point, and it affects every single piece of the ecosystem. Back in the day, portability was too costly to support. To make chips cheaper, ISAs were optimized for a particular use, and software was optimized to that use; code density had to be balanced against memory bandwidth and memory sizes. It's not just the ISA that's compressed: the hardware can follow fixed-function paths in the chip which are far smaller and faster than generic paths. In using such a system, you need to think in terms of how to exploit what it can do straightforwardly, not in terms of generic software abstractions; those won't match, and their results won't fit.

Making the ISA more generic increases code size, increases decoding complexity, increases chip size, and runs slower. That trade has been accepted as chip manufacturing costs (including the RAM to hold more code & data) decreased and clock speeds increased to compensate, but it was a heavy limiting factor in the past, when there were merely thousands of transistors per chip, not billions.

_________________
WFDis Interactive 6502 Disassembler
AcheronVM: A Reconfigurable 16-bit Virtual CPU for the 6502 Microprocessor


PostPosted: Thu Jan 25, 2018 6:30 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8541
Location: Southern California
It's undoubtedly a philosophical point, and I know not everyone will see it the same way. I hold this philosophy in other areas of life too, not just software. My uncle, who's probably worth $100M [*], has the attitude that it's easier for him to make money than to figure out how to economize. I, OTOH, actually want to live a simpler life: not because of what it costs to have airplanes and a half-million-dollar RV and houses and dozens of restored classic cars and motorcycles in a personal museum, but because, regardless of money supply, I want a simpler life.

Similarly, we have reached a level of computer hardware power that forgives—or at least has the illusion of forgiving—all manner of sloppiness and inefficiency in software. I don't deny that I benefit from what I learn in online videos, from my Gimp photo-editing software and other complex software (and from forums like this one, which aren't nearly as complex), and I can't imagine writing something as complex as a browser myself, or even with a small team of programmers, to support these capabilities. It's beyond me. But at the same time I'm often frustrated with inexcusable software problems seen on our PCs with 64-bit, multi-GHz, multi-core processors, as expressed in other topics; I don't like things that are done just because they can be (particularly excessive fanciness), and I yearn to make things simpler and more efficient.


[*] He didn't start out wealthy. He started out rather poor, but worked 16-hour days for many years, kept re-investing and risking everything he had, made a lot of good decisions, and probably had a certain amount of luck too.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Thu Jan 25, 2018 6:19 pm 

Joined: Sat Dec 13, 2003 3:37 pm
Posts: 1004
GARTHWILSON wrote:
It's no secret that the '02 is poorly suited for compilers. However, I don't think it's necessarily valid to say it has to struggle with "generic" function calls because of the argument-passing overhead. I show why in my 6502 stacks treatise. The '816 is of course more efficient in handling arguments on stacks though, especially with its hardware-stack-relative addressing modes.

The question, perhaps better put, is whether, in the domain of running larger programs written in high-level languages, something like the 68K performed better than something like the '816. Having a CPU that runs more efficiently with hand-tuned assembly code than another CPU doesn't bring much benefit when the majority of users don't write code that way, but instead rely on high-level compilers.
Quote:
My own feeling is that too much emphasis is placed on portability, at the expense of efficiency and of hardware simplicity.

Yet portability is what has empowered the industry to get where we are today. The simplest case is the explosion of the microcontroller environment today, the sheer diversity of which is powered by the portability of everything from OSes to stepper-motor drivers to temperature calculations. And that's just portability of source code.

Next you have portability of object code, where the toolchain lets objects created by different tools link together: Fortran with C programs, assembly routines in Pascal, etc. As long as a compatible object file, AND its associated calling convention, is produced, you gain the benefit of reusing other work in your applications. On the Mac, for example, there were extensions to the compilers to distinguish between Pascal and C calling sequences.

The other aspect is portability of the actual application programmers. They write C for Intel, ARM (all 276 different variants, it seems), MIPS, TI, SPARC, PowerPC, ColdFire, etc. When a new processor comes out, the manufacturer is essentially obligated to provide a toolchain, a C-based toolchain, in order to get any traction whatsoever, because with that toolchain the entire world opens up for that processor. Back in the day, a customer asked me which machine they should buy, and I simply told them that as long as it supported the database tool set we used and had a modem, I didn't care. That was back when there was a plethora of Unix-based minicomputer systems being sold (HP, Sun, Sequent, Sequoia, Pyramid, IBM, Data General, SCO Unix, AT&T, NCR, etc.). Folks back then were crushed by the complexity of choice.

Even here, on this forum, we see the diversity of simply which assembler to use, and the struggles folks have moving 6502 code from one toolchain to another. All of that is friction on the process, and friction is never good.

Finally, portability is manifest even in electronics. Folks long ago talked about "Software ICs", using the integrated circuit as the unit of composition. Can you imagine where we would be, both as an industry and as a community, if everyone were wiring processors by hand? Creating their own processors? "Yeah, my 6502 doesn't have decimal mode, I didn't add those gates. And I added a Z register, but here are my trig routines if you'd like to use them!" Look at the diversity of components available to your projects.

The industrial revolution is premised on portability and interchangeability of components and parts. Yeah, it takes less craftsmanship that way, but it sure scales a lot better.


PostPosted: Thu Jan 25, 2018 6:25 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10980
Location: England
(Speaking of the industrial revolution and standardisation, the first national standard screw thread was, apparently, the Whitworth thread (my first decent camera, a Russian one, had a 1/4" Whitworth tripod fitting), and it might not be a well-known fact that the same Joseph Whitworth worked on Babbage's Difference Engine.)


PostPosted: Thu Jan 25, 2018 6:58 pm 

Joined: Mon May 12, 2014 6:18 pm
Posts: 365
Quote:
Similarly, we have reached a level of computer hardware power that forgives—or at least has the illusion of forgiving—all manner of sloppiness and inefficiency in software.
This makes sense economically. I think if you wrote something like Word or Excel from scratch in x86 assembler and took the time to optimize everything by hand, you might end up with something a lot faster. The problem is that no one would be willing to pay five times more for it to compensate for the extra programming time that would require. What people will pay for software determines the trade-off between cost and quality, not lazy programmers or something else nefarious.

OTOH, we are here working on hobby projects where the time of the programmer is free and there is no firm deadline for finishing. In that case nothing gets in the way of pursuing perfection as long as your patience holds out :D


PostPosted: Thu Jan 25, 2018 7:32 pm 

Joined: Tue Jul 24, 2012 2:27 am
Posts: 679
I think there are a few crossing paths here. There's 1970s tech, where there were severe constraints and hand-optimization made things feasible and efficient with what was available at the time. Then there's 2010s tech, where there are no such constraints and we're free to abstract when working on hard problems and let the machine deal with it, avoiding the additional hard problem of hand-optimizing tons of layers and letting us stick to the original problems at hand.

The 6502 *design*, and the software ecosystem running on it, can't be critiqued in the context of 2010s processors. That context didn't exist when the design was defined, and there's still small-scale computational work to be done where that design is beneficial. In the new context there are expectations of what computers should do that we simply didn't have of small computers in the past, because they would have been absurd.

(Plus, hand-optimizing code to the 6502's strengths is a fun challenge, for us weirdos. :P)

_________________
WFDis Interactive 6502 Disassembler
AcheronVM: A Reconfigurable 16-bit Virtual CPU for the 6502 Microprocessor


PostPosted: Thu Jan 25, 2018 10:45 pm 

Joined: Thu Mar 10, 2016 4:33 am
Posts: 181
I'm not sure RISC is a very useful concept for 8-bit CPUs. With a 32-bit bus you can load a register or do something useful in one bus cycle, but with the 6502's 8-bit data bus, many operations must take more than one cycle. That 8-bit bus is very busy, and the 6502 design does well to maximise performance given that constraint. A true 8-bit RISC design would end up being very slow, and code size would be huge.
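
(To make that concrete, even a single 16-bit addition already takes several instructions, each needing its own bus cycles; a minimal sketch, with NUM1, NUM2 and SUM as hypothetical zero-page locations:)

Code:
        CLC
        LDA NUM1        ; low byte
        ADC NUM2
        STA SUM
        LDA NUM1+1      ; high byte
        ADC NUM2+1
        STA SUM+1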

Extending the 6502 design to 32 bits is also problematic: for efficiency the designs would tend towards modern RISC designs, lose the character of the 6502, and start to not make much sense; you might as well just use an ARM by then. The 6502 is characterised by its small 'core' register set, its powerful addressing modes, zero page (and the associated addressing modes), and its fast interrupt response. It was also built for a world where memory could keep up with the CPU.

The 65816 adds decent compiler support to this, although not as complete as the 68000, which was designed with compilers in mind. Mostly it adds stack-relative addressing, which is really helpful for compilers. The two modern things it doesn't support well are systems where the memory speed is decoupled from the CPU, and multitasking. In the x86 world those features started with the 286/386, and by that time the 6502 had been completely displaced by the 68000 series. If the 6502 had continued to evolve, it would probably look a lot like a modern x86 system, and that is probably not that interesting to the people who come here.


PostPosted: Thu Jan 25, 2018 10:51 pm 

Joined: Tue Jan 23, 2018 10:14 pm
Posts: 6
It took me one day to sort out my thoughts on branching: one (additional) MIPS-like loop instruction, with the loop length encoded within a byte, and with the last operation of the loop having no influence on the jump condition.

Another day (on and off) to follow all the links, and I have to say I am not interested in a wide-bus variant. I like bigger regs (like the Z80, the 68000, and the 386SX in real mode, which I used). In the interest of pipelining I would explicitly state that ADC propagates one byte every cycle. For counters there may be a special backwards flag to detect zero in time.

I also read that there were multiplexed address busses at the time, which would save some more pins. Apparently there are multiplexers in a C64. Made with a faster process? I do not think so. And PCIe has even fewer wires.

I am not thinking in today's technology. I just found that systems using a small bus and a large DRAM (not SRAM) were quite successful. Also, in our PC the video RAM was often unused because I was in text mode; shared memory made sense until Win95. In those other threads I found the 65CE02, but that chip is from 1988! What did CBM do all those years? Historically they could have sped up their CPUs early on, before people wrote cycle-perfect loops for them. That 386SX mentioned above ran at 16 MHz with no cache (but with fast-page-mode RAM). So I guess in 1989 affordable RAM could run at 8 MHz (not in fast page mode). That would mean 4 MHz for the CPU and an 80-column display, interlaced at 100 Hz, from shared memory. I guess I am describing the Apple ][gs here.

Single-cycle operation is more feasible in the shared-memory architecture because we do not try to use every memory cycle; that would make our pipeline twice as deep. A deeper pipeline requires branch prediction, adding more pipeline stages. Cache lookup is another pipeline stage, so I avoid caches in these thoughts. Counters can count up in one cycle. For the other computations, two physical accumulators can be used in odd and even cycles; thus only A can appear as both source and destination.

I think my puzzle is compiler-friendly. The µcode is simple and matches the C programming language. The compiler then works its way from inside the {} to the outside. Of course real C, with all variables declared at the top, works against this; I would declare them as needed, like in C++. So the compiler just checks whether a reg is free; if not, push it out, or store it back (member variable). I would just like to get rid of BCD and any sign-extending stuff. For sprite emulation, shifting by multiple positions is needed, but hey: special case, use real hardware.

For multimedia applications I need many ADDs, while for pointers I need ADC. I think the C64 is not a clean design with its MMU. So in 1977 they found out: 32k of RAM is not enough. Why not use more pins and larger regs: instead of registers in an external MMU there would be B_low, B_middle, B_high. If you deal with the KERNAL, load some B with its address. Most of the time you would deal with yourself and not care whether the whole RAM is, say, 128k. Also, please, no additional cycles at page or bank boundaries: counters also have the carry propagate from h to m to l (meaning all higher bits = 1). Loops, with their short distances, should also work this way.

Now an updated opcode count (and I see that the 65CE02 has counter registers and ALU registers).
only four regs available = 8 opcodes:
INC X Y C SP
DEC X Y C SP

only four operations (and four regs), two choices = 32 opcodes:
ADC SBB AND OR   memory[X] Accumulator X Y C -> Accumulator or D or null or memory[Y]
<< >> XOR

I mean, this should be possible:

string move

loop:
[X++] -> D
D -> [Y++]
C--
BNE loop

32 bit Addition
4 -> C
loop:
[X++] -> A
ADC A [Y++]
C--
BNE loop
A -> [Z++] -- no branch prediction planned: the one-byte-loop executes one instruction after the BNE opcode
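
(For comparison, and not part of the proposal: the same byte-copy on a stock 6502 needs explicit zero-page pointers and an index; a minimal sketch, assuming SRC and DST are zero-page pointer pairs and a count of 1 to 255 is held in LEN:)

Code:
        LDY #0
COPY:   LDA (SRC),Y     ; read through zero-page pointer
        STA (DST),Y     ; write through zero-page pointer
        INY
        CPY LEN
        BNE COPY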

Also, custom chips reside in their own bank. If you want to fiddle with the VIC-II, place your variables at ZeroPage:X and load BaseAddress with the address of the VIC. No new opcodes for inp and outp.

Note that there is no loop command, no movsb, movzx, imul, LEA, mov eax,[BX+DI+DI*2], or whatever.

Okay, in the end I am convinced that historically 8-bit technology used about 70% of its potential, and in 1989 computers with a wide bus really did get the advantage. So I used a C16, but more people get nostalgic when seeing (or hearing) C64 stuff. In 1990 I should have written a jump-and-run (like Commander Keen) for the 386 in protected mode, to get 32 bits without prefixes, and for SuperVGA only, to get banks instead of Chain4. KeenHD... So I lost anyway, and I couldn't care less that I should have written this 8-bit stuff back then.


PostPosted: Thu Jan 25, 2018 11:02 pm 

Joined: Tue Jul 24, 2012 2:27 am
Posts: 679
How would you load an immediate value into a register? Would you have to take multiple 1-cycle instructions to just load a few bits at a time, if you're talking about 1-cycle 8-bit instructions?

And I still don't know what your baseline is. You mention pipelines, but also single-cycle instructions, which don't make that much sense together. Does the CPU clock == memory bus clock? If instructions are the size of your data bus width (e.g. 8 bits), then you can't sustain 1 clock per instruction including the instructions' memory accesses, and each instruction will have to do less because there's no room for operands. Not to mention DRAM refresh.

Certainly you can define a minimalist instruction set computer. But it'll be slow, even if it has lots of registers, because it will need tons of instructions to do things that other architectures can do in one. More, simpler instructions will take more clock cycles than fewer, more complex instructions.

_________________
WFDis Interactive 6502 Disassembler
AcheronVM: A Reconfigurable 16-bit Virtual CPU for the 6502 Microprocessor


PostPosted: Thu Jan 25, 2018 11:11 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10980
Location: England
For a high performance wide-memory micro, you don't need to wait until 1989 - the ARM1 came out in 1985 and Acorn's Archimedes machines in 1987.

(Speaking of ARM, in one of her talks, Sophie Wilson chooses to define RISC as "Reduced Complexity Instruction Set" - an interesting perspective, I think. Certainly the 6502 has reduced complexity compared to 6800, and also to Z80, going by transistor count if nothing else. Even so, it doesn't seem like a useful label in this case.)

Our OPC7 machine might be of interest, as being very much reduced complexity, and yet having something of a 6502 feel to it. Indeed, we did some transliteration of code from 6502 to the OPC machines with success. By the time we got to OPC7, we had a 32 bit machine with all instructions being one word long, and in one of just two formats. This series of machines has proved to be quite fun to program: small instruction set, sufficiently powerful, and a spacious register file. One of the motivations was the challenge to make something of about the complexity of the 6502 - the sort of thing which could possibly have been a contender in the day.


 Post subject: zero overhead loops
PostPosted: Sat Jan 27, 2018 11:30 am 

Joined: Tue Jan 23, 2018 10:14 pm
Posts: 6
I want to avoid Duff's Device, and I think that on my design so far (but generally on anything before 1980) there was an imbalance between registers. Let's assume the highest byte of the counters gets implemented one bit at a time over the years: the Apple ][ gets none, the C64 one, the C128 two, then up to 8. But anyway, about 16 bytes are used for data, so I think we give the code 16 bytes (and some) as a cache. We have a 4-bit program counter into the cache, a ring buffer, and two 4-bit distances to the bounds. So for DRAM refresh or data access to RAM, the CPU can look into the cache instead. Instead of one ProgramCounter++, we have two PC++ and two distance++ (or -- respectively). Already on the Atari 8-bit the CPU was too fast for memory, so the cache logic should have been implementable in 1979 without adding cycles.

Above I bashed the x86 loop instruction, but I have come to realize its beauty. It means that the counter register (CX), when decremented, does not affect the carry flag, so the carry can be carried from one ADC to the next. As a kid I did not realize the importance of which flags each instruction affects. Give CX its own flag, so to say, and introduce
loop -> --CX; } while (CX != 0);
In the cache we add a flag to each command which tells us whether it is a loop command. Reading out the cache gives us 8 bits for the command at PC, plus the loop flags at [PC, PC+1, PC+2] as look-ahead. Thus --CX can be executed in parallel, and the prefetch can occur at the right place (taking the jump vs. skipping the immediate value of the relative jump).
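
(Incidentally, the stock 6502 already has this property: DEX, DEY, INX and INY update N and Z but leave the carry alone, so a multi-byte addition loop can keep its carry across iterations. A minimal sketch, with NUM1, NUM2 and SUM as hypothetical buffers:)

Code:
        CLC
        LDY #0          ; index into the operands
        LDX #4          ; number of bytes to add
LOOP:   LDA NUM1,Y
        ADC NUM2,Y      ; carry ripples from byte to byte
        STA SUM,Y
        INY             ; INY and DEX do not touch the carry flag
        DEX
        BNE LOOP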

When jumping to just before the loop, the look-ahead starts with a delay (CX decrements in parallel, but the prefetch is wasted, so no speed benefit).
Writing to CX (AL -> CX) restarts the look-ahead with the new value.

Since this caching affects timing loops (the first pass is different from the second), it can be disabled in the mode register (the one switching between BCD and binary and some other stuff on the WDC 65C816). On x86 there are also many modes. When calling subroutines from partners: always set the mode. Stupid DLLs, I want source. At least declare calling conventions clearly and completely in the header of the DLL.

With this, the 32-bit ADC is faster in the loop noted above. The MoveString commands can be emulated at full speed. Aborting is possible by jumping out of the loop, so no special syntax is needed for rep StringCompare. In case the first pass needs to omit some code, we can still jump into the middle of the block; then it is no longer a do { } while. It hurts me a little, but for this I have to add another flag to mark NULL.


PostPosted: Sat Jan 27, 2018 11:46 am 

Joined: Tue Jan 23, 2018 10:14 pm
Posts: 6
ARM was very expensive. Double bus width => double the price of the board, the peripherals, the chip package; and it even has 4 times the bus width. And while I do math stuff from time to time, people like text. Text is byte-based: 7 bits were never enough, 16 bits did not survive globalization, and UTF-8 brought us back to bytes. The 386 became a success when the bus was reduced from 32 to 16 lines. The C64 was expensive at first. Color RAM: 4 additional data lines :-( Intel had already shown address-line multiplexing; I guess the 6502 team was fixated on ROM. They could still have made a non-multiplexed version for the NES later, when things got cheaper. Anyhow, the NES architecture is ugly as hell, with so little RAM that all advancements get pushed into the (therefore costly) cartridges. (Same with the SNES not having a 68000.)

What did most people use their computers for? They wrote text and calculated their budgets. Even for desktop publishing and CAD such a CPU would be fast enough. In CAD you work with dirty areas and do not allow interactive zoom (only scrolling). No zoom for desktop publishing either; and if you change the beginning of a paragraph, or change the font size, that takes some time, but it works in parallel with a second cursor from the background task.


 Post subject: large cache and DRAM
PostPosted: Sun Jan 28, 2018 9:25 am 

Joined: Tue Jan 23, 2018 10:14 pm
Posts: 6
DRAM refresh cycles are driven by the VIC-II in the C64; it's the basis for the whole arrangement. The CPU runs at half the speed of the memory in order to let memory access appear instantaneous, which saves circuitry for arbitration. I still think it's a bad idea that the VIC can steal memory cycles from the CPU. For wide words, a faster clock means better utilisation of the transistors doing the grunt work, but clock generation and distribution come at some cost. Also, some chips from MOS were prone to overheating. I am sure that a basically low clock rate allows a lower supply voltage and higher resistance in the pull-up resistors, keeping the current consumption low. If we were to use the 6502 in an embedded application, it would be nice to add DRAM refresh. It may be that branches within a cached loop lead to pipeline stalls and a cycle where the 6502 does not access memory; this could be used for refresh. For that, the refresh counter would be allowed to fall some ns behind schedule. Of course this means that there is no cycle-timed code; if one uses it for a Woz machine, for example, the disk controller would have to generate the DRAM accesses.

If we could employ a large on-chip cache, the bus-width cost goes down (no more pins); then use an ARM. Apparently cartridges for video consoles could not keep up with the speed of CPUs, and the 65816-based CPU in the SNES needed to be throttled down. They could have used an ARM with a large on-chip cache and tried to eliminate some of the co-processors. It was 1991 after all; the Amiga came out in 1985.

