PostPosted: Wed Feb 08, 2012 9:16 pm 

Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1042
Location: near Heidelberg, Germany
Hi there,

I am working on the interrupt design for my 65k processor and I am looking into some questions...

First some background: the 65k will have a hypervisor and a user mode. It will also have (as currently planned) 7 prioritized interrupt lines.

The idea is that when an interrupt line triggers, the interrupt sequence is started and the interrupt level (corresponding to the interrupt line) is stored - so that, similar to the automatic setting of the I-flag, the same interrupt line does not re-trigger the interrupt. An RTI returns from that interrupt. Lower-priority interrupts are ignored until the RTI, which clears the interrupt level and thus allows a lower-priority interrupt to start the IRQ sequence. A higher-priority interrupt triggers an IRQ sequence even while a lower-priority interrupt routine is running; on RTI of that interrupt, the CPU of course falls back to the lower-priority interrupt routine. So far so good.

Now what do CLI and SEI do? A SEI disables all interrupts besides the NMI (i.e. the highest priority) by setting the interrupt level appropriately. A CLI clears the interrupt level, and any pending interrupt triggers the interrupt sequence (highest priority first). Still all good.
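
To make the intended semantics concrete, here is a minimal C sketch of the level logic as I picture it; the names (Cpu65k, cpu_level, pending) and the level encoding are mine, not a fixed part of the design:

Code:
#include <stdint.h>

#define NO_IRQ    0   /* level 0: no interrupt in service */
#define NMI_LEVEL 7   /* line 7: highest priority, NMI-like */

typedef struct {
    uint8_t  cpu_level;  /* level of the interrupt currently in service */
    uint8_t  pending;    /* one bit per line: bit n-1 = level n pending */
    uint16_t pc;         /* program counter (used by later sketches) */
    uint8_t  p;          /* classic status register */
    int      user_mode;  /* 1 = user mode, 0 = hypervisor mode */
} Cpu65k;

/* Highest pending level, or NO_IRQ if none. */
static uint8_t highest_pending(uint8_t pending) {
    for (uint8_t lvl = NMI_LEVEL; lvl >= 1; lvl--)
        if (pending & (1u << (lvl - 1)))
            return lvl;
    return NO_IRQ;
}

/* Checked each cycle: start the IRQ sequence only for a line whose
   priority is strictly above the level already in service. */
static int irq_should_start(const Cpu65k *c) {
    return highest_pending(c->pending) > c->cpu_level;
}

/* SEI: raise the level so that only the NMI line still gets through. */
static void op_sei(Cpu65k *c) {
    if (c->cpu_level < NMI_LEVEL - 1)
        c->cpu_level = NMI_LEVEL - 1;
}

/* CLI: clear the level; any pending line may now start its sequence,
   highest priority first. */
static void op_cli(Cpu65k *c) {
    c->cpu_level = NO_IRQ;
}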

The interrupt stack would (and here I already contradict my web page) always be the hypervisor stack. A second status byte on the stack would note whether user mode or hypervisor mode was interrupted, and RTI would return to that mode from hypervisor mode. Still good.
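
Continuing that sketch, a guess at the entry sequence; the layout of the extended status byte (a mode bit plus the saved level) is my own invention here, not the final encoding:

Code:
#define XS_USER 0x08   /* assumed bit: interrupted code ran in user mode */
#define XS_LVL  0x07   /* assumed field: previous level in bits 0-2 */

extern void     push(Cpu65k *c, uint8_t v);  /* hypervisor stack push */
extern uint16_t read_vector(uint8_t lvl);    /* per-level vector fetch */

static void irq_enter(Cpu65k *c, uint8_t lvl) {
    push(c, (uint8_t)(c->pc >> 8));                 /* return address */
    push(c, (uint8_t)(c->pc & 0xFF));
    push(c, (uint8_t)((c->user_mode ? XS_USER : 0)  /* extended status: */
            | (c->cpu_level & XS_LVL)));            /*   mode + old level */
    push(c, c->p);                                  /* original status byte */
    c->cpu_level = lvl;            /* masks this level and below */
    c->user_mode = 0;              /* handlers run in hypervisor mode */
    c->pc = read_vector(lvl);
}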

What happens when the interrupt source goes away? Should the interrupt level be returned/reduced? Currently I do not think so. A high-priority interrupt routine expects to run to the end, at least as long as no higher-priority interrupt comes in. If the interrupt source were switched off and the interrupt level reduced, a lower-priority interrupt would suddenly get the chance to interrupt that routine and run. So currently I think the interrupt level should be cleared only on RTI and CLI, and any pending lower-priority interrupts executed then.
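
So the level drops back only in RTI (or CLI), never merely because the line went away. Roughly like this, assuming the previous level really is saved in the extended status byte as sketched above:

Code:
extern uint8_t pull(Cpu65k *c);   /* hypervisor stack pull */

static void op_rti(Cpu65k *c) {
    uint8_t  p  = pull(c);             /* original status byte */
    uint8_t  xs = pull(c);             /* extended status byte */
    uint16_t lo = pull(c), hi = pull(c);
    c->p         = p;
    c->pc        = (uint16_t)(lo | (hi << 8));
    c->cpu_level = xs & XS_LVL;        /* fall back to the interrupted level */
    c->user_mode = (xs & XS_USER) != 0;
    /* a pending lower-priority line may now win irq_should_start() */
}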

This, however, also means that all interrupts, even the NMI, would be level-triggered. Or would they?

The "problem" with the approach to run all interrupts in hypervisor more is that what should happen on a) interrupts b) CLI, c) SEI and d) RTI in user mode?

First, interrupts: they always jump into hypervisor mode (at least for now) and return from there on RTI in hypervisor mode. - ok

CLI and SEI don't really have a meaning in user mode. One could define that, for example, 3 of the interrupt lines (the lowest-priority ones) can be "switched" off in user mode by a user-mode SEI. I am still playing with that idea though - what do you think?

What about RTI? Well, that would basically be just the return for the BRK opcode.

The discussion above assumes that you jump into user mode with the interrupt level clear (no interrupt handling in progress). But what about this: you could jump into user space during interrupt handling! This way you could run interrupt handlers (obviously not very time-critical ones) in a separate process. Or you could forward specific devices into a user-mode process. Think, for example, of a user-mode process running a Commodore PET "emulator": you could trap into the interrupt in hypervisor mode, then forward e.g. a timer interrupt for keyboard handling into the PET process (even using its "$FFFE" interrupt handler). What do you think about that idea?
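
A rough sketch of that forwarding idea: the hypervisor fakes a classic 6502 interrupt frame on the PET process's own stack and enters it through the process-local $FFFE vector. The Proc type and all helper functions here are invented for illustration:

Code:
#define FLAG_I 0x04   /* classic 6502 interrupt-disable flag */

typedef struct Proc Proc;                       /* a user-mode process */
extern void     user_push(Proc *p, uint8_t v);  /* push on its stack */
extern uint16_t user_read16(Proc *p, uint16_t addr);
extern uint16_t proc_pc(const Proc *p);
extern uint8_t  proc_p(const Proc *p);
extern void     proc_set(Proc *p, uint16_t pc, uint8_t flags);
extern void     resume_user(Proc *p);

static void forward_irq_to_user(Proc *p) {
    uint16_t pc = proc_pc(p);
    uint8_t  fl = proc_p(p);
    user_push(p, (uint8_t)(pc >> 8));    /* classic frame: PCH, PCL, P */
    user_push(p, (uint8_t)(pc & 0xFF));
    user_push(p, fl);
    /* enter the process's own handler with I set, as real hardware would */
    proc_set(p, user_read16(p, 0xFFFE), (uint8_t)(fl | FLAG_I));
    resume_user(p);
}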

In this case, however, any RTI and CLI would trap back into hypervisor mode to allow proper time-slice accounting (assuming, for example, the timer had interrupted one process but the interrupt was handled in a second process - you would probably want to return to the hypervisor to clean that up). Is this reasonable?

Side question: what does RTI do when it pulls the status off the stack and the I-flag is actually set? This should normally only happen in NMI. Looking at the NMI situation answers my question: it will restore the I-flag. The interesting question now is: if the hypervisor wants to forward an interrupt, how does it know whether the I-flag is set in user mode (to decide whether it should forward the interrupt at all)? It needs to look into the saved interrupt stack from the last time the processor jumped out of this process into hypervisor mode. Maybe complicated, but feasible...
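
In terms of the sketch above it would be something like this, where the saved P is an assumed field holding the status byte from the last frame the process saved when it left user mode:

Code:
extern uint8_t proc_saved_p(const Proc *p);  /* P from the last saved frame */

/* Only forward an interrupt into the process if its own I flag is clear. */
static int user_irq_enabled(const Proc *p) {
    return (proc_saved_p(p) & FLAG_I) == 0;
}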


PostPosted: Thu Feb 09, 2012 7:01 am 

Joined: Thu Mar 03, 2011 5:56 pm
Posts: 284
I'd make CLI, SEI and RTI available only in hypervisor (supervisor?) mode. SEI could (potentially) lock up the entire system (unless the NMI is used for time-slicing/watchdog purposes), and so should not be allowed from user mode. If SEI is disallowed for user mode, it is no great hardship to disallow CLI as well :-)

Since interrupts are handled in supervisor mode, RTI does not make sense for user mode, and should be disallowed as well. For more complex interrupt handling, the handler should be split into a short segment that does the minimum amount of work and sets up a context for doing the rest of the work at a later time.

The BRK instruction could be used for software interrupts, and would be mostly (only?) used from user code. It should be given its own interrupt level, which should (probably) be the lowest of the 7 interrupt levels. User code would conceptually be the 8th level, and (obviously) lower priority than the 7 levels allocated for interrupt handling and hypervisor/supervisor mode.

Instead of (or in addition to) having an I flag, I would have 7 bits, one for each interrupt level, indicating whether at least one interrupt has been posted for that level. With this scheme, the interrupts should be edge-triggered.
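
As a sketch (the names are made up, and I am only assuming the 7 levels discussed above):

Code:
#include <stdint.h>

typedef struct {
    uint8_t lines_prev;  /* interrupt line states last cycle */
    uint8_t pending;     /* bit n-1 set: an interrupt posted at level n */
} IrqPost;

/* Sample once per cycle: latch rising edges only (edge-triggered). */
static void sample_lines(IrqPost *ip, uint8_t lines_now) {
    ip->pending |= (uint8_t)(lines_now & ~ip->lines_prev);
    ip->lines_prev = lines_now;
}

/* Clear the posted bit when the handler for that level is entered. */
static void ack_level(IrqPost *ip, uint8_t lvl) {
    ip->pending &= (uint8_t)~(1u << (lvl - 1));
}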


PostPosted: Thu Feb 09, 2012 10:48 pm 

Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1042
Location: near Heidelberg, Germany
Yes, disallowing CLI, SEI and RTI in user mode would make it easy.

However, one of my original plans is to run original 6502 code in user mode - basically to virtualize, for example, a Commodore PET. This code
a) uses SEI/CLI (and RTI in the interrupt routine), so you would at least need to make them NOPs instead of forbidden opcodes, and
b) has an interrupt routine that I would like to use, so jumping from a hypervisor interrupt routine into the user-space interrupt routine must end up in an RTI that works.

BRK stays in user mode - there is a TRP (Trap) opcode to go from user into hypervisor mode.

So my requirement to run original 6502 code makes me look for some useful functionality for the CLI/SEI/RTI opcodes in user mode...

André


PostPosted: Sun Feb 12, 2012 1:42 pm 

Joined: Tue Jul 05, 2005 7:08 pm
Posts: 1042
Location: near Heidelberg, Germany
After thinking a bit more about it, this should in fact be easy...

SEI in user mode sets the interrupt level to the "user interrupt" level, to block any "user mode enabled" interrupts - but only if the CPU's interrupt level is not already above that (e.g. when called from a hypervisor interrupt).
CLI clears the interrupt level. If there is another interrupt pending, it will trap into the hypervisor, which can deal with it.
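
In terms of my earlier sketch, with USER_LEVEL as an assumed constant for the boundary between user-visible and hypervisor-only interrupt lines:

Code:
#define USER_LEVEL 3   /* assumed: the 3 lowest-priority lines are user-visible */

extern void trap_to_hypervisor(Cpu65k *c);

/* User-mode SEI: raise the level to USER_LEVEL, but never lower it. */
static void op_sei_user(Cpu65k *c) {
    if (c->cpu_level < USER_LEVEL)
        c->cpu_level = USER_LEVEL;
}

/* User-mode CLI: clear the level; if something is pending, trap so the
   hypervisor can dispatch it. */
static void op_cli_user(Cpu65k *c) {
    c->cpu_level = NO_IRQ;
    if (highest_pending(c->pending) != NO_IRQ)
        trap_to_hypervisor(c);
}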

Now RTI: for interrupts I have already defined an extended stack frame. In an interrupt, first the return address is pushed, then an extended status byte, and then the original status byte. The extended status byte records whether the interrupt occurred in user or hypervisor mode.

Now RTI checks the extended status byte to see whether it should return to hypervisor mode. If not, and the CPU is in user mode, nothing special happens and it stays in user mode. If it is in user mode and should return to hypervisor mode, it traps into hypervisor mode. So it is all a matter of how the user-mode stack is set up when jumping from the hypervisor "interrupt frontend" into the user-mode interrupt routine.
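
Roughly, as an addition to the RTI sketch from my first post (again using my invented XS_USER bit):

Code:
/* Mode handling on RTI, given the extended status byte xs: an RTI in
   user mode whose frame is marked "hypervisor" traps, so the hypervisor
   gets control back (e.g. for the time-slice accounting above). */
static void rti_mode_check(Cpu65k *c, uint8_t xs) {
    if (c->user_mode && !(xs & XS_USER))
        trap_to_hypervisor(c);
    /* all other combinations return directly, as sketched before */
}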

BTW: RTI will also cope with the original stack frame. The two stack frames can be distinguished by the "1" bit of the original status register on the stack: if it is set, it's the original stack frame (where mode changes do not occur); if not, it's the extended one.
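
The check itself is then trivial; I am assuming here that the extended frame stores that bit cleared (it is a don't-care in the real P register anyway):

Code:
/* Bit 5 of the topmost status byte on the stack: always 1 in a real
   6502 P image, stored as 0 in the extended frame. */
static int is_classic_frame(uint8_t p_on_stack) {
    return (p_on_stack & 0x20) != 0;   /* set: original 3-byte frame */
}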

Now back to coding.... :-)

André

