
All times are UTC




Post new topic Reply to topic  [ 30 posts ]  Go to page Previous  1, 2
Author Message
 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 6:13 pm 

Joined: Mon Apr 23, 2012 12:28 am
Posts: 760
Location: Huntsville, AL
That interrupt latency is extended by an instruction execution period is far less of an issue than if the system comes to a halt due to a stuck interrupt request line.

_________________
Michael A.


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 6:14 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10938
Location: England
And at 100MHz even 7 cycles isn't too great an extra penalty.


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 6:19 pm 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
I don't see why it would come to a halt. The IRQ handler removes the source of the interrupt, and then it'll execute the next instruction(s). If the interrupts happen so quickly that there's never time to execute any more instructions, the system is badly designed.


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 6:37 pm 

Joined: Mon Apr 23, 2012 12:28 am
Posts: 760
Location: Huntsville, AL
I don't know how many times I've had to dig out software designs which require fast interrupt handling from a microprocessor to operate correctly. Generally, these situations arise when the hardware being serviced has too small a buffer, and a quick response is needed to preserve the data. It's amazing how slow PCI and ISA busses are compared to the processors connected to them. Furthermore, I don't know of a single processor that can multi-thread its I/O and memory busses; so if the I/O transactions are performed in SW, the processor is operating for a significant portion of time at the transaction speed of the I/O bus.

However, a much safer and less problematic solution is to improve both the hardware and the software. For example, I don't know how many times I've brought a PC (MS-DOS/Windows/Linux) to its knees servicing a simple 16C550 UART receive queue. Some additional buffering implemented in a custom UART, coupled with a block-move ISR, easily relieves the bottleneck on the bus and frees the power of the processor to be applied to processing the data stream. The interrupt latency requirements are then dramatically less stringent, and everything operates on a much better schedule.
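The arithmetic behind the deeper-FIFO argument is easy to see with a toy Python sketch (the depths and byte counts below are illustrative, not from any particular design): one interrupt is needed per full, or final partial, FIFO's worth of received data.

```python
# Interrupts needed to move n_bytes through a FIFO of a given depth,
# assuming a block-move ISR drains the whole FIFO on each interrupt.
def interrupts_needed(n_bytes, fifo_depth):
    return -(-n_bytes // fifo_depth)   # ceiling division

assert interrupts_needed(1024, 1) == 1024   # plain one-byte UART
assert interrupts_needed(1024, 16) == 64    # 16C550-style FIFO
assert interrupts_needed(1024, 256) == 4    # deep custom FIFO + block move
```

The latency budget per interrupt grows by the same factor, which is the point being made.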

_________________
Michael A.


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 8:46 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8395
Location: Midwestern USA
BigEd wrote:
You can see the effect in this simulation:
http://www.visual6502.org/JSSim/expert. ... ogmore=irq
The program is
CLI
INX
INY
and the interrupt handler is just
RTI
You notice that the INX completes, the INY is fetched but subsumed by the interrupt response sequence, and on RTI the INY is again fetched but again subsumed. The Y reg is never incremented.

The above wouldn't happen on a real processor, as any instruction, once fetched, must be completed prior to interrupt acknowledgement. If /IRQ is pulled low during the cycle when the CLI opcode is being fetched, the MPU will service the interrupt after CLI is done, which means execution of INX would be deferred until after the ISR has finished and INY would be then executed, unless another IRQ hit while INX was being handled. The actual latency is determined by at what point in the instruction cycle /IRQ is asserted.

Your above description suggests that the simulator's behavior may be incorrect.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Sun Dec 16, 2012 8:56 pm, edited 1 time in total.

 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 8:52 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8395
Location: Midwestern USA
Arlet wrote:
I don't see why it would come to a halt. The IRQ handler removes the source of the interrupt, and then it'll execute the next instruction(s). If the interrupts happen so quickly that there's never time to execute any more instructions, the system is badly designed.

Conceivably this could be the case if a wired-OR IRQ circuit is slow to return to the high state after the interrupt source has been cleared. The MPU would then be hit with a spurious interrupt, which could definitely stall everything if the ISR doesn't account for such a scenario. Garth can tell you all about it! :lol:
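A toy Python model of the defensive pattern that survives this scenario (the `Device` class and its `flag` are made up for illustration): the ISR polls each possible source's status flag, and when nothing claims the interrupt — as happens when the slow-rising wired-OR line causes a spurious re-entry — it simply returns without doing anything harmful.

```python
# Toy model: a device whose serviced flag releases the /IRQ line slowly.
class Device:
    def __init__(self):
        self.flag = True            # interrupt source pending

    def service(self):
        self.flag = False           # clearing the flag releases the line

def isr(devices):
    for d in devices:
        if d.flag:                  # a real source: service it
            d.service()
            return "serviced"
    return "spurious"               # nothing pending: just RTI, no harm done

uart = Device()
assert isr([uart]) == "serviced"
# The open-drain line rises slowly, so the CPU re-enters the ISR once
# more; the status check turns the spurious entry into a harmless no-op.
assert isr([uart]) == "spurious"
```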

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 9:08 pm 

Joined: Sat Jul 28, 2012 11:41 am
Posts: 442
Location: Wiesbaden, Germany
MichaelM wrote:
That interrupt latency is extended by an instruction execution period is far less of an issue than if the system comes to a halt due to a stuck interrupt request line.

It would still result in severe degradation of processing power for the main program, not to mention the impact of the continuously missed interrupt service for an I/O resource.

Whether you stall or crawl will not make that much of a difference.

_________________
6502 sources on GitHub: https://github.com/Klaus2m5


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 9:39 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10938
Location: England
BDD - have you studied the 6502 implementation? Do you understand NMOS logic and the basis of the visual6502 simulation? You're very quick to suppose that you know better.


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 9:47 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8517
Location: Southern California
If you have more than one possible source of interrupts, it is quite possible to have one hit during the servicing of another.  Even if the ISR checks for other interrupts before the RTI, it is still possible for an interrupt to hit after such checking, and before the RTI has completed.  In that case, you would go right back into the ISR.  In my real-time work, I depend at least as heavily on interrupts and good interrupt performance as anyone else here (if not more heavily), and have never had bug problems from instructions getting skipped.  This may be however because I never used the NMOS processor except in my very first home-made computer which, although it worked, wasn't much good for anything.

Quote:
Generally, these situations arise when the hardware being serviced has too small a buffer, and a quick response is needed to preserve the data.

I've done a lot of audio sampling timed by interrupts, where the jitter from an extra instruction getting executed before the interrupt sequence would be a problem.  It's not just about servicing the interrupt before another byte comes in, but that the exact time is important.  I gave a little discussion on jitter at http://wilsonminesco.com/6502primer/potpourri.html#JIT .  Adding another instruction's delay would have a big effect on jitter.  In PCs where you can for example listen to music while doing other things, they have dedicated audio-processing hardware with buffers which in turn are fed by DMA I expect, but that's a much more complex system.  In the case of using timer interrupts to time audio sampling at tens of thousands of evenly spaced samples per second, I don't let other interrupts delay those samples at critical times.  I can put the sampling interrupt on the NMI if necessary, but often I just let it sound crazy for fractions of a second when there are interrupts from serial reception of instructions, for example.

Quote:
I don't see why it would come to a halt. The IRQ handler removes the source of the interrupt, and then it'll execute the next instruction(s). If the interrupts happen so quickly that there's never time to execute any more instructions, the system is badly designed.

or may just be that the application is nearly too much for the hardware you have to work with.  There was something in the Apollo 11 movie that sounded like they were bordering on having that problem with interrupts during the moon landing, IIRC.  I've had a few microcontroller projects where 80% of all the processing time is spent on interrupt service, and I do end up counting cycles to see if there are any conditions where it could fail because it just can't keep up.

Quote:
Conceivably this could be the case if a wired-OR IRQ circuit is slow to return to the high state after the interrupt source has been cleared. The MPU would then be hit with a spurious interrupt, which could definitely stall everything if the ISR doesn't account for such a scenario. Garth can tell you all about it! :lol:

Yep, I've been bit by that one, and I think I mentioned it in both the interrupts primer and the tips.  Oh yes, and the interrupt page of the 6502 primer also.  Then there's the "ghost" interrupt problem I had which is described in Tip of the Day #42, with solutions.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject: Re: Interrupt Handling
PostPosted: Sun Dec 16, 2012 10:31 pm 

Joined: Mon Apr 23, 2012 12:28 am
Posts: 760
Location: Huntsville, AL
But it should allow the main program to reach an SEI at some point without extraordinary effort; albeit quite slowly.

Once you turn on an interrupt, you've wandered off the reservation anyways, and all the dragons and sea serpents that will appear as a consequence will be there because you invited them in. :)

Furthermore, it is very common for the main program to disable/enable interrupts in order to access common data structures between itself and the ISR(s). Under these circumstances, I don't think that it's realistic to express concern regarding the additional latency introduced by allowing the instruction at the return address to execute before the ISR is re-entered. The latency introduced by the enable/disable sequences of the main program are likely to be much longer than that of the longest executing 65C02 instructions, e.g. 7 or 8 cycles for jmp (abs,X).
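The enable/disable pattern can be rendered as a toy Python model (this is not 65C02 code; the class and all names are invented for illustration): SEI masks interrupts while the main program updates a multi-byte shared variable, an IRQ arriving in that window is held pending, and it is serviced at the CLI, so the ISR never observes a half-written value.

```python
# Toy model of a CPU with an interrupt-disable flag and a pending IRQ.
class Cpu:
    def __init__(self):
        self.i_flag = False        # interrupts enabled
        self.pending = 0           # IRQs held off while masked
        self.shared = [0, 0]       # shared variable: low byte, high byte
        self.isr_saw = []          # values the ISR observed

    def sei(self):
        self.i_flag = True

    def cli(self):
        self.i_flag = False
        while self.pending:        # deferred interrupts are taken now
            self.pending -= 1
            self.irq()

    def irq(self):
        if self.i_flag:
            self.pending += 1      # deferred until CLI
        else:
            self.isr_saw.append(tuple(self.shared))

cpu = Cpu()
cpu.sei()
cpu.shared[0] = 0x34               # low byte written...
cpu.irq()                          # ...IRQ lands mid-update, but is masked
cpu.shared[1] = 0x12               # high byte written
cpu.cli()                          # deferred IRQ is serviced here
assert cpu.isr_saw == [(0x34, 0x12)]   # only the complete update is seen
```

The latency cost being debated is exactly the span between the `sei()` and the `cli()`.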

If the hardware and the software are well synchronized so that there's no need to enable/disable the interrupts, then interrupts are a convenience and not a necessity. The question then becomes how much time the main program spends waiting for the interrupt to occur.

Finally, the service time jitter introduced by the use of SEI/CLI to access critical regions should be on the order of 50, 100, or more clock cycles for any processor performing any meaningful work in an interrupt driven environment. The jitter has to be a fraction of the interrupt rate and interrupt service time in order for there to be any processing time available for the main program to perform any meaningful work.

Given all of the effects that can affect the regularity of interrupt service routines, I don't consider using interrupts for timing or sampling of events which require precision better than about 1000 clock cycles. For those types of events, I use hardware timers and independent hardware feeding into, or being fed from, FIFOs/queues. That's not to say that there's no place for interrupts in these types of tasks, but I lean toward a blended solution between custom/semi-custom HW and SW.

Thus, if I were to use interrupts in the manner that Garth described for audio sampling, then I would be concerned about the additional latency that my design decision for the M65C02 core might introduce. However, the core is targeted at FPGAs, and as a consequence, it is not just the core but any additional logic that may be required by an application that will be included in the FPGA. If the core was intended as a general purpose processor, then I would consider changing the mix of interruptable and non-interruptable instructions. I expect that only the interrupt handling microsequence and the microsequences of the program flow control instructions would need adjustment; no change required to any of the logic.

_________________
Michael A.


 Post subject: Re: Interrupt Handling
PostPosted: Mon Dec 17, 2012 12:13 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8517
Location: Southern California
MichaelM wrote:
But it should allow the main program to reach an SEI at some point without extraordinary effort; albeit quite slowly.

I don't know what you're referring to here.

Quote:
Furthermore, it is very common for the main program to disable/enable interrupts in order to access common data structures between itself and the ISR(s).

The only thing of this sort that I remember ever experiencing is that the interrupt-driven RTC might increment one byte of a variable, with a carry that increments the next byte too.  The easy solution is that when you want to read the time, you read it twice in a row and make sure the two reads match, so you don't, for example, read 4:59:59.99 but get 5 instead of 4 for the hours because the ISR cut in and incremented the time mid-read.  If you read it twice in a row, the first read gets thrown out because it doesn't match the second; then you read the time again, get 5:00:00.00 twice, and you're not an hour off.
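The double-read idea can be sketched as a toy Python model (the `Clock` class and its tick mechanism are invented to stand in for the interrupt-driven RTC): a tick may strike between reading the minutes and reading the hours, tearing a single read, but two matching consecutive reads guarantee a consistent value.

```python
# Toy model of a multi-byte clock whose ISR may increment it mid-read.
class Clock:
    def __init__(self, hours, minutes):
        self.hours, self.minutes = hours, minutes
        self.pending_tick = False      # an "interrupt" waiting to strike

    def tick(self):                    # what the ISR does
        self.minutes += 1
        if self.minutes == 60:
            self.minutes = 0
            self.hours += 1

    def read(self):
        m = self.minutes               # minutes read first...
        if self.pending_tick:          # ...ISR cuts in between the bytes
            self.tick()
            self.pending_tick = False
        h = self.hours                 # ...hours read after the increment
        return (h, m)

def read_time(clock):
    # Read twice in a row; accept the value only when both reads agree.
    while True:
        first = clock.read()
        second = clock.read()
        if first == second:
            return second

clk = Clock(4, 59)
clk.pending_tick = True                # the tick will land mid-read
assert read_time(clk) == (5, 0)        # the torn (5, 59) read is rejected
```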

Other situations just use handshake variables.  For another example, a ring buffer for RS-232 reception can be interrupted at any time, because there's one pointer variable for writing and another for reading: the ISR never stores to the read pointer, and the background routine never stores to the write pointer.  The ISR does read the read pointer, though, so it can tell the transmitting end to stop while there are still several bytes of room left, ensuring it doesn't step on bytes that haven't been read yet if the background program gets behind; and the background program reads the write pointer so it knows when it has caught up and should stop reading.  The interrupting ability is never touched.
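A minimal Python rendering of that pointer discipline (buffer size and byte values illustrative; the flow-control check for a nearly-full buffer is omitted for brevity): the ISR stores only to the write pointer, the background program stores only to the read pointer, so no interrupt masking is needed.

```python
SIZE = 8
buf = [0] * SIZE
wr = 0    # written only by the ISR
rd = 0    # written only by the background program

def isr_receive(byte):
    global wr
    buf[wr] = byte
    wr = (wr + 1) % SIZE

def background_read():
    global rd
    if rd == wr:                  # caught up: stop reading
        return None
    byte = buf[rd]
    rd = (rd + 1) % SIZE
    return byte

isr_receive(0x41)
isr_receive(0x42)
assert background_read() == 0x41
isr_receive(0x43)                 # an interrupt may land between any two reads
assert background_read() == 0x42
assert background_read() == 0x43
assert background_read() is None  # buffer drained
```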

I'm working on an application now where audio is recorded and played back by a microcontroller, and the storage is in SPI flash.  The flash takes many cycles to access; so while the SPI port is doing the shifting, I let the computer do something else useful.  The background program has to do some buffering.  Its timing is very inaccurate, and it's the ISR that determines the instant that the sample is either taken or played back, and leaves a record for the background program to know when to handle the next one, or, alternately, when it can use its time for something else.  The background program can't do it accurately enough while it's also scanning a keypad, processing key presses, and meeting other asynchronous demands which don't have very tight deadlines.

I can't imagine a scenario like you mention where it would be necessary to disable interrupts.  Perhaps you can give an example.  The situation I told about in the "ghost interrupts" tip was where I was clearing and setting the VIA's timer's interrupt-enable flag in the VIA's IER in order to produce 100ms tone bursts.  I was not using SEI and CLI, so new commands coming in over RS-232 still produced interrupts and got processed.

Quote:
If the hardware and the software are well synchronized so that there's no need to enable/disable the interrupts, then interrupts are a convenience and not a necessity.

They are absolutely a necessity for accurate timing.  Timer interrupts (without WAI) give an RMS jitter of approximately 1.8 clocks.  Software timing can't get very close to that, especially when doing several other things at once.

Quote:
Finally, the service time jitter introduced by the use of SEI/CLI to access critical regions should be on the order of 50, 100, or more clock cycles for any processor performing any meaningful work in an interrupt driven environment. The jitter has to be a fraction of the interrupt rate and interrupt service time in order for there to be any processing time available for the main program to perform any meaningful work.

Jitter is mostly irrelevant for something like loading from disc, but extremely important in audio sampling.  In a few of the applications I've done however, the background job requires only a fraction of the processing time, so for example an ISR may run for 33µs (including the interrupt overhead) and leave only 9µs for the background job, and it's not having any trouble keeping up.  (This is for 24ksps, 4x oversampling for voice band for aircraft communications.)  Those are the cases though where I'm counting cycles to make sure that there's no situation where it could run out of time.

Quote:
Given all of the effects that can affect the regularity of interrupt service routines, I don't consider using interrupts for timing or sampling of events which require precision better than about 1000 clock cycles. For those types of events, I use hardware timers and independent hardware feeding into, or being fed from, FIFOs/queues. That's not to say that there's no place for interrupts in these types of tasks, but I lean toward a blended solution between custom/semi-custom HW and SW.

The hardware complexity is something I want to stay away from, as long as I can get what I need from interrupts, which so far, I've been able to.  I feel this gives much better control in most ways.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject: Re: Interrupt Handling
PostPosted: Mon Dec 17, 2012 6:28 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
BigDumbDinosaur wrote:
Conceivably this could be the case if a wired-OR IRQ circuit is slow to return to the high state after the interrupt source has been cleared. The MPU would then be hit with a spurious interrupt, which could definitely stall everything if the ISR doesn't account for such a scenario. Garth can tell you all about it! :lol:

On modern systems this can be a problem too. I've done several projects on ARM where it takes a few cycles for the I/O write to make its way through the internal pipeline. So I'd turn the interrupt source off, and return from the ISR, only to be hit with the same interrupt again.
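The usual cure is a read-back of the peripheral before returning, since a device read forces earlier posted writes to that device to retire. A toy Python model of the hazard (the class and register names are invented; real ARM code would be volatile register accesses):

```python
# Toy model: the store clearing the interrupt source sits in a write
# buffer for a few cycles, so an ISR that returns immediately gets
# re-entered spuriously; a read from the same peripheral drains the
# buffer first, acting as the needed barrier.
class Peripheral:
    def __init__(self):
        self.irq = True            # interrupt currently asserted
        self.posted = []           # stores still sitting in the write buffer

    def write_clear(self):
        self.posted.append("clear")   # posted, not yet effective

    def drain(self):
        if self.posted:
            self.irq = False
            self.posted.clear()

    def read_status(self):
        self.drain()               # device reads retire prior posted writes
        return self.irq

p = Peripheral()
p.write_clear()
assert p.irq                       # returning now -> same interrupt again
p.read_status()                    # the read-back drains the posted write
assert not p.irq                   # now the ISR can safely return
```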


 Post subject: Re: Interrupt Handling
PostPosted: Mon Dec 17, 2012 6:43 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
MichaelM wrote:
Given all of the effects that can affect the regularity of interrupt service routines, I don't consider using interrupts for timing or sampling of events which require precision better than about 1000 clock cycles. For those types of events, I use hardware timers and independent hardware feeding into, or being fed from, FIFOs/queues. That's not to say that there's no place for interrupts in these types of tasks, but I lean toward a blended solution between custom/semi-custom HW and SW.

On a recent project, I made a PWM controller for 4 H-bridges, with the CPU controlling each leg of the H-bridge, for a total of 16 control lines. Since the microcontroller didn't have enough PWM hardware, and I didn't want to add an external CPLD/FPGA, I used 16 GPIO lines and a timer interrupt. In the graph you can see the interrupt performance. Both edges of the waveform are controlled by a timer interrupt. I let the scope run for a few minutes, collecting all the data. Each of the software PWMs ran at 50kHz, resulting in 250000 interrupts per second, and with a requirement for minimum jitter.

[Image: oscilloscope capture of the PWM output edges, accumulated over several minutes, showing the interrupt jitter.]
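A simplified Python sketch of the idea (one compare per channel per tick; the real design timed both edges from timer interrupts, and all values here are illustrative): on every timer tick each channel's duty value decides its GPIO level, so one interrupt updates all 16 lines at once.

```python
PERIOD = 20                        # timer ticks per PWM period
duty = [5, 10, 15, 0]              # per-channel high time, in ticks
count = 0                          # position within the current period

def timer_isr():
    global count
    gpio = 0
    for ch, d in enumerate(duty):
        if count < d:              # high for the first 'duty' ticks
            gpio |= 1 << ch
    count = (count + 1) % PERIOD
    return gpio                    # value written to the GPIO port

levels = [timer_isr() for _ in range(PERIOD)]
assert sum(lv & 1 for lv in levels) == 5          # channel 0: duty 5/20
assert sum((lv >> 1) & 1 for lv in levels) == 10  # channel 1: duty 10/20
assert sum((lv >> 3) & 1 for lv in levels) == 0   # channel 3: always low
```

Jitter in when `timer_isr` actually runs translates directly into jitter on the output edges, which is what the scope capture measured.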


 Post subject: Re: Interrupt Handling
PostPosted: Mon Dec 17, 2012 6:20 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
BigDumbDinosaur wrote:
any instruction, once fetched, must be completed
It's true that instructions are indivisible, meaning an interrupt won't be permitted in the midst of an instruction's execution. However, interrupts routinely begin after an instruction opcode is fetched yet before that instruction commences execution. This is laid out in the section of the data sheets detailing cycle-by-cycle execution of all instructions and the interrupt acknowledge sequences. See Table A. 5.4 on page A-11 of the MOS Hardware Manual; also Table 5-7 of the '816 Data Sheet (the entry under Address Mode 22a. for ABORT, IRQ, NMI, RES).

The interrupt sequence begins just as an op-code is fetched. Specifically, SYNC is high (on the '816, VPA=VDA=1) and a memory read occurs. But that opcode is ignored -- apparently replaced (by internal logic) with a BRK opcode.

Since the original opcode has been fetched but ignored, some means is required to assure that it is not skipped altogether; i.e., it needs to execute later. Hence, as shown in the tables, the PC is not incremented following the fetch of the discarded opcode. (This is an anomaly, since the PC is usually incremented following every opcode fetch.) Of course the PC is pushed, then later popped after the interrupt has been serviced, so the very same instruction fetch will occur again.

In light of this information the simulator's behavior seems entirely plausible. If there's any remaining doubt, a trivial experiment with an actual CPU would quickly settle the matter. Certainly I've seen nothing in the data sheets to suggest the same instruction can't be pre-empted repeatedly if circumstances again (or still) demand an interrupt.
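The mechanism can be captured in a toy Python model (a deliberate simplification, not a cycle-accurate simulation; the one-instruction delay in CLI's effect is modeled to match the NMOS behaviour seen in the visual6502 trace quoted earlier): when IRQ is asserted and the I flag is clear at an opcode fetch, the opcode is discarded and PC is not incremented, and since the ISR here is a bare RTI, the same opcode is fetched again.

```python
def run(program, irq_pending, max_steps=20):
    pc, executed, steps = 0, [], 0
    i_flag = True                 # interrupts masked at start
    cli_latency = False
    while pc < len(program) and steps < max_steps:
        steps += 1
        op = program[pc]          # opcode fetch (SYNC high)
        take_irq = irq_pending and not i_flag
        if cli_latency:           # CLI's effect lands after the next poll
            i_flag, cli_latency = False, False
        if take_irq:
            continue              # opcode discarded, PC NOT incremented;
                                  # ISR is a bare RTI, so we refetch here
        pc += 1                   # normal fetch: PC advances
        executed.append(op)
        if op == "CLI":
            cli_latency = True
    return executed

prog = ["CLI", "INX", "INY"]
assert run(prog, irq_pending=False) == ["CLI", "INX", "INY"]
# With /IRQ stuck low: CLI and INX complete, but INY is fetched and
# subsumed on every pass, so Y is never incremented.
assert run(prog, irq_pending=True) == ["CLI", "INX"]
```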

cheers
Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


 Post subject: Re: Interrupt Handling
PostPosted: Mon Dec 17, 2012 6:46 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10938
Location: England
It might be worth mentioning that the visual6502 simulation is truly a simulation - it models the behaviour of the several thousand transistors which make up the NMOS 6502. With the exception of the unassigned opcodes, it can be expected to behave exactly as does the chip.
Cheers
Ed

