6502.org Forum

All times are UTC




Post new topic Reply to topic  [ 48 posts ]  Go to page Previous  1, 2, 3, 4  Next
Author Message
PostPosted: Thu Feb 25, 2021 4:56 pm 

Joined: Wed Aug 17, 2005 12:07 am
Posts: 1250
Location: Soddy-Daisy, TN USA
UPDATE:

OK, so just a brief recap: I will have two VIAs, one ACIA, one TMS9918 and FOUR slots that could cause an interrupt.

One of the VIAs is wired to /NMI because it will drive the PS/2 keyboard and SD card. I figure keyboard input should always have priority, to allow (possibly) recovering from a crash, etc.

So that leaves seven IRQ devices. My target speed is between 1 and 4 MHz tops. I don't think the TMS chip can handle much more than 2 MHz without wait-states, which made me think I might want to arrange everything like the picture below.

I assumed anything approaching 2-3 MHz should probably use a totem-pole arrangement, but I'm not 100% sure on that. However, it also bothers me a little that only one device is really using that totem-pole arrangement! In fact, /VIA2_IRQ is mostly there to drive audio chips and joysticks.

So I think this is a wasted effort to go totem-pole unless you guys think differently. I've already got 19 chips on this design! :-) It's going to be a big board.

At the end of the day, I will still be happy to restrict the machine to 1-2MHz tops. I guess my question now is, is it worth the effort to use totem-pole like I am doing?

Thanks again!


Attachments:
IRQ-01.PNG [ 33.24 KiB ]

_________________
Cat; the other white meat.
PostPosted: Thu Feb 25, 2021 5:24 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1488
Location: Scotland
cbmeeks wrote:
One of the VIA's is wired to /NMI because it will drive the PS/2 keyboard and SD Card. I figure keyboard input should at least always have priority to allow to (possibly) recover from a crash, etc.


Remember that the NMI input is edge-triggered, while the VIA flags interrupts from multiple sources by holding its IRQ output low. You'll therefore only see one interrupt at the 6502, so the NMI handler will need to poll the VIA at some point if you are expecting interrupts from more than one source on that particular VIA.
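
For what it's worth, a minimal polling NMI handler might look like this (65C02 code; the VIA address, enabled sources and handler names are assumptions for illustration only):

Code:
; Sketch only: VIA1 assumed at $8000; ReadKey is a hypothetical handler
; that reads port A and thereby clears the CA1 flag.
VIA1_IFR = $800D          ; interrupt flag register (bit 7 = any enabled flag)

NMI_Handler:
        PHA
Poll:   LDA VIA1_IFR
        BPL Done          ; bit 7 clear -> nothing left pending
        AND #%00000010    ; bit 1 = CA1, e.g. PS/2 keyboard clock (assumed)
        BEQ NotKbd
        JSR ReadKey
NotKbd: ; ...test the other enabled sources the same way...
        BRA Poll          ; re-poll: NMI is edge-triggered, so a flag set
                          ; after the last read would otherwise be missed
Done:   PLA
        RTI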

-Gordon

_________________
--
Gordon Henderson.
See my Ruby 6502 and 65816 SBC projects here: https://projects.drogon.net/ruby/


PostPosted: Thu Feb 25, 2021 7:14 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
drogon wrote:
Remember that the NMI input is edge-triggered, while the VIA flags interrupts from multiple sources by holding its IRQ output low. You'll therefore only see one interrupt at the 6502, so the NMI handler will need to poll the VIA at some point if you are expecting interrupts from more than one source on that particular VIA.

In keeping with Gordon's note, I don't recommend using NMI for anything other than a single event. Although the VIA may be only one device attached to the MPU's NMIB input, it can produce multiple interrupt events, which must be treated as asynchronous in nature. Your NMI handler would have to be careful to check everything in the VIA that could generate an interrupt, even if not intentionally configured.

Trouble will come if, during the tail end of the NMI handler's execution, something in the VIA interrupts. Your NMI handler will never know about it and NMIB will remain low. The MPU will likewise never know about the new interrupt and your system may deadlock. Even if you check the VIA's IFR more than once, there is no guarantee you will detect a newly-generated NMI, since some time will elapse between when you make the last check and when the NMI handler terminates. If the VIA generates yet another interrupt during that time, your goose is cooked.

Bottom line, I would not do what you are contemplating.

In my POC units, I use NMI solely to regain control if a program gets stuck in a loop. A Maxim DS1813 is attached to the NMIB input, along with a pullup and push button. Code in the firmware's NMI handler sniffs the stack in response to the NMI and if it is determined the MPU was executing instructions from RAM when the NMI came, execution is terminated and control is given to the machine language monitor (which is in firmware). Use of the DS1813 works well in this application precisely because NMIB is edge-sensitive.

If you have truly crashed your machine (harder to do with a 65C02 or 65C816 than with a 6502, but not impossible), your keyboard isn't going to do anything for you, since there is a strong likelihood the VIA that polls the keyboard will not be getting serviced. That's when you reach for the NMI push button and hope you regain control. If not, there's the reset push button...

As for the chip count, what all do you have in this thing? You definitely need to get a priority encoder circuit in there to manage interrupt sources. Otherwise, your ISR is going to be eating up a lot of Ø2 cycles figuring out who's interrupting.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Feb 25, 2021 8:04 pm 

Joined: Wed Aug 17, 2005 12:07 am
Posts: 1250
Location: Soddy-Daisy, TN USA
BigDumbDinosaur wrote:
In keeping with Gordon's note, I don't recommend using NMI for anything other than a single event


Thanks to both of you for your response.

Actually, I think I misspoke. My intention was to only have one true /NMI and the idea was to tie it to a keyboard. But the more I think about that, the more it doesn't really make sense. I was thinking you could press some keyboard sequence (much like the RESTORE key on C64) to interrupt just about anything and return to BASIC. I think it does make more sense to tie it to a "monitor" button. So pressing it would load a system monitor or something similar. I'll see about going with that.


BigDumbDinosaur wrote:
As for the chip count, what all do you have in this thing?


This is my most ambitious project to date. My third SBC and the goal is to pay homage to the computers I most adore. C64, VIC-20, Apple II, TI-99/4A, etc. I think most of the design is nearly complete. I'm finishing up the smaller details like the IRQ, etc.

It will have: TWO VIAs, a 6551 UART, a MAX238, a TMS9918, two AY-3-8912 audio chips, 32 KiB RAM (minus I/O), 32 KiB ROM, 32 KiB video RAM (two banks controlled by a VIA), a couple of handfuls of glue logic (the TMS9918 takes a lot of glue) and an SD card (using an Adafruit breakout board).

BigDumbDinosaur wrote:
You definitely need to get a priority encoder circuit in there to manage interrupt sources. Otherwise, your ISR is going to be eating up a lot of Ø2 cycles figuring out who's interrupting.


YES! This is what's holding me up at the moment. I'm re-reading through the primer, etc. I'm studying other designs too. But I'm always interested in a concrete example that uses all these interrupts.

As for priorities, the top priority will always be VDP. There are timing things I want to do at the end of the vertical blank and need to make sure it takes priority. Second would be UART.

Thanks for all the input!

PostPosted: Fri Feb 26, 2021 1:55 am 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
BigDumbDinosaur wrote:
You definitely need to get a priority encoder circuit in there to manage interrupt sources.
Maybe. Maybe not.

Quote:
Otherwise, your ISR is going to be eating up a lot of Ø2 cycles figuring out who's interrupting.
It's true that having lots of interrupt sources will mean "more" cycles spent figuring out who's interrupting, but you need to ask, is "more" enough to matter? How many times per second will we have to figure this out? That'll depend on the devices. The keyboard, for example, interrupts so infrequently as to hardly matter.

So, what are the tradeoffs between the two options? A priority encoder makes sense if maximum speed really is crucial, and you don't mind adding extra hardware to the design. But in other circumstances it'll be better to solve the problem in software. Even if you do end up consuming a lot of cycles, that may still be a perfectly valid choice.
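
For a concrete sense of the software option: polling in the ISR can be as simple as testing each likely source in descending priority order. A sketch (device addresses and labels are assumptions; the status-bit behaviour follows the TMS9918, 6551 and 65C22 datasheets):

Code:
; Software-polled IRQ dispatch sketch -- addresses are assumptions.
IRQ_Handler:
        PHA
        LDA VDP_STATUS    ; TMS9918: reading status also clears its flag
        BMI ServiceVDP    ; bit 7 set = frame (vertical blank) interrupt
        LDA ACIA_STATUS   ; 6551: bit 3 = receiver data register full
        AND #%00001000
        BNE ServiceACIA
        LDA VIA2_IFR      ; 65C22: bit 7 = any enabled source flagged
        BMI ServiceVIA2
        ; ...fall through to poll the expansion slots...
        PLA
        RTI

Each ServiceXXX routine would end by restoring A and executing RTI, or by jumping back to poll again.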

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Fri Feb 26, 2021 6:16 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dr Jefyll wrote:
It's true that having lots of interrupt sources will mean "more" cycles spent figuring out who's interrupting, but you need to ask, is "more" enough to matter? How many times per second will we have to figure this out? That'll depend on the devices. The keyboard, for example, interrupts so infrequently as to hardly matter.

Yes, but... :D

If a jiffy IRQ is involved, interrupts will happen frequently. Cecil mentions wanting a machine in the spirit of 8-bitters such as the VIC-20 and C-64. In both of those machines, the NTSC version of the kernel configured a VIA (VIC-20) or CIA (C-64) to interrupt at an approximate 60 Hz rate, with PAL machines using 50 Hz. The jiffy IRQ handled timekeeping and keyboard scanning, obviously both being essential to the machine's operation. If he is aiming to reproduce that behavior with Ø2 kept at a rate tolerable by the glue logic and I/O hardware (notably the TMS9918), he can't afford to waste too many cycles on the front-end of the ISR determining from where the IRQs are coming.

The jiffy IRQ would be "synchronous," hence can be "anticipated" and therefore given high priority. However, the ISR still has to check all other active sources each time, since the rest of the sources are likely to be asynchronous. In particular, his use of a 6551 UART will demand an ISR that performs with alacrity, since that device doesn't have a receiver FIFO. If the UART is operating at its maximum standard speed of 19.2Kbps and receives a steady input stream, it could conceivably be generating 1920 IRQs per second, assuming 8N1 data format.

Given that, the availability of some hardware that can advise the MPU which sources are interrupting could substantially reduce software interrupt latency and make the system more responsive. Assuming the use of the -S version of the 65C22, a wired-OR configuration wouldn't be possible for the VIAs' IRQ outputs. Hence the idea of an AND gate to aggregate the interrupt sources into a single output, coupled with something like a 74xx540 to provide a status byte indicating which sources are interrupting, would handle the mixed open-collector and totem-pole IRQ outputs in a reasonable way, as well as provide a simple mechanism to route the ISR processing and minimize polling. That's only two chips, one of which (the AND gate) would be necessary no matter what.

PostPosted: Fri Feb 26, 2021 6:27 pm 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1488
Location: Scotland
Dr Jefyll wrote:
So, what are the tradeoffs between the two options? A priority encoder makes sense if maximum speed really is crucial, and you don't mind adding extra hardware to the design. But in other circumstances it'll be better to solve the problem in software. Even if you do end up consuming a lot of cycles, that may still be a perfectly valid choice.

-- Jeff


Also, in addition to this: if you can achieve "work" during the interrupt, then it's never wasted cycles. My favourite 6502 machine, the BBC Micro, had a 100 Hz interrupt tick during which it did a lot: poll the keyboard, run the sound system and several other tasks. Other peripherals like the serial port had their own interrupt code, and at the end of the interrupt, if it wasn't "serviced", the OS did a JMP (foo) to carry on with user-provided IRQ handlers. This managed extra peripherals that might be plugged into the "1MHz" bus (a daisy-chainable CPU-level bus), the user port (half a VIA) or anything else. (The disk system used NMI.)
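
That end-of-interrupt handoff amounts to something like this (IRQ1V/IRQ2V are the documented BBC MOS vector locations; the surrounding code is purely illustrative):

Code:
; Sketch of the BBC-style chained exit.
IRQ2V = $0206             ; MOS vector for unrecognised IRQs

OS_IRQ:
        ; ...check and service the OS's own devices first...
        ; none of them flagged? pass the interrupt down the chain:
        JMP (IRQ2V)       ; user/expansion handlers carry on from here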

So it was never wasted as it was all there for good reasons.

My Ruby 816 system has a 1000 Hz interrupt, and one thing that does is blink a 'heartbeat LED', using balanced PWM to give it two pulses a second that fade in and out (quickly, at two a second). That serves no real purpose at the end of the day, but it's pretty.

As an experiment, calculating Pi to 200 places - so pure compute:

Code:
% pi 200
Time taken: 23316
pi = 3.+
1415926535 8979323846 2643383279 5028841971 6939937510
5820974944 5923078164 0628620899 8628034825 3421170679
8214808651 3282306647 0938446095 5058223172 5359408128
4811174502 8410270193 8521105559 6446229489 5493038196


Turn off the LED heartbeat and run it again:

Code:
% *led 0
Old: $01, New: $00
%  pi 200
Time taken: 23219
pi = 3.+
1415926535 8979323846 2643383279 5028841971 6939937510
5820974944 5923078164 0628620899 8628034825 3421170679
8214808651 3282306647 0938446095 5058223172 5359408128
4811174502 8410270193 8521105559 6446229489 5493038196


So I "waste" 97 ms in about 23 seconds, which is about 0.4%. I can live with that for a pretty LED.

-Gordon

PostPosted: Sat Feb 27, 2021 6:19 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
drogon wrote:
Also in addition to this - if you can achieve "work" during the interrupt, then it's never wasted cycles.

That's true...if the "work" is actually accomplishing something of value other than "shuffling paper" (dealing with the administrative aspects of interrupt processing).

Having worked on quite a few "bare-metal" programming projects over the years, I've had a constant goal of creating interrupt handlers that are very economical with system resources. In one project I did, which was an intelligent TIA-232 multiplexer, there wasn't a priority interrupt encoder or programmable interrupt controller (PIC) in the hardware. That omission made the ISR inefficient, due to the polling required to deal with the IRQ storm that would occur when multiple users were all banging away at their terminals and their screens were all being repainted at the same time. The hardware designers added a PIC when it became painfully apparent that foreground processing was getting the short end of the stick. The performance improvement seen after I rewrote the ISR to use the PIC was remarkable.

I partially adopted that philosophy in the design of POC V1.2 because I knew what would happen if all four serial channels were simultaneously active with continuous data flow (worst case would be 92,160 serial I/O IRQs per second at 115.2Kbps). By arranging for the virtual QUART (vQUART, two DUARTs made to appear as a single, four-channel UART) to report interrupt status in hardware, the MPU, upon responding to an IRQ, could read a single register and immediately know the state of the four receivers and transmitters. That eliminated the need for polling any IRQ sources other than the timer in one of the DUARTs and the SCSI host adapter. Needless to say, checking two possible IRQs and one encoder register is a much more efficient process than individually checking four receivers, four transmitters, a timer and a SCSI controller.

Bottom line is as much as is practical should be done to minimize the impact of interrupt handlers on the foreground. As can be seen with your pi calculation program, even something as trivial as periodically blinking an LED has an effect on performance, albeit in Ruby's case, a quite small effect. :D

BTW, those are pretty impressive times for calculating pi. It would be interesting to try out your algorithm on my Linux software development box and see how it runs.

PostPosted: Sat Feb 27, 2021 10:03 am 

Joined: Wed Feb 14, 2018 2:33 pm
Posts: 1488
Location: Scotland
BigDumbDinosaur wrote:

BTW, those are pretty impressive times for calculating pi. It would be interesting to try out your algorithm on my Linux software development box and see how it runs.


It takes about 7 ms on my i5 Linux desktop. It's written in BCPL; the compiler compiles itself in under a second on my Linux desktop. The CINTCODE interpreter (the target VM for the BCPL compiler) is somewhat inefficient and is written in C under Linux. It's pure '816 code on Ruby.

It's not my code; it came with the BCPL distribution, and the algorithm is known as "Machin's formula".

Code here:

https://unicorn.drogon.net/pi.b.txt

-Gordon

PostPosted: Sat Feb 27, 2021 6:32 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
BigDumbDinosaur wrote:
You definitely need to get a priority encoder circuit in there to manage interrupt sources.
BigDumbDinosaur wrote:
[...] coupled with something like a 74xx540 to provide a status byte indicating which sources are interrupting, would [...] route the ISR processing and minimize polling.
It's not a bad idea, and I can see how it'd improve matters. Just reminding everyone that this example of a status byte is one that's not encoded, and a priority encoder is a different animal. (Edit: the PIC you mention is a related subject.)

The status byte lets you query all device bits with just a single LDA (or LDX or LDY, perhaps); also you can query one or more individual bits with just a single BIT or similar instruction. But the status byte doesn't instantly tell you which among all the requesting devices has highest priority. In contrast, a 74xx540 delivering the output of an 'HC148 will let you do this!
Code:
LDX Priority_Encoder  ;HC148 output bits, shifted left and padded with zeros.
JMP (VectorTable,X)
'HC148 datasheet attached below. I don't necessarily advocate this level of fanciness. :P I'm just scoping out the available tradeoffs. (Even fancier would be to use the CPU's VPB output to gate the '148 -> '540 onto the bus. That's simplifying slightly. But it would eliminate even the two instructions just listed. :shock: )

Quote:
The jiffy IRQ would be "synchronous," hence can be "anticipated" and therefore given high priority. [...] If the UART is operating at its maximum standard speed of 19.2Kbps and receives a steady input stream, it could conceivably be generating 1920 IRQs per second, assuming 8N1 data format.
I agree we should give high priority to the UART, but not so much when it comes to the jiffy IRQ. Its interrupts are comparatively infrequent; also, I don't quite get your point about their periodic nature having a bearing.

Quote:
Bottom line is as much as is practical should be done to minimize the impact of interrupt handlers on the foreground.
Certainly for some projects this mantra is valid, but it would be a mistake to accept it generally as some kind of guiding principle or golden rule. Likewise we know totem-pole IRQs plus an AND will be less constraining in some ways than the Open Collector approach, but the latter uses less hardware. For Cecil's application I wouldn't summarily dismiss the Open Collector option. There are one or two points in the ISR that'd need careful attention, but attending to those might be more fun and educational than accommodating the totem-pole and the AND. It depends how he enjoys spending his time. :)

-- Jeff


Attachments:
74HC148 Priority Encoder.pdf [ 1.06 MiB ]

PostPosted: Sun Feb 28, 2021 11:15 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
drogon wrote:
BigDumbDinosaur wrote:

BTW, those are pretty impressive times for calculating pi. It would be interesting to try out your algorithm on my Linux software development box and see how it runs.


It takes about 7 ms on my i5 Linux desktop. It's written in BCPL; the compiler compiles itself in under a second on my Linux desktop. The CINTCODE interpreter (the target VM for the BCPL compiler) is somewhat inefficient and is written in C under Linux. It's pure '816 code on Ruby.

It's not my code; it came with the BCPL distribution, and the algorithm is known as "Machin's formula".

Code here:

https://unicorn.drogon.net/pi.b.txt

-Gordon

I've got Thoroughbred's Dictionary-IV (a very powerful timesharing form of business BASIC) development software on my server. I'm curious to see how well it can do the computation. :D

PostPosted: Sun Feb 28, 2021 12:13 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dr Jefyll wrote:
BigDumbDinosaur wrote:
You definitely need to get a priority encoder circuit in there to manage interrupt sources.
BigDumbDinosaur wrote:
[...] coupled with something like a 74xx540 to provide a status byte indicating which sources are interrupting, would [...] route the ISR processing and minimize polling.
It's not a bad idea, and I can see how it'd improve matters. Just reminding everyone that this example of a status byte is one that's not encoded, and a priority encoder is a different animal. (Edit: the PIC you mention is a related subject.)

Yep! I was a little careless in my verbiage, which could lead one to think that fetching a status byte is somehow like using a priority encoder.

Speaking of which, a priority encoder should be fairly straightforward to implement in programmable logic. The 74HC148 is often used for that purpose; the late Lee Davison posted such a circuit here. However, it is a relatively slow device.

Quote:
The status byte lets you query all device bits with just a single LDA (or LDX or LDY, perhaps); also you can query one or more individual bits with just a single BIT or similar instruction.

That's sort of what I am doing in POC V1.2. The individual interrupt outputs are wired to a 74AC540, which inverts the active-low interrupts to produce a byte in which a set bit means that channel interrupted. I read the status byte and shift it to identify the IRQ. Four right-shifts process the receivers and four left-shifts process the transmitters. If a set bit is shifted into carry the relevant ISR code is run. It's quite simple and efficient.
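
Unrolled, that shift-and-test dispatch looks something like this (the status register address and channel order are assumptions; the set bits come from the inverting 74AC540):

Code:
; Sketch: one set bit per interrupting channel, LSB = receiver A.
; The ServiceXXX routines are hypothetical and must preserve A,
; which holds the remaining shifted status bits.
        LDA UART_IRQ_STAT ; 74AC540 output: 1 = channel interrupting
        LSR               ; receiver A -> carry
        BCC NoRxA
        JSR ServiceRxA
NoRxA:  LSR               ; receiver B -> carry
        BCC NoRxB
        JSR ServiceRxB
NoRxB:  ; ...two more receivers, then the four transmitters...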

Quote:
I don't necessarily advocate this level of fanciness. :P

I advocate it only if the performance gains warrant it. If there are only two or three possible IRQs to service, priority encoding, etc., would be a luxury. Of course, as an educational thing to design, it could be worthwhile.

Quote:
Quote:
The jiffy IRQ would be "synchronous," hence can be "anticipated" and therefore given high priority. [...] If the UART is operating at its maximum standard speed of 19.2Kbps and receives a steady input stream, it could conceivably be generating 1920 IRQs per second, assuming 8N1 data format.

I agree we should give high priority to the UART, but not so much when it comes to the jiffy IRQ. Its interrupts are comparatively infrequent; also, I don't quite get your point about their periodic nature having a bearing.

I didn't mean to imply the periodic nature of the jiffy IRQ was special...merely pointing out that it was synchronous, knowledge of which could suggest how to structure the ISR.

In my POC firmware, the IRQ priority is 1) Jiffy IRQ timer; 2) UART receivers, with channel B having the highest priority; 3) UART transmitters, with channel A having the highest priority; 4) SCSI host adapter. The reason the IRQ timer is top man on the totem pole is a matter of programming convenience, nothing more. With the 65C816, the timekeeping code is very succinct, so few cycles are needed unless there is a carry into a MSW. Even then, that only happens once in a great while.

The SCSI interrupt code is unique in the system in that it can change the stack frame generated by the interrupt and thus reroute the execution of the foreground code. SCSI IRQs only happen when a SCSI transaction is being processed—the host adapter will not interrupt if the SCSI bus is quiescent. The IRQs are generated as the bus changes phases, or in the event of bus or controller errors.

Quote:
Quote:
Bottom line is as much as is practical should be done to minimize the impact of interrupt handlers on the foreground.

Certainly for some projects this mantra is valid, but it would be a mistake to accept it generally as some kind of guiding principle or golden rule. Likewise we know totem-pole IRQs plus an AND will be less constraining in some ways than the Open Collector approach, but the latter uses less hardware. For Cecil's application I wouldn't summarily dismiss the Open Collector option.

If he's using the 65C22S then his system will have a mixed IRQ circuit. You can fake an open drain with the 65C22S by isolating it from the wired-OR circuit with a Schottky diode. I see no reason not to if you can keep the layout sufficiently compact to minimize parasitic capacitance and avoid spurious interrupts.

I wouldn't refer to the suggestion that one write efficient code to minimize the effect of IRQ processing on the foreground as a "mantra"; I actually find that somewhat condescending. :) It is merely a case of encouraging the reader to use sound programming practices, as the payoff will be a better-running system.

In systems programming, of which I have a lot under my belt, writing a succinct ISR is indeed a "golden rule," especially for a system that supports preemptive multitasking. On the other hand, the code for a hobby system can be less efficient to get things going, since getting the machine to run right is usually the primary goal. However, it is educational to take the time to analyze one's ISR to find and eliminate the choke points, as well as to identify code that is "good enough." I say as much in my 65C816 interrupts article, in which I advise the reader who is looking to maximize ISR performance not to focus too much on parts of code that only see infrequent use.

Last edited by BigDumbDinosaur on Thu May 13, 2021 7:20 am, edited 1 time in total.

PostPosted: Mon Mar 01, 2021 4:04 am 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
BigDumbDinosaur wrote:
as much as is practical should be done to minimize the impact of interrupt handlers on the foreground
Dr Jefyll wrote:
Certainly for some projects this mantra is valid
My bad. I should've made clear that Cecil's project is among those -- the majority -- for which the "protect the foreground" principle does apply. But exceptions exist. I recall one project where the only context that mattered at all was the ISR, and I allowed it to be lengthy. And why not! Everything else, including the foreground task, could take a rain check. Regarding "protect the foreground" and other points mentioned, what I'm stressing is that goals and circumstances vary widely from one project to the next. The shrewdest approach is one that's always appraising the tradeoffs -- perhaps unexpected -- which prevailing conditions allow.

BigDumbDinosaur wrote:
a priority encoder should be fairly straight-forward to implement in programmable logic
Yes, and even just an (E)EPROM will suffice if it contains an appropriate table. The various interrupt sources would be synchronized by a register if necessary then fed to the (E)EPROM address inputs. And (E)EPROM outputs are a full 8 bits and can tri-state, unlike those of the 'HC148. :)


By the way, you don't need a priority encoder to pull off the snappy little dispatch snippet I posted. Even an unencoded status byte will work.
Code:
LDX Status_Byte      ;7 unencoded bits, padded with zero in the LS position.
JMP (VectorTable,X)
Notice I went down to only 7 bits, and the size of the Vector Table is 256 bytes (128 16-bit entries). To reduce that, it may be desirable to use even fewer bits, as follows.

In the following example I'll assume we have many interrupt sources, but only two that are critical. ("Critical" could mean high volume and/or sensitive to latency.)
Code:
LDX Status_Byte    ;all bits zero, except Bit2 is CriticalSource_B, Bit1 is CriticalSource_A
JMP (VectorTable,X)
Now the vector table has only four 16-bit entries. It will tell you where to immediately go if CriticalSource_A is active, if CriticalSource_B is active, or both A and B are active. For the fourth case -- ie, neither A nor B -- the Vector Table sends us to an unexciting routine that polls the remaining sources. So it's a hybrid approach.
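
Laid out concretely (labels are mine; Status_Byte is assumed to present bit 2 = source B, bit 1 = source A, and bit 0 always zero, so X can only be 0, 2, 4 or 6):

Code:
        LDX Status_Byte      ; %00000BA0
        JMP (VectorTable,X)  ; 65C02/65C816 indexed-indirect jump

VectorTable:
        .word PollTheRest    ; %000: neither critical source active
        .word ServiceA       ; %010: A only
        .word ServiceB       ; %100: B only
        .word ServiceBoth    ; %110: both -- service B first, say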

We're spending some memory to support this, and the amount doubles for every additional input to the lookup. A priority encoder doesn't need the lookup at all. But with a priority encoder it's the input wiring which determines the various priorities. If you move that decision to a lookup table in RAM you can shuffle the priorities at will, even on the fly -- a powerful advantage in some scenarios.

-- Jeff

PostPosted: Tue Mar 02, 2021 4:15 pm 

Joined: Wed Aug 17, 2005 12:07 am
Posts: 1250
Location: Soddy-Daisy, TN USA
Thanks everyone for such great information and comments.

For the record, this whole project is educational for me, with the added benefit of (hopefully) creating the computer I wanted back in 1980. Other than a few modern conveniences (like SRAM, SD and EEPROM), this computer could have been built in 1980 (albeit expensively).

Anyway, nothing is really set in stone at the moment. My schematics are 90% done, and you know the last 10% is always the hardest. For example, I have two VIAs. Well, even after adding an SD card, TWO audio chips, etc., I have a few VIA pins left over that I'm struggling to find a use for, mainly pins like CB1/2, etc.

So then I noticed...that one of my VIAs is mostly devoted to audio. So why do I need an IRQ on it at all? I guess because I hate leaving any pin unused.

I may start a separate topic on "what to do with unused pins?". LOL

Example of my VIA setup below:

Back to the topic at hand...if I decide to ignore this VIA's IRQ, I would then only have one VIA (keyboard, user port and SD card), a VDP (an IRQ every 1/60th of a second for raster interrupts), a UART and the expansion slots.

So I guess I am wondering...is it bad practice to just simply ignore IRQ pins if I can't find a legit reason to use them?
I think I like the idea of having a "heartbeat" that occurs X times per second. That could then maybe read from RAM for the audio. Maybe handle the SD card, etc.


Attachments:
File comment: Example
VIA.PNG [ 106.96 KiB ]

PostPosted: Tue Mar 02, 2021 5:21 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Hmm, if you have a VIA and don't connect the IRQ up, you're certainly losing out on some future possibilities. It seems like it shouldn't cost much to add it. (I wouldn't always be in favour of adding things just because you can.)

