Dr Jefyll
PostPosted: Sat Oct 08, 2011 5:05 am
Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
In another thread we were discussing hardware that detects 65C02 op-code fetches. Then...
Dajgoro wrote:
Is it possible to make an illegal-opcode detector which triggers an NMI that would report an error?
After some discussion (thanks, BDD and Garth) it became clear that Dajgoro was wondering what it would take for an NMOS 6502 to emulate a 65C02.
Dajgoro wrote:
In fact, the idea came to me when I was listening to my operating-systems class a few days ago, and the professor mentioned that some more modern CPUs would trigger an interrupt when an illegal opcode happens, so they could simulate non-existing instructions that newer CPUs might have -- so why wouldn't the 6502 be able to do that?
In my case it would only be necessary to trap 65C02 opcodes; I was thinking something simple and cheap might be possible.
There should be a 16-bit latch that would capture the address, and logic that would detect 65C02 instructions, trigger an NMI, and change the instruction to a NOP.
Maybe it would be best if a new topic is started; I feel like I hijacked this one...


Dr Jefyll
PostPosted: Sat Oct 08, 2011 5:06 am
Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
I think Dajgoro's topic is more interesting than it perhaps first appears. Granted, it's not very exciting to force an NMOS '02 to mimic what a 65C02 can do -- that's something that a simple CPU swap can achieve. But in more general terms, the Virtual Instructions could be used for almost anything -- for example, a Wozniak-style SWEET16 interpreter. And the scope is substantial. The 65C02 has 46 unused opcodes that could be converted into unique software interrupts. The NMOS CPU has 105.

On the hardware side, I came up with an approach that's actually quite simple. It also has shortcomings, but for discussion's sake it's a useful place to start.
Attachment:
Re-mapping_Op-codes.gif

On Write cycles, data passes outward from the CPU through the tri-state buffer to memory. On Read cycles, data returning from memory is passed to the EPROM. Regarding the EPROM...

When SYNC is low, only the bottom 256 bytes of the EPROM can be accessed. Their contents are such that address $00 sends $00 back to the CPU, $01 yields $01, and so on up to $FF which yields $FF. Think of this as being a substitution -- with every value replaced with itself. :)

When SYNC is high (ie: op-code fetch), the next 256 bytes are used and the effect is similar. But the contents are such that certain substitutions -- those corresponding to illegal op's -- yield $00, the BRK op-code. This triggers the interrupt, and after that it's all a matter of some (hopefully very clever) software!
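To make the substitution concrete, here's a hypothetical fragment (PHX, op-code $DA, is a 65C02 instruction the NMOS part lacks):

        LDX #$41        ; legal op-codes pass through the EPROM unchanged
        .DB $DA         ; 65C02 PHX: on this fetch the SYNC-high half of the
                        ; EPROM returns $00 (BRK) instead -- but memory still
                        ; holds $DA, so the ISR can read it back and emulate PHX
        NOP             ; NB: BRK pushes the op-code address +2, so the ISR must
                        ; step the return address back a byte, or this
                        ; instruction would get skipped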

-- Jeff


Last edited by Dr Jefyll on Sat Dec 14, 2013 11:10 pm, edited 1 time in total.

Dajgoro
PostPosted: Sat Oct 08, 2011 1:10 pm
Joined: Mon Aug 08, 2011 2:48 pm
Posts: 808
Location: Croatia
In a system that would have multiple threads (I saw 4-thread switching code in the code repository), these opcodes might be used as system calls...


Nightmaretony
PostPosted: Sat Oct 08, 2011 1:44 pm
Joined: Fri Jun 27, 2003 8:12 am
Posts: 618
Location: Meadowbrook
Why convert to interrupts when they can convert to hardware codes?

As a dumb example, now that I am thinking about it: I have an LED port on the pinball. By making 2 opcodes in CPLD logic, I can save operating cycles.


Old method:

LDA #$01        ; 2 cycles
STA STATUSLED   ; 4 cycles (if STATUSLED is absolute) -- 6 cycles and 5 bytes in all



Let me say that I can instead make 2 new opcodes, such as TLH (Turn LED High) and TLL (Turn LED Low). This way, those would be single-cycle instructions instead of needing 2 commands as I did before.

How can the Kowalski program use those, by macros?
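Macros would be one way. A minimal sketch in Kowalski-style syntax, with $02 and $03 standing in for whatever byte values the CPLD actually decodes:

TLH     .MACRO          ; Turn LED High: emit the new op-code as a raw byte
        .DB $02         ; hypothetical encoding
        .ENDM

TLL     .MACRO          ; Turn LED Low
        .DB $03         ; hypothetical encoding
        .ENDM

START   TLH             ; after that, they read like native instructions
        TLL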

_________________
"My biggest dream in life? Building black plywood Habitrails"


Dr Jefyll
PostPosted: Sat Oct 08, 2011 4:26 pm
Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
Nightmaretony wrote:
Why convert to interrupts when they can convert to hardware codes?
The short answer is, because an opcode is just a single instruction, whereas an interrupt can invoke an entire subroutine. Either approach may be preferable, depending on your priorities. The interrupt incurs a lot of overhead pushing PC and P registers to stack and then determining how the ISR should vector. On the other hand it's highly flexible and capable of a very complex result -- hence the concept of a Virtual Instruction.
Nightmaretony wrote:
I have an LED port on the pinball. By making 2 opcodes in CPLD logic, I can save operating cycles [....]
If you don't mind I'll copy this part of your post over to the Ultra-fast output port thread. :)

Dajgoro wrote:
these opcodes might be used as system calls...
Sure. It could work a lot like the INT instruction on x86 processors. Unfortunately our system has no hardware support to help differentiate between interrupts, so we need some software at the beginning of the Interrupt Service Routine to look back, determine which opcode caused the BRK, and jump to an appropriate vector.
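A minimal sketch of that look-back, assuming BRK is the only source wired to IRQ; OPPTR and JMPVEC (zero-page pointers) and the 256-entry tables VECLO/VECHI are hypothetical:

ISR     PHA             ; preserve A and X (Y too, if the handlers need it)
        TXA
        PHA
        TSX             ; X = S, so the bytes pushed by BRK are reachable
        SEC
        LDA $0104,X     ; pushed PCL: BRK pushed the op-code address +2
        SBC #2
        STA OPPTR
        LDA $0105,X     ; pushed PCH
        SBC #0
        STA OPPTR+1
        LDY #0
        LDA (OPPTR),Y   ; re-read the re-mapped op-code (a data read, so the
        TAY             ; EPROM's identity half passes it through untouched)
        LDA VECLO,Y     ; look up this op-code's handler address
        STA JMPVEC
        LDA VECHI,Y
        STA JMPVEC+1
        JMP (JMPVEC)    ; each handler fixes up the pushed PC to suit its
                        ; instruction length, restores the registers, and RTIs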

I wrote:
On the hardware side, I came up with an approach that's actually quite simple. It also has shortcomings [...]
The main shortcoming is pretty obvious. In the circuit as shown, the EPROM creates delay during every Read cycle, and the system may not have adequate timing margins to tolerate this. Just in case anyone's crazy enough to actually build my little circuit, there are a variety of remedies possible. Here's a partial list:
  • get the fastest possible EPROM (or alternative memory device). With luck, that might be all you'll need.
  • as above, but also upgrade to faster system RAM. (ROM and I/O may require a wait state.)
  • slow down all bus cycles by reducing the clock rate overall :(
  • slow down only op-code fetch cycles, via a wait state. Other Read cycles can occur at full speed if a transceiver replaces the '244 buffer (ie: Read data other than opcodes bypasses the EPROM).
-- Jeff


Post subject: INT vs. BRK
BigDumbDinosaur
PostPosted: Sat Oct 08, 2011 5:33 pm
Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dr Jefyll wrote:
Sure. It could work a lot like the Int instruction on x86 processors. Unfortunately our system has no hardware support to help differentiate between interrupts, so we need some software at the beginning of the Interrupt Service Routine to look back, determine which opcode caused the BRK, and jump to an appropriate vector.

Actually, the x86 systems aren't much better in that regard. On the original x86 systems, a programmable interrupt controller (PIC) was used to identify IRQ sources, as the x86 MPU itself has only one IRQ input. Also, INT <x> is no different than BRK followed by a signature byte on a 65xx system.

With the WDC version of the 65C02, as well as the 65C816, the MPU's VPB output can tell system logic when an interrupt vector is to be accessed, making it possible to change vectors on the fly according to the interrupting source. In the case of using BRK as a vectoring software interrupt, some simple stack acrobatics can point to the signature byte, which can be used to tell the OS what to do. It's even easier on the '816, as it has stack-relative addressing to help.
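For instance, in native mode the '816's pushed program counter sits just above the pushed status register, so something like this reaches the signature byte without any juggling (8-bit accumulator assumed; SIGPTR is a hypothetical direct-page pointer, and the signature is assumed to lie in the current data bank):

BRKISR  LDA 2,S         ; pushed PC low byte -- it points one past the signature
        SEC
        SBC #1
        STA SIGPTR
        LDA 3,S         ; pushed PC high byte
        SBC #0
        STA SIGPTR+1
        LDA (SIGPTR)    ; A = the signature byte; dispatch on it from here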

You shouldn't sell the 65xx family short when it comes to interrupt processing. In many cases, it's better than other MPU designs, and 65xx interrupt latency is very low by comparison.


Post subject: Re: INT vs. BRK
Dr Jefyll
PostPosted: Sat Oct 08, 2011 7:19 pm
Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
BigDumbDinosaur wrote:
Also, INT <x> is no different than BRK followed by a signature byte on a 65xx system.
The two-byte format used by INT opcode CDh is the same, yes. Yet, my trusty (and somewhat dusty!) Intel manual says the CDh opcode uses the immediate operand to choose between 256 different vectors (unlike BRK). However, it's true, for example, that DOS relied very heavily on INT 21, and that software had to decide between a long list of sub-vectors.

BigDumbDinosaur wrote:
You shouldn't sell the 65xx family short when it comes to interrupt processing. In many cases, it's better than other MPU designs, and 65xx interrupt latency is very low by comparison.
I absolutely agree. As for latency, while playing with the Visual6502 simulator recently I noticed that IRQ responds one cycle faster than NMI (maybe because of NMI's edge-detector logic?). Worth remembering if you're ever desperate for absolute minimum latency.

-- Jeff


Post subject: Re: INT vs. BRK
BigDumbDinosaur
PostPosted: Sun Oct 09, 2011 5:03 am
Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dr Jefyll wrote:
As for latency, while playing with the Visual6502 simulator recently I noticed that IRQ responds one cycle faster than NMI (maybe because of NMI's edge-detector logic?). Worth remembering if you're ever desperate for absolute minimum latency.

Interesting that none of the 65xx data sheets makes any distinction in execution time between IRQ and NMI processing. Also, when you consider that both interrupt inputs are asynchronous, the response time seen in Visual6502 becomes suspect. The odds are that in most cases the interrupt will hit while the MPU is working on the intermediate steps of an instruction, especially an RMW one such as ASL <ADDR>, which uses quite a few clock cycles. So the detection logic would have settled well before the completion of the instruction, which is when /IRQ and /NMI are sampled.


Post subject: Re: INT vs. BRK
Dr Jefyll
PostPosted: Sun Oct 09, 2011 2:58 pm
Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
BigDumbDinosaur wrote:
The odds are in most cases the interrupt will hit while the MPU is working on the intermediate steps of an instruction, especially a RMW one, such as ASL <ADDR>, which uses quite a few clock cycles.
Hmmm. Now you've got me thinking. I agree that, in the middle of a lengthy instruction, it makes no difference if the interrupt signal arrives a little early or late. But as "late" becomes later and later, eventually a timing deadline will get missed, delaying interrupt response by one entire instruction -- ie, several cycles. So I guess what I'm suggesting is that IRQ bypasses the input conditioning which NMI uses for edge detection. Bypassing that circuit allows the IRQ signal to arrive later yet still meet the same deadline as NMI. Hence my (tentatively confirmed) theory that using the IRQ input rather than the NMI input will reduce average latency by one cycle.

I should mention I only ran one very simple interrupt experiment on the Visual6502 simulator. (And I don't necessarily consider its results to be utterly infallible.)

BigDumbDinosaur wrote:
Interesting that none of the 65xx data sheets makes any distinction in execution time for IRQ vs. NMI processing.
I don't consider Data Sheets to be infallible, either! :wink:

-- Jeff


Last edited by Dr Jefyll on Sun Oct 09, 2011 5:10 pm, edited 1 time in total.

Post subject: Re: INT vs. BRK
BigDumbDinosaur
PostPosted: Sun Oct 09, 2011 5:08 pm
Joined: Thu May 28, 2009 9:46 pm
Posts: 8514
Location: Midwestern USA
Dr Jefyll wrote:
Hmmm. Now you've got me thinking. I agree that, in the middle of a lengthy instruction, it makes no difference if the interrupt signal arrives a little early or late. But as "late" becomes later and later, eventually a timing deadline will get missed, delaying interrupt response by one entire instruction (ie, several cycles).

The 65C02 timing diagram shows both IRQ and NMI being sampled at the fall of Ø2, which would be the end of the current instruction. The timing diagram also seems to imply that whatever is driving either of those inputs has between 10 and 60 nanoseconds (depending on Ø2 frequency) to assert the signal before the actual sample occurs. One could thus infer that as long as IRQ or NMI is low prior to the fall of Ø2, a response will occur. I suppose, however, that a small but inexorable internal propagation delay would exist with NMI that would narrow that window of opportunity.

Be that as it may, NMI was never really intended to be a general-purpose interrupt like IRQ. Most systems reserve NMI for a single critical event, such as incipient power failure. The Commodore C-64 and C-128 attached NMI to CIA #2 (the VIC-20 used one of its 6522 VIAs the same way), which was responsible for driving the fake RS-232 routines. Those routines suffered from a number of errors, to which, I'm sure, a delayed NMI response would have added. Yet I never read anything about it that hinted that using NMI to signal when data was waiting ever caused dropped characters.

Quote:
I don't consider Data Sheets to be infallible, either! :wink:

Especially some of WDC's data sheets. You'd think with these products having been in circulation for some 30 years they would have gotten the data sheets to a pristine state by now.


Post subject: Re: INT vs. BRK
Dr Jefyll
PostPosted: Sun Oct 09, 2011 6:57 pm
Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
I admit this interrupt-latency anomaly causes me to raise my eyebrows -- it does seem odd that it'd only now come to light. Yet some significant new 65xx info was unearthed just recently in this thread. As for interrupt latency, allow me to make another comment or two. Then, if anyone wants to elaborate, just PM me and I'll start a new topic! (Or you can do so yourself.)

BigDumbDinosaur wrote:
The 65C02 timing diagram shows both IRQ and NMI being sampled at the fall of Ø2, which would be the end of the current instruction.
It'd be nice if they did specify that it was the end of the current instruction, but do they say that, or are you reading between the lines? I don't recall seeing interrupt latency (NMI or IRQ!) quantified as a clear and legitimate spec on any 65xx data sheet, ever! But I'd be grateful if anyone can point me to a reference, and then we can lay the matter to rest.

Does anyone want to run a little experiment? All it would take is a timer (6522?) connected first to IRQ, then NMI. The ISR could read the underflowed timer and determine the latencies, and whether IRQ and NMI are the same. (I'd do it myself, but the truth is I have no functional 6502 hardware at present! :oops: )
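Something along these lines, perhaps, with the IRQ (or NMI) vector pointed at ISR; all the addresses are hypothetical:

VIA     = $A000         ; hypothetical 6522 base address
T1CL    = VIA+$04       ; reading T1C-L also clears the T1 interrupt flag
T1CH    = VIA+$05
ACR     = VIA+$0B
IER     = VIA+$0E

TEST    LDA #$40        ; ACR: T1 free-run mode, PB7 output disabled
        STA ACR
        LDA #$C0        ; IER: set mode, enable T1
        STA IER
        LDA #$00
        STA T1CL        ; latch low byte of a $4000-cycle period
        LDA #$40
        STA T1CH        ; writing T1C-H starts the timer
        CLI
SPIN    JMP SPIN        ; short fixed-length loop keeps arrival jitter small

ISR     LDA T1CL        ; read low byte first (this also clears the flag)
        LDX T1CH        ; X:A ~= $4000 minus the cycles since underflow
        STA RESLO       ; log the reading (RESLO/RESHI hypothetical), then
        STX RESHI       ; repeat the run with the 6522 on NMI and compare
        RTI             ; registers aren't preserved; the spin loop won't care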

-- Jeff
ps- I re-worded my previous post, hoping to un-muddle my explanation slightly.


GARTHWILSON
PostPosted: Sun Oct 09, 2011 8:20 pm
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8546
Location: Southern California
Quote:
Most systems reserve NMI for a single critical event, such as incipient power failure.

In most systems the people on 6502.org are making, what happens in the last milliseconds before power is gone is of no concern. If you have a system that remembers things when it's off, it probably has batteries and can turn itself off in an orderly fashion. Otherwise, if you accidentally pull the power cord, there's no time to store anything useful on a disc anyway. I use NMI for my software RTC running on interrupts from one of the VIAs' free-running T1. That way the clock keeps running if other interrupts are disabled, and the ISR has one less possible source to waste time polling for. If I really need the clock interrupts stopped, I clear the T1's flag in the IER (interrupt-enable register).
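(For reference: the 6522's IER is written with bit 7 as a set/clear selector, and T1 is bit 6, so stopping and restarting the clock interrupts looks like this, with VIA_IER standing in for the actual register address:)

        LDA #%01000000  ; bit 6 selects T1; bit 7 = 0 means "clear these enables"
        STA VIA_IER     ; T1 interrupts off -- T1 itself keeps free-running
        LDA #%11000000  ; bit 7 = 1 means "set these enables"
        STA VIA_IER     ; T1 interrupts back on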

Quote:
Does anyone want to run a little experiment? All it would take is a timer (6522?) connected first to IRQ, then NMI. The ISR could read the underflowed timer and determine the latencies, and whether IRQ and NMI are the same.

I did it on a 65c02 years ago and IIRC the NMI line could fall in the last cycle of the currently executing instruction and the processor would still begin the interrupt sequence immediately, but I'll have to find my records to verify. [Edit: That is correct. I checked.]

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


BigEd
PostPosted: Sun Oct 09, 2011 8:35 pm
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Don't forget that visual6502 simulates an NMOS device. The 65C02 may well be different.
Cheers
Ed

