PostPosted: Wed Nov 06, 2013 3:30 am 

Joined: Sun Jul 28, 2013 12:59 am
Posts: 235
barrym95838 wrote:
RE: NMI. Not to beat a dead horse, but what I'm envisioning is a device that needs to be serviced with an understandable delay, but with no jitter. I don't know if there are devices out there with synchronization needs that are so strict, but if one did exist, it could assert NMI and hold it long enough to guarantee that the processor had finished the current instruction, stacked the state, and loaded the NMI vector into PC. It would then release NMI and 'know' that the ISR would begin execution with EXACT timing precision. It would be kind of like a reset, but with an RTI available. Or like a hardware-triggered version of WAI. Am I making any sense?

Mike


Makes sense to me... But wouldn't that be the wrong edge?

For some reason, I'm reminded of something that Don Lancaster came up with for the Apple ][ series computers, the "Vapourlock". It read from a write-only memory location (something to do with the tape drive hardware) to sample the data being fetched by the video hardware, and from that worked out exactly where the video circuitry was in its scan pattern. The code synchronized to that over some number of scan lines (it started with a CMP / BNE loop, so a seven-cycle window to begin with, cut in half each scan line), and at the end the program knew exactly where the raster beam was. It could then switch video modes at least on a scanline boundary with no jitter (horizontal raster effects), and possibly within the scanline (vertical raster effects, if repeated properly).

It might be better in your "no jitter interrupt servicing" to have some way to query the device as to how many cycles are left before it really needs service, and burn those cycles off "correctly". Conditional jumps to the following instruction (two cycles if not taken, three if taken, no other effect) might help here.
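
For illustration, here is a minimal sketch of that cycle-burning trick in 6502 assembly, assuming the number of extra cycles to burn (0 to 7) is already in A and that none of the taken branches cross a page boundary (which would cost an extra cycle); the labels are made up:

Code:
        ; burn exactly A extra cycles (A = 0..7) on top of this routine's
        ; own constant cost
pad:    lsr  a          ; bit 0 of the count -> carry
        bcs  pad1       ; branch to the next instruction: 2 cycles if clear, 3 if set
pad1:   lsr  a          ; bit 1 -> carry
        bcc  pad2       ; clear: 3 cycles and skip ahead
        bcs  pad2       ; set: 2 + 3 = 5 cycles, a difference of 2
pad2:   lsr  a          ; bit 2 -> carry
        bcc  pad4       ; clear: 3 cycles
        nop
        bcs  pad4       ; set: 2 + 2 + 3 = 7 cycles, a difference of 4
pad4:   rts

Each stage contributes 0 or 2^k cycles for bit k of the count, so three stages give a 0-7 cycle adjustment with single-cycle resolution.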


PostPosted: Wed Nov 06, 2013 3:47 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
Mike, something relevant is that years ago I did a cycle-by-cycle test on the Rockwell 65c02's NMI and found that I could hold NMI\ down for only one cycle of an instruction, even with more cycles to go before finishing that instruction, and when the instruction was finished, the interrupt sequence would begin.  It was not necessary to hold NMI down through that entire time.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Wed Nov 06, 2013 5:46 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8504
Location: Midwestern USA
barrym95838 wrote:
RE: NMI. Not to beat a dead horse, but what I'm envisioning is a device that needs to be serviced with an understandable delay, but with no jitter. I don't know if there are devices out there with synchronization needs that are so strict, but if one did exist, it could assert NMI and hold it long enough to guarantee that the processor had finished the current instruction, stacked the state, and loaded the NMI vector into PC. It would then release NMI and 'know' that the ISR would begin execution with EXACT timing precision. It would be kind of like a reset, but with an RTI available. Or like a hardware-triggered version of WAI. Am I making any sense?

Mike

I've toyed around with something similar in POC V1.1, attaching the IRQ output of the Dallas watchdog to NMI for the purposes of maintaining the jiffy timers even when SEI has been executed. The reality was that any beneficial effect that might have occurred was unnoticeable.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed Nov 06, 2013 10:39 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
barrym95838 wrote:
I don't know if there are devices out there with synchronization needs that are so strict, but if one did exist, it could assert NMI and hold it long enough to guarantee that the processor had finished the current instruction, stacked the state, and loaded the NMI vector into PC. It would then release NMI and 'know' that the ISR would begin execution with EXACT timing precision.
(Emphasis added.) Hi Mike; here's an example. It's critical that samples taken from an analog-to-digital converter (or fed to a digital-to-analog converter) be timed at regular intervals. That's because any timing variations (jitter) effectively translate into amplitude distortion of the signal you're handling. Usually latency (delay in responding to the interrupt) can be tolerated, but only if the delay is fixed -- no variation. Unfortunately, variation is introduced according to which instruction happens to get interrupted. 6502 instructions vary from 2 to (I think) 7 cycles, so that's up to 5 cycles of jitter. That may be tolerable for low frequency sampling (a seismograph?). But if 5 cycles is a significant fraction of the sample period, now you have an issue.
nyef wrote:
It might be better in your "no jitter interrupt servicing" to have some way to query the device as to how many cycles are left before it really needs service, and burn those cycles off "correctly". Conditional jumps to the following instruction (two cycles if not taken, three if taken, no other effect) might help here.
Right. In the ADC/DAC example, probably the interrupt originates from a free-running timer. So, the ISR can eliminate jitter by reading the timer value then deriving and executing an appropriate do-nothing delay (such that the latency plus the computed delay result in a fixed value every time). It can then proceed to the I/O operation.

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


PostPosted: Wed Nov 06, 2013 11:39 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8504
Location: Midwestern USA
Dr Jefyll wrote:
barrym95838 wrote:
I don't know if there are devices out there with synchronization needs that are so strict, but if one did exist, it could assert NMI and hold it long enough to guarantee that the processor had finished the current instruction, stacked the state, and loaded the NMI vector into PC. It would then release NMI and 'know' that the ISR would begin execution with EXACT timing precision.
(Emphasis added.) Hi Mike; here's an example. It's critical that samples taken from an analog-to-digital converter (or fed to a digital-to-analog converter) be timed at regular intervals. That's because any timing variations (jitter) effectively translate into amplitude distortion of the signal you're handling. Usually latency (delay in responding to the interrupt) can be tolerated, but only if the delay is fixed -- no variation. Unfortunately, variation is introduced according to which instruction happens to get interrupted. 6502 instructions vary from 2 to (I think) 7 cycles, so that's up to 5 cycles of jitter. That may be tolerable for low frequency sampling (a seismograph?). But if 5 cycles is a significant fraction of the sample period, now you have an issue.
nyef wrote:
It might be better in your "no jitter interrupt servicing" to have some way to query the device as to how many cycles are left before it really needs service, and burn those cycles off "correctly". Conditional jumps to the following instruction (two cycles if not taken, three if taken, no other effect) might help here.
Right. In the ADC/DAC example, probably the interrupt originates from a free-running timer. So, the ISR can eliminate jitter by reading the timer value then deriving and executing an appropriate do-nothing delay (such that the latency plus the computed delay result in a fixed value every time). It can then proceed to the I/O operation.

-- Jeff

I don't think jitter in the hypothetical D/A conversion can ever be totally eliminated unless the MPU is dedicated to that one task, using the SEI -- WAI sequence to reduce interrupt latency to a single clock cycle. So the discussion becomes one of how much jitter is tolerable.
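
For reference, the SEI -- WAI approach looks something like this on a WDC 65C02, assuming a free-running timer (say, a 6522's T1) is driving IRQB at the sample rate; the addresses and labels here are made up:

Code:
VIA_T1CL = $600C        ; hypothetical: 6522 T1 counter low byte (a read clears the T1 flag)
SAMPLE   = $80          ; hypothetical zero-page location holding the next sample
DAC      = $7F00        ; hypothetical DAC register

        sei             ; keep IRQs from vectoring; WAI will still wake on IRQB
loop:   lda  SAMPLE     ; fetch the next sample ahead of time
        bit  VIA_T1CL   ; clear the T1 flag so IRQB can deassert before we wait
        wai             ; halt here until IRQB falls on the next T1 timeout
        sta  DAC        ; executed a fixed number of cycles after IRQB falls,
                        ;   so there is no instruction-length jitter
        bra  loop       ; go wait for the next sample period

With the I flag set, the interrupt is never actually taken; it just releases WAI, so execution resumes at the STA with a constant delay.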

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed Nov 06, 2013 11:40 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
Dr Jefyll wrote:
Unfortunately, variation is introduced according to which instruction happens to get interrupted. 6502 instructions vary from 2 to (I think) 7 cycles, so that's up to 5 cycles of jitter.  That may be tolerable for low frequency sampling (a seismograph?). But if 5 cycles is a significant fraction of the sample period, now you have an issue.

Bad news / good news:  The bad news is that it's worse than five, because the interrupt can also hit during the last two cycles, meaning there would be jitter even if every instruction were only two cycles long.  The good news, besides the fact that the 7-clock instruction is rare (on the 6502), is that this is the peak-to-peak jitter.  The RMS value used in calculating the resulting noise is a lot less.  I recently calculated it, based on an equal distribution of 2- to 6-clock instructions, to be about 1.8 clocks RMS.

The rest here is mostly from my potpourri page:

The signal-to-noise ratio is:

    SNR = 20 × log10( 1 / (2π × f × tj) )    [dB]

where f is the analog input frequency, and tj is the RMS jitter time.  At a 5MHz Φ2 rate, the jitter-induced noise is 38dB down from the signal at 5.66kHz.  The noise is reduced of course with decreasing input frequency.
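
As a quick check with the figures above (1.8 clocks RMS at a 5MHz Φ2 rate):

    tj  = 1.8 cycles / 5MHz = 360ns
    SNR = 20 × log10( 1 / (2π × 5.66kHz × 360ns) )
        = 20 × log10( 1 / 0.0128 )  ≈  38dB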

Although it would be nice to have the same SNR at a much higher frequency, 38dB down is really not bad for an 8-bit converter, considering that audio amplitudes tend to be very inconsistent and you might not be able to keep the signal in the top 2 bits of the converter's resolution anyway.  As the top end of your input signal's frequency spectrum drops, so will the problems resulting from jitter.  The maximum S/N ratio you can get with a perfect 8-bit converter and no jitter is 50dB (from 8bits * 6.02dB + 1.76dB).  (That's while there's signal.  With no signal, the converter's output remains constant, with no noise at all if the reference voltage is quiet; so it's not like cassettes which gave tape noise between songs.)

There's a tutorial on jitter and ENB (effective number of bits) here, and an excellent lecture and demonstration of what is and is not important in audio, and down-to-earth proofs of the "golden ears" baloney, at http://www.youtube.com/watch?v=BYTlN6wjcvQ.  Yes, it's on YouTube which compresses the audio and loses information, but he gives the URL where you can download the raw wave files if you want to, otherwise see what he does with various experiments right there.

If you are sampling AC signals, it would be expected that you have at least a basic understanding of the Nyquist frequency and aliasing.  I might comment however that there are times when it is appropriate to undersample, if you are dealing with a limited bandwidth of, for example, 200-220kHz.  You don't necessarily have to sample much above 40kSPS; but the jitter still needs to be suitable for a 220kHz signal, not a 20kHz one.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Thu Nov 07, 2013 1:42 am 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
GARTHWILSON wrote:
there would be jitter even if every instruction were only two cycles long
Doh! Thanks for the correction!

BigDumbDinosaur wrote:
I don't think jitter in the hypothetical D/A conversion can ever be totally eliminated unless the MPU is dedicated to that one task, using the SEI -- WAI sequence to reduce interrupt latency to a single clock cycle.
SEI -- WAI has the ability to eliminate latency itself (and thereby jitter as well). The solution I describe can only eliminate jitter, but it does so just as effectively as SEI -- WAI.

I hope I'm clear about what's proposed. A free-running counter/timer (such as that in a 6522) is in an endlessly repeating cycle that generates interrupts. Every time the count reaches zero, an interrupt request is generated and the counter reloads itself from the value stored in the associated latch. Counting continues, but from the reload value. One cycle later, the counter will hold ReloadValue - 1. On the following cycle it will hold ReloadValue - 2, and so on.

Some variable number of cycles later we get to the heart of the corresponding ISR code. From there, the first step is to take the reload value, subtract whatever's now in the timer (ie, ReloadValue - n) and thus derive n, the elapsed delay. The second step is to add a complementary delay, as already explained. The total delay is now constant, right to the exact clock cycle. Is there a problem with either of these two steps?
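
To make the two steps concrete, here is a minimal sketch assuming a 65C02 and a 6522 with T1 in free-run mode.  The addresses, constants, and labels are invented; MAX_N, the worst-case value of n, would come from cycle counting or calibration, which also absorbs the small fixed offset in the 6522's reload timing:

Code:
VIA_T1CL = $600C        ; hypothetical: T1 counter low byte (reading it also
                        ;   clears the T1 interrupt flag)
RELOAD_L = 199          ; hypothetical: low byte of the value in the T1 latches
MAX_N    = 30           ; hypothetical: worst-case n, found by calibration
TMP      = $00          ; hypothetical zero-page temporary

isr:    pha             ; constant-cost preamble: adds latency but not jitter
        lda  #RELOAD_L  ; step 1: n = ReloadValue - (what's in the timer now)
        sec
        sbc  VIA_T1CL   ; A = n, the elapsed delay (assumes n < 256)
        sta  TMP
        lda  #MAX_N     ; step 2: complementary delay = MAX_N - n
        sec
        sbc  TMP        ; the difference spans only the few cycles of jitter
        jsr  pad        ; burn exactly A cycles, e.g. with a branch ladder like
                        ;   the one sketched earlier in the thread
        ; latency plus padding is now constant: do the time-critical I/O here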

-- Jeff

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


Last edited by Dr Jefyll on Thu Nov 07, 2013 4:47 am, edited 1 time in total.

PostPosted: Thu Nov 07, 2013 2:16 am 

Joined: Sun Jul 28, 2013 12:59 am
Posts: 235
Dr Jefyll wrote:
When the corresponding ISR code executes some variable number of cycles later, the first step is to take the reload value, subtract whatever's now in the timer (ie, ReloadValue - n) and thus derive n -- the elapsed delay. The second step is to add a complementary delay, as already explained -- the total delay is now invariable. Is there something I've overlooked -- a problem with either of the two steps?

This seems right to me, and is precisely the sort of solution I was trying to describe earlier.

The flip side of this is outputting samples to a DAC. My first "real" job at one point had me helping someone figure out why their DTMF tone generator wasn't working right. The hardware was a 6-bit ladder DAC on a Z8 microcontroller. For some reason, the DAC was spread out so that three bits were on one I/O port and three bits were on another I/O port. Tone generation was being done in a timer interrupt handler... and there was a weird ripple that brought the tone strength too close to the noise floor to be usable. It turned out in the end that the interrupt handler used up most of the timer period to do its work, so only one instruction of the main program would run in between interrupts, and there were several different instruction lengths in terms of clock cycles... Given how little code was executed during tone generation, it was decided to switch to a straight-up loop, with cycle counts on each instruction so that no matter what path was taken through the loop it always came out to the same time... which only served to highlight the errors in the databook for certain instructions (it claimed that some conditional branches were more expensive when not taken). I was probably about as useful as a "cardboard consultant" in figuring out what was going on, but I learned quite a bit.

Anyway, sometimes you really do need to hit some I/O device at a precise rate with no jitter, or at least no jitter greater than your oscillator variation, and having a scheme whereby you can do so on an interrupt-driven basis can be useful, as long as your clock speed is fast enough, and your interrupt handler short enough, that you can get useful work done in the meantime and still compensate for the jitter inherent in interrupt response on most platforms.


PostPosted: Thu Nov 07, 2013 2:26 am 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
I'm pretty sure that I understand your jitter-squelching solution, Jeff. My solution of an NMI that acts like a WAI until NMIB is released lives in my imagination, so I suppose that I was trying to get a feel for how useful it could be in a certain hypothetical 32-bit processor design. For example:

RESB unconditionally inits all internal registers (including n, the instruction pointer) to -1, and waits for RESB to be released before proceeding.

IRQB (if enabled) allows the current instruction to finish, stacks two words of state, disables IRQB, and does a jmp ,r (trn) ... r is an internal vector register that should be pre-initialized with the ISR address.

NMIB allows the current instruction to finish, stacks two words of state, disables IRQB, and does a jmp ,m (tmn) ... but waits for NMIB to be released before proceeding. Kind of a hybrid between RESB and IRQB.

Do you guys think that this different, hypothetical NMI behavior could prove to be more useful in today's world? Or is NMI\ probably the way to go? Or ... both (with individual pins and vector registers)?!?!?

Mike

[Edit: Changed vector behavior to internal ... no need for external vector reads in hypothetical land.]


Last edited by barrym95838 on Thu Nov 07, 2013 2:55 am, edited 3 times in total.

PostPosted: Thu Nov 07, 2013 2:42 am 

Joined: Sun Jul 28, 2013 12:59 am
Posts: 235
The problem with waiting for your interrupt signal (RESB, IRQB, or NMIB) to be de-asserted is that some (many? all?) devices won't de-assert the signal until the CPU does something specifically to cause it to be de-asserted.

It occurs to me that another mechanism for achieving precise synchronization would be to use wait-states on some memory access that is either part of the interrupt response sequence or part of whichever I/O device has such tight timing requirements. Using the free-running timer that Dr Jefyll postulated: if a read or write of your I/O device generated wait-states until the timer hit a specific value, you would get effective synchronization without any tricky code to compute and burn off an appropriate number of cycles to compensate for interrupt-response jitter. This mechanism would also work for non-interrupt-driven operation.


PostPosted: Thu Nov 07, 2013 5:43 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
BigDumbDinosaur wrote:
I don't think jitter in the hypothetical D/A conversion can ever be totally eliminated unless the MPU is dedicated to that one task, using the SEI -- WAI sequence to reduce interrupt latency to a single clock cycle. So the discussion becomes one of how much jitter is tolerable.

Or use a double buffered D/A converter with internal sample clock. This could also be built with a CPLD or a few discrete components.


PostPosted: Thu Nov 07, 2013 5:58 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8543
Location: Southern California
Jeff, you're making perfect sense. I'm working out a routine to do that.  It takes quite a lot of instructions, but gets rid of the jitter.  I might try for the double-buffered data converter too as an alternate method, but it does make for more hardware.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Thu Nov 07, 2013 6:13 am 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
GARTHWILSON wrote:
I'm working out a routine to do that. It takes quite a lot of instructions, but gets rid of the jitter.
Hmmm... Interesting coding challenge, trying to do that "efficiently." The irony is that we're purposely wasting time in order to increase latency to a constant value. But the added variable delay entails some fixed overhead, and that can be minimized, as can the memory usage.

nyef wrote:
if a read/write to/from your I/O device causes wait-states to be generated until the timer hits a specific value would cause an effective synchronization
Yes -- I like it! It has a lot in common with the WAI approach, really.

Slightly OT:

Years ago I built a Z80 SBC with an FDC chip and a floppy drive. During disk read/writes, software had the job of waiting on the FDC chip and then transferring each byte at the appropriate instant. That worked fine, and I even had some extra speed to spare. But later when I updated the project to manage 1.44M high-density floppies, the byte rate doubled. The loop had to run twice as fast -- but still synchronizing with DRQ, the "go" signal from the FDC. Polling the chip's DRQ bit (using an IN instruction followed by a conditional backward branch) was no longer feasible.

As a speedy substitute for explicitly polling DRQ, I doubled the size of the EPROM and filled it with two copies (images) of its original contents. But the wiring was such that the CPU could only "see" one image at a time, as determined by the state of DRQ. I had the EPROM's MS address line fed (via a flipflop) from the FDC's DRQ pin -- not the CPU! :twisted: The two images were identical except for a few bytes in the FDC routines. Just before the critical byte-transfer instructions, one image would have an unconditional branch-to-self (ie, "wait"), and the other image would have NOPs (ie, "now go ahead")!
[Edit]: my recollection is foggy; maybe there weren't actually NOPs in there. In hindsight I see it would be better to omit those -- IOW, better if the bytes occupying the same location as the branch-to-self (in the other image) were the first instruction of the byte-transfer sequence, which, unlike a NOP, does useful work!

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html


Last edited by Dr Jefyll on Tue Nov 12, 2013 4:15 am, edited 2 times in total.

PostPosted: Thu Nov 07, 2013 4:53 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8504
Location: Midwestern USA
Dr Jefyll wrote:
Slightly OT:
Years ago I built a Z80 SBC with an FDC chip and a floppy drive. During disk read/writes, software had the job of waiting on the FDC chip and then transferring each byte at the appropriate instant. That worked fine, but later when I updated the project to manage 1.44M high-density floppies, the byte rate doubled. The loop had to run twice as fast -- but still synchronizing with DRQ, the "go" signal from the FDC. Polling the chip's DRQ bit (using an IN followed by a conditional backward branch) was no longer feasible.

That's brutal! :twisted: :twisted:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Nov 07, 2013 6:01 pm 

Joined: Fri Dec 11, 2009 3:50 pm
Posts: 3367
Location: Ontario, Canada
Glad you like it. When it comes to a hardware/software interface, I pride myself on a complete absence of scruples or conscience. In a tough situation I have no qualms about lying to chips and pulling the wool over their eyes in order to bend them to my will -- I routinely treat them with no respect whatsoever! Due credit: it's something I learned from Don Lancaster.

_________________
In 1988 my 65C02 got six new registers and 44 new full-speed instructions!
https://laughtonelectronics.com/Arcana/ ... mmary.html

