Timing In Software
- James_Parsons
- Posts: 67
- Joined: 10 Jul 2013
Timing In Software
Say I have a driver that needs to read from $0200 every 100ms. How exactly do I do this in assembly?
JMP $FFD2
- GARTHWILSON
- Forum Moderator
- Posts: 8773
- Joined: 30 Aug 2002
- Location: Southern California
- Contact:
Re: Timing In Software
Set up an interrupt from a timer, like T1 in the 6522 VIA. Depending on your clock speed, T1's maximum time-out will probably be less than 100ms, so you'll have to do the operation every so many timeouts (for example, every 20th time that it rolls over); so when you service the interrupt for the timeouts in between, it will just increment a variable and test it to see if it's time to do the operation you wanted, and if not, just exit.
The 6502 interrupts primer should be very useful. It can't cover every possible scenario, but should give a pretty good understanding of how to get what you need in that area. It does have code showing how to set up a VIA T1 interrupt for keeping time; and then what I've done on the workbench computer is to have it compare the time to the next one in a list of alarms to see if the alarm is due, and if so, to service it. It runs in the background, taking a negligible percentage of the processor time, and lets the computer do something useful while there's no alarm due. I have 10ms resolution on that one; but for the faster timed interrupts, like every 40µs for example, I'll use a VIA T1 without the real-time clock.
I know your title was about doing it in software, but the interrupts primer shows why that very quickly becomes impractical. If you really want to do it in software though (which would really only be to learn why not to do it that way), you can set up a delay loop between the times the incoming data is serviced. What's your clock speed? (That will determine details in the code, whether you use a software delay loop or a timer interrupt.)
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
- BigDumbDinosaur
- Posts: 9425
- Joined: 28 May 2009
- Location: Midwestern USA (JB Pritzker’s dystopia)
- Contact:
Re: Timing In Software
James_Parsons wrote:
Say I have a driver that needs to read from $0200 every 100ms. How exactly do I do this in assembly?
If your system has a timer generating a jiffy IRQ, you can slave your driver from the IRQ by using a down-counter located at a convenient place in RAM. For example, if you set up your jiffy IRQ to occur at 10ms intervals, you'd set the down counter to 10 and decrement it on each jiffy interrupt. When the counter reached zero, you'd reset it to 10 and execute the read from $0200.
It's not complicated and I'm sure you will figure it out.
x86? We ain't got no x86. We don't NEED no stinking x86!
- barrym95838
- Posts: 2056
- Joined: 30 Jun 2013
- Location: Sacramento, CA, USA
Re: Timing In Software
BigDumbDinosaur wrote:
... For example, if you set up your jiffy IRQ to occur at 10ms intervals, you'd set the down counter to 10 and decrement it on each jiffy interrupt. When the counter reached zero, you'd reset it to 10 and execute the read from $0200.
Mike
- GARTHWILSON
- Forum Moderator
- Posts: 8773
- Joined: 30 Aug 2002
- Location: Southern California
- Contact:
Re: Timing In Software
BDD is the only one I've heard use the term "jiffy IRQ" but I think I understand what he's communicating by it, that it's mainly for incrementing a set of bytes, in essence just keeping time, as discussed in the 6502 interrupts primer starting about six paragraphs after the 2.1 heading. Then any routine that wants the time just looks at these bytes.
Slightly different from the list of alarms that I mentioned earlier, you can have a lot of different jobs taking turns in a loop, and each one that was timing something can keep a record of when it should do something the next time, compare the current time to the record, and if it's not time yet, just exit and let the next job do the same thing. That way lots of things can be timed, all watching the same clock, and, if I can make the analogy, not fighting over when to turn the hourglass over, or how much sand should be in the hour glass. There's one clock, and as many timed jobs as you want. Here's the idea:
Code: Select all
JIFFY_ISR:
Increment the time bytes.
RTI
;----------------
MAIN_LOOP:
BEGIN
JSR TASK_1
JSR TASK_2
JSR TASK_3
<etc.>
AGAIN
TASK_x: ; (example of a task using timing)
Is it waiting for something?
IF so,
Compare the current time to the target time stored earlier.
IF it's time,
Carry out the job.
Set the next target time if applicable. A common way is to take the current time and add some amount to it, and store the result as a target.
END_IF
ELSE
Do inputs indicate that a timed process should begin?
IF so,
Start the process,
Set the target time for the next time to come back and do something, by the method given above.
END_IF
END_IF
RTS ; Exit (If it's not time yet, it just exits here too.)
;----------------
In this case, each task might be called up many, many times before it finds it has anything to do. The interrupt is only used to increment the time in RAM variables.
Edit: Since the jiffy interrupt service increments more than one byte, and the interrupt will interrupt the routines at unpredictable times, a time byte might get incremented at a time that could give you a very wrong answer if you're not careful. Take the four-byte centiseconds variable (cs_32) in the interrupts primer for example. If you read one byte as $FF and then the interrupt hits and rolls it over to $00 and increments the next higher byte, you may get $1FF when you should have gotten $0FF (reading slightly sooner) or $100 (reading slightly later). The solution then is to read it twice in a row and make sure the readings match, and if they don't, read it until you get two consecutive ones that do. Another possibility is to disable the timer interrupt just for the few instructions it takes to read the set of time bytes. This might be done by disabling only the one interrupt source (for example the VIA's T1) and still allowing other interrupts.
The earlier way I was suggesting with the alarms is more like this:
Code: Select all
JIFFY_ISR:
Increment the time bytes.
Is there at least one alarm pending?
IF so,
Examine the next alarm time in line and compare it to the current time. Do they match?
IF so,
Copy to a temporary location the address of the routine associated with that alarm, and delete the alarm.
Run the routine whose address you just copied. This routine might set up another alarm to run itself again in the future.
END_IF
END_IF
RTI ; Exit. Note that since the alarms are sorted, the first one is the only one we need to examine to see if one is due.
;----------------
ALARM_LIST: ; Each alarm's variable space here includes at least the target time, and the address of the routine to run when due.
ALARM_1:
ALARM_2:
ALARM_3:
<etc.>
ALARM_INSTALLATION: ; Routine to install an alarm by putting it in the list and sorting the list according to chronological order of due times.
<code>
RTS
;----------------
This method lets one program hog almost all the processor time (minus a fraction of a percent that the jiffy IRQ takes away). This program can be oblivious to pending jobs. It may be more suitable for a situation where you have long periods of time between alarms. For example, I used it when I was running a test where every 15 minutes, I had the workbench computer pause what it was doing long enough to take a few measurements and print them out along with some status, then go back to what it was doing. The program that was running most of the time was unrelated and did not have to be aware of the alarm job.
Both of these methods allow the computer to do something useful while waiting for the times to do particular jobs. Delay loops OTOH are very wasteful of processor time, crippling the computer, and possibly making it hard to get any timing accuracy, especially if you have more than one job for it to do, or if the time required to do the job between delays varies widely depending on branching conditions.
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
- barrym95838
- Posts: 2056
- Joined: 30 Jun 2013
- Location: Sacramento, CA, USA
Re: Timing In Software
GARTHWILSON wrote:
BDD is the only one I've heard use the term "jiffy IRQ" but I think I understand what he's communicating by it, that it's mainly for incrementing a set of bytes, in essence just keeping time ...
Nice examples, BTW!!
Mike
Re: Timing In Software
In Linux, the term 'jiffy' is also used for the system timer tick. The exact interval is configurable, so the system provides a predefined 'HZ' symbol that expresses the number of jiffies per second.
- White Flame
- Posts: 704
- Joined: 24 Jul 2012
Re: Timing In Software
While I'm also familiar with the "jiffy clock" from C64-land, for some reason I never associated it with the colloquialism as in "be back in a jiffy". https://en.wikipedia.org/wiki/Jiffy_(time)
Time on the C64 is kind of weird. Even though the C64 has video interrupts, like Linux it set a hardware timer for ~1/60th of a second, regardless of whether it was PAL or NTSC, to increment the jiffy clock and do its maintenance like keyboard scanning. Even on NTSC, this timer was not synced to the raster refresh, as video isn't exactly 60Hz, and visual effects performed on the stock IRQ handler would roll about the screen.
BASIC's TI integer variable reflected the software jiffy clock, while the TI$ variable reflected the hardware Time of Day registers truncated to the nearest second, so those two time representations could easily drift out of sync.
- BigDumbDinosaur
- Posts: 9425
- Joined: 28 May 2009
- Location: Midwestern USA (JB Pritzker’s dystopia)
- Contact:
Timing In Software: Jiffy IRQ
"Jiffy IRQ" has been in the computer lexicon for as long as I can remember, and my memory goes back some 45 years. The term refers to a regularly spaced interrupt generated by a hardware timer whose cadence is independent of the central processing unit (as it was called back then). In many systems the cadence was set by the power line frequency, making the expected interval between jiffy IRQs 16.6666... milliseconds, so a computer intended for use in North America couldn't be run in a locale with 50 Hz power: the interval would become 20ms, and all sorts of timing snafus would occur. The introduction of stable hardware interval timers (c. 1972, if I recall) took care of that little problem.
The Commodore CBM series and VIC-20 used a timer in a 6522 to generate the jiffy IRQ. The C-64 used timer A in CIA #2 for that purpose. The C-128 used a VIC raster interrupt for jiffy IRQ generation, since the interrupt-driven BASIC split screen graphics commands had to be synced to the display. C-128 PAL machines had a slower IRQ than NTSC machines. The UDTIM IRQ handler in the kernel compensated for the differing jiffy IRQ rates so TI would update 60 times per second no matter what. The compensation wasn't perfect.
BASIC's TI and TI$ "clock" (and the C-128's SLEEP timer) were notoriously inaccurate because any number of things could disrupt TI updating and cause drift. Serial bus activity was a common cause. The solution to the TI accuracy problem (that is, the lack of accuracy) in the C-64 and C-128 was to set and use a TOD clock in one of the CIA devices. TOD was driven from the power line frequency and hence was quite stable.
Last edited by BigDumbDinosaur on Wed Oct 30, 2013 2:58 am, edited 1 time in total.
x86? We ain't got no x86. We don't NEED no stinking x86!
- BigDumbDinosaur
- Posts: 9425
- Joined: 28 May 2009
- Location: Midwestern USA (JB Pritzker’s dystopia)
- Contact:
Re: Timing In Software
White Flame wrote:
Even though the C64 has video interrupts, like Linux it set a hardware timer for ~1/60th of a second, regardless if it was PAL or NTSC, to increment the jiffy clock and do its maintenance like keyboard scanning.
Quote:
...while the TI$ variable reflected the hardware Time of Day registers truncated to the nearest second...
x86? We ain't got no x86. We don't NEED no stinking x86!