 Post subject: Timing In Software
PostPosted: Sun Oct 27, 2013 3:36 am 

Joined: Wed Jul 10, 2013 3:13 pm
Posts: 67
Say I have a driver that needs to read from $0200 every 100ms. How exactly do I do this in assembly?

_________________
JMP $FFD2


 Post subject: Re: Timing In Software
PostPosted: Sun Oct 27, 2013 3:57 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8545
Location: Southern California
Set up an interrupt from a timer, like T1 in the 6522 VIA.  Depending on your clock speed, T1's maximum time-out will probably be less than 100ms, so you'll have to do the operation every so many time-outs (for example, every 20th time it rolls over).  When you service the interrupt for the time-outs in between, just increment a variable, test it to see if it's time to do the operation you wanted, and if not, just exit.
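
A bare-bones sketch of that T1 setup might look something like this (the VIA base address, the label names, and the 1MHz phase-2 clock are only assumptions for illustration; the divisor changes with your clock speed):
Code:
VIA_BASE = $6000        ; assumption: wherever your 6522 is decoded
VIA_T1CL = VIA_BASE+4   ; T1 counter/latch, low byte
VIA_T1CH = VIA_BASE+5   ; T1 counter/latch, high byte
VIA_ACR  = VIA_BASE+11  ; auxiliary control register
VIA_IER  = VIA_BASE+14  ; interrupt-enable register

INIT_T1:
    LDA  #%01000000     ; T1 free-run (continuous interrupts), PB7 output off
    STA  VIA_ACR
    LDA  #<9998         ; 10,000-cycle period (N+2), ie 10ms at 1MHz
    STA  VIA_T1CL
    LDA  #>9998
    STA  VIA_T1CH       ; writing the high byte starts the timer
    LDA  #%11000000     ; bit 7 = "set", bit 6 = T1:  enable the T1 interrupt
    STA  VIA_IER
    CLI                 ; allow IRQs
    RTS

Remember that the ISR has to read VIA_T1CL (or re-write VIA_T1CH) to clear the T1 interrupt flag, or the IRQ will fire again immediately.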

The 6502 interrupts primer should be very useful.  It can't cover every possible scenario, but should give a pretty good understanding of how to get what you need in that area.  It does have code showing how to set up a VIA T1 interrupt for keeping time; and then what I've done on the workbench computer is to have it compare the time to the next one in a list of alarms to see if the alarm is due, and if so, to service it.  It runs in the background, taking a negligible percentage of the processor time, and lets the computer do something useful while there's no alarm due.  I have 10ms resolution on that one; but for the faster timed interrupts, like every 40µs for example, I'll use a VIA T1 without the real-time clock.

I know your title was about doing it in software, but the interrupts primer shows why that very quickly becomes impractical.  If you really want to do it in software though (which would really only be to learn why not to do it that way :lol: ), you can set up a delay loop between the times the incoming data is serviced.  What's your clock speed?  (That will determine details in the code, whether you use a software delay loop or a timer interrupt.)
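
If you do go the delay-loop route, something like this is the usual shape (the counts are rough and assume a 1MHz clock; the point is that the processor can do nothing else while it spins):
Code:
DELAY_100MS:
    LDY  #200           ; outer count (assumes ~1MHz; retune for your clock)
OUTER:
    LDX  #99            ; inner loop takes roughly 500 cycles
INNER:
    DEX                 ; 2 cycles
    BNE  INNER          ; 3 cycles when taken
    DEY
    BNE  OUTER          ; ~200 x ~500 cycles = roughly 100,000 cycles, ie ~100ms
    RTS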

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject: Re: Timing In Software
PostPosted: Sun Oct 27, 2013 7:18 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8509
Location: Midwestern USA
James_Parsons wrote:
Say I have a driver that needs to read from $0200 every 100ms. How exactly do I do this in assembly?

As Garth said, it's not a practical thing to do in software alone. That's why devices with timers (e.g., the 65C22 or a real-time clock, such as the Maxim DS1501/1511) exist.

If your system has a timer generating a jiffy IRQ, you can slave your driver to the IRQ by using a down-counter located at a convenient place in RAM. For example, if you set up your jiffy IRQ to occur at 10ms intervals, you'd set the down counter to 10 and decrement it on each jiffy interrupt. When the counter reached zero, you'd reset it to 10 and execute the read from $0200.
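
A rough sketch of that down-counter, assuming a 10ms jiffy IRQ (DRVCNT and DRVDAT are made-up RAM locations, and clearing the timer's interrupt flag and the rest of the jiffy housekeeping are assumed to happen elsewhere in the handler):
Code:
JIFFY_IRQ:
    PHA                 ; preserve .A
    DEC  DRVCNT         ; one less 10ms tick until the driver is due
    BNE  DONE           ; not zero yet: nothing to do this time around
    LDA  #10            ; counter hit zero: reload for the next 100ms period...
    STA  DRVCNT
    LDA  $0200          ; ...and execute the 100ms read
    STA  DRVDAT         ; hand the byte to the driver
DONE:
    PLA
    RTI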

It's not complicated and I'm sure you will figure it out.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


 Post subject: Re: Timing In Software
PostPosted: Sun Oct 27, 2013 7:31 pm 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
BigDumbDinosaur wrote:
... For example, if you set up your jiffy IRQ to occur at 10ms intervals, you'd set the down counter to 10 and decrement it on each jiffy interrupt. When the counter reached zero, you'd reset it to 10 and execute the read from $0200.


Although the duration of a jiffy is rather loosely defined, I think that it would be prudent to keep other ISR side-effects in mind before re-defining this duration on an existing system. If you're building a complete system from scratch, then you have full authority to define it any way you choose, of course.

Mike


 Post subject: Re: Timing In Software
PostPosted: Sun Oct 27, 2013 9:26 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8545
Location: Southern California
BDD is the only one I've heard use the term "jiffy IRQ," but I think I understand what he means by it: it's mainly for incrementing a set of bytes, in essence just keeping time, as discussed in the 6502 interrupts primer starting about six paragraphs after the 2.1 heading.  Then any routine that wants the time just looks at these bytes.

Slightly different from the list of alarms I mentioned earlier: you can have a lot of different jobs taking turns in a loop, and each one that is timing something keeps a record of when it should do something next, compares the current time to that record, and if it's not time yet, just exits and lets the next job do the same thing.  That way lots of things can be timed, all watching the same clock and, if I can make the analogy, not fighting over when to turn the hourglass over or how much sand should be in it.  There's one clock, and as many timed jobs as you want.  Here's the idea:
Code:
JIFFY_ISR:
    Increment the time bytes.
    RTI
 ;----------------

MAIN_LOOP:
    BEGIN
        JSR  TASK_1
        JSR  TASK_2
        JSR  TASK_3
        <etc.>
    AGAIN


TASK_x:                ; (example of a task using timing)
    Is it waiting for something?
    IF so,
        Compare the current time to the target time stored earlier.
        IF it's time,
            Carry out the job.
            Set the next target time if applicable.  A common way is to take the current time and add some amount to it, and store the result as a target.
        END_IF
    ELSE
        Do inputs indicate that a timed process should begin?
        IF so,
            Start the process,
            Set the target time for the next time to come back and do something, by the method given above.
        END_IF
    END_IF
    RTS                ; Exit  (If it's not time yet, it just exits here too.)
 ;----------------

In this case, each task might be called up many, many times before it finds it has anything to do.  The interrupt is only used to increment the time in RAM variables.
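
To make the "compare the current time to the target time" step concrete, one possible shape for a task is below.  JIFFIES (a 2-byte jiffy count kept by the ISR), TARGET (this task's 2-byte due time), and the 10ms jiffy are all just assumptions for the example, and counter wrap-around is ignored:
Code:
TASK_1:
    SEI                 ; mask IRQs briefly so the 2-byte compare can't be torn
    LDA  JIFFIES        ; unsigned compare:  carry set if JIFFIES >= TARGET
    CMP  TARGET
    LDA  JIFFIES+1
    SBC  TARGET+1
    CLI
    BCC  NOT_YET        ; target not reached yet, so just exit
    ; Time is up:  carry out the job here, then schedule the next run.
    SEI
    CLC
    LDA  JIFFIES        ; next target = current time + 10 jiffies (100ms)
    ADC  #10
    STA  TARGET
    LDA  JIFFIES+1
    ADC  #0
    STA  TARGET+1
    CLI
NOT_YET:
    RTS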

Edit: Since the jiffy interrupt service increments more than one byte, and the interrupt will interrupt the routines at unpredictable times, a time byte might get incremented at a time that could give you a very wrong answer if you're not careful.  Take the four-byte centiseconds variable (cs_32) in the interrupts primer for example.  If you read one byte as $FF and then the interrupt hits and rolls it over to 00 and increments the next higher byte, you may get $1FF when you should have gotten $0FF (reading slightly sooner) or $100 (reading slightly later).  The solution then is to read it twice in a row and make sure the readings match, and if they don't, read it until you get two consecutive ones that do.  Another possibility is to disable the timer interrupt just for the few instructions it takes to read the set of time bytes.  This might be done by disabling only the one interrupt source (for example the VIA's T1) and still allowing other interrupts.
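
In code, the read-twice-and-compare approach might look like this (cs_32 is the 4-byte centiseconds count from the primer; TMP is an assumed 4-byte scratch area):
Code:
GET_TIME:
    LDX  #3
RD1:
    LDA  cs_32,X        ; first pass:  snapshot all four bytes
    STA  TMP,X
    DEX
    BPL  RD1
    LDX  #3
RD2:
    LDA  cs_32,X        ; second pass:  compare against the snapshot
    CMP  TMP,X
    BNE  GET_TIME       ; mismatch means the interrupt hit mid-read, so start over
    DEX
    BPL  RD2
    RTS                 ; TMP now holds a consistent copy of the time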

The earlier way I was suggesting with the alarms is more like this:
Code:
JIFFY_ISR:
    Increment the time bytes.
    Is there at least one alarm pending?
    IF so,
        Examine the next alarm time in line and compare it to the current time.  Do they match?
        IF so,
            Copy to a temporary location the address of the routine associated with that alarm, and delete the alarm.
            Run the routine whose address you just copied.  This routine might set up another alarm to run itself again in the future.
        END_IF
    END_IF
    RTI                ; Exit.  Note that since the alarms are sorted, the first one is the only one we need to examine to see if one is due.
 ;----------------

ALARM_LIST:            ; Each alarm's variable space here includes at least the target time, and the address of the routine to run when due.
ALARM_1:
ALARM_2:
ALARM_3:
<etc.>

ALARM_INSTALLATION:    ; Routine to install an alarm by putting it in the list and sorting the list according to chronological order of due times.
    <code>
    RTS
 ;----------------
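
One small detail in "run the routine whose address you just copied":  the 6502 can't JSR through a variable directly, so a common trick is to JSR to a little stub that does an indirect JMP.  (ALARM_ADDR and VECTOR are assumed names; VECTOR is a free zero-page pair.)
Code:
RUN_ALARM:
    LDA  ALARM_ADDR     ; address stored with the alarm entry (layout assumed)
    STA  VECTOR
    LDA  ALARM_ADDR+1
    STA  VECTOR+1
    JSR  DISPATCH       ; the alarm routine's RTS brings control back here
    RTS

DISPATCH:
    JMP  (VECTOR)       ; jump to whatever routine VECTOR points to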

This method lets one program hog almost all the processor time (minus a fraction of a percent that the jiffy IRQ takes away).  This program can be oblivious to pending jobs.  It may be more suitable for a situation where you have long periods of time between alarms.  For example, I used it when I was running a test where every 15 minutes, I had the workbench computer pause what it was doing long enough to take a few measurements and print them out along with some status, then go back to what it was doing.  The program that was running most of the time was unrelated and did not have to be aware of the alarm job.

Both of these methods allow the computer to do something useful while waiting for the times to do particular jobs.  Delay loops OTOH are very wasteful of processor time, crippling the computer, and possibly making it hard to get any timing accuracy, especially if you have more than one job for it to do, or if the time required to do the job between delays varies widely depending on branching conditions.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


 Post subject: Re: Timing In Software
PostPosted: Sun Oct 27, 2013 9:52 pm 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
GARTHWILSON wrote:
BDD is the only one I've heard use the term "jiffy IRQ," but I think I understand what he means by it: it's mainly for incrementing a set of bytes, in essence just keeping time ...


I instantly knew what he meant, because I fiddled around with low-level Commodore 8-bit programming back in the day. It's essentially as you described it, at least in that context.

Nice examples, BTW!!

Mike


 Post subject: Re: Timing In Software
PostPosted: Mon Oct 28, 2013 6:52 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
In Linux, the term 'jiffy' is also used for the system timer tick. The exact interval is configurable, so the system provides a predefined 'HZ' symbol that expresses the number of jiffies per second.


 Post subject: Re: Timing In Software
PostPosted: Mon Oct 28, 2013 7:28 am 

Joined: Tue Jul 24, 2012 2:27 am
Posts: 679
While I'm also familiar with the "jiffy clock" from C64-land, for some reason I never associated it with the colloquialism as in "be back in a jiffy". https://en.wikipedia.org/wiki/Jiffy_(time)

Time on the C64 is kind of weird. Even though the C64 has video interrupts, like Linux it set a hardware timer for ~1/60th of a second, regardless of whether it was PAL or NTSC, to increment the jiffy clock and do its maintenance like keyboard scanning. Even on NTSC, this timer was not synced to the raster refresh, as video isn't exactly 60Hz, and visual effects performed on the stock IRQ handler would roll about the screen.

BASIC's TI integer variable reflected the software jiffy clock, while the TI$ variable reflected the hardware Time of Day registers truncated to the nearest second, so those 2 time representations could easily drift out of sync.

_________________
WFDis Interactive 6502 Disassembler
AcheronVM: A Reconfigurable 16-bit Virtual CPU for the 6502 Microprocessor


 Post subject: Re: Timing In Software
PostPosted: Wed Oct 30, 2013 2:36 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8509
Location: Midwestern USA
"Jiffy IRQ" has been in the computer lexicon for as long as I can remember, which memory goes back some 45 years. The term refers to a regularly spaced interrupt caused by a hardware timer whose cadence is independent of the central processing unit (as it was called back then). In many systems, the cadence was set from the power line frequency and as the expected interval between jiffy IRQs would be 16.6666... milliseconds, a computer intended for use in North America couldn't be run in a locale with 50 Hz power, as the interval would now be 20ms, and all sorts of timing snafus would occur. The introduction of stable hardware interval timers (c. 1972, if I recall) took care of that little problem.

The Commodore CBM series and VIC-20 used a timer in a 6522 to generate the jiffy IRQ. The C-64 used timer A in CIA #1 for that purpose. The C-128 used a VIC raster interrupt for jiffy IRQ generation, since the interrupt-driven BASIC split-screen graphics commands had to be synced to the display. C-128 PAL machines had a slower IRQ rate than NTSC machines; the UDTIM IRQ handler in the Kernal compensated for the differing jiffy IRQ rates so TI would update 60 times per second no matter what. The compensation wasn't perfect.

BASIC's TI and TI$ "clock" (and the C-128's SLEEP timer) were notoriously inaccurate because any number of things could disrupt TI updating and cause drift. Serial bus activity was a common cause. The solution to the TI accuracy problem (that is, the lack of accuracy) in the C-64 and C-128 was to set and use a TOD clock in one of the CIA devices. TOD was driven from the power line frequency and hence was quite stable.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Wed Oct 30, 2013 2:58 am, edited 1 time in total.

 Post subject: Re: Timing In Software
PostPosted: Wed Oct 30, 2013 2:58 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8509
Location: Midwestern USA
White Flame wrote:
Even though the C64 has video interrupts, like Linux it set a hardware timer for ~1/60th of a second, regardless of whether it was PAL or NTSC, to increment the jiffy clock and do its maintenance like keyboard scanning.

The HPET on a modern Linux system (x86 hardware) is set to interrupt at 4ms intervals, for an effective jiffy IRQ rate of 250 Hz. I have the RTC in POC generating a 100 Hz jiffy IRQ for timing purposes.

Quote:
...while the TI$ variable reflected the hardware Time of Day registers truncated to the nearest second...

An often-repeated fallacy. TI$ is computed from TI by BASIC in all Commodore 8-bit machines except the B-128. The latter derived TI$ from the CIA 6526 TOD clock.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!

