PostPosted: Fri Mar 13, 2015 8:12 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
CALENDRIC & TEMPORAL PROGRAMMING

Over in my "Designing A New Filesystem" topic I posted about time-stamping as part of filesystem management. Calendric and temporal matters are an interesting topic, especially in organizing dates and times in a fashion that lends itself to computer usage. In particular, the apparent irregularity of the calendar presents a challenge in trying to reduce any given date to a sequential number that lends itself to mechanical computation. Even more of a challenge is doing this stuff in assembly language using only integer arithmetic on a microprocessor that has no multiplication and division instructions. It can be done, of course, but that's getting a bit ahead of the program, so to speak.

I first got into this stuff in the mid-1980s, when I wrote a Clock-Calendar program to run on the Commodore 128, using interrupts to drive a continuous on-screen display, as well as maintain a set of date and time "registers" in software. CC128, as I called it, could provide the date and time to other programs and even had an alarm feature that would cause the C-128 to emit an irritating sound when the alarm went off. Years later while laid up in the hospital following major surgery and being bored out of my skull watching daytime TV and listening to the other guy in the room moaning and groaning, I resurrected the Clock-Calendar program on paper, modified it to run underneath the C-128's kernel ROM above the BASIC text area, and added a bit to it. In the process, I wrote a more efficient algorithm to handle leap years and was able to shrink the code to fit in about 1KB total. The program was subsequently made available on the now-defunct C-128 forum that was run by Lance Lyon in Australia.

CC128 used the time-of-day (TOD) clock in one of the 6526 complex interface adapters (CIA) as the primary time source, and maintained the date as a set of BCD values, one each for the day, month and year. The day-of-the-week was computed from these values when requested. The interrupt request handler would read the TOD clock as required and update some RAM locations each time the tenths register changed. The code to do all this was somewhat convoluted because it was working with date and time values that were the analog of human-readable data. Back when I originally wrote CC128 (late 1985), I wasn't aware of other timekeeping methods that might have been better for this application, especially the methods used in UNIX—I was relatively new to UNIX at the time and wasn't very familiar with what was going on inside the kernel.

Not too long after constructing my POC V1.0 computer, I added code to the firmware that would make a continuously-updating time-and-date field available to any application that needed it. Rather than do what I did with CC128, I decided to adopt a modified form of UNIX time. Early on in the development of UNIX, a simple but effective system-level method of keeping time had been devised. UNIX timekeeping is nothing more than counting the number of seconds that have elapsed since an arbitrary point in the past, which point is referred to as the "epoch." The epoch that was chosen for UNIX was midnight Greenwich Mean Time¹ on Thursday, January 1, 1970, a date that approximately coincided with the genesis of UNIX itself.² That epoch, also used by Linux, was a pragmatic choice with the PDP-11 hardware on which UNIX was developed, but has more recently proved to be a bit of a nuisance, as you will soon see.

On the PDP-11, a 32-bit signed integer was the largest integer data type that was available, which made it the natural choice for keeping time. The UNIX time field is defined in C programs as being of type time_t, which is a 64-bit signed integer on UNIX and Linux systems running a 64-bit kernel, and continues to be a 32-bit signed integer on older hardware. I will refer to that integer in a generalized way as "UNIX time."

When a UNIX or Linux system is booted, the battery-backed hardware clock is read, and the date and time-of-day are converted into seconds relative to the epoch. The result of this conversion is then used to initialize the internal UNIX time field maintained by the kernel. As UNIX time is nothing more than a count of the number of seconds that have elapsed since the epoch, and since the epoch is anchored to UTC, the kernel's timekeeping will be set to UTC, not the local time zone. Thereafter, the kernel's interrupt request handler will increment UNIX time at one second intervals. That is the extent of the kernel's calendric and temporal processing, as it "knows" nothing about the human-readable form of the date and time-of-day, and nothing about time zones and daylight saving time (summer time). All such matters are handled in external library functions.

UNIX time is zero at the epoch itself, a positive value if the date and time are in the future relative to the epoch, and a negative value if the date is prior to the epoch. As the numeric range that can be stored in a 32-bit signed integer is +2,147,483,647 ($7FFFFFFF) to -2,147,483,648 ($80000000), the usable date range is slightly more than 136 years, split between the past and the future relative to the epoch. A consequence of this limitation is that on 32-bit systems, UNIX time will roll over on January 19, 2038 at 03:14:07 UTC, less than 23 years hence at this writing. The rollover will cause UNIX time to become a negative value and drop back to December 13, 1901 at 20:45:52 UTC (coincidentally, that day is Friday the 13th, for any reader who is superstitious). This event is often referred to as the "year 2038" problem, and is analogous to the Y2K problem. Systems that keep time in a 64-bit integer will not roll over for some 292 billion years.

Internally, the kernel uses its ever-incrementing copy of UNIX time to time-stamp files, directories and filesystems as they are created, read and written. This process is succinct because all that has to be done to time-stamp something is to make a copy of the kernel's UNIX time field, with provisions to assure that the copy is an atomic operation to prevent a carry error.

UNIX time is also available via a kernel API call to external programs that need to maintain the date and time, such as database managers. User programs such as date fetch UNIX time from the kernel and then apply various mathematical operations to generate a human-readable date and time of day that is adjusted to the local time zone, with compensation for daylight saving time where applicable. The library functions that do the conversions are also used by standard UNIX utilities such as ls (directory lister) that need to convert filesystem time-stamps to human-readable format. In all cases, library functions process calendric and time zone calculations, not the kernel.

In addition to UNIX time, another time field referred to as system up-time ("uptime") is maintained by the kernel. Uptime is similar to UNIX time in that it is incremented at one second intervals. However, uptime is always initialized to zero when the system is booted and cannot be changed by ordinary means. Hence uptime represents the number of seconds that have elapsed since the system came up.

In my next post, I'll bloviate a bit on how the UNIX timekeeping methods can be utilized in a 6502 system.

——————————————————————————————————————
¹UTC (coordinated universal time) has replaced GMT in most civil time reckoning.

²The original UNIX timekeeping method used midnight GMT on January 1, 1971 as the epoch and incremented the 32-bit counter at a 60 Hz rate, which means the count would have overflowed in about 2.5 years. Hence the decision was made to increment the count at a 1 Hz rate. At the same time, the epoch was changed to its present definition.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Fri Apr 08, 2022 12:29 pm, edited 3 times in total.

PostPosted: Sat Mar 14, 2015 9:46 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Once you've coped successfully with leap years, and if you've decided to take the wise decision to leave timezones, daylight saving and string representations to a higher level of code, the remaining difficulty is the leap second. You implied in the other thread that you were going to treat the leap second as a correctable error in the time source, which may well be a solid practical thing to do. Indeed, the POSIX behaviour is to jump, at the appropriate time, such that times are not quite unique but days are always the same length. Google's approach is to slew time before and after, such that there are no jumps but seconds are not always the same length.

For anyone interested in the gory details, see for example
http://en.wikipedia.org/wiki/Unix_time#Leap_seconds

Note that GPS time has an offset to UTC, although it is otherwise a useful accurate timesource:
http://www.leapsecond.com/java/gpsclock.htm


PostPosted: Sat Mar 14, 2015 9:31 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BigEd wrote:
Once you've coped successfully with leap years, and if you've decided to take the wise decision to leave timezones, daylight saving and string representations to a higher level of code, the remaining difficulty is the leap second. You implied in the other thread that you were going to treat the leap second as a correctable error in the time source, which may well be a solid practical thing to do.

We have a leap second coming up on June 30, at which time the clock will progress from 23:59:59 to 23:59:60 and then 00:00:00 on July 1. Everybody will get a smidgen of extra sleep! :lol:

Linux man pages related to timekeeping pretty much imply that leap second correction comes about from manual adjustment or reference to some other time source. Both of our servers use stratum 2 Internet time servers to stay in sync, so they will pick up the leap second within a few hours of it happening.



Last edited by BigDumbDinosaur on Sun Mar 15, 2015 12:56 am, edited 1 time in total.

PostPosted: Sun Mar 15, 2015 12:17 am 

Joined: Mon Apr 23, 2012 12:28 am
Posts: 760
Location: Huntsville, AL
BigDumbDinosaur wrote:
We have a leap second coming up on June 30, at which time the clock will progress from 23:50:59 to 23:59:60 and then 00:00:00 on July 1. Everybody will get a smidgen of extra sleep! :lol:
I can't speak for anyone but myself, but I am pretty sure that I'll notice a loss of sleep. :lol:

_________________
Michael A.


PostPosted: Sun Mar 15, 2015 12:57 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
MichaelM wrote:
BigDumbDinosaur wrote:
We have a leap second coming up on June 30, at which time the clock will progress from 23:50:59 to 23:59:60 and then 00:00:00 on July 1. Everybody will get a smidgen of extra sleep! :lol:
I can't speak for anyone but myself, but I am pretty sure that I'll notice a loss of sleep. :lol:

That was a typo. It should be 23:59:59 to 23:59:60...



PostPosted: Thu Mar 19, 2015 6:59 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
In my first post, I described how time is kept in the UNIX and Linux worlds. This post will continue in that vein.

The principle of counting seconds that have elapsed since a point in the past—traditionally referred to as UNIX time—is readily programmed in 6502 assembly language, although the methods will vary according to which of the 6502 MPUs you are using. The code required to convert between broken-down time—the term used to describe a time structure containing hours, seconds, months, etc.—and UNIX time is more complicated but not out of reach for a reasonably competent assembly language programmer. It is the conversion code that understands the relationships between calendric and temporal units, as well as the meaning of a time zone and daylight saving time (summer time). Hence the greater complexity.

However, before getting bogged down in a lot of details, here is a list of what is needed to support UNIX style timekeeping:

  • Time Base

    The "time base" is a periodic interrupt source that will cause the MPU to execute code that will keep UNIX time—the interrupt generated by the time base is often referred to as a "jiffy IRQ". Over the years, computer systems have used a variety of time bases, such as a programmable interrupt timer (PIT) or a square wave derived from the AC power mains.

    You can adapt a PIT if you are so inclined, or if your system has a 65C22 VIA, you can set up one of its timers to generate the jiffy IRQ. Just keep in mind that VIA timers are usually run at a sub-multiple of the Ø2 system clock frequency, which means you'll be changing your setup code if you change Ø2.

    Use of the AC mains as a time base offers excellent long-term stability, as power generating stations typically take great pains to keep the line frequency as close to exact as possible. You may find this method to be a challenge to implement, however.

    In my POC units, I use the counter/timer (C/T) in the NXP 28L92 dual-channel UART (DUART) to generate a 100 Hz jiffy IRQ. The C/T is run at a sub-multiple of the 3.6864 MHz "X1" clock oscillator that is the time reference for the DUART. C/T stability will be as good as the oscillator's stability; observations have shown that timekeeping using a 100 PPM oscillator is accurate to within a fraction of a second per month.

    Although UNIX time itself increments at one second intervals, the jiffy IRQ rate should be much faster—100 Hz is typical of many UNIX systems, and 250 Hz is common with Linux. 100 Hz will give your UNIX timekeeping a 10 millisecond resolution and will also make for simpler code in your interrupt service handler. Avoid the temptation to run a super-fast jiffy IRQ unless you are prepared to give over more MPU time to servicing interrupts and less time to servicing foreground tasks. I would not go any faster than 250 Hz, which will give your UNIX timekeeping a 4 millisecond resolution.

  • Interrupt Service Handler

    The interrupt service handler is what will keep UNIX time for you. The required code is surprisingly succinct—all that happens is some bytes are incremented. The method for keeping time with the 65(c)02 is somewhat different than with the 65C816 due to the latter's 16 bit capabilities.

  • Conversion Routines

    As I noted above, the conversion routines are where the complexity of going between broken-down time and UNIX time is encountered. There are two such basic routines:

    1. mktime

      mktime converts the broken-down time in a data structure into UNIX time, with compensation for the local time zone and whether or not the locale is observing daylight saving time—the result is a time value that is synchronized to UTC. The output of mktime can subsequently be used to set the system time by writing into the memory locations that maintain the time values.

    2. localtim

      localtim converts UNIX time into broken-down time, with compensation for the local time zone and whether or not the locale is observing daylight saving time—the result is a set of date and time values that is synchronized to the local time zone. The broken-down time is stored in the same type of data structure used for input to mktime, and may include day-of-the-week and day-of-the-year fields. UNIX and Linux refer to this function as localtime(), but since I limit label and symbol names in my programs to eight characters, mostly due to years of working with assemblers that couldn't handle larger symbol and label names, I've dropped the 'e' from the function name.

    The temporal part of conversion is relatively straightforward, as time unit relationships are simple mathematical ratios. For example, a day (usually) contains 86,400 seconds.¹ The calendric part of conversion is not as simple, since the number of days in a month varies and the occasional leap year gets into the picture to complicate matters. Fortunately, conversion can be accomplished with "four-function" integer arithmetic, albeit with large (by 6502 standards) integers.

  • Primary Time Source

    The primary time source is used to generate an initial time and date value when the system is booted. Traditionally, this source is a battery-backed real-time clock (RTC). At boot time, code will read the RTC's registers, convert the time and date data into UNIX time format and write the result into the time field maintained by the jiffy IRQ. Many RTCs keep time in binary-coded decimal (BCD), which means an extra step is needed to convert BCD fields into binary equivalents that can be understood by the mktime conversion function. It is convenient, though not essential, to set the RTC's date and time to UTC—that is how I have my POC unit set up. If the RTC is set to local time, then time zone and (possibly) daylight saving time compensation has to be made in order to set UNIX time when the system comes up. It's best to avoid such gyrations and set the RTC to UTC.

    Another way to get a primary time source is to send a formatted time string from another computer to your 65xx machine. I use this technique to set the RTC itself when it is initially energized, since the DS1511Y is shipped with the oscillator turned off to conserve the battery. Thereafter, the RTC becomes the primary time source.

    If you have no primary time source, you will have to take on that role by manually entering the date and time when the system is booted. This was a common procedure with CP/M and early MS-DOS machines, as they lacked an RTC.

In my next post, I'll explain how to maintain UNIX time on a running system. In a later post, I will present the equations that must be solved in order to make the conversions between UNIX time and broken-down time.

————————————————————
¹Periodically a leap second will be added or subtracted to keep UTC synchronized with the earth's rotation, since the latter isn't exactly stable. A leap second was added on June 30, 2015.



Last edited by BigDumbDinosaur on Sun Dec 01, 2019 7:14 am, edited 5 times in total.

PostPosted: Thu Mar 19, 2015 7:46 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
How do you store and communicate the local time zone in your OS, BDD?
In the posix equivalent routines, there's a process-global variable tzname and an inherited env var called TZ which do the job.
It's been a useful feature of unix-like systems that each process has its own view of the local timezone, which is inherited into subprocesses: useful when people from around the world are remotely sharing the same machine, as each person's processes can run using that user's preference for timezone.
See
http://linux.die.net/man/3/mktime
http://linux.die.net/man/3/tzset
For example I can do
$ env TZ=est date
Thu 19 Mar 2015 02:38:33 EST
(There was for a while an annoying ambiguity whereby BST, intended to mean British Summer Time, was several hours adrift, perhaps corresponding to Midway Island's Bering Standard Time. In fact it was far enough adrift that it swapped AM for PM.)


PostPosted: Thu Mar 19, 2015 5:58 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BigEd wrote:
How do you store and communicate the local time zone in your OS, BDD?

Currently, there is not a formal OS on POC, only the firmware. There is a one byte time zone field in NVRAM that can be read by dumping part of NVRAM into memory. That field has the number of hours east or west of GMT, with bit 7 of the byte set for west. If the time zone byte is zero then the time zone is assumed to be GMT. In my particular POC unit, the time zone value is $86. No information about daylight saving time is carried.

Quote:
In the posix equivalent routines, there's a process-global variable tzname and an inherited env var called TZ which do the job. It's been a useful feature of unix-like systems that each process has its own view of the local timezone, which is inherited into subprocesses: useful when people from around the world are remotely sharing the same machine, as each person's processes can run using that user's preference for timezone.

I will be using the same arrangement when I eventually get to writing a kernel.

Quote:
http://linux.die.net/man/3/mktime
http://linux.die.net/man/3/tzset

Point of note: neither of those calls are part of the kernel API, as the kernel knows nothing about time zones, etc.



PostPosted: Thu Mar 19, 2015 6:22 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Beware! Timezones are not on an hourly granularity, but a quarter-hour granularity. So, better scale that bytesized value you have. Fortunately it's still a bytesized value. (Example: Nepal.)
Edit: even worse, there's a place where summer time is just a half-hour different.
Source: http://www.worldtimezone.com/faq.html


PostPosted: Thu Mar 19, 2015 7:30 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BigEd wrote:
Beware! Timezones are not on an hourly granularity, but a quarter-hour granularity. So, better scale that bytesized value you have. Fortunately it's still a bytesized value. (Example: Nepal.)

Also Norfolk Island (near Australia) is in an "odd" time zone, GMT+11:30 in their case. Chatham Island (near New Zealand) uses GMT+12:45 when standard time is in effect, which really is odd I suppose. :lol:

In the timekeeping scheme I am concocting, the time zone will be described as the number of minutes east or west of GMT, thus supporting the "odd" time zones like Nepal's. Here's an excerpt from the comment header in my mktime function about the time zone:

Code:
;   mktime expects that the external variable tz has been defined & set to
;   represent the number of minutes that the locale is east or west of UTC.
;   tz is interpreted by mktime as follows:
;
;       x00000xxxxxxxxxx
;       |     ||||||||||
;       |     ++++++++++———> minutes east/west of UTC
;       +——————————————————> 0: locale is west of UTC
;                            1: locale is east of UTC
;
;   For example, if tz contains 360 & bit 15 is clear, the locale is 6 hours
;   west of UTC & 21,600 ($5460) seconds will be added to the computed value
;   of time_t to align it with UTC.  On the other hand, if tz contains 480 &
;   bit 15 is set, the locale is 8 hours east of UTC & 28,800 ($7080) sec-
;   onds will be subtracted from the computed value of time_t to align it
;   with UTC.  This method accommodates time zones that are not exact hour
;   multiples east or west of UTC.  Set tz to zero if the local time is UTC.
;   If tz has not been defined anywhere the time zone will be assumed to be
;   UTC+0.

The above arrangement will allow the time zone range to be ±1023 minutes. Since UNIX time measures seconds since the epoch, time zone compensation is a simple case of multiplying the time zone minutes by 60 and then adding or subtracting to get to GMT (I interchangeably use GMT and UTC to refer to time at the prime meridian, even though they are technically different references).

Quote:
Edit: even worse, there's a place where summer time is just a half-hour different.
Source: http://www.worldtimezone.com/faq.html

You're referring, of course, to Lord Howe Island, also near Australia. Therein lies a bit of a problem.

In the UNIX tm broken-down time structure, the tm_isdst field that indicates if daylight saving time (DST or summer time) applies is defined as 0 if it does not, 1 if it does or -1 if it is unknown (I'm not sure what that would mean). tm_isdst doesn't tell the conversion functions the offset to apply when summer time is observed. The assumption has always been that the offset is an hour and that assumption was at one time actually hard-coded into the conversion functions (mktime() and localtime()). It is now externally defined when the tzset() function is called.



PostPosted: Mon Mar 30, 2015 1:41 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
Timekeeping is the product of a continuously-running "timepiece", which may be a wall clock, wristwatch, a counter in computer memory, etc. The timepiece keeps time without regard to whether or not someone or something is "watching" it. The process of referring to a timepiece or setting it to the correct time is not part of the timekeeping process. As will be seen, this distinction is important in the realm of computers, as the software used to keep time is not the same as that used to read or set the time, or to convert time between machine-readable and human-readable formats.

In this post, I will focus on the mechanics of timekeeping itself, and will discuss the process of reading or setting the time in a subsequent post. It should be noted that "time" in this sense refers to both the time-of-day (wall clock time or real time) and the calendar date, since the methods described in this topic manipulate a single variable that encapsulates both date and time-of-day.

As previously described, keeping time in UNIX and Linux is a process of incrementing an integer counter at one second intervals, which is an uncomplicated task. Code in an interrupt service routine (ISR) is executed at regular intervals in response to a periodic hardware interrupt request that is referred to as a "jiffy" IRQ. The source of the jiffy IRQ is the "time base," whose stability ultimately determines the degree of long-term timekeeping accuracy that will be attained.

Choosing a jiffy IRQ rate involves a tradeoff: a slower rate produces coarser timekeeping resolution, but leaves more MPU time for foreground processes, improving perceived performance. A faster rate produces finer timekeeping resolution, but at the expense of performance, since the microprocessor (MPU) will be processing interrupts more frequently.

It is recommended that the jiffy IRQ rate be a round number that can produce an even number of milliseconds between each increment of the time counter, such as 50 (20ms resolution), 100 (10ms resolution) or 250 (4ms resolution)—50 is the minimum rate that I recommend. The rate should be such that the corresponding number can fit into a single byte, an important consideration with the eight bit 65xx MPUs. In my POC unit, I use a 100 Hz jiffy IRQ, which is the most commonly used rate in UNIX systems (note that current Linux kernels use 250 Hz).

In the following discussion, UNIX time is maintained in a field defined as uxtime, which is a 48-bit integer. uxtime will be incremented at exactly one second intervals. A separate counter byte, referred to as jiffct—the "jiffy counter"—is used to keep track of how many jiffy IRQs have been serviced since the last update to uxtime.

jiffct is initially set to the value hz at boot time, hz being the jiffy IRQ rate that has been defined for your system. Subsequently, jiffct is decremented with each jiffy IRQ. When jiffct reaches zero, it is reset to hz and uxtime is incremented. Of course, the ISR has to determine if the jiffy IRQ timer was responsible for the IRQ—if it wasn't, neither jiffct nor uxtime would be touched.

The following flowchart illustrates a procedure that may be employed to keep time:

Attachment:
File comment: Timekeeping Update ISR Flowchart
timekeeping_isr_flowchart.gif
timekeeping_isr_flowchart.gif [ 65.82 KiB | Viewed 3405 times ]

As can be seen, it isn't complicated—in fact, the process is designed to be as simple and succinct as possible. Here's the code that would accomplish the above on a 65C02 or on a 65C816 running in emulation mode:

Code:
;process system time with 65C02...
;
         dec jiffct            ;time to update?
         bne l0000020          ;no
;
         ldx #hz               ;yes (hz = jiffy IRQ rate)
         stx jiffct            ;reset jiffy IRQ counter
         ldx #0                ;time field index
         ldy #s_time_t         ;time field size in bytes
;
l0000010 inc uxtime,x          ;bump system time
         bne l0000020          ;done
;
         inx                   ;bump index
         dey                   ;decrement count
         bne l0000010          ;not done
;
l0000020 ...program continues...

The code for a 65C816 is somewhat different, reflecting that device's ability to handle 16-bit data:

Code:
;process system time with 65C816 in native mode...
;
         sep #%00110000        ;8 bit registers
         dec jiffct            ;time to update?
         bne l0000010          ;no
;
         ldx #hz               ;yes
         stx jiffct            ;reset jiffy counter
         rep #%00100000        ;16-bit accumulator
         inc uxtime            ;bump time least significant word
         bne l0000010          ;done with time
;
         inc uxtime+s_word     ;bump time middle word
         bne l0000010
;
         inc uxtime+s_dword    ;bump time most significant word
;
l0000010 ...program continues...

In the 65C816 code, s_word is 2, the number of bytes in a (16 bit) word, and s_dword is 4, the number of bytes in a double (32 bit) word. Each increment operation processes 16 bits at a time, and looping and indexed addressing are completely avoided, improving execution speed.

Speaking of execution speed, it is worth noting that despite the jiffy IRQ rate, the actual amount of real time expended in processing uxtime is very small. Assuming a 100 Hz jiffy IRQ, 99 of 100 IRQs will only decrement jiffct. When jiffct does reach zero the least significant byte of uxtime (i.e., byte offset $00 in the field) will be incremented, which will occur once per second. At 256 second intervals, byte offset $01 of uxtime will have to be incremented. At 65,536 second intervals, byte offset $02 of uxtime will have to be incremented, and so forth. It should be patent that keeping time in this fashion demands very little in the way of processor resources.

That said, for best performance, jiffct and uxtime should be placed on page zero (direct page with the 65C816) to take advantage of the shorter, faster addressing modes. With the 65C816, direct page can be made to appear anywhere in the first 64 kilobytes of address space (i.e., bank $00), a feature that gives the '816 programmer a lot of flexibility. However, note that a direct page access will incur a one clock cycle penalty if direct page isn't aligned to an even page boundary.

Before moving on, a comment about programming style is in order. "Magic numbers" such as the jiffy IRQ rate (hz) and the size of the time variable (s_time_t) should always be declared in your source code prior to first use, and never embedded in instructions as hard-coded numbers. Burying magic numbers in code tends to decrease understandability and may open the door to obdurate bugs caused by mistyping a number. Also, should something be changed it's much less work to change one declaration in an INCLUDE file than to hunt down multiple magic numbers that might be scattered about in several source files.

If jiffct and uxtime are to be on page zero as recommended, they should be defined prior to any code references to them. Many assemblers will assume that a location that has not been defined prior to first reference is an absolute address, even if a subsequent definition says otherwise. The result is that instructions acting on the location will be assembled using absolute addressing modes, not zero page modes.

Continuing with programming, an interesting problem arises for foreground processes that need to get or set the system time. uxtime never stops incrementing, even while a reference is being made to it. This means an access to uxtime may produce erroneous results if the access is interrupted and uxtime is updated during the interrupt processing, resulting in a "carry" error: some bytes are read before the update and some after. Hence there should be a way to prevent a carry error by deferring updates to uxtime while an access is in progress.

One method is to temporarily halt IRQ processing (SEI) and then immediately resume it after the copy has completed (CLI). Halting IRQ processing is simple and will minimally affect overall system performance. However, doing so may have an adverse effect on any interrupt-driven I/O processing, especially time-critical data reception from a UART that lacks a receiver FIFO (e.g., the MOS6551 or MC6850). Also, good operating system design generally frowns on the suppression of IRQ processing as part of a routine foreground API call, due to the potential for deadlock if something goes awry during the API call.

Another method is to use a semaphore to tell the ISR to defer the update of uxtime for one jiffy period. Presumably, the foreground process accessing uxtime can finish in less than one jiffy period (if not, the operating system design needs rethinking, the hardware is in desperate need of an upgrade, or both). Use of a semaphore doesn't disrupt any IRQ-driven activities and hence won't create any conditions that might provoke deadlock. However, some extra code is required in the ISR to manage the semaphore, which means there will be a small but unavoidable amount of performance degradation.

Here are examples of how a semaphore, defined as semaphor, could be used to avoid uxtime read/write carry contretemps:

Code:
;process system time with 65C02 using semaphore...
;
;   semaphor: xx000000
;             ||
;             |+————————> 0: previous update not deferred
;             |           1: previous update was deferred
;             +—————————> 0: process time update
;                         1: defer time update
;
;   Bits 6 & 7 of semaphor must never be simultaneously set!
;
         bit semaphor          ;okay to update?
         bpl l0000010          ;yes
;
         lsr semaphor          ;no, tell next jiffy IRQ...
         bra l0000040          ;this update was deferred
;
l0000010 dec jiffct            ;update time field?
         bne l0000030          ;no
;
         ldx #hz               ;yes
         stx jiffct            ;reset jiffy IRQ counter
         ldx #0                ;time field index
         ldy #s_time_t         ;time field size in bytes
;
l0000020 inc uxtime,x          ;bump system time
         bne l0000030          ;done
;
         inx                   ;bump index
         dey                   ;decrement count
         bne l0000020          ;not done
;
l0000030 lda #%01000000        ;deferred update flag bit
         trb semaphor          ;deferred update pending?
         bne l0000010          ;yes, process it
;
l0000040 ...program continues...

The following is for the 65C816 running in native mode:

Code:
;process system time with 65C816 in native mode using semaphore...
;
;   semaphor: xx000000
;             ||
;             |+————————> 0: previous update not deferred
;             |           1: previous update was deferred
;             +—————————> 0: process time update
;                         1: defer time update
;
;   Bits 6 & 7 of semaphor must never be simultaneously set!
;
         sep #%00110000        ;8 bit registers
         bit semaphor          ;okay to update?
         bpl l0000010          ;yes
;
         lsr semaphor          ;no, tell next jiffy IRQ...
         bra l0000040          ;this update was deferred
;
l0000010 dec jiffct            ;update time field?
         bne l0000030          ;no
;
         ldx #hz               ;yes
         stx jiffct            ;reset jiffy counter
         rep #%00100000        ;16-bit accumulator
         inc uxtime            ;bump time least significant word
         bne l0000020          ;done with time
;
         inc uxtime+s_word     ;bump time middle word
         bne l0000020
;
         inc uxtime+s_dword    ;bump time most significant word
;
l0000020 sep #%00100000        ;8 bit accumulator
;
l0000030 lda #%01000000        ;deferred update flag bit
         trb semaphor          ;deferred update pending?
         bne l0000010          ;yes, process it
;
l0000040 ...program continues...

semaphor is a dual-purpose flag (ideally, on page zero) that indicates whether an update should be deferred and also whether the previous update was deferred. In order to tell the ISR to defer the update of time, the foreground process must set bit 7 (and only bit 7) of semaphor prior to accessing uxtime. Before the ISR does anything to the time, it checks the state of semaphor. If bit 7 is clear, a normal update occurs. If bit 7 is set, the ISR "knows" that it is to skip the update and "reminds" itself that it did so by shifting bit 7 into bit 6.

On the next pass through the ISR, an update will occur and then semaphor will be tested to determine whether the previous update was deferred. The TRB instruction both tests bit 6 of semaphor and clears it in a single operation. If bit 6 was set, a second pass is made through the time update code, thus making up for the deferred update.

Typically, reading or writing the system time is an operating system kernel function (in UNIX and Linux, writing is restricted to the root user). The access procedure in either case would be to set bit 7 of semaphor to tell the ISR to defer updating and then immediately read or write uxtime. This must be accomplished in the time space between successive jiffy IRQs.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Mon Feb 10, 2020 6:03 am, edited 4 times in total.

PostPosted: Tue Mar 31, 2015 3:21 pm 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
Quote:
Whatever the time base, it is customary for it to generate multiple IRQs each second to reduce the effects of jitter caused by hardware and software interrupt processing latency. Jitter can result in both short- and long-term inaccuracies that will cause the system's notion of time to drift.

I am trying to understand what you're saying here, and failing in that attempt. Maybe our personal definitions of jitter are different, or maybe one of us is suffering from a brain hiccup?

Mike B.


PostPosted: Tue Mar 31, 2015 4:41 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
barrym95838 wrote:
Quote:
Whatever the time base, it is customary for it to generate multiple IRQs each second to reduce the effects of jitter caused by hardware and software interrupt processing latency. Jitter can result in both short- and long-term inaccuracies that will cause the system's notion of time to drift.

I am trying to understand what you're saying here, and failing in that attempt. Maybe our personal definitions of jitter are different, or maybe one of us is suffering from a brain hiccup?

Mike B.

To quote Wikipedia:

    Jitter is the deviation from true periodicity of a presumed periodic signal in electronics and telecommunications, often in relation to a reference clock source.

The "deviation from true periodicity" in my example timekeeping routine would come from interrupt processing latency, which is a somewhat unpredictable variable. The interval from when a jiffy IRQ occurs to when time is updated will vary with the instruction being executed, where in the instruction cycle the MPU happens to be when the interrupt occurs, and whether or not other pending interrupts must also be serviced when the MPU finishes the current instruction. If the kernel processes other interrupts before it handles the jiffy IRQ, the spacing between successive services of the time base's interrupt will be uneven. For example, my POC unit gives SCSI and serial I/O interrupts priority over timer IRQs. If the only pending interrupt is from the timer, then the servicing of jiffy IRQs will be evenly spaced (disregarding MPU latency). If I/O is occurring when the timer interrupts, then servicing the jiffy IRQ will be delayed by the amount of time required to service I/O interrupts. Hence in the short term, some jitter is bound to occur.

If the jiffy IRQ rate is a substantial multiple of time's periodic rate (one second), the effects of interrupt latency-induced jitter will usually even out within one time period, and time's drift will be minimal or even zero (disregarding the time base's drift).

If the jiffy IRQ rate is very low, approaching that of time's periodic rate, the jitter caused by interrupt latency and interrupt processing priorities may cause some short term drift relative to the time base. In theory, a later jiffy IRQ will skew time back the other way, but until that happens, time will slightly lag the time base. If the interrupt latency continues to occur at somewhat regular intervals, drift may become permanent or even increase.

Something else to consider is small amounts of short-term drift in the time base itself. If the jiffy rate is the same as time's periodic rate, then time base drift becomes time's drift. If the jiffy rate is, say, 100 times that of time's periodic rate, as it is in my POC unit, then the effect of time base drift is greatly reduced.

I hope I made some sense with this. :)



PostPosted: Tue Mar 31, 2015 7:01 pm 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8544
Location: Southern California
I have a short discussion on jitter at http://wilsonminesco.com/6502primer/potpourri.html#JIT, relating to its effects on getting accurate A/D conversion results when sampling a waveform. The amount of jitter is measured in time (like µs, ns, or ps), unrelated to the sampling frequency. The overall frequency accuracy is unaffected, but the exact timing of each sample rattles around within a window centered on the ideal time to take the sample. We deal with this in digital audio recording.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Tue Mar 31, 2015 9:11 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
GARTHWILSON wrote:
I have a short discussion on jitter at http://wilsonminesco.com/6502primer/potpourri.html#JIT, relating to its effects on getting accurate A/D conversion results when sampling a waveform. The amount of jitter is measured in time (like µs, ns, or ps), unrelated to the sampling frequency. The overall frequency accuracy is unaffected, but the exact timing of each sample rattles around within a window centered on the ideal time to take the sample. We deal with this in digital audio recording.

The subject of jitter is interesting because achieving the ideal almost never happens. Your discussion on your site relates to jitter caused by sampling one signal at periodic intervals that are determined by another signal. In the case of keeping time, the timer is present, as it is in A/D conversion, but jitter comes from the fact that the update to time will always lag the timer by an extent that will vary from period to period.

In the UNIX or Linux environment, the update to time ideally occurs at precise 1.00000000000... second intervals. If a single update is delayed, recovery is possible on the next update, provided that next update occurs less than one second later; the "clock" is then effectively skewed forward by some fraction of a second to compensate. Otherwise, backward drift relative to the time base becomes permanent.

If the jiffy IRQ periodic rate is substantially faster than time's periodic rate, the jitter related to interrupt processing latency can even out before the next scheduled update to time, producing no drift attributable to processing delays. As I previously noted, UNIX has long used 100 Hz and current Linux kernels use 250 Hz.

None of this, of course, accounts for drift in the time base itself. That's a hardware matter.


