




PostPosted: Sun Dec 19, 2021 4:19 am 

Joined: Tue Feb 24, 2015 11:07 pm
Posts: 81
Thank you for writing all this BDD. A fascinating subject, exhaustively documented and elegantly explained. Bravo!


PostPosted: Sun Dec 19, 2021 6:22 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
unclouded wrote:
Thank you for writing all this BDD. A fascinating subject, exhaustively documented and elegantly explained. Bravo!

Thanks!  Dunno about the elegant part, but I tried to get the speeling and gramurr korect.  :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Wed Mar 13, 2024 5:39 am, edited 1 time in total.

PostPosted: Mon Jan 03, 2022 5:17 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BigDumbDinosaur wrote:
Following a long period of dormancy, I have resumed working on this topic and should have something ready to post in a few days.

Nearly three years after I posted that, I really do have some new material.  :shock:

Past discussion has been on the principle of maintaining the time-of-day and calendar date as an integer count of the number of seconds that have elapsed from a distant point in the past referred to as the “epoch.”  This timekeeping method is conventionally referred to as “POSIX time” or “UNIX time,” and is in widespread use.  The following discussion will elaborate on methods for converting between human-readable and machine-readable time formats.

On UNIX and Linux systems, the operating system kernel only works with the machine-readable date and time, which is often referred to as time_t (pedantically speaking, time_t defines the data type, not the data itself).  All internal timekeeping operations, such as time-stamping files, are performed using only the date and time as defined by time_t.  The current time_t value is always assumed to be set to coordinated universal time (UTC or “Zulu” time).  The kernel timekeeping functions know nothing about time zones or daylight saving time (DST, aka “summer time”).

An issue with the original UNIX timekeeping method, which defined time_t as a 32-bit signed integer, is its limited range.  The maximum positive value that can be represented with such a data type is 2^31 - 1, or 2,147,483,647.  Incrementing past that value will cause time_t to become negative.  As UNIX and Linux both define the epoch as January 1 00:00:00 GMT 1970, the date will regress from January 19, 2038 (when the 2,147,483,647th second elapses at 03:14:07 UTC) to December 13, 1901, creating what is referred to as the “year 2038” problem.  This problem has been addressed by expanding time_t to a 64-bit signed integer in modern implementations, producing a usable range of billions of years...as well as an incompatible time_t type.

In my version of POSIX time, I use an unsigned integer in place of the POSIX time_t format.  Among other things, this change facilitates the use of “four-function” integer arithmetic in performing conversions.  Since my notion of time is unsigned and doesn't conform to the POSIX data type in size, I decided to refer to it as “binary sequential time” or BST.  It works on the same principle as POSIX time, but without the need for signed arithmetic (with one exception).

As previously discussed, BST is updated at periodic intervals by code within the kernel's interrupt request service routine (ISR) while responding to a jiffy IRQ generated by the time base, which is typically a hardware timer.  Since BST is just an integer, updating it requires little code, which means few processor cycles are required to keep time.  A kernel call is used to obtain the current value of BST, and a different kernel call is used to set BST.  During system startup, it is customary for the real-time clock (RTC) hardware to be read and its output converted to BST format to set the system's notion of the current date and time.
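By way of illustration, below is a minimal 65C02 sketch of what the BST update during jiffy IRQ processing might look like.  The 100 Hz jiffy rate and the jifcnt and bst labels are assumptions for the example, not part of any particular kernel.

Code:
;BST update during jiffy IRQ processing (a minimal 65C02 sketch).
;Assumed, not from any particular kernel: the time base interrupts
;at 100 Hz, jifcnt is a one-byte down-counter & bst is the 48-bit,
;little-endian BST field, all in RAM.
;
hz       =100                  ;jiffy IRQs per second
;
updbst   dec jifcnt            ;another jiffy has elapsed
         bne updxit            ;current second not yet complete
;
         lda #hz               ;restart the jiffy count...
         sta jifcnt
;
         inc bst               ;...& bump BST, least-significant byte first
         bne updxit
         inc bst+1
         bne updxit
         inc bst+2
         bne updxit
         inc bst+3
         bne updxit
         inc bst+4
         bne updxit
         inc bst+5
;
updxit   rts                   ;return to the main ISR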

The human-readable date and time format is referred to by POSIX as “broken-down time” (BDT), which term I will continue to use.  The date in BDT is assumed to conform to the rules of the Gregorian calendar.  Conversion between BDT and BST is handled by library functions that are not part of the kernel and are run ad hoc.  These functions understand time zones and DST, and hence are able to produce a BDT that has been “localized.”

Interesting issues can arise with the use of the Gregorian calendar.  The first official implementation came in October 1582, when the day following October 4 was reckoned as October 15 to compensate for the error that had accumulated in the ancestral Julian calendar.  For political reasons, only nations that were aligned with the Roman Catholic Church made the switch at that time.  Other nations didn't switch until much later.  For example, the British Empire didn't make the switch until September 1752, at which time 11 days (the 3rd through the 13th) had to be excised from the month to compensate for the accumulated error.

Code:
   September 1752
   ——————————————
Su Mo Tu We Th Fr Sa
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30

Due to month truncation, I had a dilemma in defining the epoch, as well as in devising conversion algorithms.  Dates that are earlier than October 15, 1582 in the Gregorian calendar technically never existed.  Ergo, references to dates prior to the switch give rise to something called the “proleptic” Gregorian calendar, which is an approximation (and something that has given historians occasional headaches over the years).  Not wanting to complicate things beyond what they already were, I chose to ignore the proleptic range.  As the USA was a British colony at the time the British Empire made the switch, I decided to use Sunday October 1 00:00:00.00 GMT 1752 as the epoch.  Use of that epoch avoided having to muck about with September's 11-day truncation.

A time zone definition (TZD) is an offset in seconds that must be added to local time to get the UTC equivalent.  Hence if the offset is non-zero and positive, the locale is west of UTC.  On the other hand, if the offset is negative, the locale is east of UTC.  In order to handle the full range of possible time zone offsets, TZD must be larger than the 16-bit type usually used in 6502 assembly language to represent signed integers.  For convenience, I have defined TZD to be a 32-bit signed integer, with negative offsets stored in two's-complement format.  For example, the standard time zone in which I live is UTC - 6.  Therefore, TZD would be 21,600 ($00005460).  As another example, New Zealand's standard time zone is UTC + 12.  Therefore, TZD would be -43,200 or $FFFF5740.
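As an illustration, here is a minimal 65C02 sketch that applies a TZD to a local BST value to produce UTC.  The loc2utc, bst and tzd labels are hypothetical, and the TZD must be sign-extended from 32 to 48 bits as it is added.

Code:
;local-to-UTC conversion, a minimal 65C02 sketch with hypothetical
;labels.  bst is a 6-byte, little-endian field holding local time on
;entry & UTC on exit; tzd is the 4-byte, little-endian, signed offset
;in seconds described above.
;
loc2utc  ldx #0                ;field index
         ldy #4                ;bytes in TZD
         clc
l2u010   lda bst,x
         adc tzd,x             ;add TZD to the low 32 bits of BST
         sta bst,x
         inx
         dey                   ;DEY, unlike CPX, preserves carry
         bne l2u010
;
         ldy #$00              ;assume TZD is positive or zero...
         lda tzd+3
         bpl l2u020            ;...it is, so extension byte is $00
;
         dey                   ;negative, so extension byte is $FF
;
l2u020   tya                   ;TYA also preserves carry
         adc bst+4             ;propagate carry & sign extension...
         sta bst+4             ;...into the upper 16 bits of BST
         tya
         adc bst+5
         sta bst+5
         rts

Converting in the other direction (UTC to local) is the same operation with TZD subtracted instead of added.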

Along with a TZD for standard time, there may be a separate one for when a locale observes DST—I refer to said field as DTZD.  If DST is observed then some additional information is needed to tell the conversion functions when DST is in effect.  Implementing that is an exercise I will leave to the reader.

In POSIX-compliant systems, broken-down time (BDT) is conventionally defined in ANSI C as a data structure referred to as tm.  The fields in tm are:

Code:
tm_sec   - seconds (0-60; the value 60 allows for a leap second)
tm_min   - minutes (0-59)
tm_hour  - hour (0-23)
tm_mday  - day of the month (1-31, depending on month and year)
tm_mon   - months since January (0-11)
tm_year  - years since 1900
tm_wday  - day of week (0-6, with Sunday being 0)
tm_yday  - day of the year (0-365)
tm_isdst - daylight saving time flag; see text

All fields are defined as integers, at least 16 bits in size, as that size is required to accommodate the full range of tm_year and tm_yday.

The above structure is readily transferable to 6502 assembly language, with some field ranges adjusted to suit my implementation:

Code:
tm_sec   =0                    ;seconds     : 0-59
tm_min   =tm_sec+s_int         ;minutes     : 0-59
tm_hour  =tm_min+s_int         ;hour        : 0-23
tm_mday  =tm_hour+s_int        ;day-of-month: 1-31
tm_mon   =tm_mday+s_int        ;month       : 1-12
tm_year  =tm_mon+s_int         ;year        : 1752-9999
tm_wday  =tm_year+s_int        ;day-of-week : 1-7
tm_yday  =tm_wday+s_int        ;day-of-year : 1-366
tm_isdst =tm_yday+s_int        ;DST flag...
;
;   x0000000x0000000
;   |       |
;   +———————+——————————> 0: standard time in effect
;                        1: daylight saving time in effect
;
s_tm     =tm_isdst+s_int       ;size of tm structure

In the above, s_int is 2, which defines the size of an integer as a 16-bit quantity.

In the POSIX definition of the tm structure, tm_isdst is set to 0 to indicate DST is not in effect, set to a positive value if DST is in effect, or set to a negative value to indicate that the DST status is unknown.  To this day, I don't understand the point of this last definition—DST either applies or it doesn't.  For convenience in assembly language, I changed tm_isdst so bits 7 and 15 are set when DST is in effect.  Doing so makes testing easy with the BIT instruction, regardless of the accumulator size on the 65C816.
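For instance, assuming a tm structure assembled at a (hypothetical) absolute address bdt, the flag can be tested like this on either processor:

Code:
;DST flag test, a minimal sketch with hypothetical labels.  Because
;bits 7 & 15 of tm_isdst are both set when DST is in effect, BIT
;loads N with the flag whether the accumulator is 8 or 16 bits wide.
;
chkdst   ldx #0                ;assume standard time
         bit bdt+tm_isdst      ;N = DST flag
         bpl chkxit            ;standard time in effect
;
         inx                   ;daylight saving time in effect
;
chkxit   rts                   ;return with .X: 0 = STD, 1 = DST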

Refocusing on BST, I defined it as a 48-bit, unsigned integer in customary 6502 little-endian format.  Bits 0-39 are the number of seconds that have elapsed since the epoch and bits 40-47 are padding to align the field size to an even number of bytes.  Such alignment simplifies handling with the 65C816, but may be omitted with the 65C02 if memory consumption is a concern—of course, doing so makes the 65C02 version of BST partially incompatible with the 65C816 version.

As all dates are unsigned, any non-zero BST value represents some point in time after the epoch.  Although the redefinition of BST gives a theoretical date range of approximately 34,800 years, the conversion functions that I developed are limited in how far into the future they can go.  Hence the maximum practical date and time with this scheme is Friday December 31 23:59:59.99 UTC 9999, which is 8247 years after the epoch and 7977 years from the date of this post.  If I'm still around at that time, I'll revisit the algorithm.  :D

Conversion from BST to BDT involves a series of operations that successively extract each BDT field.  My algorithm is loosely based upon a Julian date conversion algorithm, but is implementable with integer arithmetic (Julian dates, which are used by astronomers, contain fractional content).  In the following, all terms are positive integers, and the quotient of each division is truncated, as if the floorl() C function had been applied to a positive value.  Due to the range that the BST date encompasses, 64-bit arithmetic operations are necessary to avoid overflow.  The usual rules of algebraic precedence apply.

Code:
p  = BST ÷ 86400 + 2429808
q  = 4 × p ÷ 146097
r  = p - (146097 × q + 3) ÷ 4
s  = 4000 × (r + 1) ÷ 1461001
t  = r - 1461 × s ÷ 4 + 31
u  = 80 × t ÷ 2447
v  = u ÷ 11

Y  = 100 × (q - 49) + s + v   (broken-down year)
M  = u + 2 - 12 × v           (broken-down month)
D  = t - 2447 × u ÷ 80        (broken-down day-of-month)

The above returns the calendar date.  As a quick check, the epoch itself (BST = 0) produces p = 2429808, q = 66, r = 19207, s = 52, t = 245, u = 8 and v = 0, which decode to October 1, 1752.  Continuing, the following steps break down the time-of-day:

Code:
ds  = BST ÷ 86400             (elapsed days since epoch)
ds  = ds × 86400              (elapsed seconds to start of current day)
ds  = BST - ds                (elapsed seconds since midnight of current day)

H   = ds ÷ 3600               (broken-down hour)

ds  = ds - H × 3600           (elapsed seconds past the broken-down hour)

MIN = ds ÷ 60                 (broken-down minutes)
SEC = ds - MIN × 60           (broken-down seconds)

The algorithm I devised to convert from BDT to BST is also loosely based upon Julian dates. It uses the notion that the first month of the year is March, not January, which simplifies the handling of February:

Code:
y   = Y + 4800 - ((14 - M) ÷ 12)
m   = ((M + 12 × ((14 - M) ÷ 12) - 3) × 153 + 2) ÷ 5
y1  = y × 365
y2  = y ÷ 4
y3  = y ÷ 100
y4  = y ÷ 400
BST = (D + m + y1 + y2 - y3 + y4 - 2393284) × 86400

The input date to the above is in D (day-of-month, same range as D in the previous algorithm), M (month, 1-12) and Y (year, 1752-9999).  As a quick check, the epoch itself (October 1, 1752) gives y = 6552, m = 214, y1 = 2391480, y2 = 1638, y3 = 65 and y4 = 16, which combine with D = 1 to produce BST = 0.  Garbage in will produce garbage out—the function calling the conversion is responsible for qualifying input values.

The time-of-day can be added to BST as follows:

Code:
BST = BST + HOUR × 3600 + MIN × 60 + SEC

where the time-of-day value represented by HOUR, MIN and SEC is in 24-hour format (00:00:00 through 23:59:59).

Sixty-four-bit addition and subtraction on the 65C02 are straightforward.  Sixty-four-bit multiplication and division are not as trivial to implement, but can be accomplished by scaling up existing algorithms.  Native-mode, 16-bit 65C816 implementations will be simpler, smaller and substantially faster than the eight-bit equivalents.  In the case of the 65C02, the math won't be very speedy, but it only has to be carried out when a conversion is needed, not as a matter of routine kernel processing.
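To illustrate the straightforward part, here is a minimal 65C02 sketch of a 64-bit add; the faca and facb labels and the in-place result are assumptions for the example.

Code:
;64-bit addition, a minimal 65C02 sketch with hypothetical labels.
;faca & facb are 8-byte, little-endian fields; the sum replaces faca.
;DEY is used for loop control because, unlike CPX, it doesn't disturb
;the carry flag.
;
add64    clc
         ldx #0                ;byte index
         ldy #8                ;bytes to process
;
add6410  lda faca,x
         adc facb,x            ;add with carry from previous byte
         sta faca,x
         inx
         dey
         bne add6410
         rts

Subtraction is the same loop with SEC and SBC substituted for CLC and ADC.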

I have written native-mode 65C816 assembly language versions of both algorithms and given them plenty of testing on several of my POC units.  If anyone expresses an interest, I will post the source code, complete with the 64-bit integer arithmetic functions I wrote to support the conversions.

————————————————————
In private conversation with someone concerning this topic, I used the term “binary sequential time” (BST) to refer to what I've previously described as “UNIX” or “POSIX” time.  My timekeeping implementation intentionally doesn't conform to the POSIX time_t definition, so I thought it would be better to devise a new term for the 48-bit integer that represents the date and time.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Wed Mar 13, 2024 5:22 am, edited 1 time in total.

PostPosted: Mon Jan 03, 2022 7:46 am 

Joined: Thu Mar 12, 2020 10:04 pm
Posts: 704
Location: North Tejas
Leap seconds?

https://en.wikipedia.org/wiki/Leap_seco ... eap_second


PostPosted: Mon Jan 03, 2022 9:43 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
The first day of the year was also redefined in 1752, or to be careful, on the day after 31st December 1751. (Scotland made this change much earlier.) I'm sure I've seen some reference to Old Style and New Style dates in some nineteenth century literature but I can't find it. Wikipedia.

(Edit: old and new style dates are apparently concealed (as a hidden puzzle for the careful reader) in Austen's Emma, which takes place in 1814/15 - some time after the official calendar change.)

(Edit: Some people in Ulster and Appalachia held on to Old Christmas into the twentieth C, it seems.)


Last edited by BigEd on Mon Jan 03, 2022 9:57 am, edited 2 times in total.

PostPosted: Mon Jan 03, 2022 9:48 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BillG wrote:
Leap seconds?
What about them?

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Jan 03, 2022 9:57 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BigEd wrote:
The first day of the year was also redefined in 1752, or to be careful, on the day after 31st December 1751. (Scotland made this change much earlier.) I'm sure I've seen some reference to Old Style and New Style dates in some nineteenth century literature but I can't find it. Wikipedia.

(Edit: old and new style dates are apparently concealed (as a hidden puzzle for the careful reader) in Austen's Emma, which takes place in 1814/15 - some time after the official calendar change.)

Old style calendars are primarily of historical interest. My presentation was about a practical date-and-time reckoning method that works with the current calendar.

As for the calendric gyrations that occurred in the UK during the 18th century, they were why I chose October 1, 1752 as the epoch.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon Jan 03, 2022 1:36 pm 

Joined: Thu Mar 12, 2020 10:04 pm
Posts: 704
Location: North Tejas
BigDumbDinosaur wrote:
BillG wrote:
Leap seconds?
What about them?


I was wondering how the POSIX/UNIX/Linux world dealt with them.

Would a file created at exactly midnight today show the same on all systems? Unlikely since leap seconds can be added whenever "the science" decides they are needed.


PostPosted: Mon Jan 03, 2022 7:20 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BillG wrote:
BigDumbDinosaur wrote:
BillG wrote:
Leap seconds?
What about them?

I was wondering how the POSIX/UNIX/Linux world dealt with them.

Would a file created at exactly midnight today show the same on all systems? Unlikely since leap seconds can be added whenever "the science" decides they are needed.

Adding or, theoretically, subtracting a leap second is an ad hoc sort of thing that isn't readily programmed. In fact, under current timekeeping policies, the maximum period of notification for a pending leap second is six months.

In the case of Linux, UNIX, Window$, etc., access to a stratum-2 or stratum-3 Internet time server would be the most reliable means of dealing with leap seconds. Those time servers will “know” about a leap second within microseconds of one being inserted into or removed from the time stream.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Tue Jan 04, 2022 8:34 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
There is perhaps an interesting wrinkle. Positive leap seconds (the only kind we've yet seen used) make for a 61-second minute, and therefore a slightly longer day and a slightly longer year. So, if a program aimed to report differences by assuming days are always of standard length, it could end up out by one compared to a program which had a record of the historical leap seconds. That is, if there are any such programs.

It does feel like treating all minutes as being 60 seconds is not going to get you in trouble, except for astronomical purposes.

There's a great deal more scope for fun and games when you try to account for summer time and time zones. Both of them have definitions which change as legislation changes.

There was also the amusing period during my working life when BST was taken to be Bering Strait Time rather than British Summer Time, by at least some software, giving us both a time and a date confusion.

Edit: another gotcha, perhaps slightly more likely for homebrew setups, is that GPS time has no leap seconds, instead having an offset from UTC, which is presently 18 seconds AFAICT. GLONASS, Galileo, and BeiDou are potentially all different, although I think presently Galileo and GPS have the same tactics.

Edit: also, some NTP sources will smear a leap second, to avoid a sudden change, which means a period of many hours when various NTP sources may differ by up to a second.


PostPosted: Tue Jan 04, 2022 8:47 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8544
Location: Southern California
Also interesting is that quite a few countries didn't adopt the Gregorian calendar until the 20th century. (What made me think of it was that our athletes were almost late reaching Greece for the Olympics in 1896, as they missed the part about Greece not yet being on the same calendar. Greece got onboard in 1923, according to https://en.wikipedia.org/wiki/Adoption_ ... n_calendar ).

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Tue Jan 04, 2022 10:02 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
BigEd wrote:
There is perhaps a possible and interesting wrinkle. Positive leap seconds (the only kind we've yet seen used) make for a 61 second minute, and therefore a slightly longer day and a slightly longer year.

Since the earth's axial rotation is irregular and also slowing down, the solar day is lengthening at a variable rate of around 1.5 milliseconds per century. So we can expect leap seconds to be a permanent part of the timekeeping landscape.

Quote:
So, if a program aimed to report differences by assuming days are always of standard length, it could end up with an out-by-one compared to a program which had a record of the historical leap seconds. That is, if there are any such programs.

In practice, I've observed from years of writing database software that that isn't likely to be a problem. When a leap second is needed, it is inserted at 23:59:59. Since database searches seldom narrow down to an exact moment in time, the seeming day drift doesn't affect most searches.

Leap seconds are used to correct the difference between observed solar time and international atomic time (TAI). TAI, of course, is extremely stable—current cesium-beam standards have no known drift, whereas observed solar time fluctuates. Since UTC is derived from TAI, there is a tendency for UTC to run a little fast relative to solar time.

Ultimately, the Gregorian calendar will start to get out of sync with the seasons, just as the Julian calendar did, and a new definition of the length of a day and a new set of leap year rules will be required.

Quote:
It does feel like treating all minutes as being 60 seconds is not going to get you in trouble, except for astronomical purposes.

I suppose there are some scientific activities where being off a femtosecond or two might give you grief. :D

Quote:
There's a great deal more scope for fun and games when you try to account for summer time and time zones. Both of them have definitions which change as legislation changes.

Here in the USA, the daylight saving (summer) time (DST) mess is just that. Individual states are not legally required to observe DST, but if they do, they are obligated to use the national definition of when DST starts and ends. That definition has changed three times in my lifetime, the last one being in 2007.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Tue Jan 04, 2022 10:24 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Indeed...

BigDumbDinosaur wrote:
...we can expect leap seconds to be a permanent part of the timekeeping landscape.


Not necessarily! One alternative is to use leap hours, very much less often. There are pros and cons, and it's a current topic of (slow) discussion:
Quote:
The scientific community has so far failed to reach an agreement on this topic.

In 2003, a meeting named “ITU-R SRG 7A Colloquium on the UTC timescale” took place in Torino, Italy, where it was suggested that time be decoupled from the Earth’s rotation and leap seconds be abolished. No decision was reached.
In 2005, US scientists proposed to eliminate leap seconds and replace them with leap hours. The proposal was criticized for its lack of consistent public information and adequate justification.
In 2012, delegates of the World Radiocommunication Assembly in Geneva, Switzerland, decided once more to postpone the decision to abolish leap seconds and scheduled a new vote for 2015.
In 2015, the decision was again deferred to 2023.


Edit: another amusing phenomenon, "Leap seconds: Causing Bugs Even When They Don’t Happen" which also contains a hint that we might yet see the first negative leap second within a decade.


PostPosted: Sun Mar 12, 2023 1:03 am 

Joined: Sun Feb 22, 2004 9:01 pm
Posts: 109
At DayOfTheWeek there is a useful snippet of 6502 code to calculate the day of the week for an 8-bit year offset from 1900. As 1900 is not a leap year it only gives correct results from 01-March-1900 onwards. Some time ago I tweaked the code to extend it back to 01-Jan-1900.

I also ported the code to various other CPUs. While snowed in over the last few days I've been tidying up my documentation and doing a few bits of optimisation. The results so far are at: Dates.

On some other CPUs the 1900 tweak is fairly simple (especially if the registers are larger than 8 bits). On the 6502 I worked out three different methods, all of which took six bytes, so in the published code I used the tweak that places it logically in the flow of the code.

_________________
--
JGH - http://mdfs.net


PostPosted: Sun Mar 12, 2023 6:00 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8507
Location: Midwestern USA
jgharston wrote:
At DayOfTheWeek there is a useful snippet of 6502 code to calculate the day of the week for an 8-bit year offset from 1900. As 1900 is not a leap year it only gives correct results from 01-March-1900 onwards. Some time ago I tweaked the code to extend it back to 01-Jan-1900.

Have you looked at this?

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!

