6502.org Forum

All times are UTC




Post new topic Reply to topic  [ 26 posts ]  Go to page Previous  1, 2
PostPosted: Wed Feb 20, 2019 8:58 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10976
Location: England
The ISR would then need a table of month lengths, and to account for leap years?


PostPosted: Wed Feb 20, 2019 9:14 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
Right, I have two 12-byte tables with month lengths, one each for leap and non-leap years. The same tables are also used in the conversion to time_t.

Leap years are detected by checking divisibility by 4, which works from 1901 to 2099.

Here is the C code for the conversion:
Code:
#include <stdint.h>

/* Month lengths: row 0 = normal year, row 1 = leap year. */
static const uint8_t days_per_month[2][12] = {
    { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 },
    { 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 },
};

/* Seconds since 2000-01-01 00:00:00. The divisibility-by-4 leap test
 * holds from 1901 through 2099. */
long mktime( struct time *time )
{
    uint8_t i, is_leap = !(time->year & 3);

    /* Whole days in the years since 2000: 1461 days per 4-year cycle. */
    long result = ((time->year - 2000) * 1461 + 3) / 4;

    for( i = 1; i < time->month; i++ )
        result += days_per_month[is_leap][i - 1];
    result += time->day - 1;
    result = result * 24 + time->hr;
    result = result * 60 + time->min;
    result = result * 60 + time->sec;
    return result;
}

Instead of looping over the days_per_month table, you could also use a table of pre-calculated cumulative values, but timing was not critical for me.
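For illustration, the cumulative variant could be sketched like this (not Arlet's actual code; `days_before_month` is a hypothetical name):

```c
#include <stdint.h>

/* Days elapsed before the start of each month: row 0 = normal year,
 * row 1 = leap year. */
static const uint16_t days_before_month[2][12] = {
    { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 },
    { 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335 },
};

/* Replaces the for-loop in mktime() with a single lookup. */
long days_into_year(uint8_t month, uint8_t is_leap)
{
    return days_before_month[is_leap][month - 1];
}
```

Note the cumulative values exceed 255, so the table needs 16-bit entries: 48 bytes instead of 24, in exchange for dropping up to eleven additions per conversion.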


Last edited by Arlet on Wed Feb 20, 2019 9:18 am, edited 1 time in total.

 Post subject: OT: time epochs
PostPosted: Wed Feb 20, 2019 9:16 am 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10976
Location: England
[Sorry, I went very off-topic there - new thread here.]


PostPosted: Wed Feb 20, 2019 2:24 pm 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
I think it makes most sense to have dedicated RTC hardware tracking the wall-clock date and time, rather than relying on a periodic ISR. That typically provides the data in a human-accessible format, and handles month lengths and leap years autonomously (except, curiously, for the century rule). If you maintain the hardware at UTC, correcting for timezone offset from there is not hugely complicated.

For time-since-boot, you could increment a 16-bit counter at 32kHz and run an ISR when it overflows, every 2 seconds, to maintain a software extension word - or run a 32-bit counter which overflows at roughly 36-hour intervals. For many purposes, 32kHz is fine enough resolution. Readings from this timer should not ordinarily be converted into dates and times of day, but used for measuring relatively short intervals of time - milliseconds, seconds, minutes, hours.
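A minimal C sketch of that first scheme, with hypothetical names (`hw_count` stands in for a read of the 16-bit hardware counter):

```c
#include <stdint.h>

/* Software extension word, bumped by the overflow ISR every 65536
 * ticks, i.e. every 2 seconds at 32768 Hz. volatile: written by an ISR. */
static volatile uint16_t ticks_high;

/* The entire overflow ISR body. */
void timer_overflow_isr(void)
{
    ticks_high++;
}

/* Combine the extension word with the current hardware count into a
 * 32-bit tick value. A real driver must also guard against an overflow
 * occurring between reading the hardware count and the high word
 * (e.g. read the high word before and after, and retry on mismatch). */
uint32_t ticks32(uint16_t hw_count)
{
    return ((uint32_t)ticks_high << 16) | hw_count;
}
```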

*Processing* dates consistently, especially dates which may be far in the future or past, in strange and inconsistent calendars, etc - that is known to be a Hard Problem, and should be discussed separately.


PostPosted: Wed Feb 20, 2019 3:56 pm 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
Well, this has certainly been an interesting thread regarding the RTC and more. The information and ideas presented are excellent. Once I implement actual RTC hardware on an adapter board, much of this will be considered for a new BIOS release.

For now, the same goal exists... a simple clock that keeps track of time since a cold start (reset), or initial boot-up if one prefers. The short-term goals are the same: short, fast, and simple code that conserves space while providing a lot of useful functions and commands in the Monitor.

For a high-level memory map, I've set design goals of a maximum of 2KB for the BIOS and I/O space and another 6KB for the Monitor itself. I currently have over 2KB remaining for future hardware and enhancements. I'm down to one more function in the Monitor: a simple assembler. This will be yet another release for both the Monitor and BIOS, and will include an updated Enhanced Basic with integration for load/save. Hopefully all will be completed before the end of 2019, as time allows.

_________________
Regards, KM
https://github.com/floobydust


PostPosted: Wed Feb 20, 2019 5:17 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8479
Location: Midwestern USA
Chromatix wrote:
I think it makes most sense to have dedicated RTC hardware tracking the wall-clock date and time, rather than relying on a periodic ISR.

Actually, not.

Internally maintaining the date and time in broken-down format, as much RTC hardware does, is inefficient. For one thing, computing some time in the past or future relative to the RTC's notion of time becomes very non-trivial. Another consideration is that the accuracy of timekeeping is strongly tied to the accuracy and stability of the RTC itself. As my own experience with POC V1.1 highlighted, that accuracy and stability is not something that can be taken for granted.

Furthermore, if time is maintained in software via a jiffy IRQ generated by an interval timer (as is done on virtually all modern computers), a fetch of the time is a compute-bound operation that can be very fast. If the time has to be fetched from an RTC every time it is needed (and that could be a lot more often than you think), an I/O-bound operation has to occur on multiple device registers instead. Much slower, especially if the RTC hardware requires wait-stating.

Quote:
That typically provides the data in a human-accessible format, and handles month lengths and leap years autonomously (except, curiously, for the century rule). If you maintain the hardware at UTC, correcting for timezone offset from there is not hugely complicated.

Many RTCs devote only one (8-bit) register to the year, which means end-of-century leap year rules are not correctly implemented. The Maxim DS1511Y in my POC units is guilty of this.

Another consideration is that many RTCs read out the date and time in binary-coded decimal (BCD) format, which is convenient from a display perspective, but not so much when doing arithmetic as quickly as possible. That especially becomes a consideration in accommodating time zone offset with broken-down time. Consider that not all locales are in a time zone that is an exact hour multiple offset relative to UTC. For example, how would you handle, say, the Chatham Islands, where standard time is UTC+12:45?
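For reference, converting between a packed-BCD register value and binary is short, though it does add a step to every read or write of the RTC (a generic sketch, not tied to any particular part):

```c
#include <stdint.h>

/* Packed BCD, as typical RTC registers use it: high nibble = tens,
 * low nibble = units. */
uint8_t bcd_to_bin(uint8_t bcd)
{
    return (uint8_t)((bcd >> 4) * 10 + (bcd & 0x0F));
}

uint8_t bin_to_bcd(uint8_t bin)
{
    return (uint8_t)(((bin / 10) << 4) | (bin % 10));
}
```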

Quote:
For time-since-boot, you could increment a 16-bit counter at 32kHz and run an ISR when it overflows, every 2 seconds, to maintain a software extension word - or run a 32-bit counter which overflows at roughly 36-hour intervals. For many purposes, 32kHz is fine enough resolution. Readings from this timer should not ordinarily be converted into dates and times of day, but used for measuring relatively short intervals of time - milliseconds, seconds, minutes, hours.

You're making it too convoluted, in my opinion. A single hardware timer (I use the 28L92 counter/timer in POC for that), generating a jiffy IRQ at 100 Hz will produce centisecond resolution with very succinct code.
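As an illustration of how succinct that can be, the entire timekeeping side of such an ISR might amount to this (hypothetical names, C for clarity; in practice this would be called from the 28L92 timer interrupt):

```c
#include <stdint.h>

/* 32-bit uptime in jiffies (centiseconds at a 100 Hz IRQ rate).
 * Wraps after 2^32 / 100 seconds, roughly 497 days. */
static volatile uint32_t jiffies;

/* The whole timekeeping ISR body: one increment per interrupt. */
void jiffy_isr(void)
{
    jiffies++;
}

/* Whole seconds since boot. */
uint32_t uptime_seconds(void)
{
    return jiffies / 100;
}
```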

Quote:
*Processing* dates consistently, especially dates which may be far in the future or past, in strange and inconsistent calendars, etc - that is known to be a Hard Problem, and should be discussed separately.

For a discussion of calendric and temporal computing see here. The topic isn't complete, however.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Feb 21, 2019 5:30 am 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Quote:
…generating a jiffy IRQ at 100 Hz…

That is *precisely* something I'd prefer to avoid if not explicitly required, not least because any lost interrupts would make the clock lose time. A hardware RTC maintains time without constant software attention; the extra effort needed to convert it to machine representation for processing can be tolerated. In that context, I think I'd prefer to select hardware which uses binary representation rather than BCD - but the latter only *really* complicates conversion to epoch time. To handle fractional-hour timezones is not really difficult either; add to the minutes field, carry into the hours, and work from there.
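That minutes-first carry might be sketched like this (illustrative names; the field layout is not from the thread, and the day carry is returned so the caller can adjust the date):

```c
#include <stdint.h>

/* Apply a timezone offset given in minutes (e.g. +765 for UTC+12:45)
 * to an hours/minutes pair, carrying minutes into hours and wrapping
 * at 24 h. Returns -1, 0 or +1 for the day adjustment. */
int apply_tz_offset(int8_t *hr, int8_t *min, int16_t offset_min)
{
    int total = (*hr * 60 + *min) + offset_min;
    int day_carry = 0;

    while (total < 0)        { total += 24 * 60; day_carry--; }
    while (total >= 24 * 60) { total -= 24 * 60; day_carry++; }

    *hr  = (int8_t)(total / 60);
    *min = (int8_t)(total % 60);
    return day_carry;   /* caller rolls the date forward or back */
}
```

For example, 20:30 UTC plus the +765-minute Chatham offset yields 09:15 with a day carry of +1.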

NB: converting a current date to epoch time is considerably easier than converting an arbitrary date to epoch time, because you can implicitly assume convenient things about the calendar (simplified Gregorian) and timezone (UTC).

Applications which require high-frequency access to time information are likely to be best expressed in terms of time intervals, which is where the time-since-boot counter comes in. They can also quite reasonably establish a "local epoch" valid for, say, an hour at a time, and thus read the RTC hardware only infrequently while still successfully referring to wall-clock time. It's relatively easy to set up a 32-bit 32kHz counter that can be read without wait-states (trivially, a 6522 can do it), and only slightly more difficult to engineer some protection against carry effects during readout (at 32kHz increments, you might as well do it in software).
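The carry protection usually amounts to the read-twice idiom; a sketch, with register pointers standing in for real 6522 latch reads:

```c
#include <stdint.h>

/* Read a 32-bit counter exposed as two 16-bit halves that can carry
 * mid-read: sample high, then low, then high again, and retry if the
 * high half changed in between. */
uint32_t read_counter32(volatile uint16_t *high_reg,
                        volatile uint16_t *low_reg)
{
    uint16_t hi, lo;
    do {
        hi = *high_reg;
        lo = *low_reg;
    } while (hi != *high_reg);   /* a carry slipped in: sample again */
    return ((uint32_t)hi << 16) | lo;
}
```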

No jiffy interrupts needed at any stage.


PostPosted: Thu Feb 21, 2019 5:42 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8479
Location: Midwestern USA
Chromatix wrote:
Quote:
…generating a jiffy IRQ at 100 Hz…

That is *precisely* something I'd prefer to avoid if not explicitly required, not least because any lost interrupts would make the clock lose time.

Why would you lose interrupts?

Quote:
A hardware RTC maintains time without constant software attention; the extra effort needed to convert it to machine representation for processing can be tolerated. In that context, I think I'd prefer to select hardware which uses binary representation rather than BCD - but the latter only *really* complicates conversion to epoch time. To handle fractional-hour timezones is not really difficult either; add to the minutes field, carry into the hours, and work from there.

NB: converting a current date to epoch time is considerably easier than converting an arbitrary date to epoch time, because you can implicitly assume convenient things about the calendar (simplified Gregorian) and timezone (UTC).

Applications which require high-frequency access to time information are likely to be best expressed in terms of time intervals, which is where the time-since-boot counter comes in. They can also quite reasonably establish a "local epoch" valid for, say, an hour at a time, and thus read the RTC hardware only infrequently while still successfully referring to wall-clock time. It's relatively easy to set up a 32-bit 32kHz counter that can be read without wait-states (trivially, a 6522 can do it), and only slightly more difficult to engineer some protection against carry effects during readout (at 32kHz increments, you might as well do it in software).

No jiffy interrupts needed at any stage.

I think you are swimming against the tide on this one. As far as I know, all mainstream operating systems in use today do their timekeeping using a jiffy IRQ to increment a "seconds since epoch" data structure.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Feb 21, 2019 6:10 am 

Joined: Mon May 21, 2018 8:09 pm
Posts: 1462
Often that's because they need a jiffy interrupt for other reasons, such as pre-emptive process scheduling. Or because the date-handling routines were written decades ago and only minimally updated. Certainly I think that's what Windows does (along with the hopeless malpractice of assuming the hardware RTC keeps local time, not UTC). That's not how I would build it myself, though.

But I think what Linux now does (probably depending on available hardware) is to maintain a hardware time-since-boot counter (eg. HPET) and refer wall-clock time to that as required. Meanwhile a hardware RTC ticks in the background as a boot-time reference, and is adjusted when external time information (ie. NTP) becomes available. There's been a big push to *eliminate* unnecessary CPU wakeups for power efficiency reasons; a jiffy IRQ every centisecond or millisecond is now anathema to modern laptops.

Quote:
Why would you lose interrupts?

Any time you need to disable interrupts for an extended period, for example to update the main EEPROM in which the IRQ vector and ISRs are stored. The write-cycle time of an EEPROM is on the order of a centisecond per page. Or maybe you're trying to debug an ISR, so the CPU spends multiple seconds or even minutes halted with interrupts masked. In that context, I'd be very happy to have the time-of-day remain consistent without needing realtime response from the CPU.


PostPosted: Thu Feb 21, 2019 2:43 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8479
Location: Midwestern USA
Chromatix wrote:
BigDumbDinosaur wrote:
Why would you lose interrupts?

Any time you need to disable interrupts for an extended period, for example to update the main EEPROM in which the IRQ vector and ISRs are stored. The write-cycle time of an EEPROM is on the order of a centisecond per page. Or maybe you're trying to debug an ISR, so the CPU spends multiple seconds or even minutes halted with interrupts masked. In that context, I'd be very happy to have the time-of-day remain consistent without needing realtime response from the CPU.

Both of the above cases represent machine development activities, not usage. Most of the time, the machine will be in use, as opposed to having its firmware updated or its kernel undergoing debugging (in the latter case, a small drift in the time is the least of the problems that could arise). So the idea that IRQs would have to be temporarily masked to undertake significant firmware or kernel revisions is not a problem from where I'm sitting. Pardon me if I seem like a gruff, old curmudgeon, but I think you are creating a strawman argument here.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Feb 21, 2019 4:58 pm 

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1385
I would also note that doing system-level changes to the EEPROM in situ does not really count as a factor in maintaining time using the processor via an ISR. Anytime I update the Monitor code with the EEPROM in the SBC, I force a restart of the board, which resets the RTC function.

I would also note that, depending on your EEPROM and code, you can alter EEPROM data without affecting the RTC functions within the ISR. My current board uses a 10ms jiffy clock. I also use a flash routine (using byte-write mode, not page-write mode) that copies the 6KB Monitor code from RAM to EEPROM, which completes in just under 36 seconds. This works out to an average of under 6ms per byte write on a standard Atmel 28C256 EEPROM. Using the Program EEPROM function built into the Monitor has the same timing, as the same core routine is copied into Page Zero and called as a subroutine. The core routine also disables and re-enables the IRQ mask around each byte write.
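For what it's worth, the byte-write idiom such a routine relies on can be sketched in C. On the 28C256, reads during the internal write cycle return the complement of bit 7 of the written data, so polling until bit 7 matches detects completion (`ee` is a hypothetical pointer into the EEPROM's address range; the caller masks IRQs around the call, as described above):

```c
#include <stdint.h>

/* Write one byte to a 28C256-style EEPROM and wait for the internal
 * write cycle to finish using DATA polling: until the cycle completes,
 * reads return the complement of bit 7 of the written data. */
void eeprom_write_byte(volatile uint8_t *ee, uint8_t data)
{
    *ee = data;                            /* start the write cycle */
    while ((*ee & 0x80) != (data & 0x80))
        ;                                  /* poll until bit 7 matches */
}
```

Against plain RAM the "write cycle" completes instantly, so the poll falls straight through; on the real part it spins for the few milliseconds the cycle takes.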

Of course, there are times when I don't restart the board after updating the EEPROM data, such as for the EhBasic code, which is just under 10KB. In any case, to date I've not had any adverse effect on the RTC timing while updating the EEPROM in situ. Lastly, Atmel did make some EEPROMs with a maximum of 3ms write-cycle timing, while their standard timing is 10ms.

_________________
Regards, KM
https://github.com/floobydust

