6502.org Forum

All times are UTC

[ 48 posts ]  Go to page Previous  1, 2, 3, 4  Next
Author Message
PostPosted: Wed Apr 01, 2015 3:06 am 
Offline
User avatar

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1928
Location: Sacramento, CA, USA
I understand Garth's explanation of the effects of jitter, but I have resigned myself to the fact that I don't understand BDD's explanation of the same, and possibly never will ... I have little doubt that it is a fault of my own, not a fault of his, but I felt the need to share anyway.

Mike B.


PostPosted: Wed Apr 01, 2015 3:46 am 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
barrym95838 wrote:
I understand Garth's explanation of the effects of jitter, but I have resigned myself to the fact that I don't understand BDD's explanation of the same, and possibly never will ... I have little doubt that it is a fault of my own, not a fault of his, but I felt the need to share anyway.

Mike B.

I'm probably not doing a good job with my explanation. I understand the concept but can't seem to articulate it very well.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed Apr 01, 2015 7:18 am 
Offline
User avatar

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10800
Location: England
I'm in the same boat. I don't see how a less frequent system tick would make any difference to a system clock: it can't be that system seconds would be lost; it can only be that the update of a system second will lag a little behind the ideal time. The amount of lag won't be much more than a single system tick, so anything faster than 10Hz is going to be fine for timestamps.

Garth is more concerned with very regular sampling for DSP purposes: for audio, the samples would need to be much more regular than a 10Hz system tick could provide, but they'd also need to be much more frequent. With system ticks generally only going up to the hundreds of Hz, they'd be too slow anyway: an audio system would have its own regular interrupt at a higher rate, and at a higher priority.

Ed


Last edited by BigEd on Wed Apr 01, 2015 12:38 pm, edited 1 time in total.

PostPosted: Wed Apr 01, 2015 12:06 pm 
Offline

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
BigDumbDinosaur wrote:
If I/O is occurring when the timer interrupts, then servicing the jiffy IRQ will be delayed by the amount of time required to service I/O interrupts.
If I understand correctly, in that example I/O interrupts have higher priority than timer interrupts? I'm asking because on a real-time minicomputer I worked with in the past, this was done differently (it also implemented an efficient time-sharing system, as a variable-priority list of tasks managed by a real-time process).
This system had 16 interrupt levels (each supported by a full register set, on later, faster models by a register file), with level 0 the lowest and level 15 the highest. Level 0 was the idle loop. Hard disk interrupts were managed by level 11. The real-time clock was level 13, and the only higher one on a normal system was internal interrupts on level 14 (to handle things like bus errors etc.).
(Level 15 was only ever used by CERN and was thus called the 'Cyclotron level' - it had only one interrupt source).
The thing about interrupts from real-time clocks is that you know exactly how long they will take to service, unlike I/O. So you can accommodate that when designing the I/O system. The other way around makes timer interrupt handling unpredictable.

-Tor


PostPosted: Tue May 12, 2015 5:15 am 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
I've fallen behind on this topic due to work pressures. I hope to resume it soon.



PostPosted: Sun Feb 24, 2019 7:12 pm 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
Following a long period of dormancy, I have resumed working on this topic and should have something ready to post in a few days.



PostPosted: Thu Dec 16, 2021 7:18 pm 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
BigDumbDinosaur wrote:
Following a long period of dormancy, I have resumed working on this topic and should have something ready to post in a few days.

Nearly three years after I posted that, I really do have some new material. :shock:

NOTE: I have reposted this with additional material. I've removed the original content from this post.



Last edited by BigDumbDinosaur on Mon Jan 03, 2022 3:07 am, edited 1 time in total.

PostPosted: Thu Dec 16, 2021 8:10 pm 
Offline
User avatar

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10800
Location: England
(Will I mention leap seconds? Oh, I did! Just teasing, but it does mean some minutes have 61 seconds... and future dates are not readily resolvable to seconds since epoch.)


PostPosted: Fri Dec 17, 2021 2:02 am 
Offline
User avatar

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1373
Nice to see new info on this... I was recently looking at one of the Maxim clock chips that basically keep time as a 32-bit integer... the DS-1318. Using this chip to replace a more common RTC could be interesting... and the ISR to increment the 4 bytes would certainly be simpler. However, you still need some code to convert to normal date/time.

Any chance you (BDD) have code for the 65C02 as well as the 65C816?

Another thought..... instead of implementing the updated 64-bit integer, one might take an alternate approach of adding a single bit that gets set once the 2038 date hits. This would double the range and would hopefully result in smaller code. Just a thought of course.

2038 doesn't seem that far away...

_________________
Regards, KM
https://github.com/floobydust


PostPosted: Fri Dec 17, 2021 6:44 am 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
BigEd wrote:
(Will I mention leap seconds? Oh, I did! Just teasing, but it does mean some minutes have 61 seconds... and future dates are not readily resolvable to seconds since epoch.)

Actually, future dates are resolvable because leap seconds are applied to the timekeeping base, i.e., the interrupt-driven POSIX seconds count, not the calendar conversion library.

If, for example, a bank loan states the note is due and payable on a specific date at a specific time, e.g., 12:01 AM CST, and two leap seconds are applied prior to that date and time, it simply means the due date has arrived two seconds later than a simple time progression would suggest. The computation that determined the POSIX seconds count remains correct.

Leap seconds are applied to compensate for the drift between atomic time, which is remarkably stable over the long term, and observed solar time, which is imprecise. Since that doesn't occur on a fixed schedule, timekeeping software can't account for it, necessitating a “manual” adjustment when a leap second has to be applied.

————————————————————
Edit: Fixed an egregious typo.



Last edited by BigDumbDinosaur on Fri Dec 17, 2021 10:35 am, edited 2 times in total.

PostPosted: Fri Dec 17, 2021 8:00 am 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
floobydust wrote:
Nice to see new info on this... I was recently looking at one of the Maxim clock chips that basically keep time as a 32-bit integer... the DS-1318. Using this chip to replace a more common RTC could be interesting... and the ISR to increment the 4 bytes would certainly be simpler. However, you still need some code to convert to normal date/time.

A 32-bit time field gives you a 136-year maximum reference.  If you plan to account for dates in the past as well as the future, 32 bits is going to produce the aforementioned year-2038 problem, or some variation thereof.  In any case, the DS-1318 is not a true RTC.  It's an elapsed time counter—and also a 3.3 volt device.  Furthermore, you can't just read it on the fly and expect a correct result every time.  It's a three-step process and given the relative slowness of the device, a process that would hurt performance if your Ø2 rate is sufficient to require wait-stating.

However, I'm not sure you are fully understanding what has been presented in this topic.

The RTC, if present, merely acts as a reference time source for initially setting POSIX (time_t) time to a rational value.  Once time_t time has been set no further reference is made to the RTC, except to adjust it if a permanent time correction has to be made.  Maintaining the date and time becomes a process of incrementing the time_t field once per second, which is absurdly easy to do, even with the NMOS 6502.  Very simple code in your interrupt service routine, which was earlier presented in this topic, takes care of that.

Another concept that must be understood is your kernel doesn't know anything about the human-readable form of the date and time.  The kernel's job is to merely increment the time_t field once per second.  The human-readable aspects of calendars and clocks are addressed in an application library function that has the code required to perform the time_t to broken-down time (BDT) (and back) conversion.  That conversion is performed on a “demand” basis, if an application needs it.  If an application has no need to generate BDT, there is no reason to expend thousands of clock cycles doing so, which is why time_t-to-BDT conversion is not part of the kernel's timekeeping regimen.

Similarly, if your application doesn't need to convert BDT-to-time_t, why embed such a function into the kernel?  Again, a library function can be called from your application should the need arise to make such a conversion.

The common theme here is to offload all non-essentials from the kernel to keep its code small and fast.

Furthermore, any field in your data that is to represent a date and time (current, future or past) should be a time_t representation, not BDT.  Use of time_t representation means the field is a fixed size of five (minimum) or six bytes, versus seven bytes minimum to store the BDT equivalent (eight bytes if day-of-year is also stored).  Importantly, a date/time in time_t format is a fixed-size positive integer that may be used in integer arithmetic and comparison operations.  For example, to calculate a date that is 30 days from now, one merely needs to add 86400 × 30 (2,592,000) seconds to the current time_t date field.  Just how would you do something like that using BDT?  Or, how would you determine how many days in the past a date was if it's in BDT?

Quote:
Any chance you (BDD) have code for the 65C02 as well as the 65C816?

The bulk of the 65C816 code can be adapted to the 65C02 with simple changes, mostly involving the switch from word-at-a-time data fetches and stores to byte-at-a-time operations.  The 65C816 renditions of multiplication and division are where most of the word-to-byte conversion would have to be performed.  The interrupt-driven code that would apply to the 65C02 was presented early in this topic.

Quote:
Another thought..... instead of implementing the updated 64-bit integer, one might take an alternate approach of adding a single bit that gets set once the 2038 date hits. This would double the range and would hopefully result in smaller code. Just a thought of course.

Have you read this entire topic from start to finish?  At no point was it said time is maintained as a 64-bit integer.  :D Also, making the time field variable in format is going to complicate anything that has to use it.  I chose a fixed-size, 48-bit field (in which bits 0-39 are significant—practical with the 65C02) to avoid such gyrations, as well as to give time_t enough range to avoid any “hit-the-wall” situations.

In any case, the size of the code required to process the time_t field is not affected all that much by the number of bits in the field, as arithmetic processing is done in loops.  More bits simply means more iterations in the loops.

Quote:
2038 doesn't seem that far away...

It isn't...it's only 16 years from now.  From my perspective, 16 years is not much anymore.  Must have something to do with being a septuagenarian.  :shock:



Last edited by BigDumbDinosaur on Wed Mar 13, 2024 5:33 am, edited 2 times in total.

PostPosted: Fri Dec 17, 2021 9:03 am 
Offline
User avatar

Joined: Tue Mar 05, 2013 4:31 am
Posts: 1373
Thanks for the detailed info.... I have read the entire thread, albeit over time when it first appeared. As you mentioned the 64-bit integer in your recent post, I assumed that you might be revisiting your code to implement it... my bad. Perhaps I wasn't very clear in my post.

The elapsed timer seems an interesting fit for setting the time variable. With my existing (DS-1511) RTC code, it's only read once during boot time. Once that's done, the ISR in my BIOS maintains the date/time. Using the DS-1318, it should be capable of running continuously (having scanned through the datasheet), and its current draw while running on a battery source (when the system is off) seems about the same as a standard RTC. The idea is to switch the ISR to a simpler (32-bit or 48-bit) integer variable for timekeeping, using the DS-1318 to set it initially. This would mean less code in the BIOS ISR and a shorter code path.

I have looked at the routines you posted earlier... but those are what I would consider the ISR routine. What I was attempting to ask about was the code that takes the BIOS-incremented integer and translates it into a standard date/time... and vice versa. Hopefully I was clearer in this post :roll:



PostPosted: Fri Dec 17, 2021 10:25 am 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
floobydust wrote:
Thanks for the detailed info.... I have the read the entire thread, albeit over time when it first appeared. As you mentioned the 64-bit integer in your recent post, I probably assumed that you might be revisiting your code to implement... my bad. Perhaps I wasn't very clear in my post.

The 64-bit part has to do with the arithmetic functions that are involved in the conversion between POSIX (aka “sequential time” or time_t) time and broken-down time (BDT). Nothing outside of the arithmetic functions refers to or is sized to 64-bits.

The required number of significant bits in a time_t field is 40, which is sufficient to represent all dates from October 1, 1752 to December 31, 9999, inclusive. In my 65C816 rendition, I used a 48-bit time_t field due to it being more convenient to process and copy (it's three 16-bit chunks). In a 65C02 version, a 40-bit field may be used without any loss of code efficiency or resolution, since the C02 can only handle a byte at a time.

Quote:
The elapsed timer seems an interesting fit for setting the time variable.

It will not work with the algorithms I've presented without applying corrective adjustment to the timer's output. At 32 bits, the range is too small.

The transformation of the DS-1511's BCD output to the time_t format is not a big deal—it's four-function integer arithmetic and a BCD-to-binary conversion sub. The result is the time_t format that is needed. That said, it wouldn't be something I'd put in the firmware. That sort of thing belongs in software loaded from mass storage (disk, CF card, etc.). You could run that as part of the kernel initialization functions.

Quote:
The idea is to switch the ISR to a simpler (32-bit or 48-bit) integer variable for timekeeping, using the DS-1318 to set it initially. This would be less code size (in the BIOS ISR) and have a shorter code path.

Your comment again prompts me to ask if you have read this topic from the top, mainly the concepts (the implementation necessarily follows the concepts). As I described in my earlier post, the “integer variable” is a 40-bit field, the time_t field. The code required to increment it at one second intervals is minimal and has negligible effect on processing load.

Code:
;   increment system date & time, 65C02 version...
;
         ldx #0                ;time_t “register” index
         ldy #40/8             ;bytes in 40-bit time_t “register”
;
loop     inc systod,x          ;SYSTOD is the 40-bit time “register”
         bne done              ;no carry, so exit loop
;
         inx                   ;handle carry
         dey
         bne loop
;
done     ...done with the date & time...

Just how much load do you think the above, which only runs once per second, will put on the MPU? :D

Quote:
What I was attempting to ask about was the code that takes the BIOS incremented integer and translates that into standard Date/Time.... and vice-versa. Hopefully I was more clear on this post :roll:

I gave the equations for doing so in my earlier post. I have not developed a 65C02 implementation, only a 65C816 version.



PostPosted: Fri Dec 17, 2021 6:09 pm 
Offline

Joined: Mon Feb 15, 2021 2:11 am
Posts: 100
The recent replies in this topic do much to explain a few of the bullet points I saw in the FUZIX documentation such as "Proper sane time_t" and "6809 gcc and cc65 don't have long long 64bit (for sane time_t)". Thank you!


PostPosted: Fri Dec 17, 2021 6:48 pm 
Offline
User avatar

Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
Sean wrote:
The recent replies in this topic do much to explain a few of the bullet points I saw in the FUZIX documentation such as "Proper sane time_t" and "6809 gcc and cc65 don't have long long 64bit (for sane time_t)". Thank you!

The 40-bit version of time_t that I chose is good for more than 8700 years, using an epoch of October 1, 1752 and only positive integers.  The only significant requirement to implement my conversion algorithms is the ability to do 64-bit arithmetic, which is easy with the 65C816 and manageable with the 65C02.



Last edited by BigDumbDinosaur on Wed Mar 13, 2024 5:38 am, edited 1 time in total.
