
All times are UTC




PostPosted: Sun Mar 15, 2015 2:17 pm 
Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
BigDumbDinosaur wrote:
There are any number of programs that won't work right if atime is not maintained. The one that immediately comes to mind is the make utility. Also, SCCS is strongly dependent on accurate atime data.
Are you sure about that? When atime alone is updated it is equivalent to logging the last time a file was read, which is not interesting to Make, at least. Make only wants to know when a file was last written to, which is mtime. A look at GNU Make confirms that it uses mtime, and not atime (it uses ctime for some #ifdef Windows operations, for some reason). SCCS I'm not sure about - I only have a copy of an old Tahoe 4.3BSD sccs.c, and I can't find any time field access at all.
The one use for atime I'm aware of (by applications) is for backups: backup systems will sometimes use atime to check whether a particular file has already been read. But I have a hard time actually finding sources that use atime.
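Tor's point is easy to demonstrate with a short experiment (a Python sketch; the filename is hypothetical): reading a file may bump atime, but it never touches mtime, which is the only timestamp make compares.

```python
import os
import time

# Create a file, then merely read it back: a read can update atime
# (if the filesystem maintains it), but it leaves mtime alone,
# which is why make compares mtime, not atime.
with open("demo.txt", "w") as f:
    f.write("hello\n")
before = os.stat("demo.txt")

time.sleep(0.05)
with open("demo.txt") as f:
    f.read()                     # a read, not a write
after = os.stat("demo.txt")

print(after.st_mtime == before.st_mtime)   # True: mtime unchanged by the read
os.remove("demo.txt")
```

(Note that atime itself may not change either on modern systems mounted with noatime or relatime, which is exactly why so little software depends on it.)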

-Tor


PostPosted: Sun Mar 15, 2015 7:05 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
Tor wrote:
BigDumbDinosaur wrote:
There are any number of programs that won't work right if atime is not maintained. The one that immediately comes to mind is the make utility. Also, SCCS is strongly dependent on accurate atime data.
Are you sure about that? When atime alone is updated it is equivalent to logging the last time a file was read, which is not interesting to Make, at least. Make only wants to know when a file was last written to, which is mtime. A look at GNU Make confirms that it uses mtime, and not atime (it uses ctime for some #ifdef Windows operations, for some reason). SCCS I'm not sure about - I only have a copy of an old Tahoe 4.3BSD sccs.c, and I can't find any time field access at all.
The one use for atime I'm aware of (by applications) is for backups: backup systems will sometimes use atime to check whether a particular file has already been read. But I have a hard time actually finding sources that use atime.

Oops! Yer right! :oops: I must've been thinking about something else when I wrote that. The version of SCCS I used for many years looks at mtime, as does make, as you note. Too many time things running through my head, I guess.

Also, as you note, atime is minimally used. The backup utility I use on both of our servers (Microlite Backup Edge) looks at ctime and mtime to determine if a file should be incrementally backed up to tape.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:05 am, edited 1 time in total.

PostPosted: Sun Mar 15, 2015 8:06 pm 
Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1949
Location: Sacramento, CA, USA
BigEd wrote:
Just a point on time resolution in the filesystem: I once had reason to dig into the source of make, and was obliged to make a patch for our installation. As I recall, make assumes that local filesystems have a finer timestamp resolution, or might do, whereas network filesystems don't, or might not. I admit, my memory is fading! But as it's true that a filesystem can perform very many operations within a second, it is inconvenient that NFS (for example) only relays timestamps to one-second resolution. It might be worth considering holding a count of milliseconds or microseconds in another integer, as the fractional part of the timestamp, as 'make' is very useful!

Cheers
Ed

I was certainly thinking the same thing; one second time-stamp resolution could be inadequate in some cases. It could be argued that any reasonable resolution could become inadequate in extreme situations, but it couldn't hurt to borrow a few of those "thousands-of-years" bits from the 64-bit stamp and move them to the other side of the scale, like "thousandths-of-seconds", where they could actually convey important information for the here-and-now.
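Mike's borrowing idea can be sketched in a few lines (Python for illustration; the 10-bit split and field widths are hypothetical, not any real filesystem's layout): shift the seconds count up and keep a fractional field in the freed low-order bits.

```python
# Hypothetical 64-bit stamp: high 54 bits of whole seconds, low 10 bits
# holding a millisecond count (0-999).  54 bits of seconds still spans
# roughly 570 million years, so the "thousands-of-years" range is
# barely dented.
FRAC_BITS = 10

def pack(seconds, millis):
    """Combine whole seconds and a 0-999 millisecond count into one stamp."""
    return (seconds << FRAC_BITS) | millis

def unpack(stamp):
    """Split a stamp back into (seconds, milliseconds)."""
    return stamp >> FRAC_BITS, stamp & ((1 << FRAC_BITS) - 1)

stamp = pack(1_426_430_220, 417)
print(unpack(stamp))    # (1426430220, 417)
```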

Mike B.


PostPosted: Mon Mar 16, 2015 2:53 am 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
barrym95838 wrote:
BigEd wrote:
Just a point on time resolution in the filesystem: I once had reason to dig into the source of make, and was obliged to make a patch for our installation. As I recall, make assumes that local filesystems have a finer timestamp resolution, or might do, whereas network filesystems don't, or might not. I admit, my memory is fading! But as it's true that a filesystem can perform very many operations within a second, it is inconvenient that NFS (for example) only relays timestamps to one-second resolution. It might be worth considering holding a count of milliseconds or microseconds in another integer, as the fractional part of the timestamp, as 'make' is very useful!

I was certainly thinking the same thing; one second time-stamp resolution could be inadequate in some cases. It could be argued that any reasonable resolution could become inadequate in extreme situations, but it couldn't hurt to borrow a few of those "thousands-of-years" bits from the 64-bit stamp and move them to the other side of the scale, like "thousandths-of-seconds", where they could actually convey important information for the here-and-now.

Mike B.

In my filesystem design I settled on a 48 bit time_t field because I want to limit the size of an inode to 128 bytes for efficiency reasons (exactly eight inodes can fit into a single logical disk block). Of the 48 bits, 40 are actually required to hold the largest expected time value, equivalent to 23:59:59 UTC on December 31, 9999, with the most significant byte set to zero. However, I use 48 bits instead of 40 for this field, again, for efficiency, as the field can be processed as a set of three 16 bit words, not two 16 bit words and a byte. The resulting 65C816 code required to increment the time on a second-by-second basis is very succinct:

Code:
;process system time
;
         rep #%00100000        ;16 bit accumulator & memory
         sep #%00010000        ;8 bit index
         ldx jiffyct           ;jiffy IRQ counter
         inx                   ;bump count
         cpx #hz               ;time to update?
         bcc .0000010          ;no
;
         ldx #0                ;will reset jiffy counter
;
;
;   update system clock...
;
         inc sysclk            ;least significant word
         bne .0000010
;
         inc sysclk+s_word     ;middle word
         bne .0000010
;
         inc sysclk+s_dword    ;most significant word
;
;
;   update jiffy counter...
;
.0000010 stx jiffyct

   ...program continues...

In the above, hz is 100 (the jiffy IRQ rate), s_word is a constant of 2 and s_dword is a constant of 4.

If the time field were shifted left by a byte and the least significant byte used for sub-second purposes, the finest possible resolution would be approximately 3.91 milliseconds (1/256 second). However, POC's jiffy IRQ is 100 Hz, a limitation of the timer in the DS1511Y real-time clock that is in charge of generating jiffy IRQs, so no time-stamp could ever be resolved to anything finer than 10 ms. I could use the counter/timer in the 26C92 DUART as my jiffy IRQ source, with the interrupt rate increased to 250 Hz. That would resolve to 4 ms, the smallest whole-millisecond interval whose per-second count (250) still fits in eight bits. Of course, a more frequent interrupt rate would have an effect on overall performance. Also, if I repurpose the LSB of the time field as a millisecond count, I can no longer increment the remaining 40 bits with the simple code above, as the seconds count would no longer fall on even word boundaries; the most significant part becomes a lone byte, requiring that I fiddle with the m bit in the status register or use .Y to read and update it.
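The resolution figures above follow directly from the arithmetic (a quick check, independent of any particular hardware):

```python
# One spare byte gives 256 sub-second steps per second:
print(1000 / 256)    # 3.90625 ms per step, the finest an 8-bit field allows
# A 250 Hz jiffy source ticks every:
print(1000 / 250)    # 4.0 ms
# while a 100 Hz jiffy source (the DS1511Y limit) resolves no finer than:
print(1000 / 100)    # 10.0 ms
```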

In any case, I'm trying to model the UNIX equivalent, which sees time_t as an integer seconds count. So it would be best to keep the filesystem design that way and make it possible for a user application to request a more fine-grained time value from the operating system if needed. That's how it's done to this day in UNIX-like operating systems.
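That split, an integer seconds count as the canonical value with finer resolution available on request, is visible in any modern UNIX API. A sketch using Python's wrappers around the usual POSIX calls (Unix-only; clock_gettime() is the underlying interface):

```python
import time

# The canonical timestamp: an integer seconds count, time_t-style.
secs = int(time.time())

# The finer-grained value, requested only when the application needs it
# (the POSIX clock_gettime() call, here via Python's wrapper).
fine = time.clock_gettime(time.CLOCK_REALTIME)

print(fine >= secs)    # True: same clock, just more precision on demand
```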



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:07 am, edited 1 time in total.

PostPosted: Mon Mar 16, 2015 3:48 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
BigDumbDinosaur wrote:
In any case, I'm trying to model the UNIX equivalent, which sees time_t as an integer seconds count. So it would be best to keep the filesystem design that way and make it possible for a user application to request a more fine-grained time value from the operating system if needed. That's how it's done to this day in UNIX-like operating systems.

I should clarify that the time_t model isn't just suitable for filesystem use. It's a succinct way to encode a point in time in almost any application.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:09 am, edited 1 time in total.

PostPosted: Tue Mar 17, 2015 3:43 am 
Joined: Sun Dec 29, 2002 8:56 pm
Posts: 460
Location: Canada
I've been following along with this topic and have found it very informative.

I use a millisecond interrupt in my system, but now I see it'd be better to use 1024 Hz. That way the time in seconds could be determined with a simple right shift of ten bits. This would give me 54 usable bits for system time - it could be truncated to 48 bits. I use the millisecond interrupt for timing some I/O devices. My system also uses a 100Hz interrupt for task switching.
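The 1024 Hz arithmetic works out as described (a sketch; the tick count is hypothetical): whole seconds fall out of the counter with a ten-bit right shift, at the cost of each tick being ~0.977 ms rather than exactly 1 ms.

```python
ticks = 5_000_000           # hypothetical count of 1024 Hz ticks
seconds = ticks >> 10       # same as ticks // 1024: a shift, no division
print(seconds)              # 4882
print(1000 / 1024)          # ~0.977 ms per tick, the price of the shortcut
```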

Being hardware oriented, it seems to me that updating the system time is a fairly common task. It could very easily be done with hardware as part of a timekeeping device, saving some cycles in an interrupt routine.

_________________
http://www.finitron.ca


PostPosted: Tue Mar 17, 2015 5:15 am 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
Rob Finch wrote:
Being hardware oriented, it seems to me that updating the system time is a fairly common task. It could very easily be done with hardware as part of a timekeeping device, saving some cycles in an interrupt routine.

Ideally, that would be the way to go. However, I'm not aware of a timekeeping device that can count time_t style out to 40 or more bits. The Linux kernel uses a 250 Hz jiffy IRQ to drive the system's notion of the date and time, along with other counters (e.g., the system uptime count). Traditional UNIX has used a 100 Hz jiffy IRQ for many years. All of that counting and timing is done in software.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:10 am, edited 1 time in total.

PostPosted: Tue Mar 17, 2015 9:33 am 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
TBH I prefer the POSIX approach of two binary fields: one for seconds, and one for a power-of-ten fraction of a second. Using 1024 Hz or another power-of-two fraction (or indeed 60 Hz or something related) feels like it's heading for conversion difficulties - userspace is likely to want a decimal view of time, but of course binary numbers feel better in the kernel. Two binary fields seem good to me: not too difficult to increment the time or to subtract times, not too difficult to convert for userspace.
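A sketch of the two-field arithmetic (Python for illustration, with nanosecond units as in the POSIX struct timespec): subtraction needs only a single borrow from the seconds field, which is the "not too difficult" part.

```python
NSEC_PER_SEC = 1_000_000_000

def ts_sub(a, b):
    """Subtract timespec b from timespec a; each is a (sec, nsec) pair."""
    sec = a[0] - b[0]
    nsec = a[1] - b[1]
    if nsec < 0:             # borrow one second's worth of nanoseconds
        sec -= 1
        nsec += NSEC_PER_SEC
    return (sec, nsec)

print(ts_sub((5, 100), (3, 200)))    # (1, 999999900)
```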


PostPosted: Tue Mar 17, 2015 11:10 am 
Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
Over on vintage-computer I suggested using Planck time as the tick, with epoch time being the Big Bang, of course (~13.75 billion years ago). It seems 202 bits would be enough to cover from epoch to about now, and with 256 bits you would be covered until more than three hundred billion billion years into the future, which must (for once) be enough for everybody. The good thing is that you won't need a signed integer; unsigned will do fine according to common theory. And actual resolution can of course be implementation dependent, just as now - e.g. as with the nanosleep() function, which rarely has actual nanosecond resolution. This would be the final computer time method, never to be superseded - at least as far as the resolution is concerned. If there is a change of theory you could always go to a signed integer (suggesting that there was actually something like physical time before the Big Bang), and still have way more than enough bits to cover the expected lifetime of the universe, according to some. If, on the other hand, the lifetime is 10^10^56 years (according to another theory) then a revision may be needed, but at that point they can move to 512 or 1024 bits for sure.

256 bits isn't really that much more than the current 64 bits of the Linux kernel, so this isn't as silly as it may look.
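The bit count checks out (a back-of-envelope Python check, taking Planck time as ~5.39e-44 s):

```python
import math

PLANCK = 5.39e-44                     # seconds per Planck-time tick
AGE = 13.75e9 * 365.25 * 86400        # ~seconds since the Big Bang
ticks = AGE / PLANCK                  # Planck ticks from epoch to "about now"
print(math.ceil(math.log2(ticks)))    # 203: roughly the 202 bits quoted above
```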

-Tor


PostPosted: Tue Mar 17, 2015 3:23 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
Tor wrote:
Over on vintage-computer I suggested using Planck time as the tick.. and epoch time would be the Big Bang, of course (~13.75 billion years ago). It seems 202 bits would be enough to cover from epoch to about now, and with 256 bits you would be covered until more than three hundred billion billion years into the future, which must (for once) be enough for everybody...256 bits isn't really that much more than the current 64 bits of the Linux kernel, so this isn't as silly as it may look.

Uh, I think a 40 bit time_t field will be more than adequate for my modest needs. :lol: As I've worked it out, it's good from October 1, 1752 to the end of 9999, at which time I'm sure none of us will be around. That's better than what UNIX and Linux could achieve with a 32 bit time_t, although not quite the overkill of 64 bit Linux's time_t.

BigEd wrote:
TBH I prefer the POSIX approach of two binary fields: one for seconds, and one for a power-of-ten fractions of seconds.

As do I. If a high precision time period is needed it can be used. Otherwise, it can be ignored. Most of the time, resolution to the nearest second is all that is required, especially in directory listings. So why drag along the baggage of unused time precision?



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:11 am, edited 1 time in total.

PostPosted: Tue Mar 17, 2015 3:44 pm 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
There's another possible tactic, which I'm sure I've read about somewhere. The fractional part of the time spec is incremented at least every time it's read, which gives a kind of transaction count. No matter how fast your filesystem or database accesses occur, each one gets a unique and biggest-yet timestamp. Might take a little care to account for the reported time getting ahead of the actual time, but that seems possible so long as the average rate of reads isn't greater than the incremental precision.


PostPosted: Tue Mar 17, 2015 5:24 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
BigEd wrote:
There's another possible tactic, which I'm sure I've read about somewhere. The fractional part of the time spec is incremented at least every time it's read, which gives a kind of transaction count. No matter how fast your filesystem or database accesses occur, each one gets a unique and biggest-yet timestamp. Might take a little care to account for the reported time getting ahead of the actual time, but that seems possible so long as the average rate of reads isn't greater than the incremental precision.

You'd have to maintain some state information somewhere so two back-to-back accesses that occur microseconds apart don't get the same time-stamp, since at least on 65xx hardware, resolution to a single microsecond is probably not realistic.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:12 am, edited 2 times in total.

PostPosted: Tue Mar 17, 2015 5:53 pm 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
You need to know the true time, and the last issued timestamp. If the time has advanced since last request when it's requested, you issue it and update the last issued. If it hasn't, because you've been asked the time twice in succession, you increment the last issued timestamp and issue that. So, in a sense, you keep two copies of the time, which both move forward.
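A sketch of that scheme (Python; the names are illustrative): the issuer keeps the last timestamp handed out alongside the true clock reading, and guarantees every caller a fresh, strictly increasing value.

```python
class StampIssuer:
    def __init__(self):
        self.last = 0            # last timestamp issued

    def issue(self, now):
        """Return a unique timestamp, given the current clock reading 'now'."""
        if now > self.last:
            self.last = now      # clock has advanced: issue the true time
        else:
            self.last += 1       # asked twice within one tick: bump instead
        return self.last

clk = StampIssuer()
print(clk.issue(100))   # 100
print(clk.issue(100))   # 101  (same tick, still unique and increasing)
print(clk.issue(105))   # 105  (the clock caught up again)
```

This stays correct as long as requests don't arrive faster, on average, than the clock's own resolution; otherwise the issued stamps drift ahead of real time, which is the caveat noted above.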


PostPosted: Tue Mar 17, 2015 6:09 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8506
Location: Midwestern USA
BigEd wrote:
You need to know the true time, and the last issued timestamp. If the time has advanced since last request when it's requested, you issue it and update the last issued. If it hasn't, because you've been asked the time twice in succession, you increment the last issued timestamp and issue that. So, in a sense, you keep two copies of the time, which both move forward.

I use a method similar to that in Thoroughbred BASIC (a timesharing BASIC designed for UNIX and Linux) for generating temporary filenames by using time values from a language variable called CDN. CDN (Current Date Number) is a number with 6 place precision that is ultimately generated by a call to the kernel's clock_gettime() function. On a really fast system and in a given task, two successive accesses of CDN may return the same value, giving rise to the possibility that an attempt would be made to create two temporary files with the same filename. So my tactic is to compare the most recently retrieved value of CDN with the previous value and if the same, increment the new value by 0.000001 to produce a unique value.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 5:13 am, edited 1 time in total.

PostPosted: Tue Mar 17, 2015 6:21 pm 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10986
Location: England
Sounds good, but you'd need to be sure that such a small increase has actually changed the value... looks like it's a 14-digit capable system so that should be ok.

