6502.org Forum  Projects  Code  Documents  Tools  Forum

All times are UTC




Post new topic Reply to topic  [ 89 posts ]  Go to page Previous  1, 2, 3, 4, 5, 6  Next
PostPosted: Thu Nov 08, 2012 6:34 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
Like other POC activities, this has languished as other things consume my time. As some of the basic work has already been done and tested, getting back to it shouldn't be too difficult once I restart.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


Last edited by BigDumbDinosaur on Tue Mar 15, 2022 1:49 am, edited 1 time in total.

PostPosted: Sun Mar 31, 2013 5:59 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
I've been very busy on other (revenue producing) things, but this filesystem thingie is still stewing somewhere in my cerebrum. I've actually had a little time to work out some algorithms for low-level functions that would be needed in a practical filesystem implementation. One of the more difficult functions is that of translating a filename into an inode number so the file can be accessed. Such a task is complicated by the fact that in a hierarchical filesystem the filename may be just a simple filename, a relative pathname or an absolute pathname:

Code:
  simple filename → inventory.idx
relative pathname → data/inventory.idx
absolute pathname → /bcs01/data/inventory.idx

UNIX and Linux documentation that describes the kernel APIs that access the filesystem (e.g., the open() system call) refers to a filename as a pathname, even though it may not include a directory reference. I tend to use the word filename when talking about a file and pathname when talking about the sequence of directories that must be traversed to get to the filename.

In my hybrid S51K filesystem that I am developing, a filename is a character string that is cataloged into a directory (itself a file that is managed by the kernel), along with an inode number. A directory could have hundreds or thousands of filename entries, each with a corresponding non-zero inode number—an inode number of zero indicates a deleted file. The inode number is used to fetch the file's inode from the disk so the file can be accessed. In reality, inodes are the files, as they contain all of the administrative data needed to locate the file's data blocks on the disk, as well as other information indicating who owns the file, read-write permissions, time of last access, etc.
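To make that concrete, here is a purely hypothetical inode layout sketched in Python's struct notation; the field names and sizes are my guesses for illustration, not the author's actual S51K design:

```python
import struct

# Illustrative only: a compact fixed-size inode record holding the
# administrative data described above plus 21 direct block pointers.
INODE_FMT = "<HHHIIH21H"   # mode, uid, gid, size, atime, links, 21 block ptrs
# mode:  file type + permission bits   (16 bits)
# uid:   owning user                   (16 bits)
# gid:   owning group                  (16 bits)
# size:  file length in bytes          (32 bits)
# atime: time of last access           (32 bits)
# links: directory references          (16 bits)
# then 21 direct data block pointers   (16 bits each)

raw = struct.pack(INODE_FMT, 0o100644, 100, 10, 4096, 0, 1, *range(21))
```

Packing the record this way makes the on-disk size explicit: the format above works out to 58 bytes per inode.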

In addition to the filename entries that have been inserted by creating new files, a directory will have two special entries: . (a period, pronounced "dot") and .. (two consecutive periods, pronounced "dot-dot"). The . entry refers to the directory itself and the .. entry refers to the directory's parent. For example, if . refers to /bcs01/data then .. will refer to /bcs01. Within /bcs01, .. will refer to /, which is the root directory. As with other active filenames in the directory, associated with . and .. are non-zero inode numbers. The presence of these special filenames allows the kernel to find its location in the filesystem without actually knowing directory names.

Before continuing, and for the benefit of anyone reading this who is not familiar with the UNIX operating system (or Linux—the two are 95 percent functionally identical), I should mention something called the u-area (as in user area). The u-area is a contiguous section of RAM in the executing process' workspace that the kernel manages. The u-area will contain, among other things, a table listing the files that the process has opened, as well as two inode numbers: one corresponding to the process' notion of where the root of the filesystem begins (it doesn't have to be the actual root, as it can be changed with the chroot command), and the other being the inode number of the current (working) directory. The shell command pwd (print working directory) takes the working directory inode number and converts it into a human-readable absolute path starting at the root directory, producing, for example, /bcs01/data. Storing the root and working directory information as inode numbers keeps RAM consumption to a minimum—only four bytes are needed for this information, plus two additional bytes to indicate to which filesystem each inode belongs. The alternative, storing this information as actual directory names, would consume a lot of static space, as the directory fields would have to be large enough to store the longest possible pathname (typically 512 bytes).
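A sketch of the u-area fields just described (a Python stand-in; the field names are mine, and a real kernel would keep this as a fixed binary structure):

```python
from dataclasses import dataclass, field

@dataclass
class UArea:
    root_inode: int   # process' notion of the filesystem root (2 bytes)
    root_fs: int      # filesystem that inode belongs to       (1 byte)
    cwd_inode: int    # current (working) directory inode      (2 bytes)
    cwd_fs: int       # filesystem that inode belongs to       (1 byte)
    # Table of files the process has opened.
    open_files: list = field(default_factory=list)

# Six bytes pin down both directories, versus up to 512 bytes each
# if full pathnames were stored instead.
u = UArea(root_inode=2, root_fs=0, cwd_inode=0x3F10, cwd_fs=0)
```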

Getting back to the problem of translating a filename to an inode number, the above filename examples indicate that there are three possible scenarios to consider. The first one is the simplest: it refers to a file in the current working directory. If the working directory is /bcs01/data then it is inferred that reference to the simple filename inventory.idx really means /bcs01/data/inventory.idx. Let's suppose I want to open that file. I have a macro that expands to the open file assembly language mumbo-jumbo required by the kernel—the details of what the macro expands into are of no concern at this time:

Code:
         open "inventory.idx",o_read

When the kernel processes an open request it calls upon the function namei to perform the task of converting any pathname into a filesystem number and corresponding inode. If namei succeeds, the matching inode will be loaded into workspace and locked pending further processing. A flowchart that summarizes what namei does to search for a filename is presented later on.

The core of namei's search is a parsing and comparison process, in which the desired filename character string is compared to each entry in the directory. As the above example is a simple filename, no virgules (/) are present and namei knows that the current working directory is to be searched, and that the filename delimiter will be a null ($00). As read into workspace, the desired directory entry would appear thusly in a machine language monitor memory dump:

Code:
16 3F 69 6E 76 65 6E 74 6F 72 79 2E 69 64 78 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00:inventory.idx..................

In my hybrid S51K filesystem, the maximum filename length is fixed at 30 bytes and an inode number is defined as an unsigned 16-bit integer. Therefore, each "slot" in a directory is exactly 32 bytes in length, and as the filesystem uses 1KB logical disk blocks, exactly 32 directory slots fit into a single block. Hence the above workspace is 33 bytes in length, so that even a full-length filename can be terminated with a null for easier searching.

In the above example, 16 3F would be the inode number in little-endian format ($3F16), and 69 6E 76 65 6E 74 6F 72 79 2E 69 64 78 would be the filename (inventory.idx)—the kernel pads short filenames with nulls to achieve consistent lengths, 17 nulls in this example.
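The slot layout can be sketched with a pair of helper functions; these are illustrative, assuming only the 2-byte little-endian inode number plus 30-byte null-padded name described above:

```python
import struct

SLOT_SIZE = 32   # 2-byte inode number + 30-byte filename field
NAME_MAX  = 30

def pack_slot(inode, name):
    """Pack a directory slot: little-endian 16-bit inode number,
    then the filename null-padded out to 30 bytes."""
    assert 0 < len(name) <= NAME_MAX
    return struct.pack("<H", inode) + name.ljust(NAME_MAX, b"\x00")

def unpack_slot(slot):
    """Reverse of pack_slot; strips the null padding."""
    (inode,) = struct.unpack("<H", slot[:2])
    return inode, slot[2:].rstrip(b"\x00")

slot = pack_slot(0x3F16, b"inventory.idx")
# slot begins with bytes 16 3F, matching the memory dump above
```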

Since the kernel knows that it can confine its search to the working directory, the search conceptually devolves to a two step affair:

  1. A byte-for-byte comparison is made between the desired filename and the filename field of each directory entry that has a non-zero inode number. A match is deemed to have occurred if all non-null characters in the entry's filename field match the desired filename.

  2. If a match didn't occur and more filenames remain in the directory, repeat step 1 for the next filename. Otherwise, return a "file not found" error.

The search ends when the filename has been located or the full extent of the directory has been read. As noted, the search process skips directory entries whose inode number is zero. Searching is not as disk-intensive as it may seem, as the kernel caches disk blocks into buffers and then works on the buffered block image.
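The two-step scan above, sketched over one buffered 1KB block (the helper name and code are mine, not the author's kernel):

```python
SLOT = 32  # bytes per directory entry

def search_dir(block, wanted):
    """Linearly search one buffered directory block for a filename.
    Entries with an inode number of zero are deleted files and are
    skipped. Returns the inode number on a match, else None."""
    for off in range(0, len(block) - SLOT + 1, SLOT):
        inode = block[off] | (block[off + 1] << 8)  # little-endian
        if inode == 0:
            continue                                # deleted entry
        name = block[off + 2:off + SLOT].rstrip(b"\x00")
        if name == wanted:
            return inode
    return None
```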

More complicated is the use of a partial or full (absolute) pathname, e.g., /bcs01/data/inventory.idx. If the first character in a pathname is / then the kernel starts at the root of the filesystem hierarchy, as defined by the root directory inode number in the process' u-area, which may or may not be the system's actual root directory. Otherwise, the search commences at the current location in the filesystem hierarchy, as defined by the current directory inode number in the process' u-area. The process becomes an iterative one, as depicted by the following flow chart, which shows a search commencing at the root directory:

Attachment:
File comment: namei algorithm
flowchart_filename_search.gif

The above algorithm, with a minor change, will work for all three search cases. If the search involves a partial pathname or a simple filename, the current working directory is initially set as the next pathname component in the first step of the algorithm. The rest of the search proceeds as shown.

During namei's search, intermediate pathname components (e.g., bcs01/ and data/) are expected to be directories (the trailing / indicates that this is the case), so not only must the name character strings match, the matched entry's file type must be a directory, which is determined by loading its inode and examining the file type bit field. If the entry is not a directory then the search immediately terminates and a "file not found" error is returned to the caller. Otherwise, that directory becomes the search location for the next pathname component and the search reiterates.

It is clear from studying the above that namei has to scan each pathname component to see if it is terminated by /, as that is how namei "knows" that the path to the desired filename traverses yet another directory.* Furthermore, if namei determines that it has reached the end of the pathname string (determined when the terminating null is encountered) it must ignore the file type of that last pathname component, as that component could be a file or yet another directory (it will be up to the higher-level function that instigated the search to decide if the file type is appropriate). Not shown are the steps that must be executed to actually read the directory contents from the disk—various kernel primitives take care of that—or the functions that parse the pathname into individual components, e.g., bcs01/, data/ and inventory.idx.
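As a toy model of the flowchart's loop, with the disk reads replaced by an in-memory table so the control flow stands out (all names here are illustrative):

```python
def namei(path, dirs, cwd, root):
    """Iteratively resolve a pathname to an inode number.
    `dirs` maps a directory inode to {name: (inode, is_dir)},
    standing in for the kernel's buffered directory reads."""
    inode = root if path.startswith("/") else cwd
    parts = [p for p in path.split("/") if p]
    for i, comp in enumerate(parts):
        entries = dirs.get(inode)
        if entries is None or comp not in entries:
            return None                      # "file not found"
        inode, is_dir = entries[comp]
        # Intermediate components must be directories; the file type
        # of the final component is left for the caller to judge.
        if i < len(parts) - 1 and not is_dir:
            return None
    return inode

tree = {1: {"bcs01": (2, True)},
        2: {"data": (3, True)},
        3: {"inventory.idx": (4, False)}}
```

Note how the same loop serves all three cases: the leading / selects the root inode, and otherwise the walk simply begins at the working directory.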

Aside from the mechanics of searching are those of security. My namei algorithm doesn't show the procedures where the kernel determines if the user has access rights to any component in the pathname. So part of the loop would be checks to examine the ownership and permissions of each pathname component and compare them to the user's. These checks are actually fairly simple, as every user and user group is identified by a unique word-sized (16 bit) number, and permissions are defined in a word-sized bit field. If the user doesn't have permission then the search will abort and return a "permission denied" error.
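A minimal sketch of such a check, assuming the traditional owner/group/other rwx layout in the low nine bits of the mode word; the actual S51K bit assignments may differ:

```python
R, W, X = 4, 2, 1  # read, write, execute/search permission bits

def may_access(mode, f_uid, f_gid, uid, gid, want):
    """Classic UNIX-style access check: select the owner, group, or
    other bits from the 16-bit mode word, then require that every
    wanted permission bit is present."""
    if uid == f_uid:
        bits = (mode >> 6) & 7   # owner bits
    elif gid == f_gid:
        bits = (mode >> 3) & 7   # group bits
    else:
        bits = mode & 7          # other bits
    return (bits & want) == want
```

In namei's loop, a check like this would run on every directory traversed, with X (search permission) as the wanted bit; failure would abort with "permission denied".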

*Obviously, / can never be allowed in a file or directory name. If I attempt to create a file with the name data/customers.idx, the kernel will create the file customers.idx in the data/ subdirectory, assuming data/ exists—a "can't create" error would occur if data/ doesn't already exist.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 2:10 am, edited 1 time in total.

PostPosted: Sun Mar 31, 2013 6:08 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
Off topic note: there's an interesting story from Rob Pike about how the convention of hidden dotfiles was an accidental side-effect of suppressing . and .. from a directory listing.


PostPosted: Sun Mar 31, 2013 6:57 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
BigEd wrote:
Off topic note: there's an interesting story from Rob Pike about how the convention of hidden dotfiles was an accidental side-effect of suppressing . and .. from a directory listing. (Edit: Link no longer valid.)

Well, I wouldn't say it's off-topic, as it is a key aspect of UNIX filesystem design. Current versions of ls (the directory lister) in Linux do display the . and .. entries, as well as other dot-files, e.g., .profile. "Traditional" UNIX versions of ls have always suppressed dot-files for everyone but root, behavior that can be overridden with the -a option to ls.
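For reference, the GNU ls flags in question:

```shell
ls          # dot-files hidden by default (traditional behavior)
ls -a       # show everything, including the . and .. entries
ls -A       # show dot-files but omit . and ..
```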

I found Pike's comment about dot-file clutter to be amusing:

    I don't have all that much stuff installed on the machine I'm using to type this, but my home directory has about a hundred dot files and I don't even know what most of them are or whether they're still needed. Every file name evaluation that goes through my home directory is slowed down by this accumulated sludge.

His point about directory search times is well-taken, since directory searches are linear in nature (as an aside, a B-tree directory organization could greatly reduce search times, at the expense of file creation times). Present-day Linux distros tend to install some of that "sludge" when a new user's home directory is created. The file /etc/default/useradd defines a "skeleton" directory (typically /etc/skel) where dot-file templates can be found. Judicious editing can eliminate some of this stuff, but some of it is valuable, e.g., the aforementioned .profile. I counted 11 dot-files in /etc/skel on my main Linux box. There is also a bin directory in there, presumably to give the user a place to store his custom-written programs.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 2:15 am, edited 1 time in total.

PostPosted: Sun Mar 31, 2013 7:15 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
Ah yes, watch out for linear search in directories. Sooner or later it will bite as a performance problem.

I'm not sure about your statement on 'ls' - the Linux behaviour I'm familiar with is the one you describe for traditional UNIX, with dot-files hidden by default. Unless perhaps you have some alias or envvar adding the '-a' option?

For bonus points, 'ls' stands for 'list segments' in some deeply historical sense!

Cheers
Ed


PostPosted: Sun Mar 31, 2013 7:50 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
BigEd wrote:
Ah yes, watch out for linear search in directories. Sooner or later it will bite as a performance problem.

It doesn't seem to be much of a problem on modern systems unless the directory gets so large that indirect addressing is required.

Quote:
I'm not sure about your statement on 'ls' - the Linux behaviour I'm familiar with is as you describe for Linux. Unless perhaps you have some alias or envvar adding the '-a' option?

I'm not using any environment variables to change the behavior of ls.

Quote:
For bonus points, 'ls' stands for 'list segments' in some deeply historical sense!

I believe that reference may have come out of the Multics project.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 2:18 am, edited 1 time in total.

PostPosted: Sun Mar 31, 2013 7:56 pm 

Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10977
Location: England
Very odd. This linux 'ls' hides dotfiles (and dot dirs) by default:
Code:
$ ls --version
ls (GNU coreutils) 8.5
(OK, actually GNU 'ls' on glibc on linux kernel, but it's the 'ls' which came with Ubuntu.)


PostPosted: Sun Mar 31, 2013 8:24 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
ls on Linux works like Unix ls, something I always run into when I work on systems other than my own. The first thing I have to do is alias ls to ls -AFC (I think! My fingers remember it for me) because I wish to see everything, always.

-Tor


PostPosted: Sun Mar 31, 2013 8:27 pm 

Joined: Sun Apr 10, 2011 8:29 am
Posts: 597
Location: Norway/Japan
BigDumbDinosaur wrote:
I'm not using any environment variables to change the behavior of ls

There's probably an alias in place. Try 'alias ls' to verify.

-Tor


PostPosted: Mon Apr 01, 2013 1:43 am 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
Tor wrote:
BigDumbDinosaur wrote:
I'm not using any environment variables to change the behavior of ls

There's probably an alias in place. Try 'alias ls' to verify.

No alias in place. This is on SuSE Linux 11 SP1.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 2:19 am, edited 1 time in total.

PostPosted: Mon Apr 01, 2013 6:28 am 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
Modern filesystems, such as ext3/ext4 on Linux, use hashing algorithms for filename lookup in a directory, avoiding linear searches. With those, you can have millions of files in a single directory without a kernel performance impact.


PostPosted: Mon Apr 01, 2013 8:37 am 

Joined: Fri Aug 30, 2002 1:09 am
Posts: 8540
Location: Southern California
Arlet wrote:
Modern filesystems, such as ext3/ext4 on Linux use hashing algorithms for file name lookup in a directory, avoiding linear searches. Using those, you can have millions of files in a single directory without kernel performance impact.

Closely related to this might be the Forth dictionary-hashing topic at viewtopic.php?f=9&t=555 with its explanations.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Mon Apr 01, 2013 3:58 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
Arlet wrote:
Modern filesystems, such as ext3/ext4 on Linux use hashing algorithms for file name lookup in a directory, avoiding linear searches. Using those, you can have millions of files in a single directory without kernel performance impact.

That's certainly true, but in my case, I'm going to stick with the tried-and-true S51K directory model for now. The linear search won't be much of a performance bottleneck as long as the directory doesn't become so large that indirect blocks have to be used. The threshold for that in my filesystem model would be 672 files (1024 byte logical block size × 21 direct blocks per inode ÷ 32 bytes per filename slot). In practice, most well-structured systems don't have that many files in a single directory. On my main Linux box, for example, /etc has the most files at 306 (ext3 filesystem). At that level, a hashing algorithm is probably as expensive as a linear search.
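A quick check of that threshold arithmetic:

```python
BLOCK_SIZE  = 1024  # logical block size, bytes
DIRECT_BLKS = 21    # direct block pointers per inode
SLOT_SIZE   = 32    # bytes per directory slot

slots = BLOCK_SIZE * DIRECT_BLKS // SLOT_SIZE
# 32 slots per block x 21 direct blocks = 672 slots before the
# first indirect block is needed (two of those slots go to . and ..).
```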

I may eventually look at implementing a fast file search capability just for the exercise of doing so. One method would be to structure directories into B-trees, which would produce a constant directory search activity of three disk accesses per filename, assuming none of the required blocks in the search path have been buffered. It would be a "rob Peter to pay Paula" kind of situation, since more work would be required to catalog a file into the tree, especially if a shuffle has to be performed to recover slack space. When concocting some of this stuff, I do have to keep in mind that my current system is powered by a 10 MHz 65C816, not an 8-core AMD Opteron running at 3 GHz. :D



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 2:20 am, edited 1 time in total.

PostPosted: Mon Apr 01, 2013 6:47 pm 

Joined: Mon Apr 16, 2012 8:45 pm
Posts: 60
Just curious: how does this compare to Lanyard Filesystem (LanyFS)? Requirements look somewhat similar.


PostPosted: Mon Apr 01, 2013 7:29 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8485
Location: Midwestern USA
Alienthe wrote:
Just curious: how does this compare to Lanyard Filesystem (LanyFS)?

Dunno. I haven't fully read up on it, so I can't draw a comparison at this time. I should note (in my typically pedantic way) that it might be a bit difficult to take something seriously when the author has difficulty with the details of English grammar and spelling:

    Simplicity. LanyFS avoids any unnecessary complexity whithout loosing track of scalability. It deals with the limited capabilities of embedded hardware as well as with the power of state-of-the-art computing systems. Simplicity is seen as the key to not loosing one party or another.

This is an exact quote from section 1.1 of the LanyFS exposition. :D Now, I'm not trying to bust the author's chops, and it appears that he's not a native English speaker. However, one might (unfairly) opine that if he has trouble with basic English, chances are his filesystem model will have logic and/or implementation errors. His core ideas seem sound, though, so I may take a closer look at the structure some time in the future.

This brings up an interesting point of view that I acquired decades ago when I first started out on computers (c. 1966). It's important to keep an open mind when developing new technology—learning from the past can be a big help in shaping the future. However, it's equally important to not over-dilute one's thinking. Also, new technology isn't necessarily better or even good technology. The S51K analog that I am going to develop in my filesystem is based upon a proven model (the real S51K, which came into existence over 30 years ago), with some features added to take advantage of updated thinking and better hardware. However, at its core, my filesystem model won't be much different than its prototype, which means I will know that almost all errors that arise will most likely be implementation errors, not flaws in the design itself.

Incorporating ideas from LanyFS might seem like a good path to take, since the design is tailored (to some extent) to the limitations of embedded hardware. However, it's not a "battle-tested" model and may even embody algorithms that don't scale well as the filesystem grows in size. Diluting my S51K hybrid with algorithms of some other unrelated filesystem may not accomplish all that much, but will open the door to potentially hard-to-solve bugs.

Quote:
Requirements look somewhat similar.

I agree. His filesystem is targeted to lightweight (metaphorically speaking) hardware, which is mostly what we 6502.org types like to build.

—————————————————————————————

Edit: I read a little further into the LanyFS exposition and discovered that it has no concept of file ownership and access permissions. That alone would disqualify it for my purposes.



Last edited by BigDumbDinosaur on Tue Mar 15, 2022 2:23 am, edited 1 time in total.



