teamtempest on Mon 11 Apr 2022 wrote:
Why does the user have to enter anything more than the number of 1K blocks available?
Dynamic allocation of inodes is possible but it adds complexity. It is much easier to specify the maximum number of files when the filing system is created.
The simplest technique for dynamic inode allocation would be a raw block scheme where 28 bits are for block number and three bits are for position within the block. (The remaining bit of a 32 bit inode number is left unused because a set top bit could be read as a negative inode number.) This has several limitations. Most significantly, it doubles RAM usage, and BigDumbDinosaur is trying to implement this on a system with less than 128KB RAM.

At present, BigDumbDinosaur is cunningly using a bitmap scheme where a zero bit indicates an allocated block. This allows a quick linear scan in which any zero byte represents a fully allocated range of eight data blocks, while any non-zero byte represents a partially allocated or completely unused range. A dynamic scheme would require one bit for free/used and another bit for data/meta-data.

The raw block scheme also imposes a filing system limit of 2^28 blocks (2^38 bytes at 1K per block) which is independent of other limits unless larger inode numbers (40 bit, 48 bit or 64 bit) are used. The arrangement is also highly resistant to being compacted or optimized.
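To make the two schemes concrete, here is a minimal C sketch of both ideas as described above. The names are illustrative only, not taken from BigDumbDinosaur's actual code, and the bitmap is assumed to follow the convention just stated (zero bit = allocated):

```c
#include <stdint.h>

/* Raw block inode numbers: 28 bits of block number, 3 bits of slot
   within the block. With block < 2^28 the top bit of the 32 bit
   result stays clear, so the value can never look negative. */
static uint32_t inode_pack(uint32_t block, unsigned slot)
{
    return (block << 3) | (slot & 7u);
}

static uint32_t inode_block(uint32_t ino) { return ino >> 3; }
static unsigned inode_slot(uint32_t ino)  { return ino & 7u; }

/* Bitmap scan: a zero bit marks an allocated block, so a whole zero
   byte is a fully allocated run of eight blocks and is skipped with
   a single compare. Returns the first free block number, or -1 if
   the filing system is full. */
static long find_free_block(const uint8_t *bitmap, long nbytes)
{
    for (long i = 0; i < nbytes; i++) {
        if (bitmap[i] != 0) {               /* at least one free bit here */
            for (unsigned bit = 0; bit < 8; bit++)
                if (bitmap[i] & (1u << bit))
                    return i * 8 + (long)bit;
        }
    }
    return -1;
}
```

The zero-byte skip is what makes the linear scan cheap on a sub-128KB machine: the common case of a densely allocated disc costs one byte compare per eight blocks.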
It is possible to place inodes in a hidden file. However, recovering allocation after a file is deleted requires the implementation of a hash within the hidden file, or the implementation of sparse files, or maybe both. Again, this is intended for implementation on a system with less than 128KB RAM.
A fixed allocation of inodes is a really good place to stop and make a tractable implementation.
BigDumbDinosaur on Tue 12 Apr 2022 wrote:
a minimum filesystem would be minimally useful, as it could, at most, store 30 files, assuming no subdirectories were created.
Acorn DFS has no sub-directories and a 30 file limit. The part that really makes it suck is that every file must be one contiguous range of blocks. This prevents, for example, two files being opened for output. It is otherwise sufferable. Actually, 30 files may be more than sufficient for data logging.