[SGVLUG] Setting the block size when making a file system
David Lawyer
dave at lafn.org
Sun Oct 9 17:20:33 PDT 2005
I just copied everything on my regular hard drive (HD) to a backup HD
using cpbk, and noticed that the data occupies over 1% more space on
the backup drive. I suspect the reason is greater internal
fragmentation on the backup: both disks are 3 GB, but the backup uses
a 2 kB block size while my regular drive uses 1 kB.
This suggests I may have done the right thing by using a small block
size when making the file system (mkfs, mke2fs, or mkfs.ext3).
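For anyone who wants to try this, here's a rough sketch of setting the
block size with mke2fs's -b flag. It formats a scratch image file
rather than a real disk, so it's safe to experiment with; it assumes
e2fsprogs is installed, and the name fs.img is just an example:

```shell
# Make an 8 MB scratch file and format it with a 1 kB block size.
dd if=/dev/zero of=fs.img bs=1M count=8 2>/dev/null
mke2fs -q -F -b 1024 fs.img   # -b sets the block size; -F allows a plain file
tune2fs -l fs.img | grep 'Block size'   # should report 1024
```

The same -b flag works when formatting a real partition, e.g.
mke2fs -b 1024 /dev/hdb1 (double-check the device name first!).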
By default, mke2fs may pick a larger block size, resulting in a lot of
internal fragmentation if one has many small files and few (if any)
huge files. For example, a 1-byte file would occupy a full 8 kB if the
block size were 8 kB. On the other hand, I understand that access to a
large file is slower with a small block size, since the file then
consists of many thousands of blocks, each one pointed to (directly or
via indirect blocks) by the file's inode.
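The waste is easy to compute: a file's allocated size is just its
length rounded up to a whole number of blocks. A quick sketch (the
alloc function is mine, not a standard tool):

```shell
# Print the disk space actually allocated for a file:
# its size rounded up to a multiple of the block size.
alloc() {  # usage: alloc SIZE_BYTES BLOCK_SIZE
  echo $(( (($1 + $2 - 1) / $2) * $2 ))
}

alloc 1 1024    # -> 1024: a 1-byte file takes a whole 1 kB block
alloc 1 8192    # -> 8192: the same file takes 8 kB with 8 kB blocks
alloc 500 1024  # -> 1024: even a 500-byte file wastes 524 bytes
```

On average each file wastes about half a block, so doubling the block
size roughly doubles the waste, which fits the 1% difference I saw.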
There must be a lot written about the inefficiencies of this design.
Linux's original filesystem was borrowed from the Minix OS, and ext2
still uses small inodes that point to every block of a file
individually (through direct and indirect block pointers). No ranges
like 24,854-39,757 are allowed, so a large file needs tens of
thousands of individual block numbers.
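Some arithmetic shows the scale of the problem (the 100 MB file is
just an illustration):

```shell
# With 1 kB blocks, a 100 MB file needs this many block numbers
# stored in its direct and indirect block lists:
echo $(( 100 * 1024 * 1024 / 1024 ))   # -> 102400 pointers

# By contrast, a single range entry like 24,854-39,757 would
# describe this many contiguous blocks at once:
echo $(( 39757 - 24854 + 1 ))          # -> 14904 blocks, one entry
```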
There's the more efficient ReiserFS, which stores files and metadata
in a balanced tree instead of per-file block lists, but the Debian
package for the patch to create it carries a warning that it may not
be suitable for production systems. So perhaps the simple-minded ext2
and ext3 (journalling) filesystems are not so bad after all.
David Lawyer