Occasionally, when rebooting after a clean shutdown, I get the following boot sequence for all my volumes:

[/sbin/fsck.ext2] fsck.ext2 -a /dev/hdb8
/dev/hdb8 has reached maximal mount count, check forced
/dev/hdb8: 31/256000 files (0.0% non-contiguous), ...

Obviously, this is not good (it takes a long time, too). It doesn't seem to be related to anything I did while I was running before shutting down. What is this "maximal mount count"?
This is actually a feature of the Linux operating system. What is happening is that the system has reached a maximal count for the number of times the filesystem has been mounted. Once that count is reached, the system performs a filesystem check to verify that everything is in order. Just call it preventive maintenance. Anyway, the number of mounts before this happens is set in the fstab file. Check out the man page for fstab if you would like to alter the number of mounts before the system check is performed.
I believe the max mount count is set by tune2fs -c, not by /etc/fstab.
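As a sketch of what this looks like in practice, the commands below create a small scratch ext2 image in a file, so tune2fs can be demonstrated without root access or a real partition (the image path /tmp/demo.img is just an example; on a real system you would point tune2fs at the partition, e.g. /dev/hdb8, and e2fsprogs must be installed):

```shell
# Make a 1 MB scratch file and put an ext2 filesystem on it
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=1024 2>/dev/null
mke2fs -F -q /tmp/demo.img

# Show the current mount count and the maximal mount count
tune2fs -l /tmp/demo.img | grep -i 'mount count'

# Raise the maximal mount count to 50; on a real system this would be
#   tune2fs -c 50 /dev/hdb8
tune2fs -c 50 /tmp/demo.img

# Confirm the new setting
tune2fs -l /tmp/demo.img | grep 'Maximum mount count'
```

tune2fs -i can similarly set a time-based check interval, so the forced fsck can be tied to elapsed time rather than (or in addition to) the number of mounts.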
You are correct. That was our error. You do need to use the tune2fs command to alter the number of mounts between fsck runs on the partition. The last two fields of each line in fstab are for the dump utility and for the order in which fsck checks the filesystems at boot.
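For reference, here is what those last two fstab fields look like in context (the device and mount point are illustrative, not from the original question's system):

```
# device      mountpoint  type  options   dump  pass
/dev/hdb8     /home       ext2  defaults  1     2
```

The fifth field (dump) flags the filesystem for backup by the dump utility, and the sixth (pass) sets the fsck ordering at boot: 1 for the root filesystem, 2 for other filesystems, and 0 to skip checking entirely.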