Red Hat Bugzilla – Bug 39303
dump to file doesn't cope with large filesystems
Last modified: 2007-04-18 12:33:06 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux 2.4.2-2 i686; en-US; 0.7) Gecko/20010316
Description of problem:
I suspect this issue only arises now because it is possible to create a
large file (> 2147483647 bytes). The dump program will not allow a file
greater than this to be created, e.g. "dump -0f /mnt/mnt1/home.dump /home".
Where the resulting file would be larger than the value above, dump
complains and exits after dumping that amount. It is possible to use
"dump -0f - /home > /mnt/mnt1/home.dump", but this is hard to restore.
With this second option, trying to restore with "restore -xf
/mnt/mnt1/home.dump" causes an error similar to "File too large", which
appears to come from libc. The command "restore -xf - <
/mnt/mnt1/home.dump" appears to work (I'm not 100% certain here) but
reports an error after restoring files, and I'm not sure it restores all
permissions.
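For reference, the workaround amounts to the following sequence (a sketch
using the paths from this report; the key point is that both dump and
restore talk to stdout/stdin, so neither program opens the large file
itself):

    # Dump to standard output and let the shell create the large file:
    dump -0f - /home > /mnt/mnt1/home.dump

    # Restore by feeding the archive to restore's standard input;
    # naming the file directly ("restore -xf /mnt/mnt1/home.dump")
    # fails with "File too large":
    restore -xf - < /mnt/mnt1/home.dump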
Having looked at the dump distribution, there appears to be a relevant
configure option. Using "./configure --enable-largefile" and compiling,
the resulting binaries don't have the problem I mention above, though I
haven't fully checked that using this option is safe. The documentation
does say that it causes dump to use a 64-bit interface to glibc, and that
a minimum version of glibc is needed; Red Hat 7.1 seems to ship a recent
enough glibc.
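As a rough illustration of what the 64-bit glibc interface means (my
assumption, based on glibc's standard large-file-support feature macros
rather than on a reading of dump's configure script; the file names here
are made up):

    # _FILE_OFFSET_BITS=64 makes off_t 64 bits wide, so open()/write()
    # can address files past 2147483647 bytes instead of failing with
    # EFBIG ("File too large"):
    gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o mydump main.c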
Steps to Reproduce:
1. Run "dump -0f /mnt/mnt1/home.dump /home" on a file system whose dump
will exceed 2147483647 bytes.

I've set the severity to High because of the possible risk of someone
thinking they've done a full dump of a file system, only to find that they
cannot restore it.
Make sure you get dump-0.4b22 when you build with --enable-largefile;
earlier versions had problems with LFS (large file support).
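For anyone rebuilding by hand, the build is roughly as follows (assuming
an unpacked dump-0.4b22 source tree; installation details are left out):

    cd dump-0.4b22
    ./configure --enable-largefile
    make
    make install    # as root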
I have updated the dump package and enabled the --enable-largefile option.