Dump will fail with a file larger than 2GB. Using the 2.2.16-xxenterprise kernel, create a file larger than 2GB. Dump will not back it up properly: it assumes the old ext2 semantics of a signed 32-bit integer for the file size. When restored, a 3GB file became 49KB.

To reproduce: on an otherwise empty fs, create a file that's > 2GB. I did a dd if=/dev/random of=largefile bs=1M count=3072 -- not the most efficient way, but it works. Then dump the filesystem (dump 0f target-dev /dev/hdxx); it finishes in a remarkably short time. Restore brings back a much smaller file.

I'd call this a must-fix, personally. Bad backups aren't likely to be looked on happily.
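To make the failure mode concrete, here is a minimal C sketch (illustration only, not dump's code): pushing a 2-4 GB size through a signed 32-bit field wraps it, which is consistent with dump writing out far too little data.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 3 GB, the size created by the dd command above. */
    uint64_t real_size = 3ULL * 1024 * 1024 * 1024;

    /* Forced through a signed 32-bit field (the old ext2
     * semantics), the size wraps negative. */
    int32_t as_i32 = (int32_t)real_size;

    printf("real size: %llu bytes\n", (unsigned long long)real_size);
    printf("as int32:  %ld bytes\n", (long)as_i32);
    return 0;
}

On a two's-complement machine this prints 3221225472 for the real size and -1073741824 for the 32-bit view.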
Just as a side note, it might be a good idea to document that other backup software like BRU needs to be updated as well. Trying to back up the same file with bru 15.1, I get this (largefile is 3GB):

/nfs/brute/d1/vol/t/foo
bru: [W013] "/nfs/brute/d1/vol/t/largefile": can't stat: errno = 75, Value too large for defined data type
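For reference, errno 75 on Linux is EOVERFLOW. A minimal sketch of what bru is likely hitting, assuming a 32-bit build without large-file support compiled in:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat sb;

    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    /* On a 32-bit build without -D_FILE_OFFSET_BITS=64 (or an
     * explicit stat64() call), stat() on a >2GB file fails with
     * EOVERFLOW rather than returning a truncated size. */
    if (stat(argv[1], &sb) < 0) {
        printf("\"%s\": can't stat: errno = %d, %s\n",
               argv[1], errno, strerror(errno));
        return 1;
    }
    printf("size = %ld\n", (long)sb.st_size);
    return 0;
}

Rebuilding with the glibc LFS macros (-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64) is the usual fix for tools that trip over this.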
Additional info posted to testers-list by John on 17-Aug-2000: The filesystem was created by anaconda; I didn't recreate it with the enterprise kernel. Here's the output of tune2fs:

Filesystem volume name:   /nfs/brute/d1
Last mounted on:          <not available>
Filesystem UUID:          06ceb046-568a-11d4-82dc-f070a7c67f5c
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      filetype sparse_super large_file
Filesystem state:         not clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              2660160
Block count:              5313490
Reserved block count:     265674
Free blocks:              4442793
Free inodes:              2660145
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16320
Inode blocks per group:   510
Last mount time:          Thu Aug 17 11:28:59 2000
Last write time:          Thu Aug 17 14:05:34 2000
Mount count:              2
Maximum mount count:      20
Last checked:             Wed Aug 16 15:35:28 2000
Check interval:           15552000 (6 months)
Next check after:         Mon Feb 12 14:35:28 2001
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Ok, we have two bugs here:

1. There is a small bug for files between 2 and 4 GB; a small patch to dump/traverse.c is below. Please test it (I've done only some minimal testing...).

2. For files bigger than 4 GB, I need to change the dump structure on tape to support 64-bit sizes. The change itself is rather simple, but I'm not sure whether some parts of dump or restore will break. So I'll delay this change until dump-0.4b19, taking the time to test it in a real-world situation (it's difficult to test dumps and restores of files bigger than 4 GB on my 10 GB disk :).

Stelian.

Here is the patch for the files between 2 and 4 GB:

--- traverse.c.old	Sun May 28 20:16:42 2000
+++ traverse.c	Fri Aug 18 23:14:27 2000
@@ -84,10 +84,14 @@
 #define HASDUMPEDFILE 0x1
 #define HASSUBDIRS 0x2
 
+#ifdef __linux__
+typedef u_quad_t fsizeT;
+#else
 #ifdef FS_44INODEFMT
 typedef quad_t fsizeT;
 #else
 typedef long fsizeT;
+#endif
 #endif
 
 #ifdef __linux__
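To see what the typedef change buys, here is a hedged sketch (not dump's actual logic; RECORDS_FOR is a made-up helper, and TP_BSIZE is assumed to match dump's 1 KB tape record size). On a 32-bit box the old signed long wraps for a 3 GB file, while u_quad_t holds the full value:

#include <stdio.h>
#include <sys/types.h>

#define TP_BSIZE 1024           /* assumed: dump's tape record size */

typedef long     old_fsizeT;    /* pre-patch: signed, 32 bits on ia32 */
typedef u_quad_t new_fsizeT;    /* post-patch: unsigned 64 bits */

/* Hypothetical helper: tape records needed for a file of this size. */
#define RECORDS_FOR(T, size) (((T)(size) + TP_BSIZE - 1) / TP_BSIZE)

int main(void)
{
    unsigned long long size = 3ULL * 1024 * 1024 * 1024;  /* 3 GB */

    /* On ia32 the old type wraps negative, so the record count is
     * garbage; the new type gives the expected 3145728 records. */
    printf("old fsizeT: %ld records\n",
           (long)RECORDS_FOR(old_fsizeT, size));
    printf("new fsizeT: %llu records\n",
           (unsigned long long)RECORDS_FOR(new_fsizeT, size));
    return 0;
}

As Stelian notes, this only helps sizes that still fit in 32 unsigned bits; files over 4 GB need the on-tape format change deferred to 0.4b19.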
Fixed for 7.0 final.