Bad: if you run hexdump -s on a file larger than 2 GB, it fails with "Value too large for defined data type." instead of working correctly.
Good: if you run it without -s, it starts dumping (but I didn't wait for it to pass the 2 GB mark while dumping).

Reproduce it with:
1. Find a file larger than 2 GB (let's call it foo.big)
2. hexdump -s 1 foo.big
3. It says: hexdump: foo.big: Value too large for defined data type.

Here are the last lines of an strace of the above:

  open("foo.big", O_RDONLY|O_LARGEFILE)   = 0
  fstat64(0, {st_mode=S_IFREG|0640, st_size=3659968591, ...}) = 0
  write(2, "hexdump: foo.big: Value to"..., 63hexdump: foo.big: Value too large for defined data type.) = 63
  _exit(1)
Now I've recompiled it (starting from util-linux-2.10s-13.7.src.rpm), adding -D_FILE_OFFSET_BITS=64 to the CFLAGS in make_include as generated by rpm -ba. Now the errors are:

  hexdump: foo.big: File too large
  hexdump: stdin: Bad file descriptor.

The relevant strace is:

  open("foo.big", O_RDONLY)               = -1 EFBIG (File too large)
  open("/usr/share/locale/en_US/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
  open("/usr/share/locale/en/LC_MESSAGES/libc.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
  write(2, "hexdump: /mnt/Big/ubicrawler.tes"..., 77hexdump: foo.big: File too large) = 77
  mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40022000
  _llseek(-1, 1, 0xbffff4b0, SEEK_SET)    = -1 EBADF (Bad file descriptor)
  write(2, "hexdump: stdin: Bad file descrip"..., 37hexdump: stdin: Bad file descriptor.) = 37
  _exit(1)                                = ?
It works for me with util-linux-2.11f-17 on an x86 box. I think some additional flags (-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE) were added to make things work nicely. Please reopen this bug if you still have problems with the 7.2 errata util-linux.