Dump has been updated to dump-0.3-17
I'm willing to provide access to the host for RH staff to
test/fix the bug. I've seen this and other similar problems on
This is a consistent problem on one filesystem. Output
from dump follows (some extra stuff from our local
DUMP Tape: 1 File: 3 apricot:/nfs/apricot/d1
Dumping apricot:/nfs/apricot/d1 (/dev/hdb1) level: 6
Date/time is: Wed Apr 7 02:00:16 CDT 1999
/sbin/dump 6ubdsf 20 10000 99999 - /dev/hdb1 |
--best --verbose | rsh -l dumpdisk incr "cat - >
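The dump invocation above uses the old-style option syntax: the first token is a cluster of flag letters, and each flag that takes a value consumes the next positional argument, in flag order. A minimal sketch of that pairing rule (the set of value-taking flags is an assumption based on classic dump(8) manpages, not taken from this report):

```python
# Sketch of old-style dump argument pairing: flags "6ubdsf" followed by
# one positional value per value-taking flag, consumed in flag order.
# Which flags take values is assumed from classic dump(8) manpages.
TAKES_VALUE = {"b", "d", "s", "f"}  # blocksize, density, size, output file

def pair_old_style(flag_word, args):
    """Map each value-taking flag letter to its positional argument."""
    it = iter(args)
    paired = {}
    for ch in flag_word:
        paired[ch] = next(it) if ch in TAKES_VALUE else None
    return paired

print(pair_old_style("6ubdsf", ["20", "10000", "99999", "-"]))
# {'6': None, 'u': None, 'b': '20', 'd': '10000', 's': '99999', 'f': '-'}
```

So in the command above, `f -` sends the dump to standard output, which is why it can be piped through a compressor to rsh.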
DUMP: Date of this level 6 dump: Wed Apr 7 02:00:16 1999
DUMP: Date of last level 0 dump: Tue Mar 2 19:47:39 1999
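For context, the two dates reflect how dump's incremental levels work: a level N dump saves everything modified since the most recent dump at any level below N (here, the level 0 from March 2). A hedged sketch of that selection rule (file names and timestamps below are made up for illustration):

```python
# Sketch of dump's incremental-level rule: a level N dump includes files
# modified since the most recent dump at any level strictly below N.
# Timestamps and file names here are illustrative, not from this report.
def files_for_level(level, dump_history, files):
    """dump_history: {level: time of last dump at that level}
    files: {name: mtime}. Returns names to include in this dump."""
    lower = [t for lv, t in dump_history.items() if lv < level]
    since = max(lower) if lower else 0  # a level 0 dump takes everything
    return sorted(name for name, mtime in files.items() if mtime > since)

history = {0: 100}               # last level 0 dump at t=100
files = {"a": 50, "b": 150}      # only "b" changed after the level 0
print(files_for_level(6, history, files))  # ['b']
```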
DUMP: Dumping /dev/hdb1 (/nfs/apricot/d1) to standard
DUMP: mapping (Pass I) [regular files]
DUMP: mapping (Pass II) [directories]
DUMP: estimated 8826 tape blocks.
DUMP: dumping (Pass III) [directories]
DUMP: master/slave protocol botched.
DUMP: The ENTIRE dump is aborted.
Date/time is: Wed Apr 7 02:01:30 CDT 1999
Elapsed time for dump was: 0 hours 1 minutes 14 seconds
DUMP error detected: " DUMP: The ENTIRE dump is aborted."
Please try the dump which is currently in RawHide (dump 0.4b4). I
believe that this bug has been fixed.
You will have to rebuild the src RPM on your system to get it linked
against glibc 2.0, not glibc 2.1. Please report your results back.
Sir, I need further input from you on this bug if we are going to be
able to solve it.
The code did fix the problem. Sorry about not responding immediately.
The source RPM didn't rebuild correctly -- I was able to rebuild by
cd'ing to /usr/src/redhat/BUILD... and configuring/making manually.
Once that was done, the generated dump binary worked correctly.
Customer reports the new RPM fixes the problem -- i.e., the problem will
be fixed for all customers as of RHL 6.0.