Bug 8571

Summary: /sbin/dump crashes when doing incremental backup
Product: [Retired] Red Hat Linux
Component: dump
Version: 6.1
Hardware: i386
OS: Linux
Status: CLOSED WORKSFORME
Severity: medium
Priority: medium
Reporter: Michael Schmitz <mschmitz>
Assignee: Nalin Dahyabhai <nalin>
CC: jhmail
Doc Type: Bug Fix
Last Closed: 2000-02-03 20:56:12 UTC

Description Michael Schmitz 2000-01-18 15:09:12 UTC
Since upgrading from Red Hat 6.0 to 6.1, /sbin/dump crashes when trying
to do incremental backups:

madeira:[/tmp]# /sbin/dump -1 -f /dev/null /dev/sda1
  DUMP: Date of this level 1 dump: Tue Jan 18 17:00:38 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/sda1 (/usr) to /dev/null
  DUMP: mapping (Pass I) [regular files]
  DUMP: mapping (Pass II) [directories]
  DUMP: estimated 710739 tape blocks on 18.27 tape(s).
  DUMP: Volume 1 started at: Tue Jan 18 17:00:39 2000
  DUMP: dumping (Pass III) [directories]
  DUMP: SIGSEGV: ABORTING!
  DUMP: SIGSEGV: ABORTING!
Segmentation fault
madeira:[/tmp]#   DUMP: SIGSEGV: ABORTING!
  DUMP: SIGSEGV: ABORTING!
  DUMP: SIGSEGV: ABORTING!

Does anybody have an idea of what might be the problem?

This is what's installed:
  dump-0.4b4-11
  e2fsprogs-1.17-1
  glibc-2.1.2-12

Michael.

Comment 1 Jeff Johnson 2000-01-18 18:53:59 UTC
I couldn't reproduce this problem on
	sparc	dump-0.4b12
	i386	dump-0.4b10
You might try the latest, dump-0.4b12, and see if your problem persists.
Otherwise, you might try incrementals on other file systems; I've seen
and heard of problems with ext2 file systems that pass e2fsck but are
un-dump-able. You could verify this by rebuilding your file system using
tar, if you wish.
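Jeff's tar-based rebuild could look roughly like this (a dry-run sketch; the device and paths are illustrative, and mke2fs destroys the partition's contents):

```shell
# 'run' echoes each command instead of executing it; remove the indirection
# to perform the rebuild for real (on a scratch or offline partition only).
run() { echo "$@"; }
run tar -C /usr -cf /tmp/usr.tar .   # archive the current contents
run umount /usr                      # the file system must be offline
run mke2fs /dev/sda1                 # recreate the ext2 file system (destructive)
run mount /dev/sda1 /usr
run tar -C /usr -xf /tmp/usr.tar     # restore the contents
```

If the rebuilt file system dumps cleanly, the original on-disk structure, not dump itself, was the culprit.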

Comment 2 Stelian Pop 2000-01-18 21:17:59 UTC
This looks exactly like the 'famous' filetype feature, which must be enabled
in your filesystem (the bug was corrected in 0.4b9 or b10, if I remember
correctly). Do a 'dumpe2fs /dev/sda1 | head' to confirm.

Upgrading to the latest version of dump should resolve the problem.
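The check Stelian suggests can be sketched as follows (a hedged sketch; /dev/sda1 and the sample output line are illustrative):

```shell
# On a real system you would inspect the superblock directly:
#   dumpe2fs -h /dev/sda1 2>/dev/null
# Here the same test is applied to a sample "Filesystem features" line.
features="Filesystem features:      filetype sparse_super"
if echo "$features" | grep -qw filetype; then
    echo "filetype feature enabled - upgrade dump to 0.4b10 or later"
fi
```

If `filetype` appears in the feature list, dump versions older than 0.4b9/b10 can segfault exactly as in this report.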

By the way, in the example you are doing a full backup, not an incremental
one, since you have no level 0 dump of this filesystem:
	DUMP: Date of last level 0 dump: the epoch

Stelian.
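The level-0-then-level-1 workflow Stelian describes can be sketched as a dry run (the device and output paths are illustrative):

```shell
# 'run' echoes the commands instead of executing them.
run() { echo "$@"; }
# Level 0 is a full backup; -u records its date in /etc/dumpdates.
run dump -0 -u -f /backup/usr.level0 /dev/sda1
# A later level 1 copies only files changed since that level 0.
# With no recorded level 0, dump reports "the epoch" and copies everything.
run dump -1 -u -f /backup/usr.level1 /dev/sda1
```

Without the -u flag, no level 0 date is ever recorded, which is why the report above shows "Date of last level 0 dump: the epoch".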

Comment 3 Joe Harrington 2000-02-02 18:13:59 UTC
I had this problem with 0.4b9.  It's fixed for me in 0.4b13-1.  It only happened
on one disk.  I have SCSI partitions up to 8303623 1-k blocks in size, and had
no problems.  I have IDE partitions up to 4498576 1-k blocks in size, no
problems.  The disk that had the problem is an IDE with 16255644 1-k blocks.
Unlike the other partitions, it was the only partition on its disk. All
partitions were ext2 file systems created with the -m 0 mkfs option, all dumps
were level 0, and all sizes are from df (I'm lazy!).  Anyway, it's fixed now.
Try 0.4b13:

wget -N http://download.sourceforge.net/dump/dump-0.4b13-1.i386.rpm
rpm -Fvh dump-0.4b13-1.i386.rpm

--jh--

Comment 4 Joe Harrington 2000-02-02 23:03:59 UTC
When I dumped with the new version, I discovered to my chagrin another
difference between the large IDE disk and the others: bad blocks all over the
place.  The new version handles these as well as can be hoped.

--jh--

Comment 5 Michael Schmitz 2000-02-03 08:06:59 UTC
You're right, Stelian, I never did a successful full backup of
/dev/sda1, but I have the same problem on disks that were already
dumped successfully. Had I read the Amanda reports and log files more
closely, I would have noticed, sorry!

To verify whether the filesystem is correct but un-dump-able, I
created a brand-new filesystem, tried a full dump (which worked
fine), changed some files, and started a level 1 backup. The
problem is exactly the same as stated in the trouble ticket.

My second step was upgrading to dump version 0.4b12. I downloaded
both the dynamic and the static RPM. Both of them seem to work
perfectly; there has been no problem in the past two weeks. I'm
going to upgrade to dump-0.4b13 today.

Comment 6 Elliot Lee 2000-02-03 20:56:59 UTC
Reporter indicates problem is solved.
