Bug 8571
Summary: | /sbin/dump crashes when doing incremental backup | |
---|---|---|---|
Product: | [Retired] Red Hat Linux | Reporter: | Michael Schmitz <mschmitz> |
Component: | dump | Assignee: | Nalin Dahyabhai <nalin> |
Status: | CLOSED WORKSFORME | QA Contact: | |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 6.1 | CC: | jhmail |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | i386 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2000-02-03 20:56:12 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Michael Schmitz
2000-01-18 15:09:12 UTC
I couldn't reproduce this problem on sparc (dump-0.4b12) or i386 (dump-0.4b10). You might try the latest dump-0.4b12 and see if your problem persists. Otherwise, you might try incrementals on other file systems, as I've seen and heard of problems with e2fs file systems that pass e2fsck but are un-dump-able. You could verify this by rebuilding your file system using tar if you wish.

This looks exactly like the 'famous' filetype feature, which must be enabled in your filesystem (this was corrected in 0.4b9 or 10, if I remember well). Do a 'dumpe2fs /dev/sda1 | head' to confirm. Upgrading to the latest version of dump should resolve the problem. By the way, in the example you are doing a full backup and not an incremental one, since you have no level-0 dump on this filesystem: DUMP: Date of last level 0 dump: the epoch. Stelian.

I had this problem with 0.4b9. It's fixed for me in 0.4b13-1. It only happened on one disk. I have SCSI partitions up to 8303623 1-k blocks in size with no problems, and IDE partitions up to 4498576 1-k blocks, also with no problems. The disk that had the problem is an IDE with 16255644 1-k blocks. Unlike the other partitions, it was the only partition on its disk. All partitions were ext2 filesystems made with the -m 0 mkfs option, all dumps were level 0, and all sizes are from df (I'm lazy!). Anyway, it's fixed now. Try 0.4b13: wget -N http://download.sourceforge.net/dump/dump-0.4b13-1.i386.rpm ; rpm -Fvh dump-0.4b13-1.i386.rpm --jh--

When I dumped with the new version, I discovered to my chagrin another difference between the large IDE disk and the others: bad blocks all over the place. The new version handles these as well as can be hoped. --jh--

You're right, Stelian, I never did a successful full backup of /dev/sda1, but I have the same problem on disks that were already dumped successfully. If I had read the Amanda reports and log files more closely, I would have noticed; sorry!
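The filetype check Stelian suggests can be sketched as follows. The `dumpe2fs` output line shown here is an assumed sample (real output varies by e2fsprogs version); on a live system you would pipe the actual `dumpe2fs /dev/sda1` output instead:

```shell
# Assumed sample of the "Filesystem features" line from 'dumpe2fs /dev/sda1 | head'.
sample='Filesystem features:      filetype sparse_super'

# If 'filetype' appears, old dump versions (before 0.4b9/10) can misread the
# filesystem; upgrading dump is the fix suggested in this bug.
if printf '%s\n' "$sample" | grep -qw filetype; then
    echo "filetype feature enabled -- make sure dump is 0.4b10 or newer"
fi
```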
To verify whether the filesystem is correct but un-dump-able, I created a brand-new filesystem and tried a full dump (which worked fine), changed some files, and started a level 1 backup. The problem is exactly the same as stated in the trouble ticket.

My second step was upgrading to dump version 0.4b12. I downloaded both the dynamic and the static RPM. Both of them seem to work perfectly; there has been no problem in the past two weeks. I'm going to upgrade to dump-0.4b13 today.

Reporter indicates problem is solved.
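The reporter's verification sequence can be sketched as below. The device `/dev/hdb1` and mount point `/mnt/test` are made-up placeholders, and the `run` wrapper only echoes each command so the sketch is harmless to execute as-is; drop the wrapper (and run as root) to perform it for real:

```shell
# Echo-only wrapper: prints the command instead of executing it.
run() { echo "+ $*"; }

run mke2fs /dev/hdb1                   # brand-new filesystem
run mount /dev/hdb1 /mnt/test
run dump -0u -f /dev/nst0 /dev/hdb1    # full (level 0) dump; -u records it in /etc/dumpdates
run touch /mnt/test/some-file          # change some files
run dump -1u -f /dev/nst0 /dev/hdb1    # level 1 dumps only changes since the level 0
```

With a dump version older than 0.4b13 the level 1 step is where the crash reported here appeared; after upgrading it completes normally.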