Bug 1919989
| Field | Value |
|---|---|
| Summary | ReaR Restore Fails At Disk Recreation Step With Volume Group Messages |
| Product | Red Hat Enterprise Linux 7 |
| Component | rear |
| Version | 7.9 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED NOTABUG |
| Severity | high |
| Priority | unspecified |
| Reporter | Bernie Hoefer <bhoefer> |
| Assignee | Pavel Cahyna <pcahyna> |
| QA Contact | CS System Management SST QE <rhel-cs-system-management-subsystem-qe> |
| CC | ovasik, pcahyna, sorkim |
| Target Milestone | rc |
| Type | Bug |
| Last Closed | 2021-01-26 09:12:05 UTC |
Description (Bernie Hoefer, 2021-01-25 14:16:13 UTC)
Comment 2 (Pavel Cahyna):

Are you sure the dd commands are sufficient to erase any traces of the previously existing PVs/VGs? (Also, I would use a multiple of 512 as the block size, although that's probably not the problem.) I suggest wiping the disks completely, or using freshly created disk images, to rule out the possibility that some traces of the previous LVM setup are lingering on the disks.

Comment 3 (Bernie Hoefer):

(In reply to Pavel Cahyna from comment #2)
> Are you sure the dd commands are sufficient to erase any traces of the
> previously existing PVs/VGs?

That was apparently the problem. I just tried it again, this time wiping the partitions first and then the disks:

    dd if=/dev/zero of=/dev/vdb2 bs=1024 count=1024
    dd if=/dev/zero of=/dev/vdb1 bs=1024 count=1024
    dd if=/dev/zero of=/dev/vdb bs=1024 count=1024
    dd if=/dev/zero of=/dev/vda2 bs=1024 count=1024
    dd if=/dev/zero of=/dev/vda1 bs=1024 count=1024
    dd if=/dev/zero of=/dev/vda bs=1024 count=1024

Whenever I've wanted to "blow away" a machine to install something else, zeroing out just the disk was good enough, since that removes its partition table. But with ReaR recreating the partition table *exactly*, it shouldn't be a surprise that it then sees what was left on the disk where those partitions existed and are recreated.

Like I wrote in this Bugzilla ticket's description, this is my first exploration of ReaR. I guess it is an uncommon use case to do a restore onto the same disks from which the ReaR backup was made.

Thanks again for your help.

Comment 4 (Pavel Cahyna):

(In reply to Bernie Hoefer from comment #3)
> Whenever I've wanted to "blow away" a machine to install something else,
> zeroing out just the disk was good enough, since that removes its partition
> table. But with ReaR recreating the partition table *exactly*, it shouldn't
> be a surprise that it then sees what was left on the disk where those
> partitions existed and are recreated.
Indeed, and I believe that modern fdisk even leaves about 1 MB between the partition table and the first partition, in which case wiping less than 1 MB is certainly not enough.

> Like I wrote in this Bugzilla ticket's description, this is my first
> exploration of ReaR. I guess it is an uncommon use case to do a restore
> onto the same disks from which the ReaR backup was made.
>
> Thanks again for your help.

Not sure whether it is an uncommon use case, probably not that uncommon, but in any case ReaR does not seem to have any built-in automation for wiping the previous disk contents. Given the destructiveness of such an operation and its potential to go wrong, it may actually be a good thing not to have it.

You are welcome, and good luck with ReaR. Make sure to test your backups before use, as you just did. Whole-system backup and restore is a complex and fragile process; many things can go wrong.

By the way, there is a useful command for this, so you don't need to use dd manually. It is called wipefs. First call it on the partitions:

    wipefs -a /dev/vdb2
    ...

and then on the entire disks:

    wipefs -a /dev/vdb
    ...

There is now code upstream to perform this, and other bugs are open for this RFE, so it will probably get fixed in the future (at least when we do a rebase to a new version): bz1925530 and bz1925531.
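The wipe order described in this thread (partition contents first, then the whole disk, so no stale LVM or filesystem signatures survive the restore) can be sketched generically. This is a minimal illustration, not part of ReaR; `wipe_order` is a hypothetical helper name, the device names are the examples from this report, and the script only prints the `wipefs` invocations rather than running them, since actually wiping devices is destructive.

```shell
#!/bin/sh
# Hypothetical sketch (not part of ReaR): print the wipefs commands for a
# disk and its partitions, innermost first -- signatures inside each
# partition are erased before the partition table on the disk itself.
wipe_order() {
    disk="$1"; shift
    for part in "$@"; do
        echo "wipefs -a $part"      # wipe signatures inside each partition
    done
    echo "wipefs -a $disk"          # then wipe the whole-disk signatures
}

# Example devices from this report; echoing keeps the sketch non-destructive.
wipe_order /dev/vdb /dev/vdb1 /dev/vdb2
wipe_order /dev/vda /dev/vda1 /dev/vda2
```

Pipe the output to `sh` only after double-checking the device names; running `wipefs -a` on the wrong disk destroys its metadata.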