Bug 1919989 - ReaR Restore Fails At Disk Recreation Step With Volume Group Messages.
Summary: ReaR Restore Fails At Disk Recreation Step With Volume Group Messages.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: rear
Version: 7.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Assignee: Pavel Cahyna
QA Contact: CS System Management SST QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-25 14:16 UTC by Bernie Hoefer
Modified: 2024-03-25 17:59 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-26 09:12:05 UTC
Target Upstream Version:
Embargoed:


Attachments
ReaR log file. (24.13 KB, text/plain), 2021-01-25 14:16 UTC, Bernie Hoefer

Description Bernie Hoefer 2021-01-25 14:16:13 UTC
Created attachment 1750527
ReaR log file.

Description of problem:
I am using ReaR for the first time, following
[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-relax-and-recover_rear]
...as my guide.  I am not sure whether I am just doing something wrong, but when I try to restore my test machine from the created ISO, it fails at the "disk recreation script" step.


Version-Release number of selected component (if applicable):
rear-2.4-13.el7.x86_64


Steps to Reproduce:
1. Create RHEL 7.9 virtual machine using a minimal install.
2. Post-installation, add a 2nd disk to the virtual machine.  Create 2
   partitions; add the 1st partition as a physical volume to the volume
   group.  Extend the root logical volume and file system to use this
   new space.  Make the 2nd partition on the new drive its own file
   system (see the command sketch after step 5).
   Final layout of disks should look like this:
     # lsblk
     NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
     sr0                         11:0    1 1024M  0 rom  
     vda                        252:0    0   10G  0 disk 
     ├─vda1                     252:1    0  512M  0 part /boot
     └─vda2                     252:2    0  9.5G  0 part 
       ├─rhel_rhel79--gold-root 253:0    0  9.5G  0 lvm  /
       └─rhel_rhel79--gold-swap 253:1    0  508M  0 lvm  [SWAP]
     vdb                        252:16   0    1G  0 disk 
     ├─vdb1                     252:17   0  500M  0 part 
     │ └─rhel_rhel79--gold-root 253:0    0  9.5G  0 lvm  /
     └─vdb2                     252:18   0  523M  0 part /opt/3rdparty
3. Create a ReaR ISO per chapter 27 of the RHEL 7
   _System Administrator's Guide_.
4. Boot the virtual machine off of RHEL ISO; drop to shell and blank both
   disks by using:
     # dd if=/dev/zero of=/dev/vda bs=1000 count=1000
     1000+0 records in
     1000+0 records out
     1000000 bytes (1.0 MB) copied, 0.0475139 s, 21.0 MB/s
     # dd if=/dev/zero of=/dev/vdb bs=1000 count=1000
     1000+0 records in
     1000+0 records out
     1000000 bytes (1.0 MB) copied, 0.0548423 s, 18.2 MB/s
5. Try to restore the machine using the ISO created in step #3 and the
   instructions in chapter 27 of the RHEL 7 _System Administrator's Guide_.
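
For reference, a rough sketch of the disk setup in step 2, assuming the
device and VG names from the lsblk output above; the exact partition
boundaries and the use of XFS for /opt/3rdparty are assumptions:

  parted -s /dev/vdb mklabel msdos
  parted -s /dev/vdb mkpart primary 1MiB 501MiB   # becomes vdb1, the new PV
  parted -s /dev/vdb mkpart primary 501MiB 100%   # becomes vdb2
  pvcreate /dev/vdb1
  vgextend rhel_rhel79-gold /dev/vdb1
  lvextend -l +100%FREE -r /dev/rhel_rhel79-gold/root  # -r also grows the file system
  mkfs.xfs /dev/vdb2
  mkdir -p /opt/3rdparty
  mount /dev/vdb2 /opt/3rdparty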


Actual results:
Restore fails at this step:
  User confirmed disk recreation script
  Start system layout restoration.
  Creating partitions for disk /dev/vda (msdos)
  Creating partitions for disk /dev/vdb (msdos)
  Creating LVM PV /dev/vda2
  Creating LVM PV /dev/vdb1
  Creating LVM VG 'rhel_rhel79-gold'; Warning: some properties may not be preserved...
  The disk layout recreation script failed


Expected results:
Successful restoration.


Additional info:
Looking at the log file (see attached), the disk recreation seems to fail at line 528:
  +++ lvm vgcreate --physicalextentsize 4096k rhel_rhel79-gold /dev/vda2 /dev/vdb1
    WARNING: Failed to connect to lvmetad. Falling back to device scanning.
    A volume group called rhel_rhel79-gold already exists.
Does the volume group already get created at log file lines 455 and 489 with the "lvm vgchange -a n" commands?  The output of those commands has some warnings about inconsistent metadata, but I do not know why that would be.
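
A quick way to check for stale VG metadata from the rescue shell before restoring (a sketch, using the VG name from the log above):

  lvm pvscan
  lvm vgs rhel_rhel79-gold

If that vgs call succeeds before the restore has run vgcreate, leftover metadata from the old installation is still present on the disks.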

I took a snapshot of my virtual machine before creating the ReaR ISO and wiping the disks, so I can always go back to it for BZ troubleshooting purposes.

Comment 2 Pavel Cahyna 2021-01-25 14:49:39 UTC
Are you sure the dd commands are sufficient to erase any traces of the previously existing PVs/VGs?  (Also, I would use a multiple of 512 as the block size, although that's probably not the problem.)

I suggest wiping the disks completely, or using freshly created disk images, to rule out the possibility that some traces of the previous LVM setup are lingering on the disks.
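
For example, zeroing the disks end to end (slow but thorough; just a sketch, using the device names from this report):

  dd if=/dev/zero of=/dev/vda bs=1M
  dd if=/dev/zero of=/dev/vdb bs=1M

Each dd will stop with a "No space left on device" error when it reaches the end of the disk; that is expected.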

Comment 3 Bernie Hoefer 2021-01-25 15:26:58 UTC
(In reply to Pavel Cahyna from comment #2)
===
> Are you sure the dd commands are sufficient to erase any traces of the
> previously existing PVs/VGs ?
===

That was apparently the problem.  I just tried it again, this time wiping the partitions 1st and then the disks:

  dd if=/dev/zero of=/dev/vdb2 bs=1024 count=1024
  dd if=/dev/zero of=/dev/vdb1 bs=1024 count=1024
  dd if=/dev/zero of=/dev/vdb bs=1024 count=1024
  dd if=/dev/zero of=/dev/vda2 bs=1024 count=1024
  dd if=/dev/zero of=/dev/vda1 bs=1024 count=1024
  dd if=/dev/zero of=/dev/vda bs=1024 count=1024

Whenever I've wanted to `blow away` a machine to install something else, zeroing-out just the disk was good enough since that removes its partition table.  But I suppose with ReaR re-creating the partition table, *exactly*, it shouldn't be a surprise that it then sees what was left on the disk where those partitions existed/are recreated.

Like I wrote in this Bugzilla ticket's description, this is my first exploration of ReaR.  I guess it is an uncommon use-case to do a restore on the same disks from which the ReaR backup was made.

Thanks again for your help.

Comment 4 Pavel Cahyna 2021-01-26 09:12:05 UTC
(In reply to Bernie Hoefer from comment #3)

> Whenever I've wanted to `blow away` a machine to install something else,
> zeroing-out just the disk was good enough since that removes its partition
> table.  But I suppose with ReaR re-creating the partition table, *exactly*,
> it shouldn't be a surprise that it then sees what was left on the disk where
> those partitions existed/are recreated.

Indeed, and I believe that modern fdisk even leaves about 1 MB between the partition table and the first partition, in which case wiping less than 1 MB is certainly not enough.
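
This is easy to check (using the device name from this report):

  parted /dev/vda unit s print

On a modern layout the first partition typically starts at sector 2048, i.e. 1 MiB in, so zeroing less than the first megabyte of the whole disk does not even reach the start of the first partition, let alone the LVM metadata stored inside the later ones.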

> Like I wrote in this Bugzilla ticket's description, this is my first
> exploration of ReaR.  I guess it is an uncommon use-case to do a restore on
> the same disks from which the ReaR backup was made.
> 
> Thanks again for your help.

Not sure if it is an uncommon use case, probably not that uncommon, but anyway ReaR does not seem to have any built-in automation for wiping the previous disk content. Given the destructiveness of such an operation and its potential to go wrong, it may actually be a good thing not to have it.

You are welcome, and good luck with ReaR. Make sure to test your backups before use, as you just did. Whole-system backup and restore is a complex and fragile process; many things can go wrong.

Comment 5 Pavel Cahyna 2021-03-17 11:38:33 UTC
By the way, there is a useful command for this, so you don't need to run dd manually: wipefs.
First call it on the partitions:
  wipefs -a /dev/vdb2
  ...
and then on the entire disks:
  wipefs -a /dev/vdb
  ...
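
Spelled out for the disk layout in this report (a sketch; partitions first, then the whole disks):

  wipefs -a /dev/vda1
  wipefs -a /dev/vda2
  wipefs -a /dev/vdb1
  wipefs -a /dev/vdb2
  wipefs -a /dev/vda
  wipefs -a /dev/vdb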

There is now code upstream to perform this, and other bugs are open for this RFE (bz1925530 and bz1925531), so it will probably get fixed in the future (at least when we rebase to a new version).

