Bug 1925531 - Cannot recreate volume group when using Raid on LVM
Summary: Cannot recreate volume group when using Raid on LVM
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: rear
Version: 8.3
Hardware: All
OS: Linux
Target Milestone: rc
: 8.0
Assignee: Pavel Cahyna
QA Contact: CS System Management SST QE
Šárka Jana
Depends On:
Reported: 2021-02-05 13:08 UTC by Renaud Métrich
Modified: 2023-09-18 00:24 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
.ReaR fails to recreate a volume group when you do not use clean disks for restoring

ReaR fails to perform recovery when you want to restore to disks that contain existing data.

To work around this problem, wipe the disks manually before restoring to them if they have been previously used. To wipe the disks in the rescue environment, use one of the following commands before running the `rear recover` command:

* The `dd` command to overwrite the disks.
* The `wipefs` command with the `-a` flag to erase all available metadata.

See the following example of wiping metadata from the `/dev/sda` disk:

-----
# wipefs -a /dev/sda[1-9] /dev/sda
-----

This command wipes the metadata from the partitions on `/dev/sda` first, and then the partition table itself.
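The `wipefs` workaround above can be tried safely on a throwaway image file instead of a real disk. In this sketch the file path is arbitrary and a swap signature merely simulates a previously used disk:

```shell
# Create a small image file and plant a filesystem signature on it,
# simulating a disk that was used before (path is arbitrary).
truncate -s 8M /tmp/rear-wipe-demo.img
mkswap /tmp/rear-wipe-demo.img >/dev/null 2>&1

# Without options, wipefs only reports the signatures it finds;
# with -a it erases all of them, as in the workaround above.
wipefs /tmp/rear-wipe-demo.img
wipefs -a /tmp/rear-wipe-demo.img
```

On a real previously used disk, wipe the partitions before the disk itself, as in the `/dev/sda` example above.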
Clone Of:
Last Closed: 2023-02-28 07:27:54 UTC
Type: Bug
Target Upstream Version:

Attachments

* GitHub rear/rear pull 2564 (Merged): "Update 110_include_lvm_code.sh to make sure vgremove is called before recreating the VG" (last updated 2023-02-28 08:09:10 UTC)
* Red Hat Knowledge Base (Solution) 5779321 (last updated 2021-02-05 15:30:48 UTC)

Description Renaud Métrich 2021-02-05 13:08:47 UTC
This bug was initially created as a copy of Bug #1925530

I am copying this bug because: 

Also applies.

Description of problem:

See Upstream PR https://github.com/rear/rear/pull/2564

With a RAID LV on a VG made of 2 PVs, an error occurs when recreating the VG if Migration mode is used. Migration mode is usually the default in this setup, because having multiple disks of the same size makes ReaR perform a disk mapping and enter Migration mode.

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
Start system layout restoration.
Disk '/dev/vda': creating 'msdos' partition table
Disk '/dev/vda': creating partition number 1 with name 'primary'
Disk '/dev/vda': creating partition number 2 with name 'primary'
Creating LVM PV /dev/vdb
Creating LVM PV /dev/vdc
Creating LVM PV /dev/vda2
Creating LVM VG 'data'; Warning: some properties may not be preserved...
The disk layout recreation script failed
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Internally, the failure happens because `vgcreate` is run on the still-existing VG; `vgremove` is never performed first.
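The merged upstream PR 2564 ("make sure vgremove is called before recreating the VG") addresses exactly this. Below is a hypothetical sketch of that direction, not the actual ReaR code; the function name and flag choices are assumptions for illustration:

```shell
# Sketch only: remove a leftover VG of the same name, if any, before
# recreating it, so vgcreate does not collide with the existing on-disk VG.
recreate_vg() {
    vg="$1"
    shift
    if lvm vgs "$vg" >/dev/null 2>&1 ; then
        # A VG with this name is already visible on the target disks:
        # remove it first, without prompting.
        lvm vgremove --force --force --yes "$vg"
    fi
    lvm vgcreate "$vg" "$@"
}
```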

Version-Release number of selected component (if applicable):

All versions, including Upstream

How reproducible:


Steps to Reproduce:

1. Install a system

2. Add 2 additional disks of the *same* size that will be used to host an LVM VG


3. Create a RAID volume

# pvcreate /dev/vdb
# pvcreate /dev/vdc
# vgcreate data /dev/vdb /dev/vdc
# lvcreate -n vol1 -L 1G -m 1 data
# mkfs.xfs /dev/data/vol1
# mount /dev/data/vol1 /mnt

4. Build a rescue image and perform a recovery

Actual results:

The disk layout recreation script fails with "The disk layout recreation script failed" (see the log excerpt above).

Expected results:

No failure

Additional info:

There is no easy workaround; the admin needs to edit the `diskrestore.sh` file.
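For illustration, one way to apply such an edit is to insert a forced `vgremove` just before the `vgcreate` line for the affected VG. The sketch below works on a throwaway file so it is safe to run anywhere; on the rescue system the generated script lives at `/var/lib/rear/layout/diskrestore.sh`, and the miniature script contents here are invented for the demo:

```shell
# Fake miniature diskrestore.sh standing in for the generated script.
cat > /tmp/diskrestore-demo.sh <<'EOF'
create_volume_group() {
    lvm vgcreate data /dev/vdb /dev/vdc
}
EOF

# Insert a forced vgremove of VG "data" right before its vgcreate line.
sed -i '/vgcreate.*data/i\    lvm vgremove --force --force --yes data || true' /tmp/diskrestore-demo.sh
```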

Comment 7 RHEL Program Management 2023-02-28 07:27:54 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 9 Red Hat Bugzilla 2023-09-18 00:24:34 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days
