Bug 1925530 - Cannot recreate volume group when using RAID on LVM
Summary: Cannot recreate volume group when using RAID on LVM
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: rear
Version: 7.9
Hardware: All
OS: Linux
Target Milestone: rc
Assignee: Pavel Cahyna
QA Contact: CS System Management SST QE
Depends On:
Reported: 2021-02-05 13:07 UTC by Renaud Métrich
Modified: 2023-09-22 00:08 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2023-09-22 00:08:59 UTC
Target Upstream Version:

Attachments: none

Links:
  GitHub rear/rear pull 2564 (closed): "Update to make sure vgremove is called before recreating the VG" (last updated 2021-02-17 13:30:34 UTC)
  Red Hat Issue Tracker RHEL-6952 (Migrated) (last updated 2023-09-22 00:08:53 UTC)
  Red Hat Knowledge Base Solution 5779321 (last updated 2021-02-05 15:30:40 UTC)

Description Renaud Métrich 2021-02-05 13:07:42 UTC
Description of problem:

See the upstream PR (rear/rear pull 2564, linked above).

When a VG is made of 2 PVs and contains a RAID LV, an error occurs while recreating the VG whenever Migration mode is used. Migration mode is usually the default in this setup, because having multiple disks of the same size makes ReaR compute a disk mapping and enter Migration mode.

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
Start system layout restoration.
Disk '/dev/vda': creating 'msdos' partition table
Disk '/dev/vda': creating partition number 1 with name 'primary'
Disk '/dev/vda': creating partition number 2 with name 'primary'
Creating LVM PV /dev/vdb
Creating LVM PV /dev/vdc
Creating LVM PV /dev/vda2
Creating LVM VG 'data'; Warning: some properties may not be preserved...
The disk layout recreation script failed
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------

Internally the failure happens because "vgcreate" is executed against the still-existing VG; no "vgremove" is ever performed first.
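
A minimal sketch of what the fix amounts to, going by the PR title ("Update to make sure vgremove is called before recreating the VG"); the variable names here are invented for illustration, and the real change belongs in ReaR's generated disk layout recreation script:

vg="data"
pvs="/dev/vdb /dev/vdc"

# Remove any leftover VG of the same name first; tolerate failure when
# the disks are genuinely blank and no such VG exists yet.
lvm vgremove --force --force --yes "$vg" || true

# Only then recreate the VG, as the script already did.
lvm vgcreate "$vg" $pvs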

Version-Release number of selected component (if applicable):

All versions, including upstream.

How reproducible:

Always, when restoring onto disks that still contain the previous LVM metadata (see comments 2 and 3 below).

Steps to Reproduce:

1. Install a system

2. Add 2 additional disks of the *same* size that will be used to host an LVM VG


3. Create a mirrored (RAID) volume

# pvcreate /dev/vdb
# pvcreate /dev/vdc
# vgcreate data /dev/vdb /dev/vdc
# lvcreate -n vol1 -L 1G -m 1 data
# mkfs.xfs /dev/data/vol1
# mount /dev/data/vol1 /mnt
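
Optionally (not part of the original reproducer), the mirrored layout can be confirmed before taking the backup:

# lvs -o lv_name,segtype,devices data

With the default mirror implementation, this should report segment type "raid1" with images on both /dev/vdb and /dev/vdc.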

4. Build a rescue image and perform a recovery

Actual results:

The disk layout recreation script fails while recreating the VG 'data' (see the log excerpt in the description).

Expected results:

No failure

Additional info:

There is no easy workaround; the admin needs to hand-edit the generated disk layout recreation script before running it.

Comment 2 Pavel Cahyna 2021-02-05 13:41:37 UTC
Does the problem occur only when restoring to disks that have existing data? If so, wouldn't it help to wipe the disks before restoring to them?

Comment 3 Renaud Métrich 2021-02-05 14:21:01 UTC
Yes, only upon recovering an existing system.
Wiping works but requires manual intervention.
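
For reference, the manual intervention amounts to something like the following from the rescue shell, using the device names from the reproducer above (it destroys all remaining data on those disks):

# vgremove -ff data
# wipefs -a /dev/vdb /dev/vdc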

Comment 4 Pavel Cahyna 2021-02-05 15:05:20 UTC
This sounds like the case in bz1919989, which did not involve RAID. So I think the problem is not restricted to RAID.

Comment 5 Renaud Métrich 2021-02-05 15:31:43 UTC
Very likely indeed; I think this happens as soon as a volume group has multiple PVs on different disks.
What is really required to trigger it is that ReaR enters Migration mode; there is no issue with the `vgcfgrestore` code path.
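
For context, a rough sketch of why the `vgcfgrestore` path is unaffected: outside Migration mode, ReaR restores the saved VG metadata instead of recreating the VG from scratch, conceptually along these lines (the metadata file path here is hypothetical):

# pvcreate --restorefile /path/to/saved/data.cfg --uuid <original-pv-uuid> /dev/vdb
# vgcfgrestore -f /path/to/saved/data.cfg data

Since no `vgcreate` is issued against the existing VG, a leftover VG of the same name does not cause the same failure.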

Comment 7 RHEL Program Management 2023-09-21 23:35:29 UTC
Issue migration from Bugzilla to Jira is in progress at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 8 RHEL Program Management 2023-09-22 00:08:59 UTC
This BZ has been automatically migrated to the Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of two footprints next to it and will begin with "RHEL-" followed by an integer. You can also find the issue by searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit for general account information.
