Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED". If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of type "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). This same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 1608117

Summary: [RHEL-7.6] Anaconda should automatically remove volume groups with missing PVs (non-existent vg_uuid) during installation
Product: Red Hat Enterprise Linux 7
Reporter: xhe <xhe>
Component: anaconda
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
Status: CLOSED WONTFIX
QA Contact: Release Test Team <release-test-team-automation>
Severity: high
Docs Contact:
Priority: unspecified
Version: 7.6
CC: jkonecny, vtrefny
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-02-15 07:40:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: anaconda_issue_after_selected_san_disks (flags: none)

Description xhe@redhat.com 2018-07-25 02:19:53 UTC
Created attachment 1470417 [details]
anaconda_issue_after_selected_san_disks

Description of problem:

I found that Anaconda cannot automatically remove missing VGs during installation. I tried to install a system onto remote disks that already contained partitions and a VG/LV/PV with the same name as the new installation. Could Anaconda remove the VG with the non-existent UUID directly, so that system provisioning can complete unattended in Beaker? Thank you!

Version-Release number of selected component (if applicable):
RHEL-7.6-20180724.0 Server x86_64
Anaconda 21.48.22.143-1

How reproducible:
often

Steps to Reproduce:
1. Install the system on the FCoE LUN manually,
   or clone this Beaker job: https://beaker.engineering.redhat.com/recipes/5429455
2. Connect to the system via vncviewer and select the SAN disk; the issue occurs here.

Actual results:
anaconda-21.48.22.143-1 pops up a dialog prompting the user to rename the VG using its UUID, as below:
************ paste ****************
There is a problem with your existing storage configuration: multiple LVM volume groups with the same name (rhel_storageqe_60)

You must resolve this matter before the installation can proceed. There is a shell available for use which you can access by pressing ctrl-alt and then ctrl-b-2.

Once you have resolved the issue you can retry the storage scan. If you do not fix it you will have to exit the installer.

Rename one of the volume groups so the names are distinct.
Hint 1: vgrename accepts UUID in place of the old name.
Hint 2: You can get the VG UUIDs by running 'pvs -o +vg_uuid'
******************************

I entered the anaconda shell as prompted and checked the VG names and UUIDs, as below:
-----------------------------------
[anaconda root@storageqe-60 ~]# vgs -o +vg_uuid
  WARNING: Device for PV UmRrto-XwY1-s3hk-twJR-LXLg-Ys61-GVF20j not found or rejected by a filter.
  VG                #PV #LV #SN Attr   VSize    VFree VG UUID                               
  rhel_storageqe-60   1   3   0 wz--n-  277.87g    0  lYtrog-C7gv-iqj7-9E1A-wDph-qBfp-3ubEN1
  rhel_storageqe-60   2   3   0 wz-pn- <320.87g 4.00m YQADph-bSrb-jisC-V3HW-23gA-qyGR-ruLxde
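The transcript above shows the key signal: the second VG carries the `p` (partial) flag in the fourth position of its attr string (`wz-pn-`), marking missing PVs. A minimal sketch of detecting such VGs programmatically — assuming a trimmed field list like `vgs --noheadings -o vg_name,vg_attr,vg_uuid` rather than the wider default columns shown above; `find_partial_vgs` is a hypothetical helper, not part of Anaconda or LVM:

```python
def find_partial_vgs(vgs_output):
    """Parse `vgs --noheadings -o vg_name,vg_attr,vg_uuid` output and
    return (name, uuid) pairs for VGs whose attr string carries the
    'p' (partial, i.e. missing PVs) flag in the fourth position."""
    partial = []
    for line in vgs_output.splitlines():
        # Skip LVM warning lines such as "WARNING: Device for PV ... not found".
        if line.lstrip().startswith("WARNING"):
            continue
        fields = line.split()
        if len(fields) < 3:
            continue
        name, attr, uuid = fields[0], fields[1], fields[2]
        if len(attr) == 6 and attr[3] == "p":
            partial.append((name, uuid))
    return partial
```

Run against the output above, only the `wz-pn-` volume group (UUID YQADph-...) would be flagged; the healthy `wz--n-` VG is left alone.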

As prompted, renaming the VG does not actually work, as below:
------------------------------------------
[anaconda root@storageqe-60 ~]# vgrename YQADph-bSrb-jisC-V3HW-23gA-qyGR-ruLxde rhel_st60
  Processing VG rhel_storageqe-60 because of matching UUID YQADph-bSrb-jisC-V3HW-23gA-qyGR-ruLxde
  WARNING: Device for PV UmRrto-XwY1-s3hk-twJR-LXLg-Ys61-GVF20j not found or rejected by a filter.
  Couldn't find device with uuid UmRrto-XwY1-s3hk-twJR-LXLg-Ys61-GVF20j.
  Cannot change VG rhel_storageqe-60 while PVs are missing.
  Consider vgreduce --removemissing.
  Cannot process volume group rhel_storageqe-60

I had to remove the missing PVs first. Note that running vgreduce --removemissing without a VG name fails; the command requires the VG name:
------------------------------------------
[anaconda root@storageqe-60 ~]# vgreduce --removemissing
  No command with matching syntax recognised.  Run 'vgreduce --help' for more information.
  Nearest similar command has syntax:
  vgreduce --removemissing VG
  Remove all missing PVs from a VG.

After that, renaming the VG via its UUID works, and the installation can proceed to the next step:
--------------------------------------------
[anaconda root@storageqe-60 ~]# vgrename lYtrog-C7gv-iqj7-9E1A-wDph-qBfp-3ubEN1 rhel_st60
  Processing VG rhel_storageqe-60 because of matching UUID lYtrog-C7gv-iqj7-9E1A-wDph-qBfp-3ubEN1
  Volume group "lYtrog-C7gv-iqj7-9E1A-wDph-qBfp-3ubEN1" successfully renamed to "rhel_st60"
[anaconda root@storageqe-60 ~]# vgs
  WARNING: Device for PV UmRrto-XwY1-s3hk-twJR-LXLg-Ys61-GVF20j not found or rejected by a filter.
  Couldn't find device with uuid UmRrto-XwY1-s3hk-twJR-LXLg-Ys61-GVF20j.
  VG                #PV #LV #SN Attr   VSize    VFree
  rhel_st60           1   3   0 wz--n-  277.87g    0 
  rhel_storageqe-60   2   3   0 wz-pn- <320.87g 4.00m

Expected results:
LVM volume groups with missing PVs should be cleaned up automatically instead of the user being told to rename them; vgrename does not work for a VG with missing PVs. The correct solution is "vgreduce --removemissing".
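The cleanup the reporter is asking for boils down to a fixed command sequence per broken VG: drop the missing PVs, then rename via UUID so the duplicate names become distinct. A hypothetical sketch of how an installer-side helper could build that sequence (`repair_commands` and the `_old` suffix are assumptions, not Anaconda code; note that with two VGs sharing a name, targeting vgreduce by name is ambiguous, which is part of why the installer itself is better placed to do this):

```python
def repair_commands(partial_vgs, rename_suffix="_old"):
    """Given (name, uuid) pairs for VGs flagged as partial (missing PVs),
    return the command lines an automated cleanup could run:
    1. vgreduce --removemissing <name>  -- drop the missing PVs
    2. vgrename <uuid> <name + suffix> -- make the duplicate name distinct
       (vgrename accepts a VG UUID in place of the old name)."""
    cmds = []
    for name, uuid in partial_vgs:
        cmds.append(["vgreduce", "--removemissing", name])
        cmds.append(["vgrename", uuid, name + rename_suffix])
    return cmds
```

For the VG from the transcript above, this yields a `vgreduce --removemissing rhel_storageqe-60` followed by a UUID-based `vgrename`, mirroring the manual workaround.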

Additional info:

Comment 2 xhe@redhat.com 2018-07-25 02:57:37 UTC
I hit "Recovery of standalone physical volumes failed" during the system reboot after installation. Is this an issue? Will it prevent the system from booting?

**************** snip *************************
+ lvm pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rhel_st60
  PV Size               277.87 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              71135
  Free PE               0
  Allocated PE          71135
  PV UUID               YmaNtu-hTdj-czV9-daEQ-oTND-4Tk7-LKhuEE

  Read-only locking type set. Write locks are prohibited.
  Recovery of standalone physical volumes failed.
  Cannot process standalone physical volumes
  Read-only locking type set. Write locks are prohibited.
  Recovery of standalone physical volumes failed.
  Cannot process standalone physical volumes
  Read-only locking type set. Write locks are prohibited.
  Recovery of standalone physical volumes failed.
  Cannot process standalone physical volumes

Comment 3 xhe@redhat.com 2018-07-25 03:16:54 UTC
I filed a bug for the issue in comment 2:
https://bugzilla.redhat.com/show_bug.cgi?id=1608127

Comment 4 Jiri Konecny 2019-05-26 11:09:41 UTC
Vojta, could you please tell us how we can achieve this? It sounds to me like something that should be implemented in blivet first.

Comment 7 RHEL Program Management 2021-02-15 07:40:53 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.