Bug 1956966 - if a system.devices file entry contains DEVNAME=. and PVID=., then maybe automatically remove it or offer a solution to remove it instead of printing warnings for every lvm cmd
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.5
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: beta
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-04 18:52 UTC by Corey Marthaler
Modified: 2022-11-04 07:27 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-04 07:27:47 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2021-05-04 18:52:59 UTC
Description of problem:
I started noticing PV warnings while running our regression tests with the system devices file turned on. I learned it was due to how some of the tests stack PVs on LVs. An easy solution is for the user to just remove the now-invalid device entry by hand; however, if the user doesn't know to do that, these warnings can get annoying. Maybe a new warning could say just that: "if this is no longer a valid device, remove it with this cmd: ..."
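
For reference, the manual cleanup could look something like this using lvmdevices(8) (a rough sketch; it assumes the stale entry still carries a usable device name or PVID):

  lvmdevices --deldev /dev/VG/stack_pv                    # remove the entry by its device name
  lvmdevices --delpvid DyJ4VbufdlCIDA8blMFIJ7xtzJVu5QHX   # or remove it by its PVID
  lvmdevices --check                                      # report entries that no longer match a device

Once an entry has degraded all the way to DEVNAME=. and PVID=. (as shown below), neither lookup key is usable anymore, so deleting the line from /etc/lvm/devices/system.devices by hand may be the only option left.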

[root@hayes-02 ~]# vgcreate VG /dev/sd[cd]1
  Volume group "VG" successfully created
[root@hayes-02 ~]# lvcreate -n stack_pv -L 3T VG
  Logical volume "stack_pv" created.
[root@hayes-02 ~]# pvcreate --config devices/scan_lvs=1 /dev/VG/stack_pv 
  Physical volume "/dev/VG/stack_pv" successfully created.
[root@hayes-02 ~]# grep stack /etc/lvm/devices/system.devices 
IDTYPE=lvmlv_uuid IDNAME=LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja DEVNAME=/dev/VG/stack_pv PVID=DyJ4VbufdlCIDA8blMFIJ7xtzJVu5QHX

[root@hayes-02 ~]# pvremove /dev/VG/stack_pv
  Cannot use /dev/VG/stack_pv: device is an LV
[root@hayes-02 ~]# pvremove --config devices/scan_lvs=1 /dev/VG/stack_pv
  Labels on physical volume "/dev/VG/stack_pv" successfully wiped.

[root@hayes-02 ~]# grep stack /etc/lvm/devices/system.devices 
IDTYPE=lvmlv_uuid IDNAME=LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja DEVNAME=/dev/VG/stack_pv PVID=.
[root@hayes-02 ~]# vgremove VG
Do you really want to remove volume group "VG" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume VG/stack_pv? [y/n]: y
  Logical volume "stack_pv" successfully removed.
  Volume group "VG" successfully removed
[root@hayes-02 ~]# pvscan
  Devices file lvmlv_uuid LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja PVID none last seen on /dev/VG/stack_pv not found.
  PV /dev/sdc1                      lvm2 [<1.82 TiB]
  PV /dev/sdd1                      lvm2 [<1.82 TiB]
  PV /dev/sde1                      lvm2 [<1.82 TiB]
  PV /dev/sdf1                      lvm2 [<1.82 TiB]
  PV /dev/sdg1                      lvm2 [<1.82 TiB]
  PV /dev/sdh1                      lvm2 [<1.82 TiB]
  Total: 6 [10.91 TiB] / in use: 0 [0   ] / in no VG: 6 [10.91 TiB]

[root@hayes-02 ~]# vgcreate VG /dev/sd[gh]1
  Devices file lvmlv_uuid LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja PVID none last seen on /dev/VG/stack_pv not found.
  Volume group "VG" successfully created
[root@hayes-02 ~]# lvcreate -n stack_pv -L 3T VG
  Devices file lvmlv_uuid LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja PVID none last seen on /dev/VG/stack_pv not found.
  Logical volume "stack_pv" created.
[root@hayes-02 ~]# pvcreate --config devices/scan_lvs=1 /dev/VG/stack_pv
  Devices file lvmlv_uuid LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja PVID none last seen on /dev/VG/stack_pv not found.
  Devices file PVID (null) clearing wrong DEVNAME /dev/VG/stack_pv
  Physical volume "/dev/VG/stack_pv" successfully created.
[root@hayes-02 ~]# grep stack /etc/lvm/devices/system.devices 
IDTYPE=lvmlv_uuid IDNAME=LVM-o4C25NnmVWSukpT2QZ40pn53mMelfDdDr8h6j4YZdXhusajvWSb9VDfEGWKMLl5R DEVNAME=/dev/VG/stack_pv PVID=xPhukVMw2ex1VSY6AxX0PNlaKeNIEFwx
[root@hayes-02 ~]# grep stack /etc/lvm/devices/system.devices 
IDTYPE=lvmlv_uuid IDNAME=LVM-o4C25NnmVWSukpT2QZ40pn53mMelfDdDr8h6j4YZdXhusajvWSb9VDfEGWKMLl5R DEVNAME=/dev/VG/stack_pv PVID=xPhukVMw2ex1VSY6AxX0PNlaKeNIEFwx
[root@hayes-02 ~]# 
[root@hayes-02 ~]# pvscan
  Devices file lvmlv_uuid LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja PVID none not found.
  PV /dev/sdg1   VG VG              lvm2 [<1.82 TiB / 0    free]
  PV /dev/sdh1   VG VG              lvm2 [<1.82 TiB / 652.99 GiB free]
  PV /dev/sdc1                      lvm2 [<1.82 TiB]
  PV /dev/sdd1                      lvm2 [<1.82 TiB]
  PV /dev/sde1                      lvm2 [<1.82 TiB]
  PV /dev/sdf1                      lvm2 [<1.82 TiB]
  Total: 6 [10.91 TiB] / in use: 2 [<3.64 TiB] / in no VG: 4 [<7.28 TiB]
[root@hayes-02 ~]# pvscan  --config devices/scan_lvs=1
  Devices file lvmlv_uuid LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja PVID none not found.
  PV /dev/sdg1          VG VG              lvm2 [<1.82 TiB / 0    free]
  PV /dev/sdh1          VG VG              lvm2 [<1.82 TiB / 652.99 GiB free]
  PV /dev/VG/stack_pv                      lvm2 [3.00 TiB]
  PV /dev/sdc1                             lvm2 [<1.82 TiB]
  PV /dev/sdd1                             lvm2 [<1.82 TiB]
  PV /dev/sde1                             lvm2 [<1.82 TiB]
  PV /dev/sdf1                             lvm2 [<1.82 TiB]
  Total: 7 [13.91 TiB] / in use: 2 [<3.64 TiB] / in no VG: 5 [<10.28 TiB]

[root@hayes-02 ~]# cat /etc/lvm/devices/system.devices 
# LVM uses devices listed in this file.
# Created by LVM command pvcreate pid 1632910 at Tue May  4 13:36:51 2021
VERSION=1.1.1336
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd29e81db0cefc DEVNAME=/dev/sdb1 PVID=. PART=1
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd29f91ebe756c DEVNAME=/dev/sde1 PVID=xalc0B0nC7W2JwNxu0IvMfxFqeisddol PART=1
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd2a001f1ae16e DEVNAME=/dev/sdf1 PVID=oyRQxWyjqhPOQaQJRiZBemm2BGk6S12b PART=1
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd29ee1e0945a8 DEVNAME=/dev/sdc1 PVID=NDzAx15sK2U9DI9L5OUif0bZJXPfJ4V1 PART=1
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd29f31e631f3e DEVNAME=/dev/sdd1 PVID=xE7hhB3B0vrQCv2WwWHh1TSx42ajleZg PART=1
IDTYPE=lvmlv_uuid IDNAME=LVM-BZosy9LROBQtF03HnRThnnvq8OewpeatvILa27Srrb2Is56ES7moNO13toPYDdja DEVNAME=. PVID=.
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd2a061f7869e9 DEVNAME=/dev/sdg1 PVID=H5barwmUbRBONnxy8Pc3YGIhmjLee66Q PART=1
IDTYPE=sys_wwid IDNAME=naa.6d094660650d1e0022bd2a0c1fd6d4e8 DEVNAME=/dev/sdh1 PVID=FhDYpc90TFEIzMNA5226VQX3w6C2zQzf PART=1
IDTYPE=lvmlv_uuid IDNAME=LVM-o4C25NnmVWSukpT2QZ40pn53mMelfDdDr8h6j4YZdXhusajvWSb9VDfEGWKMLl5R DEVNAME=/dev/VG/stack_pv PVID=xPhukVMw2ex1VSY6AxX0PNlaKeNIEFwx



kernel-4.18.0-304.5.el8    BUILT: Mon Apr 19 16:17:43 CDT 2021
lvm2-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
lvm2-libs-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
lvm2-dbusd-2.03.12-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021

device-mapper-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
device-mapper-libs-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
device-mapper-event-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021
device-mapper-event-libs-1.02.177-0.1.20210426git4dc5d4a.el8    BUILT: Mon Apr 26 08:23:33 CDT 2021

Comment 1 David Teigland 2021-05-04 20:18:45 UTC
> Maybe a new warning could say just that, "if this is no longer a valid device, remove it with this cmd: ..."

That sounds like a good idea.  I've really struggled with how to automatically deal with this, but haven't come up with anything that seems very good.  We don't know if an entry is momentarily missing or permanently gone.  I've thought about adding a timestamp to the devices file entry when it becomes stale, and then having some policy to automatically drop it after some period of time.
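
A rough sketch of that timestamp policy (purely illustrative; the LAST_SEEN field below is made up and is not part of the real devices file format):

  # Drop entries that have been stale for more than 30 days, assuming lvm
  # stamped them with a hypothetical LAST_SEEN=<epoch-seconds> field when
  # they first went missing.
  now=$(date +%s)
  awk -v now="$now" -v max=$((30*24*3600)) '{
      for (i = 1; i <= NF; i++)
          if (split($i, a, "=") == 2 && a[1] == "LAST_SEEN" && now - a[2] > max)
              next          # stale too long: omit this entry
      print                 # keep everything else unchanged
  }' /etc/lvm/devices/system.devices > /tmp/system.devices.pruned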

Comment 5 David Teigland 2021-07-28 14:12:28 UTC
Low priority, so moving to the next release.

Comment 8 RHEL Program Management 2022-11-04 07:27:47 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

