
Bug 2057759

Summary: '_get_device_info: device not found.' warning when deleting snapshot volumes
Product: Red Hat Enterprise Linux 9
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
Assignee: Zdenek Kabelac <zkabelac>
lvm2 sub component: Snapshots
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED WONTFIX
Docs Contact:
Severity: medium
Priority: unspecified
CC: agk, heinzm, jbrassow, mcsontos, msnitzer, prajnoha, zkabelac
Version: 9.0
Keywords: Triaged
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-08-24 07:28:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Corey Marthaler 2022-02-24 03:05:27 UTC
Description of problem:
This appears somewhat similar to the rhel7.3 bug 1376942. I don't see this every time, but I have seen it quite a few times now in rhel9.0. It's likely a timing issue; I'll attempt to debug this further and add additional information.


[root@hayes-02 ~]# lvs
  LV       VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  snap     vdo_sanity swi-a-s--- 92.00m      vdo_lv 0.00                                   
  vdo_lv   vdo_sanity owi-aos--- 50.00g                                                    
  vdo_pool vdo_sanity -wi-a----- 50.00g                                                    

[root@hayes-02 ~]# pvscan
  PV /dev/sdc1   VG vdo_sanity      lvm2 [<1.82 TiB / 1.72 TiB free]
  PV /dev/sde1   VG vdo_sanity      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdd1   VG vdo_sanity      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdf1   VG vdo_sanity      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdg1   VG vdo_sanity      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdh1   VG vdo_sanity      lvm2 [<1.82 TiB / <1.82 TiB free]
  PV /dev/sdi1   VG vdo_sanity      lvm2 [<1.82 TiB / <1.82 TiB free]
  Total: 7 [12.73 TiB] / in use: 7 [12.73 TiB] / in no VG: 0 [0   ]

[root@hayes-02 ~]# lvremove -f vdo_sanity
  Logical volume vdo_sanity/vdo_lv contains a filesystem in use.

[root@hayes-02 ~]# umount /mnt/*

[root@hayes-02 ~]# lvremove -f vdo_sanity
  _get_device_info: LVM-gqTDqmSOh10tZmuHeDTXpuO0q9AS6OCEr5rz1tylDgMUCSFWgUg2fdyPLm9v7Aoj: device not found.
  WARNING: Failed to unmonitor vdo_sanity/snap.
  Logical volume "vdo_pool" successfully removed.
  Logical volume "snap" successfully removed.
  Logical volume "vdo_lv" successfully removed.



Version-Release number of selected component (if applicable):
kernel-5.14.0-58.el9    BUILT: Thu Feb 10 11:18:21 AM CST 2022
lvm2-2.03.14-4.el9    BUILT: Wed Feb 16 06:01:21 AM CST 2022
lvm2-libs-2.03.14-4.el9    BUILT: Wed Feb 16 06:01:21 AM CST 2022


How reproducible:
Sometimes
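
For reference, here is a minimal sketch of the kind of sequence that leads to the state shown above. VG and LV names follow the transcript; the PVs, sizes, and the /mnt/vdo_lv mount point are assumptions, the VDO target (kvdo) must be available, and the warning only appears some of the time:

# Build a VDO pool/volume in the vdo_sanity VG, put a filesystem on the origin, and snapshot it
vgcreate vdo_sanity /dev/sdc1 /dev/sdd1 /dev/sde1
lvcreate --type vdo -n vdo_lv -L 50G -V 50G vdo_sanity/vdo_pool
mkfs.xfs /dev/vdo_sanity/vdo_lv
mkdir -p /mnt/vdo_lv
mount /dev/vdo_sanity/vdo_lv /mnt/vdo_lv
lvcreate -s -n snap -L 92M vdo_sanity/vdo_lv

# Tear everything down; the unmonitor warning is printed only intermittently here
umount /mnt/vdo_lv
lvremove -f vdo_sanity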

Comment 5 Corey Marthaler 2023-06-07 20:28:00 UTC
This is not yet fixed in the latest builds:

kernel-5.14.0-322.el9    BUILT: Fri Jun  2 10:00:53 AM CEST 2023
lvm2-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023


[root@virt-499 ~]# lvs -a -o +devices,segtype
  LV               VG            Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices           Type    
  snap             vdo_sanity    swi-a-s---   6.00g          vdo_lv 5.69                                    /dev/sdb1(3202)   linear  
  vdo_lv           vdo_sanity    owi-aos--- 100.00g vdo_pool                                                vdo_pool(0)       vdo     
  vdo_pool         vdo_sanity    dwi-------  50.00g                 9.27                                    vdo_pool_vdata(0) vdo-pool
  [vdo_pool_vdata] vdo_sanity    Dwi-ao----  50.00g                                                         /dev/sda1(0)      linear  
  [vdo_pool_vdata] vdo_sanity    Dwi-ao----  50.00g                                                         /dev/sdb1(0)      linear  
[root@virt-499 ~]# lvremove -f vdo_sanity
  Logical volume vdo_sanity/vdo_lv contains a filesystem in use.
  Logical volume vdo_sanity/vdo_lv contains a filesystem in use.

[root@virt-499 ~]# umount /mnt/*

[root@virt-499 ~]# lvremove -f vdo_sanity
  _get_device_info: LVM-cbz5joGpdO3htk6E7HebRXaKvNtDzDVhzsMyioYwv0lEY1X8bPuf6xcluilm0NJ6: device not found.
  WARNING: Failed to unmonitor vdo_sanity/snap.
  Logical volume "snap" successfully removed.
  Logical volume "vdo_lv" successfully removed.

Comment 9 Corey Marthaler 2023-06-27 15:56:28 UTC
Another reproduction of this issue. This appears to be a timing issue, as it is not 100% reproducible.

[root@grant-01 ~]# lvs
  LV                          VG            Attr       LSize    Pool Origin                      Data%  Meta%  Move Log Cpy%Sync Convert
  bb_snap1                    black_bird    swi---s---  252.00m      synced_random_raid1_2legs_1                                        
  synced_random_raid1_2legs_1 black_bird    owi-a-r---  500.00m                                                         100.00          

[root@grant-01 ~]# lvremove -f black_bird
  _get_device_info: LVM-ebZovEeJooQwGsTfTn7eQjxUpHruNEEEMRvBkEPVc7cnNlpDV5nEVxThLXnNcTqy: device not found.
  WARNING: Failed to unmonitor black_bird/bb_snap1.
  Logical volume "bb_snap1" successfully removed.
  Logical volume "synced_random_raid1_2legs_1" successfully removed.

kernel-5.14.0-322.el9    BUILT: Fri Jun  2 10:00:53 AM CEST 2023
lvm2-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
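
For completeness, a sketch of the raid1-based variant from this comment (names follow the transcript; the PVs, sizes, and the sync-wait loop are assumptions):

# raid1 origin plus a small snapshot, mirroring the black_bird reproduction above
vgcreate black_bird /dev/sda1 /dev/sdb1
lvcreate --type raid1 -m 1 -n synced_random_raid1_2legs_1 -L 500M black_bird

# Wait for the mirror legs to finish syncing before taking the snapshot
while [ "$(lvs --noheadings -o copy_percent black_bird/synced_random_raid1_2legs_1 | tr -d ' ')" != "100.00" ]; do
    sleep 1
done

lvcreate -s -n bb_snap1 -L 252M black_bird/synced_random_raid1_2legs_1

# The removal intermittently prints the unmonitor warning here as well
lvremove -f black_bird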

Comment 12 RHEL Program Management 2023-08-24 07:28:30 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release; therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.