Bug 1152331 - removal attempt of exclusively activated thin pool on cluster VG can fail to remove devfs entry
Keywords:
Status: CLOSED DUPLICATE of bug 1119561
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: LVM and device-mapper development team
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-13 22:59 UTC by Corey Marthaler
Modified: 2023-03-08 07:27 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-16 08:35:05 UTC
Target Upstream Version:
Embargoed:


Attachments
-vvvv of the pool lvremove before the devfs entry still remained (46.26 KB, text/plain), 2014-10-13 22:59 UTC, Corey Marthaler

Description Corey Marthaler 2014-10-13 22:59:21 UTC
Created attachment 946621 [details]
-vvvv of the pool lvremove before the devfs entry still remained

Description of problem:
This appears to happen once in every 10 or so iterations. What the test case iteration is doing doesn't appear to matter, only that a pool volume gets removed. I'll attempt to narrow this down to a smaller set of commands to reproduce...

============================================================
Iteration 15 of 10000 started at Mon Oct 13 17:46:58 CDT 2014
============================================================
SCENARIO - [extend_100_percent_vg_same_sized_pvs]
Create a thin pool on a VG and then extend it using -l100%FREE with PVs being the same size
Recreating PVs/VG with same sized devices to make thin pool volume
host-112.virt.lab.msp.redhat.com: vgcreate snapper_thinp /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Making origin volume
lvcreate --activate ey --thinpool 100_percent --profile thin-performance --zero y -L 100M --poolmetadatasize 100M snapper_thinp
Sanity checking pool device metadata
(thin_check /dev/mapper/snapper_thinp-100_percent_tmeta)
examining superblock
examining devices tree
examining mapping tree
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/100_percent -n origin
lvextend -l100%FREE snapper_thinp/100_percent
Removing thin origin and other virtual thin volumes
lvremove -f /dev/snapper_thinp/origin
Removing thinpool snapper_thinp/100_percent
lvremove -f /dev/snapper_thinp/100_percent
all entries for pool volume 100_percent weren't removed from devfs


[root@host-112 ~]# ls -l /dev/snapper_thinp
total 0
lrwxrwxrwx. 1 root root 7 Oct 13 17:47 100_percent -> ../dm-2
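The leftover entry above is a dangling symlink: /dev/snapper_thinp/100_percent still points at ../dm-2 after the underlying device is gone. A minimal sketch of a post-removal check a test harness could run to detect this (`find_stale_links` is a hypothetical helper name, not part of the actual test suite):

```shell
#!/bin/sh
# Hypothetical helper: list symlinks under a /dev/<vg> directory whose
# targets no longer exist, i.e. dangling entries left behind by lvremove.
find_stale_links() {
    dir="$1"
    [ -d "$dir" ] || return 0           # directory already gone: nothing stale
    for link in "$dir"/*; do
        [ -L "$link" ] || continue      # only consider symlinks
        [ -e "$link" ] || echo "$link"  # -e follows the link; fails if dangling
    done
}

# Example (after removing the pool): find_stale_links /dev/snapper_thinp
```

Running this right after `lvremove -f /dev/snapper_thinp/100_percent` would print the stale 100_percent link in the failing iterations and nothing in the passing ones.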



Version-Release number of selected component (if applicable):
3.10.0-163.el7.x86_64

lvm2-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
lvm2-libs-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
lvm2-cluster-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-libs-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-event-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-event-libs-1.02.90-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014
device-mapper-persistent-data-0.3.2-1.el7    BUILT: Thu Apr  3 09:58:51 CDT 2014
cmirror-2.02.111-1.el7    BUILT: Mon Sep 29 09:18:07 CDT 2014

Comment 3 Peter Rajnoha 2014-10-16 08:35:05 UTC
We rely completely on udev in RHEL7 to handle the devfs entries. Unless the REMOVE uevent is missing, this is certainly a systemd-udevd bug. I suppose the REMOVE event is there, since the device itself is removed properly, at least based on the log (you can try running "udevadm monitor --kernel" while running the test to make sure, but I don't really think the event is missing). As such, this seems to be a variant of the bug already filed against systemd-udevd: bug #1119561. I'm closing this one as a duplicate.

*** This bug has been marked as a duplicate of bug 1119561 ***

