Bug 809576 - "WARNING: udev failed to return a device node" during deactivation and removal
Summary: "WARNING: udev failed to return a device node" during deactivation and removal
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Peter Rajnoha
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-04-03 16:51 UTC by Corey Marthaler
Modified: 2012-06-20 15:03 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.95-4.el6
Doc Type: Bug Fix
Doc Text:
No documentation needed.
Clone Of:
Environment:
Last Closed: 2012-06-20 15:03:30 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  Red Hat Product Errata RHBA-2012:0962 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update
  Last Updated: 2012-06-19 21:12:11 UTC

Description Corey Marthaler 2012-04-03 16:51:30 UTC
Description of problem:
This may be related to bug 807580.

This isn't specific to a particular test case; I just happen to see it from time to time during test cleanup (deactivation and removal).

SCENARIO - [split_nosync_raid1]
Create a 3-way nosync raid1 and split it
taft-01: lvcreate --type raid1 --nosync -m 2 -n split_nosync -L 300M split_image
  WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Waiting until all mirror|raid volumes become fully syncd...
   1/1 mirror(s) are fully synced: ( 100.00% )

splitting off leg from nosync...

Deactivating mirror new... and removing
Deactivating mirror split_nosync... and removing
Although the mirror removal passed, errors were found in its output:
  WARNING: udev failed to return a device node.
  Couldn't find device with uuid Vu7Roz-DHda-qvt8-ZPsh-SSlw-P3go-5oA8o6.
  Logical volume "split_nosync" successfully removed


# LOG:
Apr  2 19:25:10 taft-01 qarshd[19781]: Running cmdline: lvchange -an /dev/split_image/new
Apr  2 19:25:12 taft-01 xinetd[1862]: EXIT: qarsh status=0 pid=19781 duration=2(sec)
Apr  2 19:25:12 taft-01 xinetd[1862]: START: qarsh pid=19786 from=::ffff:10.15.80.47
Apr  2 19:25:12 taft-01 qarshd[19786]: Talking to peer 10.15.80.47:34948
Apr  2 19:25:12 taft-01 qarshd[19786]: Running cmdline: lvremove -f /dev/split_image/new
Apr  2 19:25:12 taft-01 xinetd[1862]: EXIT: qarsh status=0 pid=19786 duration=0(sec)
Apr  2 19:25:13 taft-01 xinetd[1862]: START: qarsh pid=19800 from=::ffff:10.15.80.47
Apr  2 19:25:13 taft-01 qarshd[19800]: Talking to peer 10.15.80.47:34949



Version-Release number of selected component (if applicable):
2.6.32-251.el6.x86_64
lvm2-2.02.95-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
lvm2-libs-2.02.95-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
lvm2-cluster-2.02.95-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
udev-147-2.40.el6    BUILT: Fri Sep 23 07:51:13 CDT 2011
device-mapper-1.02.74-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-libs-1.02.74-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-event-1.02.74-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-event-libs-1.02.74-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012
cmirror-2.02.95-3.el6    BUILT: Fri Mar 30 09:54:10 CDT 2012


How reproducible:
seldom

Comment 1 Peter Rajnoha 2012-04-04 13:31:00 UTC
Corey, would it be possible to turn on udev info logging here for a while? Then we should see which device causes the problem (as is seen in bug #807580).

Edit /etc/udev/udev.conf and set udev_log="info" there. Thanks.
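
For reference, a minimal sketch of the change being asked for (in /etc/udev/udev.conf the default is normally udev_log="err"; raise it temporarily and revert once the log is captured):

  # /etc/udev/udev.conf -- temporarily raise udev logging so the failing
  # device shows up in syslog; revert to "err" after the run is captured
  udev_log="info"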

Comment 2 Peter Rajnoha 2012-04-10 09:39:18 UTC
I think we'll change the severity of the message to log_verbose only, as this is normal udev/libudev operation (it's also described in the libudev reference manual). It's considered OK for the function returning the node name to return NULL if the device no longer exists (i.e. if the deactivation happens just in between: we get the list of devices, then iterate over it and ask the udev db for more details on each one).
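
For illustration only (this is a minimal standalone sketch, not the lvm2 code itself), the libudev pattern being described looks roughly like this; build it with gcc -ludev:

  /* Enumerate block devices via libudev, then query each one again for its
   * device node.  A device that is deactivated between the scan and the
   * query makes the lookup return NULL, which is the (legitimate) case the
   * warning was reporting too loudly. */
  #include <stdio.h>
  #include <libudev.h>

  int main(void)
  {
      struct udev *udev = udev_new();
      struct udev_enumerate *en;
      struct udev_list_entry *entry;

      if (!udev)
          return 1;
      en = udev_enumerate_new(udev);
      if (!en) {
          udev_unref(udev);
          return 1;
      }

      udev_enumerate_add_match_subsystem(en, "block");
      udev_enumerate_scan_devices(en);

      udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
          const char *syspath = udev_list_entry_get_name(entry);
          struct udev_device *dev = udev_device_new_from_syspath(udev, syspath);
          const char *node = dev ? udev_device_get_devnode(dev) : NULL;

          if (!node)
              printf("no node for %s (device gone already?)\n", syspath);
          else
              printf("%s -> %s\n", syspath, node);

          if (dev)
              udev_device_unref(dev);
      }

      udev_enumerate_unref(en);
      udev_unref(udev);
      return 0;
  }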

However, it would still be good to know *which devices are causing problems* here, just to be sure that this is not a problem with lvm2's udev synchronization (volume removal should be synchronized, and once it finishes the udev database content should be consistent).

Kabi can reproduce this problem with our testsuite when running several testsuite runs in parallel: one test is cleaning up devices while the other gets NULL values from libudev. That's because, when obtain_device_list_from_udev is used, we get all existing block devices, not just the ones under test.
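
For context, obtain_device_list_from_udev is the lvm.conf option referred to above; a sketch of the relevant section (the value shown is assumed to be the default, not something taken from this report):

  devices {
      # 1 = ask libudev for the list of block devices instead of scanning
      # /dev directly; this is why block devices from an unrelated parallel
      # test run can show up in the enumeration
      obtain_device_list_from_udev = 1
  }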

So the question here is whether the device is related to the actions done by the LVM command, or whether it is just some other, unrelated block device being processed in parallel. We should be able to see that from the udev log (comment #1), if provided.

Comment 3 Peter Rajnoha 2012-04-11 09:14:48 UTC
As we have to account for the situation where libudev returns a NULL value here (if the record goes away by the time we ask the udev db for more info), we decided to lower the severity of the message to log_very_verbose only. It wouldn't be correct to issue warnings for unrelated devices, which is what we see in bug #807580, for example.

  https://www.redhat.com/archives/lvm-devel/2012-April/msg00005.html

I'll include this patch in the next respin (though it would still be beneficial to know which device causes trouble in the particular case reported here in this bz).

Comment 4 Peter Rajnoha 2012-04-11 09:38:20 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
No documentation needed.

Comment 7 Corey Marthaler 2012-05-10 21:36:17 UTC
I'm not seeing these warnings any more in the latest rpms. Marking verified.

2.6.32-269.el6.x86_64
lvm2-2.02.95-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
lvm2-libs-2.02.95-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
lvm2-cluster-2.02.95-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
udev-147-2.41.el6    BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
device-mapper-libs-1.02.74-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
device-mapper-event-1.02.74-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
device-mapper-event-libs-1.02.74-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012
cmirror-2.02.95-8.el6    BUILT: Wed May  9 03:33:32 CDT 2012

Comment 9 errata-xmlrpc 2012-06-20 15:03:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html

