| Field | Value |
|---|---|
| Summary | "WARNING: udev failed to return a device node" during deactivation and removal |
| Product | Red Hat Enterprise Linux 6 |
| Component | lvm2 |
| Version | 6.3 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED ERRATA |
| Severity | low |
| Priority | low |
| Reporter | Corey Marthaler <cmarthal> |
| Assignee | Peter Rajnoha <prajnoha> |
| QA Contact | Cluster QE <mspqa-list> |
| CC | agk, dwysocha, heinzm, jbrassow, mbroz, msnitzer, prajnoha, prockai, thornber, zkabelac |
| Target Milestone | rc |
| Target Release | --- |
| Fixed In Version | lvm2-2.02.95-4.el6 |
| Doc Type | Bug Fix |
| Doc Text | No documentation needed. |
| Type | Bug |
| Regression | --- |
| Last Closed | 2012-06-20 15:03:30 UTC |
Description: Corey Marthaler, 2012-04-03 16:51:30 UTC
Corey, would it be possible to turn on udev info logging here for a while? We should then see which device causes the problem (as seen in bug #807580). Edit /etc/udev/udev.conf and set udev_log="info" there. Thanks.

I think we'll change the severity of the message to log_verbose only, as this is normal operation of udev/libudev (also described in the libudev reference manual). It is considered OK for the function returning the node name to return a NULL value if the device no longer exists (if the deactivation happens just between getting the list of devices and iterating over it to ask udev's db for more details). However, it would still be good to see *which devices are causing problems* here, just to be sure that this is not a problem with lvm2 udev synchronization (since volume removal should be synchronized, and once it finishes we should have consistent udev database content).

Kabi can reproduce this problem with our testsuite when running several testsuite runs in parallel - so we have one test cleaning up devices while the other one gets NULL values from libudev. That's because when obtain_device_list_from_udev is used we get all existing block devices, not just the ones under test. So the question here is whether the device is related to the actions done by the LVM command, or whether it is just some other block device being processed in parallel (which is not related). We should see that from the libudev log (comment #1), if provided.

Since we have to account for the situation where libudev returns a NULL value here (if the record goes away by the time we ask the udev db for more info), we decided to lower the severity of the message to log_very_verbose only. It would not be correct to issue warnings for unrelated devices, which is what we see in bug #807580, for example.

https://www.redhat.com/archives/lvm-devel/2012-April/msg00005.html

I'll include this patch in the next respin (though it would still be beneficial to know which device causes trouble in the particular case reported here in this bz).
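For anyone reproducing this, the logging change requested in comment #1 is a one-line edit. A minimal sketch of the relevant line in /etc/udev/udev.conf (the rest of the file varies by system; remember to revert afterwards, as info logging is noisy):

```
# /etc/udev/udev.conf
# Raise udev's log level so device events are recorded (see comment #1).
# Valid values include "err", "info" and "debug".
udev_log="info"
```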
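The NULL return discussed above comes from libudev's device lookup during enumeration. A minimal C sketch of that pattern, showing why a NULL node name is normal when a device is removed between the scan and the lookup. This is illustrative only, not the actual lvm2 code; it assumes the libudev development headers are installed and compiles with `gcc -ludev`:

```c
#include <stdio.h>
#include <libudev.h>

int main(void)
{
    struct udev *udev = udev_new();
    if (!udev)
        return 1;

    /* Enumerate all block devices, analogous to what lvm2 does when
     * obtain_device_list_from_udev is enabled. */
    struct udev_enumerate *en = udev_enumerate_new(udev);
    udev_enumerate_add_match_subsystem(en, "block");
    udev_enumerate_scan_devices(en);

    struct udev_list_entry *entry;
    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
        const char *syspath = udev_list_entry_get_name(entry);
        struct udev_device *dev = udev_device_new_from_syspath(udev, syspath);
        if (!dev)
            continue; /* device vanished between scan and lookup */

        /* udev_device_get_devnode() may return NULL if the device was
         * removed in the meantime -- normal operation, not an error,
         * hence the downgrade to log_very_verbose. */
        const char *node = udev_device_get_devnode(dev);
        if (node)
            printf("%s -> %s\n", syspath, node);
        else
            fprintf(stderr, "no device node for %s\n", syspath);

        udev_device_unref(dev);
    }

    udev_enumerate_unref(en);
    udev_unref(udev);
    return 0;
}
```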
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
No documentation needed.
I'm not seeing these warnings any more in the latest rpms. Marking verified.

2.6.32-269.el6.x86_64

lvm2-2.02.95-8.el6                        BUILT: Wed May 9 03:33:32 CDT 2012
lvm2-libs-2.02.95-8.el6                   BUILT: Wed May 9 03:33:32 CDT 2012
lvm2-cluster-2.02.95-8.el6                BUILT: Wed May 9 03:33:32 CDT 2012
udev-147-2.41.el6                         BUILT: Thu Mar 1 13:01:08 CST 2012
device-mapper-1.02.74-8.el6               BUILT: Wed May 9 03:33:32 CDT 2012
device-mapper-libs-1.02.74-8.el6          BUILT: Wed May 9 03:33:32 CDT 2012
device-mapper-event-1.02.74-8.el6         BUILT: Wed May 9 03:33:32 CDT 2012
device-mapper-event-libs-1.02.74-8.el6    BUILT: Wed May 9 03:33:32 CDT 2012
cmirror-2.02.95-8.el6                     BUILT: Wed May 9 03:33:32 CDT 2012

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0962.html