Red Hat Bugzilla – Bug 809576
"WARNING: udev failed to return a device node" during deactivation and removal
Last modified: 2012-06-20 11:03:30 EDT
Description of problem:
This may be related to bug 807580.
This isn't test-case specific; I just happen to see it from time to time during test cleanup (deactivate and remove).
SCENARIO - [split_nosync_raid1]
Create a 3-way nosync raid1 and split it
taft-01: lvcreate --type raid1 --nosync -m 2 -n split_nosync -L 300M split_image
WARNING: New raid1 won't be synchronised. Don't read what you didn't write!
Waiting until all mirror|raid volumes become fully syncd...
1/1 mirror(s) are fully synced: ( 100.00% )
splitting off leg from nosync...
Deactivating mirror new... and removing
Deactivating mirror split_nosync... and removing
Although the mirror removal passed, errors were found in its output
WARNING: udev failed to return a device node.
Couldn't find device with uuid Vu7Roz-DHda-qvt8-ZPsh-SSlw-P3go-5oA8o6.
Logical volume "split_nosync" successfully removed
Apr 2 19:25:10 taft-01 qarshd: Running cmdline: lvchange -an /dev/split_image/new
Apr 2 19:25:12 taft-01 xinetd: EXIT: qarsh status=0 pid=19781 duration=2(sec)
Apr 2 19:25:12 taft-01 xinetd: START: qarsh pid=19786 from=::ffff:10.15.80.47
Apr 2 19:25:12 taft-01 qarshd: Talking to peer 10.15.80.47:34948
Apr 2 19:25:12 taft-01 qarshd: Running cmdline: lvremove -f /dev/split_image/new
Apr 2 19:25:12 taft-01 xinetd: EXIT: qarsh status=0 pid=19786 duration=0(sec)
Apr 2 19:25:13 taft-01 xinetd: START: qarsh pid=19800 from=::ffff:10.15.80.47
Apr 2 19:25:13 taft-01 qarshd: Talking to peer 10.15.80.47:34949
Version-Release number of selected component (if applicable):
lvm2-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
lvm2-libs-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
lvm2-cluster-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
udev-147-2.40.el6 BUILT: Fri Sep 23 07:51:13 CDT 2011
device-mapper-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-libs-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-event-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-event-libs-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
cmirror-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
Corey, would it be possible to turn on udev info logging here for a while? We should then see which device causes the problem (as it's seen in bug #807580).
Edit /etc/udev/udev.conf and set udev_log="info" there. Thanks.
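For reference, the logging knob mentioned above lives in /etc/udev/udev.conf; a minimal sketch of the change (the new level takes effect after udev is restarted, or immediately via `udevadm control --log-priority=info`):

```
# /etc/udev/udev.conf
udev_log="info"
```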
I think we'll change the severity of the message to log_verbose only, as this is normal operation of udev/libudev (also described in the libudev reference manual). It's considered OK for the function returning the node name to return NULL if the device no longer exists: if the deactivation happens just in between, we get the list of devices, then iterate over it and try to get more details from the udev db for entries that are already gone.
However, it would be fine to see *which devices are causing problems* here, just to be sure that this is not a problem with lvm2 udev synchronization (since the volume removal should be synchronized and when finished, we should have a consistent udev database content).
Kabi can reproduce this problem with our testsuite when running several testsuite runs in parallel - so we have one test cleaning up devices and the other getting NULL values from libudev... That's because when obtain_device_list_from_udev is used, we get all existing block devices, not just the ones under test.
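For context, the setting referred to is in the devices section of /etc/lvm/lvm.conf; when it is enabled, lvm2 builds its device list by asking udev for all block devices rather than scanning /dev itself, which is why devices from an unrelated parallel test show up:

```
# /etc/lvm/lvm.conf
devices {
    # When set to 1, obtain the list of available block devices
    # from udev instead of scanning /dev directly.
    obtain_device_list_from_udev = 1
}
```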
So the question here is whether the device is related to the actions done by the LVM command, or whether it is just some other unrelated block device being processed in parallel. We should see that from the libudev log (comment #1), if provided.
As we have to account for the situation where libudev returns NULL here (the record can go away by the time we ask for more info from the udev db), we decided to lower the severity of the message to log_very_verbose only. It wouldn't be correct to issue warnings for unrelated devices, which is what we see in bug #807580, for example.
I'll include this patch in the next respin (though it would still be beneficial to know which device causes trouble in the particular case reported here in this bz).
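The actual patch isn't quoted in this bz; purely as an illustration, the severity change described above would amount to a hunk along these lines (hypothetical, not the real lvm2 commit; log_warn and log_very_verbose are the lvm2 logging macros):

```
-	log_warn("WARNING: udev failed to return a device node.");
+	log_very_verbose("udev failed to return a device node.");
```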
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
No documentation needed.
I'm not seeing these warnings any more in the latest rpms. Marking verified.
lvm2-2.02.95-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
lvm2-libs-2.02.95-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
lvm2-cluster-2.02.95-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
udev-147-2.41.el6 BUILT: Thu Mar 1 13:01:08 CST 2012
device-mapper-1.02.74-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
device-mapper-libs-1.02.74-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
device-mapper-event-1.02.74-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
device-mapper-event-libs-1.02.74-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
cmirror-2.02.95-8.el6 BUILT: Wed May 9 03:33:32 CDT 2012
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.