Description of problem:
When virt-manager starts, or when "refresh pool" is clicked, the following error is written to the log:

--snip--
May 2 09:12:11 ibm-x3755-1 libvirtd: 09:12:11.109: error : virStorageBackendMpathRefreshPool:321 : in virStorageBackendMpathRefreshPool
--/snip--

Version-Release number of selected component (if applicable):
libvirt-0.8.1-27.el6.x86_64
kernel-2.6.32-71.el6.x86_64
device-mapper-multipath-0.4.9-31.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
After configuring the aliases in /etc/multipath.conf, the storage pool was created with the following XML:

$ cat mpath.xml
<pool type="mpath">
  <name>mpath-test</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>

and defined with:

# virsh pool-define mpath.xml

Everything seems OK, and virt-manager is now able to activate this pool and populate it with mpath devices. However, every time I refresh this pool, the following error is logged by the system:

libvirtd: 16:19:22.686: error : virStorageBackendMpathRefreshPool:321 : in virStorageBackendMpathRefreshPool

Actual results:
libvirtd logs the "virStorageBackendMpathRefreshPool" error every time the pool is refreshed.

Expected results:
libvirtd does not log the "virStorageBackendMpathRefreshPool" error.

Additional info:
Source code: src/storage/storage_backend_mpath.c

According to the source code, libvirt runs into the problem around the virFileWaitForDevices() call, near line 321:

--snip--
static int
virStorageBackendMpathRefreshPool(virConnectPtr conn ATTRIBUTE_UNUSED,
                                  virStoragePoolObjPtr pool)
{
    int retval = 0;

    VIR_DEBUG("in %s", __func__);

    pool->def->allocation = pool->def->capacity = pool->def->available = 0;

    virFileWaitForDevices();

    virStorageBackendGetMaps(pool);

    return retval;
}
--/snip--

Here is the virFileWaitForDevices() function, from src/util/util.c:

--snip--
void virFileWaitForDevices(void)
{
# ifdef UDEVADM
    const char *const settleprog[] = { UDEVADM, "settle", NULL };
# else
    const char *const settleprog[] = { UDEVSETTLE, NULL };
# endif
    int exitstatus;

    if (access(settleprog[0], X_OK) != 0)
        return;

    /*
     * NOTE: we ignore errors here; this is just to make sure that any device
     * nodes that are being created finish before we try to scan them.
     * If this fails for any reason, we still have the backup of polling for
     * 5 seconds for device nodes.
     */
    if (virRun(settleprog, &exitstatus) < 0) {}
}
--/snip--
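Judging by the log levels before and after the fix (an error-level "in virStorageBackendMpathRefreshPool" trace in 0.8.1 versus a debug-level "conn=..., pool=..." trace in 0.9.1, per the verification comment below), the change that resolved this appears to be a logging demotion rather than a behavioural fix. A minimal sketch of that kind of one-line change; the exact macro used in the 0.8.1 code (VIR_ERROR here) is an assumption on my part:

--snip--
/* Before (assumed form in 0.8.1): the function-entry trace is logged at
 * error level, so every pool refresh leaves an alarming but harmless
 * line in libvirtd.log. */
VIR_ERROR(_("in %s"), __func__);

/* After: the same trace demoted to debug level; this matches the
 * "debug : virStorageBackendMpathRefreshPool:319 : conn=..., pool=..."
 * line seen with libvirt-0.9.1 in the verification comment below. */
VIR_DEBUG("conn=%p, pool=%p", conn, pool);
--/snip--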
Since the RHEL 6.1 External Beta has begun and this bug remains unresolved, it has been rejected, as it is not proposed as an exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, for the next release of Red Hat Enterprise Linux.
Reproduced the bug with libvirt-0.8.1-27.el6.x86_64.

When refreshing the pool, the following error appears in /var/log/libvirt/libvirtd.log:

02:14:53.811: error : virStorageBackendMpathRefreshPool:321 : in virStorageBackendMpathRefreshPool

Verified that this bug passes with libvirt-0.9.1-1.el6.x86_64:

1. Install device-mapper-multipath-0.4.9-41.el6.x86_64

2. # cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf

3. # /etc/init.d/multipathd start
   Starting multipathd daemon:                               [  OK  ]

4. Edit /etc/libvirt/libvirtd.conf to enable debug logging:

#################################################################
#
# Logging controls
#

# Logging level: 4 errors, 3 warnings, 2 information, 1 debug
# basically 1 will log everything possible
#log_level = 3
log_level = 1

# Multiple outputs can be defined, they just need to be separated by spaces.
# e.g.:
# log_outputs="3:syslog:libvirtd"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

5. # cat mpath.xml
<pool type="mpath">
  <name>mpath-test</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>

6. # virsh pool-define mpath.xml
Pool mpath-test defined from mpath.xml

# virsh pool-start mpath-test
Pool mpath-test started

7. # virsh pool-refresh mpath-test
Pool mpath-test refreshed

8. # cat /var/log/libvirt/libvirtd.log
No error is logged for the pool refresh any more; only the debug-level entry trace remains:

02:21:11.869: 4677: debug : virStorageBackendMpathRefreshPool:319 : conn=0x7fb7ec07f100, pool=0x7fb7e4002b80
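If it helps to script the define/start/refresh cycle from steps 6-7, the same sequence can be driven through the libvirt C API instead of virsh. A minimal sketch, assuming libvirt-devel is installed and the program is built with "gcc refresh-pool.c -lvirt"; the connection URI and file name are illustrative:

--snip--
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Same pool definition as mpath.xml above */
    const char *xml =
        "<pool type='mpath'>"
        "  <name>mpath-test</name>"
        "  <target><path>/dev/mapper</path></target>"
        "</pool>";
    virConnectPtr conn;
    virStoragePoolPtr pool;

    /* Connect to the local hypervisor driver, as virsh does by default */
    conn = virConnectOpen("qemu:///system");
    if (!conn) {
        fprintf(stderr, "failed to connect to libvirtd\n");
        return 1;
    }

    /* Equivalent of: virsh pool-define mpath.xml */
    pool = virStoragePoolDefineXML(conn, xml, 0);
    if (!pool) {
        fprintf(stderr, "pool-define failed\n");
        virConnectClose(conn);
        return 1;
    }

    /* Equivalent of: virsh pool-start mpath-test */
    if (virStoragePoolCreate(pool, 0) < 0)
        fprintf(stderr, "pool-start failed\n");

    /* Equivalent of: virsh pool-refresh mpath-test; with the fixed
     * packages this should leave no error in libvirtd.log */
    if (virStoragePoolRefresh(pool, 0) < 0)
        fprintf(stderr, "pool-refresh failed\n");

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}
--/snip--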
Moving to VERIFIED according to Comment 6.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2011-1513.html