Bug 1230664
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | mpath checkPool using wrong check method | | |
| Product | Red Hat Enterprise Linux 7 | Reporter | Pei Zhang <pzhang> |
| Component | libvirt | Assignee | John Ferlan <jferlan> |
| Status | CLOSED ERRATA | QA Contact | Virtualization Bugs <virt-bugs> |
| Severity | medium | Docs Contact | |
| Priority | medium | | |
| Version | 7.2 | CC | dyuan, mzhan, rbalakri, xuzhang, yanyang, yisun |
| Target Milestone | rc | | |
| Target Release | --- | | |
| Hardware | x86_64 | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | libvirt-1.2.17-1.el7 | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2015-11-19 06:41:19 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
Description
Pei Zhang, 2015-06-11 09:57:40 UTC
Patch sent upstream to add "/dev/mapper" to the list of allowed paths for the CheckPool function. Even though "/dev/mpath" is no longer the standard path, it was at one time, so it is left in place as a legacy check. See: http://www.redhat.com/archives/libvir-list/2015-June/msg01231.html

Patches pushed upstream:

```
commit dbad00189958db304295519842b28342a005a3bc
Author: John Ferlan <jferlan>
Date:   Wed Jun 24 07:46:47 2015 -0400

    mpath: Update path in CheckPool function

    https://bugzilla.redhat.com/show_bug.cgi?id=1230664

    Per the devmapper docs, use "/dev/mapper" or "/dev/dm-n" in order to
    determine if a device is under control of DM Multipath. So add
    "/dev/mapper" to the virFileExists, leaving the "/dev/mpath" as a
    "legacy" option since it appears for a while it was the preferred
    mechanism, but is no longer maintained.

$ git describe dbad00189958db304295519842b28342a005a3bc
v1.2.17-rc1-12-gdbad001
```

PASSED. Verified on:

- libvirt-1.2.17-2.el7.x86_64
- qemu-kvm-rhev-2.3.0-9.el7.x86_64
- kernel-3.10.0-290.el7.x86_64

virStorageBackendMpathCheckPool (the checkPool callback) is called by storagePoolUpdateState (src/storage/storage_driver.c), which reads as follows:

```c
static void
storagePoolUpdateState(virStoragePoolObjPtr pool)
{
    bool active;
    virStorageBackendPtr backend;
    int ret = -1;
    char *stateFile;

    if (!(stateFile = virFileBuildPath(driver->stateDir,
                                       pool->def->name, ".xml")))
        goto error;

    if ((backend = virStorageBackendForType(pool->def->type)) == NULL) {
        VIR_ERROR(_("Missing backend %d"), pool->def->type);
        goto error;
    }

    /* Backends which do not support 'checkPool' are considered
     * inactive by default. */
    active = false;
    if (backend->checkPool &&
        backend->checkPool(pool, &active) < 0) {
        /* if neither the mpath nor the mapper dir exists, "active"
         * stays false */
        virErrorPtr err = virGetLastError();
        VIR_ERROR(_("Failed to initialize storage pool '%s': %s"),
                  pool->def->name,
                  err ? err->message : _("no error message found"));
        goto error;
    }

    /* We can pass NULL as connection, most backends do not use
     * it anyway, but if they do and fail, we want to log error and
     * continue with other pools. */
    if (active) {
        /* if neither the mpath nor the mapper dir exists, this
         * block is skipped */
        virStoragePoolObjClearVols(pool);
        if (backend->refreshPool(NULL, pool) < 0) {
            virErrorPtr err = virGetLastError();
            if (backend->stopPool)
                backend->stopPool(NULL, pool);
            VIR_ERROR(_("Failed to restart storage pool '%s': %s"),
                      pool->def->name,
                      err ? err->message : _("no error message found"));
            goto error;
        }
    }

    pool->active = active;
    /* if neither the mpath nor the mapper dir exists, the pool is
     * marked inactive in libvirt */
    ret = 0;

 error:
    if (ret < 0) {
        if (stateFile)
            unlink(stateFile);
    }
    VIR_FREE(stateFile);
    return;
}
```

So my test scenarios are as follows:

1. Make sure neither /dev/mapper nor /dev/mpath exists; define and start an mpath pool, restart libvirtd, and check the mpath pool status ("inactive" expected).
2. Make sure the /dev/mapper dir exists; define and start an mpath pool, restart libvirtd, and check the mpath pool status ("active" expected).
3. Make sure the /dev/mpath dir exists; define and start an mpath pool, restart libvirtd, and check the mpath pool status ("active" expected).

Prepare an mpath pool:

```
# cat mpath.pool
<pool type='mpath'>
  <name>mpath</name>
  <available unit='bytes'>0</available>
  <source>
  </source>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>

# virsh pool-define mpath.pool
Pool mpath defined from mpath.pool

# virsh pool-start mpath
Pool mpath started

# virsh pool-list | grep mpath
 mpath                active     no
```

Scenario 1:

```
1. # service multipathd stop
Redirecting to /bin/systemctl stop multipathd.service

2. # rm -fr /dev/mpath/; rm -fr /dev/mapper/

3. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

4. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 ...
 mpath                inactive   no     <=== inactive as expected
 ...
```

Scenario 2:

```
1. # mkdir /dev/mapper

2. # virsh pool-start mpath
Pool mpath started

3. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

4. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 ...
 mpath                active     no     <==== active as expected
```

Scenario 3:

```
1. # rm -rf /dev/mapper/

2. # mkdir /dev/mpath

3. # virsh pool-start mpath
error: Failed to start pool mpath
error: Requested operation is not valid: storage pool 'mpath' is already active

4. # service libvirtd restart
Redirecting to /bin/systemctl restart libvirtd.service

# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 ...
 mpath                active     no     <====== active as expected
 ...
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html
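The fix itself is small: the mpath backend's checkPool callback only tests whether the DM Multipath control directories exist. A minimal standalone sketch of that logic follows; the helper names (`path_exists`, `check_mpath_dirs`) and the use of `access(2)` in place of libvirt's `virFileExists` are my own, for illustration, and the paths are parameters so the behavior can be exercised without a multipath setup:

```c
#include <stdbool.h>
#include <unistd.h>

/* Illustrative helper: true when the given path exists. */
static bool path_exists(const char *path)
{
    return access(path, F_OK) == 0;
}

/* Sketch of the patched check: the pool counts as active when either
 * the legacy dir ("/dev/mpath") or the preferred dir ("/dev/mapper",
 * per the devmapper docs) exists.  Mirrors checkPool's contract of
 * returning 0 and reporting the result through *is_active. */
int check_mpath_dirs(const char *legacy, const char *preferred,
                     bool *is_active)
{
    *is_active = path_exists(legacy) || path_exists(preferred);
    return 0;
}
```

In these terms, scenario 1 above is the case where both directories are missing (`*is_active` stays false, so the pool is reported inactive after the libvirtd restart), while scenarios 2 and 3 each have one of the two directories present.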