Bug 1230664 - mpath checkPool using wrong check method
Summary: mpath checkPool using wrong check method
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-06-11 09:57 UTC by Pei Zhang
Modified: 2015-11-19 06:41 UTC
CC List: 6 users

Fixed In Version: libvirt-1.2.17-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 06:41:19 UTC
Target Upstream Version:
Embargoed:


Links:
Red Hat Product Errata RHBA-2015:2202 (normal, SHIPPED_LIVE): libvirt bug fix and enhancement update, last updated 2015-11-19 08:17:58 UTC

Description Pei Zhang 2015-06-11 09:57:40 UTC
Description of problem:
mpath checkPool uses the wrong check method.

Version-Release number of selected component (if applicable):
libvirt-1.2.16-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. The DM Multipath documentation at
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/DM_Multipath/mpath_devices.html#multipath_device_id
states:

"When new devices are brought under the control of DM Multipath, the new devices may be seen in two different places under the /dev directory: /dev/mapper/mpathn and /dev/dm-n."

2. Check the mpath checkPool function:

static int
virStorageBackendMpathCheckPool(virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
                                bool *isActive)
{
    *isActive = virFileExists("/dev/mpath");
    return 0;
}

3. Check the volumes in the mpath pool:
# virsh vol-list mpathp
 Name                 Path                                    
------------------------------------------------------------------------------
 dm-3                 /dev/mapper/3600a0b80005adb0b0000ab2d4cae9254
 dm-6                 /dev/mapper/3600a0b80005ad1d700002ddc4fa32c87

Actual results:
In step 2, the path "/dev/mpath" that the function tests has already been discarded; DM Multipath currently uses only /dev/mapper and /dev/dm-n (see the standalone check under Additional info below).

Expected results:
If the checkPool function is useful, its check needs to be updated; if it is useless, it would be better to delete it and update virStorageBackendMpath accordingly.
Perhaps libvirt should pass the pool's target.path, or a fixed path, for mpath pools.

Additional info:
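A minimal standalone check illustrating the mismatch described above (an editorial sketch, not part of the original report; it uses plain stat(2) rather than libvirt's virFileExists):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* /dev/mpath is the legacy location tested by checkPool;
     * /dev/mapper is where DM Multipath creates device nodes today. */
    const char *paths[] = { "/dev/mpath", "/dev/mapper" };
    struct stat sb;
    int i;

    for (i = 0; i < 2; i++)
        printf("%-12s %s\n", paths[i],
               stat(paths[i], &sb) == 0 ? "exists" : "missing");
    return 0;
}

On a RHEL 7 host with multipathd running, this would be expected to print "missing" for /dev/mpath and "exists" for /dev/mapper.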

Comment 2 John Ferlan 2015-06-24 13:07:20 UTC
Patch sent upstream to add "/dev/mapper" to the list of allowed paths for the CheckPool function. Even though /dev/mpath is no longer used, it was at one time, so the check was left in place for legacy reasons; see:

http://www.redhat.com/archives/libvir-list/2015-June/msg01231.html

Comment 3 John Ferlan 2015-06-30 15:31:21 UTC
Patches pushed upstream:

commit dbad00189958db304295519842b28342a005a3bc
Author: John Ferlan <jferlan>
Date:   Wed Jun 24 07:46:47 2015 -0400

    mpath: Update path in CheckPool function
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1230664
    
    Per the devmapper docs, use "/dev/mapper" or "/dev/dm-n" in order to
    determine if a device is under control of DM Multipath.
    
    So add "/dev/mapper" to the virFileExists, leaving the "/dev/mpath"
    as a "legacy" option since it appears for a while it was the preferred
    mechanism, but is no longer maintained
$ git describe dbad00189958db304295519842b28342a005a3bc
v1.2.17-rc1-12-gdbad001
$
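For reference, the patched check plausibly reads as follows (a sketch inferred from the commit message above; the exact upstream diff may differ):

static int
virStorageBackendMpathCheckPool(virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
                                bool *isActive)
{
    /* /dev/mapper is the current DM Multipath location; /dev/mpath
     * is kept only as the legacy fallback described above. */
    *isActive = virFileExists("/dev/mpath") ||
                virFileExists("/dev/mapper");
    return 0;
}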

Comment 5 yisun 2015-07-17 02:03:56 UTC
PASSED
verified on:
libvirt-1.2.17-2.el7.x86_64
qemu-kvm-rhev-2.3.0-9.el7.x86_64
kernel-3.10.0-290.el7.x86_64


The virStorageBackendMpathCheckPool (checkPool) function is called by storagePoolUpdateState (src/storage/storage_driver.c), as follows:

static void
storagePoolUpdateState(virStoragePoolObjPtr pool)
{
    bool active;
    virStorageBackendPtr backend;
    int ret = -1;
    char *stateFile;

    if (!(stateFile = virFileBuildPath(driver->stateDir,
                                       pool->def->name, ".xml")))
        goto error;

    if ((backend = virStorageBackendForType(pool->def->type)) == NULL) {
        VIR_ERROR(_("Missing backend %d"), pool->def->type);
        goto error;
    }

    /* Backends which do not support 'checkPool' are considered
     * inactive by default.
     */
    active = false;
    if (backend->checkPool &&
        backend->checkPool(pool, &active) < 0) {     // so if neither the mpath nor the mapper dir exists, "active" is left false
        virErrorPtr err = virGetLastError();
        VIR_ERROR(_("Failed to initialize storage pool '%s': %s"),
                  pool->def->name, err ? err->message :
                  _("no error message found"));
        goto error;
    }

    /* We can pass NULL as connection, most backends do not use
     * it anyway, but if they do and fail, we want to log error and
     * continue with other pools.
     */
    if (active) {  // when neither the mpath nor the mapper dir exists, this block is skipped.
        virStoragePoolObjClearVols(pool);
        if (backend->refreshPool(NULL, pool) < 0) {
            virErrorPtr err = virGetLastError();
            if (backend->stopPool)
                backend->stopPool(NULL, pool);
            VIR_ERROR(_("Failed to restart storage pool '%s': %s"),
                      pool->def->name, err ? err->message :
                      _("no error message found"));
            goto error;
        }
    }

    pool->active = active;   // when neither dir exists, the pool is marked inactive in libvirt.
    ret = 0;
 error:
    if (ret < 0) {
        if (stateFile)
            unlink(stateFile);
    }
    VIR_FREE(stateFile);

    return;
}
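
Condensed, the decision the test scenarios below exercise comes down to the existence check (a simplified editorial sketch; path_exists() is a hypothetical stand-in for libvirt's virFileExists):

#include <stdbool.h>
#include <unistd.h>

static bool path_exists(const char *p)
{
    return access(p, F_OK) == 0;
}

/* After a libvirtd restart, an mpath pool ends up active iff the patched
 * checkPool finds either device directory:
 *   scenario 1: neither dir exists -> inactive
 *   scenario 2: /dev/mapper exists -> active
 *   scenario 3: /dev/mpath exists  -> active (legacy path) */
static bool mpath_pool_active_after_restart(void)
{
    return path_exists("/dev/mpath") || path_exists("/dev/mapper");
}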


So my test scenarios are as follows:==================
1. Make sure neither /dev/mapper nor /dev/mpath exists, define and start an mpath pool, restart libvirtd, and check the mpath pool status. ("inactive" expected)
2. Make sure the /dev/mapper dir exists, define and start an mpath pool, restart libvirtd, and check the mpath pool status. ("active" expected)
3. Make sure the /dev/mpath dir exists, define and start an mpath pool, restart libvirtd, and check the mpath pool status. ("active" expected)


Prepare an mpath pool=============================
# cat mpath.pool
<pool type='mpath'>
  <name>mpath</name>
  <available unit='bytes'>0</available>
  <source>
  </source>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>

# virsh pool-define mpath.pool
Pool mpath defined from mpath.pool

# virsh pool-start mpath
Pool mpath started

# virsh pool-list | grep mpath
 mpath                active     no 


Scenario 1:======================================
1. # service multipathd stop
Redirecting to /bin/systemctl stop  multipathd.service
2. # rm -fr /dev/mpath/; rm -fr /dev/mapper/

3. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
4. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
...   
 mpath                inactive   no        <=== inactive as expected.
...


Scenario 2:======================================
1. # mkdir /dev/mapper
2. # virsh pool-start mpath
Pool mpath started

3. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

4. # virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
...   
 mpath                active     no        <==== active as expected


Scenario 3:======================================
1. # rm -rf /dev/mapper/
2. # mkdir /dev/mpath
3. # virsh pool-start mpath
error: Failed to start pool mpath
error: Requested operation is not valid: storage pool 'mpath' is already active

4. # service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service
[root@localhost ~]# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
...
 mpath                active     no        <====== active as expected
...

Comment 7 errata-xmlrpc 2015-11-19 06:41:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

