Bug 1472277
| Summary: | An attempt to start a vHBA storage pool backed by an already pre-created vHBA returns unknown cause error | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Erik Skultety <eskultet> |
| Component: | libvirt | Assignee: | John Ferlan <jferlan> |
| Status: | CLOSED ERRATA | QA Contact: | yisun |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.4 | CC: | dyuan, lmen, rbalakri, xuzhang |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-3.7.0-1.el7 | Doc Type: | No Doc Update |
| Doc Text: | | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-04-10 10:52:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Erik Skultety, 2017-07-18 10:49:50 UTC:
hrmph... various refactors in the code broke this check. I've posted a patch to resolve it: https://www.redhat.com/archives/libvir-list/2017-July/msg00662.html, as part of a series: https://www.redhat.com/archives/libvir-list/2017-July/msg00661.html

Review of the original changes resulted in the following patch: https://www.redhat.com/archives/libvir-list/2017-July/msg00838.html, as part of a v3 of the series: https://www.redhat.com/archives/libvir-list/2017-July/msg00837.html, which has now been pushed:

```
$ git describe c4030331c8bd820c6825db2dcd23c8743a5b9297
v3.5.0-238-gc403033

$ git show c4030331c8bd820c6825db2dcd23c8743a5b9297
commit c4030331c8bd820c6825db2dcd23c8743a5b9297
Author: John Ferlan <jferlan>
Date:   Tue Jul 18 09:21:30 2017 -0400

    storage: Fix existing parent check for vHBA creation
    ...
```

Commit id '106930aaa' altered the order of checking for an existing vHBA (e.g. something created via the nodedev-create functionality, outside of the storage pool logic), which inadvertently broke the code that decides whether to alter/force the fchost->managed field to 'yes'; the storage pool must manage a vHBA it created, so that destroying the pool also destroys the vHBA.

This patch moves the check (and the checkParent helper) for an existing vHBA back into createVport in storage_backend_scsi. It also adjusts the checkParent logic to follow more closely the intentions prior to commit id '79ab0935'.

The changes made by commit id '08c0ea16f' are only necessary to run virStoragePoolFCRefreshThread when a vHBA was really created: there is a timing lag such that the refreshPool call made after startPool from storagePoolCreate* wouldn't necessarily find LUNs, while the thread would. For an already existing vHBA, using the thread is unnecessary, since the vHBA already exists and the lag to configure the LUNs wouldn't exist.
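The ordering the commit restores can be paraphrased as: check for a pre-existing vHBA first, and only force managed='yes' and run the refresh thread when the pool really creates the vport. A minimal sketch of that control flow, in Python pseudologic with hypothetical helper names (not libvirt's actual C code):

```python
# Illustrative paraphrase of the createVport flow described above.
# find_existing_host, check_parent, and create_fn are hypothetical
# stand-ins for the real storage_backend_scsi helpers.

def create_vport(spec, find_existing_host, check_parent, create_fn):
    """Return (scsi_host_name, run_refresh_thread).

    spec: adapter settings from the pool XML (wwnn, wwpn,
    optional parent, optional managed).
    """
    existing = find_existing_host(spec['wwnn'], spec['wwpn'])
    if existing is not None:
        # A vHBA with this wwnn/wwpn was pre-created outside the pool
        # logic (e.g. via virsh nodedev-create): validate it against the
        # configured parent instead of creating a new vport ...
        if spec.get('parent') and not check_parent(existing, spec['parent']):
            raise ValueError(
                "the wwnn/wwpn for '%s' are assigned to an HBA" % existing)
        # ... and do NOT force managed='yes' or start the LUN refresh
        # thread: the device already exists, so there is no LUN lag.
        return existing, False
    if spec.get('managed') is None:
        # The pool creates the vHBA itself, so it manages it; destroying
        # the pool must also destroy the vHBA.
        spec['managed'] = 'yes'
    # Newly created vports need the refresh thread, because refreshPool
    # right after startPool may run before the LUNs show up.
    return create_fn(spec), True
```

This only models the decision described in the commit message; the real code operates on libvirt's internal adapter structures.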
Signed-off-by: John Ferlan <jferlan>

Verified with:
- libvirt-3.9.0-4.el7.x86_64
- kernel-3.10.0-768.el7.x86_64
- qemu-kvm-rhev-2.10.0-9.el7.x86_64

1. Have an online HBA, scsi_host8:

```
# virsh nodedev-dumpxml scsi_host8
<device>
  <name>scsi_host8</name>
  <path>/sys/devices/pci0000:00/0000:00:03.0/0000:08:00.1/host8</path>
  <parent>pci_0000_08_00_1</parent>
  <capability type='scsi_host'>
    <host>8</host>
    <unique_id>8</unique_id>
    <capability type='fc_host'>
      <wwnn>2001001b32a9da4e</wwnn>
      <wwpn>2101001b32a9da4e</wwpn>
      <fabric_wwn>2001547feeb71cc1</fabric_wwn>
    </capability>
    <capability type='vport_ops'>
      <max_vports>127</max_vports>
      <vports>1</vports>
    </capability>
  </capability>
</device>
```

2. Create a vHBA with wwnn:wwpn = 20000000c99e2b81:1000000000000001 and parent scsi_host8:

```
# cat vhba.xml
<device>
  <parent>scsi_host8</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>20000000c99e2b81</wwnn>
      <wwpn>1000000000000001</wwpn>
    </capability>
  </capability>
</device>

# virsh nodedev-create vhba.xml
Node device scsi_host9 created from vhba.xml

# virsh nodedev-dumpxml scsi_host9
<device>
  <name>scsi_host9</name>
  <path>/sys/devices/pci0000:00/0000:00:03.0/0000:08:00.1/host8/vport-8:0-0/host9</path>
  <parent>scsi_host8</parent>
  <capability type='scsi_host'>
    <host>9</host>
    <unique_id>9</unique_id>
    <capability type='fc_host'>
      <wwnn>20000000c99e2b81</wwnn>
      <wwpn>1000000000000001</wwpn>
      <fabric_wwn>2001547feeb71cc1</fabric_wwn>
    </capability>
  </capability>
</device>

# lsscsi | grep "\[9"
[9:0:0:0]  disk  IBM  2145  0000  /dev/sdf
[9:0:0:1]  disk  IBM  2145  0000  /dev/sdg
[9:0:1:0]  disk  IBM  2145  0000  /dev/sdh
[9:0:1:1]  disk  IBM  2145  0000  /dev/sdi
```

3. Prepare a scsi pool with the same wwnn:wwpn:

```
# cat vhba.pool
<pool type='scsi'>
  <name>vhba</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <adapter type='fc_host' parent='scsi_host8' managed='no' wwnn='20000000c99e2b81' wwpn='1000000000000001'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
```

4. Try to create or start the pool; both fail with a clear error instead of an unknown cause:

```
# virsh pool-create vhba.pool
error: Failed to create pool from vhba.pool
error: unsupported configuration: the wwnn/wwpn for 'host9' are assigned to an HBA

# virsh pool-define vhba.pool; virsh pool-start vhba
Pool vhba defined from vhba.pool

error: Failed to start pool vhba
error: unsupported configuration: the wwnn/wwpn for 'host9' are assigned to an HBA
```

5. Destroy the vHBA and start the pool again; it should succeed:

```
# virsh nodedev-destroy scsi_host9
Destroyed node device 'scsi_host9'

# virsh pool-start vhba
Pool vhba started
```

6. Check that a new vHBA was created successfully:

```
# virsh nodedev-dumpxml scsi_host14
<device>
  <name>scsi_host14</name>
  <path>/sys/devices/pci0000:00/0000:00:03.0/0000:08:00.1/host8/vport-8:0-6/host14</path>
  <parent>scsi_host8</parent>
  <capability type='scsi_host'>
    <host>14</host>
    <unique_id>14</unique_id>
    <capability type='fc_host'>
      <wwnn>20000000c99e2b81</wwnn>
      <wwpn>1000000000000001</wwpn>
      <fabric_wwn>2001547feeb71cc1</fabric_wwn>
    </capability>
  </capability>
</device>
```

7. Destroy the pool and check that the vHBA was destroyed with it:

```
# virsh pool-destroy vhba
Pool vhba destroyed

# virsh nodedev-dumpxml scsi_host14
error: Could not find matching device 'scsi_host14'
error: Node device not found: no node device with matching name 'scsi_host14'
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2018:0704
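As an aside, the wwnn/wwpn clash that step 4 reports can be illustrated with a short, self-contained sketch that compares the identifiers in the nodedev XML and the pool XML shown above. The function names are made up for the example; real management code would query libvirt rather than hard-coded strings:

```python
# Sketch: detect that a pool's fc_host adapter targets the same wwnn/wwpn
# as an existing node device (the condition behind the step-4 error).
import xml.etree.ElementTree as ET

# Abbreviated copies of the XML from the verification steps above.
NODEDEV_XML = """<device>
  <name>scsi_host9</name>
  <parent>scsi_host8</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>20000000c99e2b81</wwnn>
      <wwpn>1000000000000001</wwpn>
    </capability>
  </capability>
</device>"""

POOL_XML = """<pool type='scsi'>
  <name>vhba</name>
  <source>
    <adapter type='fc_host' parent='scsi_host8' managed='no'
             wwnn='20000000c99e2b81' wwpn='1000000000000001'/>
  </source>
</pool>"""

def fc_host_ids(nodedev_xml):
    """Return (name, wwnn, wwpn) from a nodedev fc_host capability."""
    dev = ET.fromstring(nodedev_xml)
    fc = dev.find(".//capability[@type='fc_host']")
    return dev.findtext('name'), fc.findtext('wwnn'), fc.findtext('wwpn')

def pool_adapter_ids(pool_xml):
    """Return (wwnn, wwpn) from a scsi pool's fc_host adapter element."""
    adapter = ET.fromstring(pool_xml).find('source/adapter')
    return adapter.get('wwnn'), adapter.get('wwpn')

name, wwnn, wwpn = fc_host_ids(NODEDEV_XML)
if (wwnn, wwpn) == pool_adapter_ids(POOL_XML):
    print("conflict: the wwnn/wwpn for '%s' are already in use" % name)
```

With the XML above, this prints the conflict line, mirroring the situation libvirt now rejects with a meaningful error.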