Bug 1152382 - [NPIV] The volume in scsi pool appears only after refreshing pool
Summary: [NPIV] The volume in scsi pool appears only after refreshing pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: John Ferlan
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-14 03:52 UTC by Yang Yang
Modified: 2015-03-05 07:46 UTC
CC: 5 users

Fixed In Version: libvirt-1.2.8-8.el7
Doc Type: Bug Fix
Doc Text:
Cause: For NPIV (N_Port ID Virtualization) devices, the timing between pool creation (VPORT_CREATE) and the host udev making the devices available resulted in libvirt not finding any volumes in a pool.
Consequence: At pool creation time, the pool would appear to be empty if 'virsh vol-list $pool' was run. However, after executing 'virsh pool-refresh $pool', the devices would appear.
Fix: Added a thread that polls for udev to complete configuring the devices for the host. Once the devices are discovered, the equivalent of 'virsh pool-refresh $pool' is run to fill the volumes into the pool so that 'virsh vol-list $pool' will display them.
Result: Running 'virsh vol-list $pool' after pool creation now properly displays the devices without requiring an intervening 'virsh pool-refresh $pool'.
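
The fix described above amounts to a poll-then-refresh loop. A minimal shell sketch of the idea (hypothetical; wait_and_refresh, CHECK_CMD, and REFRESH_CMD are illustrative stand-ins, not libvirt's actual implementation):

```shell
# Hypothetical sketch of the poll-then-refresh logic described in the Doc Text.
# POOL, CHECK_CMD, and REFRESH_CMD are illustrative stand-ins, not libvirt internals.
POOL=${POOL:-fc-pool}
CHECK_CMD=${CHECK_CMD:-udevadm settle}
REFRESH_CMD=${REFRESH_CMD:-virsh pool-refresh $POOL}

wait_and_refresh() {
    tries=0
    # Poll until udev has finished configuring the new vHBA devices.
    while ! $CHECK_CMD; do
        tries=$((tries + 1))
        # Give up after 10 attempts rather than blocking forever.
        [ "$tries" -ge 10 ] && return 1
        sleep 1
    done
    # Devices are now visible; re-scan the pool so vol-list shows them.
    $REFRESH_CMD
}
```

In libvirt itself this polling happens inside libvirtd after pool start; the sketch just illustrates why an explicit 'virsh pool-refresh' was needed before the fix.
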
Clone Of:
Environment:
Last Closed: 2015-03-05 07:46:23 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
libvirtd log (2.72 MB, text/plain), 2014-10-14 03:52 UTC, Yang Yang
/var/log/messages (4.94 KB, text/plain), 2014-10-14 03:53 UTC, Yang Yang


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Description Yang Yang 2014-10-14 03:52:26 UTC
Created attachment 946649 [details]
libvirtd log

Description of problem:
Define and start a SCSI pool with the adapter type set to 'fc_host'.
No volume is found in the pool by the virsh vol-list command.
However, the volume appears after refreshing the pool.

Version-Release number of selected component (if applicable):
3.10.0-187.el7.x86_64
libvirt-1.2.8-5.el7.x86_64
qemu-kvm-rhev-2.1.2-3.el7.x86_64

How reproducible:
100%

Steps to Reproduce:


1. Discover the physical HBA
# virsh nodedev-list --cap vports
scsi_host4
scsi_host5

[root@dell-pet105-04 timesu]# virsh nodedev-list scsi_host
scsi_host0
scsi_host1
scsi_host2
scsi_host3
scsi_host4
scsi_host5

2. Define/start a scsi pool based on scsi_host5

# virsh pool-define fc-pool.xml
Pool fc-pool defined from fc-pool.xml
# virsh pool-start fc-pool
Pool fc-pool started

# virsh pool-dumpxml fc-pool
<pool type='scsi'>
  <name>fc-pool</name>
  <uuid>7e3d8ee2-ca7f-45ca-a2cc-c2125267b628</uuid>
  <capacity unit='bytes'>10737418240</capacity>
  <allocation unit='bytes'>10737418240</allocation>
  <available unit='bytes'>0</available>
  <source>
    <adapter type='fc_host' parent='scsi_host5' wwnn='2101001b32a90002' wwpn='2101001b32a90003'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

# virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     no        
 fc-pool              active     no

3. List the volume
# virsh vol-list fc-pool
 Name                 Path                                    
---------------------------------------------------------------

4. Discover the LUN on the vHBA
# virsh nodedev-list scsi_host
scsi_host0
scsi_host1
scsi_host2
scsi_host3
scsi_host4
scsi_host5
scsi_host8

+- scsi_host5
  |           |
  |           +- scsi_host8
  |           |   |
  |           |   +- scsi_target8_0_0
  |           |   |   |
  |           |   |   +- scsi_8_0_0_0
  |           |   |     
  |           |   +- scsi_target8_0_1
  |           |   |   |
  |           |   |   +- scsi_8_0_1_0
  |           |   |       |
  |           |   |       +- block_sdf_3600a0b80005ad1d700002dde4fa32ca8
  |           |   |         
  |           |   +- scsi_target8_0_2
  |           |   |   |
  |           |   |   +- scsi_8_0_2_0
  |           |   |       |
  |           |   |       +- block_sdg_3600a0b80005ad1d700002dde4fa32ca8
  |           |   |         
  |           |   +- scsi_target8_0_3
  |           |       |
  |           |       +- scsi_8_0_3_0


5. Refresh the pool and list the volume again
# virsh pool-refresh fc-pool
Pool fc-pool refreshed

# virsh vol-list fc-pool
 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:1:0           /dev/disk/by-path/pci-0000:04:00.1-fc-0x203600a0b85b5dd4-lun-0


Actual Results:
No volume is found by the virsh vol-list command after the pool is started.
However, the volume appears after refreshing the pool.

Expected Results:
In step 3, the volume should be displayed.


Additional info:

Comment 1 Yang Yang 2014-10-14 03:53:06 UTC
Created attachment 946650 [details]
/var/log/messages

Comment 3 John Ferlan 2014-11-04 21:01:57 UTC
I have access to a system now with the capability to create fc_host adapters and I'm not seeing the same results as you. It's not totally surprising given that in order to discover LUN's on the Fiber, we first run '/sbin/udevadm settle' and that's known to sometimes take "time" - time that we don't necessarily wait for successful completion (there is/was a bz on it, but I cannot remember it).

FWIW: It seems the bulk of your instructions are creating an HBA and not a vHBA. That is there's no 'virsh nodedev-create vhba.xml' in order to define (for example) scsi_host8, which is then used as the parent, wwnn, wwpn for the pool as described at http://wiki.libvirt.org/page/NPIV_in_libvirt
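
For reference, the nodedev-create path mentioned above takes a small XML file. A sketch of such a vhba.xml (the parent and wwnn/wwpn values here are copied from this bug's pool XML purely for illustration):

```xml
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>2101001b32a90002</wwnn>
      <wwpn>2101001b32a90003</wwpn>
    </capability>
  </capability>
</device>
```

Running 'virsh nodedev-create vhba.xml' then reports the new scsi_host name, whose wwnn/wwpn can be reused in the pool's <adapter> element.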

By putting NPIV in the subject, I assume you meant to create/use the vHBA, but didn't.  The instructions and documentation is a bit vague with respect to when one or the other could/should be used. I'm trying to research a bit more.

I'm still having more issues with the code - right now I cannot destroy either the HBA or vHBA pool. In fact, I may have inadvertently crashed a host in the vHBA model by using nodedev-destroy on the vHBA while "something" was happening in the background...

I'm not quite sure how this code ever worked <grumble, grumble>...

Comment 4 Yang Yang 2014-11-05 02:16:27 UTC
(In reply to John Ferlan from comment #3)
> I have access to a system now with the capability to create fc_host adapters
> and I'm not seeing the same results as you. It's not totally surprising
> given that in order to discover LUN's on the Fiber, we first run
> '/sbin/udevadm settle' and that's known to sometimes take "time" - time that
> we don't necessarily wait for successful completion (there is/was a bz on
> it, but I cannot remember it).
> 
> FWIW: It seems the bulk of your instructions are creating an HBA and not a
> vHBA. That is there's no 'virsh nodedev-create vhba.xml' in order to define
> (for example) scsi_host8, which is then used as the parent, wwnn, wwpn for
> the pool as described at http://wiki.libvirt.org/page/NPIV_in_libvirt
> 
> By putting NPIV in the subject, I assume you meant to create/use the vHBA,
> but didn't.  The instructions and documentation is a bit vague with respect
> to when one or the other could/should be used. I'm trying to research a bit
> more.
> 

Actually, I attempted to create a vHBA (not HBA) by creation of 'fc_host' adapter pool as described at http://wiki.libvirt.org/page/NPIV_in_libvirt#Creation_of_vHBA_by_the_storage_pool. The vHBA was indeed created, but the LUN from the vHBA was NOT attached to the pool.

> I'm still having more issues with the code - right now I cannot destroy
> either the HBA or vHBA pool. In fact, I may have inadvertently crashed a
> host in the vHBA model by using nodedev-destroy on the vHBA while
> "something" was happening in the background...
> 
> I'm not quite sure how this code ever worked <grumble, grumble>...

Comment 5 John Ferlan 2014-11-05 12:15:24 UTC
Oh.... It wasn't clear to me that "Creation of the vHBA using the node device driver" and "Creation of the vHBA by the storage pool" was an *OR* type operation.  What use would the former have without the latter?  That is, after creating a vHBA via the node device driver - what use would it have? The purpose of creating the libvirt storage pool is to ensure that vHBA sticks around after a reboot (e.g., from the document "This vHBA will only be defined as long the host is not rebooted. In order to create a persistent vHBA, one must use a libvirt storage pool (see below).").

Curious - without a node device driver vHBA created, what values are you using for wwnn/wwpn? From the documentation? Randomly selected?

I was taking the wwnn/wwpn of the vHBA I created via the node device driver and using them as the wwnn/wwpn for my pool. I figured the only way the pool could "hook" into the vHBA was to have a matching wwnn/wwpn - I think that's why by ignoring the first step your pool doesn't find the LUN's, but I'll have to dig more at the code.

In any case, certainly not very clear which is frustrating.

Comment 6 John Ferlan 2014-11-05 14:27:56 UTC
While researching something else I tripped across the (well hidden) description of the "timing" issue with refresh, see:

http://libvirt.org/git/?p=libvirt.git;a=commit;h=6b29eb848f741742a0f393df40bbcc176520bf27

I'll continue to poke at this particular issue, once I have all the possible combinations to define things figured out...

Comment 7 John Ferlan 2014-11-05 20:05:35 UTC
FWIW: I use the following:

log_filters="3:remote 4:event 3:json 3:rpc"

In my libvirtd.conf - it seems to reduce the noise of all the event polling code present in your libvirtd.log...

Interesting - I tried to fabricate my own wwnn/wwpn today using values similar to my existing scsi_host, but that failed with:

# virsh pool-start fc_pool_host3
error: Failed to start pool fc_pool_host3
error: Write of '11000000c9848140:21000000c9848140' to '/sys/class/fc_host//host3/vport_create' during vport create/delete failed: No such file or directory

#

My scsi_host3 wwnn/wwpn is '10000000c9848140:20000000c9848140'. (I even tried adjusting the first 4 numbers to be 1999 and 2999 respectively, and that didn't work either. I have to "assume" there's some sort of numbering scheme I'm unaware of.)

In any case, if I use the wwnn/wwpn you've used, then I can recreate this issue.  I can also reproduce the issue we've been e-mailing about with respect to not providing a 'parent' and the existence of the automagically created vHBA.

What's really "interesting" is that once I do the refresh, I'll get a list such as the following:

 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:10:0          /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016944602198-lun-0
 unit:0:4:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0
 unit:0:5:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016844602198-lun-0
 unit:0:8:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016144602198-lun-0

However, if I create a "more direct" HBA to my scsi_host#, I get:
 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:10:0          /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016044602198-lun-0
 unit:0:11:0          /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016844602198-lun-0
 unit:0:2:13          /dev/disk/by-path/pci-0000:10:00.0-fc-0x217800c0ffd79b2a-lun-13
 unit:0:3:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-0
 unit:0:3:10          /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-10
 unit:0:3:3           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-3
 unit:0:3:4           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-4
 unit:0:3:5           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-5
 unit:0:3:6           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-6
 unit:0:3:7           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-7
 unit:0:3:8           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-8
 unit:0:3:9           /dev/disk/by-path/pci-0000:10:00.0-fc-0x207000c0ffd79b2a-lun-9
 unit:0:6:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-0
 unit:0:6:10          /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-10
 unit:0:6:3           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-3
 unit:0:6:4           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-4
 unit:0:6:5           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-5
 unit:0:6:6           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-6
 unit:0:6:7           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-7
 unit:0:6:8           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-8
 unit:0:6:9           /dev/disk/by-path/pci-0000:10:00.0-fc-0x217000c0ffd79b2a-lun-9
 unit:0:7:13          /dev/disk/by-path/pci-0000:10:00.0-fc-0x207800c0ffd79b2a-lun-13
 unit:0:8:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016144602198-lun-0
 unit:0:9:0           /dev/disk/by-path/pci-0000:10:00.0-fc-0x5006016944602198-lun-0

I'll have to dig more on that later, but at the very least I can "see" now what you're seeing, although it doesn't quite make sense. I've got so many different "options" for how these things can be defined floating around in my head that it's hard to keep focused....

Comment 8 Yang Yang 2014-11-06 14:52:51 UTC
(In reply to John Ferlan from comment #5)
> Oh.... It wasn't clear to me that "Creation of the vHBA using the node
> device driver" and "Creation of the vHBA by the storage pool" was an *OR*
> type operation.  What use would the former have without the latter?  That is
> after creating a vHBA via the node device driver - what use would it have?
> The purpose of creating the libvirt storage pool is to ensure that vHBA
> sticks around after a reboot (e.g., from the document "This vHBA will only
> be defined as long the host is not rebooted. In order to create a persistent
> vHBA, one must use a libvirt storage pool (see below).").
> 
> Curious - without a node device driver vHBA created, what values are you
> using for wwnn/wwpn? From the documentation? Randomly selected?

As far as I know, the value of wwnn/wwpn I used is specially assigned to the HBA (scsi_host5), not randomly selected. It can NOT be used in any other storages.
From my test results, if I create a vHBA using a random wwnn/wwpn, the vHBA is indeed created but no LUN is assigned to it. So it's meaningless using a random one.

> 
> I was taking the wwnn/wwpn of the vHBA I created via the node device driver
> and using them as the wwnn/wwpn for my pool. I figured the only way the pool
> could "hook" into the vHBA was to have a matching wwnn/wwpn - I think that's
> why by ignoring the first step your pool doesn't find the LUN's, but I'll
> have to dig more at the code.
> 
> In any case, certainly not very clear which is frustrating.

Comment 9 John Ferlan 2014-11-06 19:11:09 UTC
My question was more along the lines of if I never created a vHBA via nodedev-create, then how would I know what to provide for the wwnn/wwpn values?  If I take the values you've used, they work on my host as well; however, if I try to fabricate values on my own, it's a "hit or miss" whether they'll work.  What that algorithm is - I'm not sure.

I did find through Google search of "vport_create" a link which describes what I've observed:

http://www.linuxtopia.org/online_books/rhel6/rhel_6_virtualization/rhel_6_virtualization_chap-Para-virtualized_Windows_Drivers_Guide-N_Port_ID_Virtualization_NPIV.html

In particular, the section that says:


WWNN and WWPN validation
Libvirt does not validate the WWPN or WWNN values, invalid WWNs are rejected by the kernel and libvirt reports the failure. The error reported by the kernel is similar to the following:

# virsh nodedev-create badwwn.xml
error: Failed to create node device from badwwn.xml
error: Write of '1111222233334444:5555666677778888' to '/sys/class/fc_host/host6/vport_create' during vport create/delete failed: No such file or directory


Something I want to be sure to add to the documentation that we have. It may even be worthwhile to create some sort of utility to serve as a valid wwnn/wwpn generator.  It's added to my "to do" list!

Comment 10 John Ferlan 2014-11-06 20:26:32 UTC
I found the answer (sort of) to my own question - it took a bit of digging within the code; however, there's a function 'virRandomGenerateWWN' which fabricates valid wwnn and wwpn values for nodedev-create.

The results I see follow the pattern specified in the code: all my libvirt-generated wwnn/wwpn values start with "5001a4a" followed by 9 random hex digits.
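
That naming pattern can be sketched as a small shell helper (a hypothetical gen_wwn function, not libvirt's actual virRandomGenerateWWN code; it simply concatenates the NAA-type-5 nibble, the 001a4a OUI, and 9 random hex digits):

```shell
# Hypothetical helper mimicking the pattern described above, not libvirt code.
gen_wwn() {
    # Read 5 random bytes as 10 hex digits and keep the first 9.
    rand=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n' | cut -c1-9)
    # "5" = NAA type 5, "001a4a" = Qumranet OUI, then 9 random hex digits.
    printf '5001a4a%s\n' "$rand"
}
```

Each result is a 16-hex-digit WWN with the same "5001a4a" prefix observed in the libvirt-generated values.
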

I also found the following wikipedia entry:

http://en.wikipedia.org/wiki/World_Wide_Name

There's a link off that page to the IEEE "official list".  The one used by libvirt for qemu is:

  00-1A-4A   (hex)		Qumranet Inc.
  001A4A     (base 16)		Qumranet Inc.
  				530 Lakeside Drive
				Suite 220
				Sunnyvale California 94085
				UNITED STATES


This matches the code. 

The "5" is some sort of marker defining the format/length of the following digits (the NAA). For "5", the following 6 digits are "assigned" to specific companies.

The prefix you've used, "2101001", seems to be some sort of "original" IEEE format as described on the Wikipedia page. I'd have to assume it's valid since it works, but I don't know what the "range" of good/bad values would be.

So now that I've learned something new today - it's a good day!

Comment 11 Yang Yang 2014-11-07 08:27:20 UTC
(In reply to yangyang from comment #8)
> (In reply to John Ferlan from comment #5)
> > Oh.... It wasn't clear to me that "Creation of the vHBA using the node
> > device driver" and "Creation of the vHBA by the storage pool" was an *OR*
> > type operation.  What use would the former have without the latter?  That is
> > after creating a vHBA via the node device driver - what use would it have?
> > The purpose of creating the libvirt storage pool is to ensure that vHBA
> > sticks around after a reboot (e.g., from the document "This vHBA will only
> > be defined as long the host is not rebooted. In order to create a persistent
> > vHBA, one must use a libvirt storage pool (see below).").
> > 
> > Curious - without a node device driver vHBA created, what values are you
> > using for wwnn/wwpn? From the documentation? Randomly selected?
> 
> As far as I know, the value of wwnn/wwpn I used is specially assigned to the
> HBA (scsi_host5), not randomly selected. It can NOT be used in any other
> storages.
> From my test results, if I create a vHBA using a random wwnn/wwpn, the vHBA
> is indeed created but no LUN is assigned to it. So it's meaningless using a
> random one.
> 

Pardon the confusing message in my earlier reply. As confirmed with lsu, the wwnn/wwpn values I used were originally created via nodedev-create by someone else (not me); the IT team then bound them to the HBA (scsi_host5) so that a LUN could be assigned to the vHBA. So I always create the vHBA providing the same wwnn/wwpn values via nodedev-create and/or the storage pool.

If you use the wwnn/wwpn of the vHBA created via nodedev-create as the wwnn/wwpn for a pool directly and never bind them to an HBA, I think no LUN can ever be found on the vHBA, and no volume can ever be detected in the pool either.

> > 
> > I was taking the wwnn/wwpn of the vHBA I created via the node device driver
> > and using them as the wwnn/wwpn for my pool. I figured the only way the pool
> > could "hook" into the vHBA was to have a matching wwnn/wwpn - I think that's
> > why by ignoring the first step your pool doesn't find the LUN's, but I'll
> > have to dig more at the code.
> > 
> > In any case, certainly not very clear which is frustrating.

Comment 14 Yang Yang 2014-12-09 14:58:41 UTC
Verified using the following components:
libvirt-1.2.8-10.el7.x86_64
kernel-3.10.0-212.el7.x86_64

Steps:
# virsh nodedev-list scsi_host
scsi_host0
scsi_host1
scsi_host2
scsi_host3
scsi_host4
scsi_host5

# virsh nodedev-list --cap vports
scsi_host4
scsi_host5

# cat fc-pool.xml 
<pool type='scsi'>
<name>fc-pool</name>
<source>
<adapter type='fc_host' parent='scsi_host5' wwnn='2101001b32a90002' wwpn='2101001b32a90003'/>
</source>
<target>
<path>/dev/disk/by-path</path>
<permissions>
<mode>0700</mode>
<owner>0</owner>
<group>0</group>
</permissions>
</target>
</pool>

# virsh pool-define fc-pool.xml 
Pool fc-pool defined from fc-pool.xml

[root@dell-pet105-04 yy]# virsh pool-start fc-pool
Pool fc-pool started

[root@dell-pet105-04 yy]# virsh nodedev-list scsi_host
scsi_host0
scsi_host1
scsi_host2
scsi_host3
scsi_host4
scsi_host5
scsi_host6

# virsh vol-list fc-pool
 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:2:0           /dev/disk/by-path/pci-0000:04:00.1-fc-0x203600a0b85b5dd4-lun-0

# virsh pool-refresh fc-pool
Pool fc-pool refreshed

[root@dell-pet105-04 yy]# virsh vol-list fc-pool
 Name                 Path                                    
------------------------------------------------------------------------------
 unit:0:2:0           /dev/disk/by-path/pci-0000:04:00.1-fc-0x203600a0b85b5dd4-lun-0

Also tested SCSI pool startup using 'scsi_host' as the adapter type; it works well.

As the above steps produce the expected result, marking this as verified.

Comment 16 errata-xmlrpc 2015-03-05 07:46:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

