Bug 1349115 - [RFE] fc_host support in virtio-scsi guests, with support for live migration (QEMU)
Summary: [RFE] fc_host support in virtio-scsi guests, with support for live migration (QEMU)
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.4
Assignee: Fam Zheng
QA Contact: Xueqiang Wei
URL:
Whiteboard:
Depends On: NPIV_SAN_PASSTHROUGH_TO_GUEST 1553682 1553685 (view as bug list)
Blocks: 806907 1349117 1404963
 
Reported: 2016-06-22 17:53 UTC by Ademar Reis
Modified: 2022-03-13 14:04 UTC (History)
CC List: 25 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 1320621
: 1349117 (view as bug list)
Environment:
Last Closed: 2018-06-21 16:02:35 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Knowledge Base (Solution) 403413, last updated 2016-06-22 17:53:39 UTC
Red Hat Knowledge Base (Solution) 758463, last updated 2018-11-30 22:02:23 UTC

Description Ademar Reis 2016-06-22 17:53:40 UTC
+++ This bug was initially created as a clone of Bug #1320621 +++

1. What is the nature and description of the request?
   As PCI passthrough is rather painful, since all components (hardware, firmware, BIOS, OS) need to play well together, passing through an NPIV adapter would help solve this issue.
   This is already done by several other vendors (e.g. IBM AIX LPARs).

2. Why does the customer need this? (List the business requirements here)
   The reason is to have a virtual Fibre Channel HBA available in the VM, so that mapping storage can be done easily without requiring the RHEV admin to take action.
   Presenting FC devices other than disks to the VM is also sometimes needed (e.g. a backup server, which needs to handle tapes).
      
3. How would the customer like to achieve this? (List the functional requirements here)
   - Add an NPIV vPort to a physical FC adapter
   - Add this NPIV adapter to the VM
   - The VM should be able to use the NPIV adapter as if it were plain hardware
      
4. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.  
   Just follow the steps laid out above. If the RHEL system running in the VM can access and use the presented devices, it works.
   http://www.ibm.com/developerworks/aix/library/au-NPIV/ is also a good source on how it should work.
      
5. Is there already an existing RFE upstream or in Red Hat Bugzilla?
   Not that I am aware of. There is a similar BZ where this has been discussed: BZ #431454.
   See also https://bugzilla.redhat.com/show_bug.cgi?id=431454#c87 for some more explanation.
      
6. Does the customer have any specific timeline dependencies and which release would they like to target (i.e. RHEL5, RHEL6)?
   ASAP, as it is getting more difficult with each hardware generation to get a working environment.
   Also, with the RMRR restrictions introduced in RHEL 7, it is impossible to do PCI passthrough with modern HP hardware.
      
7. Is the sales team involved in this request and do they have any additional input?
   No
      
8. List any affected packages or components.
   RHEV-M and qemu/libvirt
      
9. Would the customer be able to assist in testing this functionality if implemented?  
   Yes

--- Additional comment from Yaniv Kaul on 2016-03-23 16:50:51 BRT ---

Dup of bug 431454 ?

--- Additional comment from Martin Tessun on 2016-03-24 04:32:45 BRT ---

Hi Yaniv,

No, as Bug 431454 discusses LUN passthrough with an NPIV adapter created on the hypervisor, whereas this RFE is for NPIV adapter passthrough: not a single LUN but, as discussed in Bug 431454, a complete NPIV adapter.

It was asked in https://bugzilla.redhat.com/show_bug.cgi?id=431454#c92 to create a separate RFE for this.

Cheers,
Martin

--- Additional comment from Yaniv Kaul on 2016-03-24 05:11:14 BRT ---

(In reply to Martin Tessun from comment #4)
> Hi Yaniv,
> 
> no, as Bug 431454 discusses a LUN passthrough with a NPV adapter being
> created on the hypervisor, whereas this RFE is for NPV adapter passthrough,
> so not a single LUN, but as discussed in Bug 431454 a complete NPV adapter.
> 
> It was asked in https://bugzilla.redhat.com/show_bug.cgi?id=431454#c92 to
> create a separate RFE for this.

Do we know that it does NOT work as a regular PCI passthrough device?
> 
> Cheers,
> Martin

--- Additional comment from Martin Tessun on 2016-03-24 05:34:54 BRT ---

Hi Yaniv,

(In reply to Yaniv Kaul from comment #6)
> (In reply to Martin Tessun from comment #4)
> > Hi Yaniv,
> > 
> > no, as Bug 431454 discusses a LUN passthrough with a NPV adapter being
> > created on the hypervisor, whereas this RFE is for NPV adapter passthrough,
> > so not a single LUN, but as discussed in Bug 431454 a complete NPV adapter.
> > 
> > It was asked in https://bugzilla.redhat.com/show_bug.cgi?id=431454#c92 to
> > create a separate RFE for this.
> 
> Do we know it does NOT work as regular PCI passthrough device?

Yes, it does not work that way: since it is a virtual device, PCI passthrough is not able to address it.

> > 
> > Cheers,
> > Martin

--- Additional comment from Michal Skrivanek on 2016-05-26 07:47:37 BRT ---

Martin, I tried to understand the various NPIV-related bugs... there are a bit too many around. :)
But IIUC, this is what bug 1270581 was about. The only thing missing there is the actual creation of the vHBA, which needs to be done separately on the host. But once it exists, it will be seen in the hostdev UI and can be passed through as a SCSI adapter into a VM.
Can you confirm?

--- Additional comment from Martin Tessun on 2016-05-27 04:42:51 BRT ---

Hi Michal,

AFAIK this will not work (yet), as qemu-kvm is missing this feature. See BZ #834514 for some more details.

But as soon as this is sorted, indeed, it will probably work this way.

Cheers,
Martin

Comment 1 Ademar Reis 2016-08-31 20:15:02 UTC
*** Bug 834514 has been marked as a duplicate of this bug. ***

Comment 2 Stefan Hajnoczi 2016-09-21 14:49:56 UTC
This feature is best implemented at the libvirt level where LUN hotplug can result in hot adding/removing LUNs at the QEMU level.  In order to support this NPIV use case libvirt will also have to add new syntax.  See bz#1349117 for the libvirt bug.

Comment 3 Martin Tessun 2016-09-22 09:01:04 UTC
Hi Stefan,

I don't agree with this. It is not about LUN passthrough; the NPIV port should be presented to the guest OS as an adapter, as is done with physical adapter passthrough.

As already mentioned, please see e.g. http://www.ibm.com/developerworks/aix/library/au-NPIV/ for how it should work.

As qemu is not yet able (AFAIK) to forward an NPIV port to the guest, I think this needs to be added as functionality to qemu.

So I agree that we also need some libvirt changes to tell qemu which NPIV port should be provided to the guest, but qemu also needs to know how to present this NPIV port to the guest, which needs to be implemented as well.

So to describe the workflow:

1. NPIV created on OS level
2. the vPort is forwarded to the guest OS
3. The guest OS initializes this device and does the FC-scanning, etc.

I think that (2) is not yet implemented in qemu as of now.

So put to libvirt / qemu architecture:
- libvirt to create N-Port with WWNN/WWPN as requested
- libvirt to start qemu and "tell" qemu which virtual device (virtual N-Port) to "forward" to the guest 
- qemu to take control of this vN-Port and present it to the guest OS accordingly
- GuestOS to initialize the vN-Port correctly (FC-LIP, scanning, etc)
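
The first libvirt step above (creating the N-Port with a given WWNN/WWPN) is already supported today via libvirt's node-device API. As a minimal sketch, the parent HBA name and the WWN values below are hypothetical placeholders, not values taken from this bug:

```xml
<!-- vhba.xml: definition of a vHBA (NPIV vPort) on the host.
     scsi_host5 and the WWNs are placeholders; substitute the
     NPIV-capable physical HBA and WWNs for your environment. -->
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>20000000c9831b4b</wwnn>
      <wwpn>10000000c9831b4b</wwpn>
    </capability>
  </capability>
</device>
```

Such a definition is fed to `virsh nodedev-create vhba.xml`; the resulting virtual scsi_host can then be listed with `virsh nodedev-list --cap fc_host`. The open part of the workflow is steps 2-4, i.e. handing that vPort to qemu.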

Please let me know, if you need to discuss this further.

Kind regards,
Martin

Comment 4 Stefan Hajnoczi 2016-09-26 12:29:27 UTC
(In reply to Martin Tessun from comment #3)
> 3. The guest OS initializes this device and does the FC-scanning, etc.

I don't think FibreChannel scanning is possible since the virtio-scsi device is not a FC HBA.

QEMU emulates a SCSI target.  The libvirt solution involves automatically adding/removing LUNs to the QEMU target as they change on the host's NPIV port.

The result is that the LUNs in the guest always reflect what is visible on the NPIV port without manually reconfiguring libvirt/QEMU.

Can you explain why this doesn't meet your requirements?

Comment 5 Martin Tessun 2016-09-26 13:27:48 UTC
Hi Stefan,

(In reply to Stefan Hajnoczi from comment #4)
> (In reply to Martin Tessun from comment #3)
> > 3. The guest OS initializes this device and does the FC-scanning, etc.
> 
> I don't think FibreChannel scanning is possible since the virtio-scsi device
> is not a FC HBA.

It should not be a virtio-scsi device, but an LPFC/QLogic device (depending on the parent device).

As I have not seen it implemented in Linux, I cannot say whether the lpfc/qla drivers detect that only a virtual function is forwarded to the guest and skip the card initialisation, or whether it should be done differently.

Within AIX you do not really notice whether the HBA is physical or virtualized, as both are always represented as fcp (Fibre Channel port).

From the guest OS perspective there is no difference to a physical adapter on the AIX side.

> 
> QEMU emulates a SCSI target.  The libvirt solution involves automatically
> adding/removing LUNs to the QEMU target as they change on the host's NPIV
> port.

Ah, OK. So if I understand correctly, we provide a virtio-scsi device to the guest, and libvirt automatically "attaches" the LUNs presented on the NPIV port to the guest.

So just some questions here:
* What about "non-disk" devices, like tapes and changers? Will they also work?
* Does this virtual adapter support all possible SCSI commands (i.e., does it forward INQUIRY, SCSI-3 reservations, etc.)?

> 
> The result is that the LUNs in the guest always reflect what is visible on
> the NPIV port without manually reconfiguring libvirt/QEMU.

Ack. Sounds like a possible solution to me, as long as all "types" of devices can be correctly "seen" in the guest OS.

> 
> Can you explain why this doesn't meet your requirements?

Looks like I did not fully understand your implementation approach. It sounded to me as though qemu would need to be told to attach the new disks, but I missed the automation part.

Still, it would be nice to check whether this works in a generic way (not only for disks).

Cheers,
Martin

Comment 6 Stefan Hajnoczi 2016-09-27 16:47:37 UTC
(In reply to Martin Tessun from comment #5)
> Hi Stefan,
> 
> (In reply to Stefan Hajnoczi from comment #4)
> > (In reply to Martin Tessun from comment #3)
> > > 3. The guest OS initializes this device and does the FC-scanning, etc.
> > 
> > I don't think FibreChannel scanning is possible since the virtio-scsi device
> > is not a FC HBA.
> 
> It should not be a virtio-scsi device, but a LPFC/QLOGIC device (depending
> on the parent device).
> 
> As I have not seen it implemented in Linux, I am not able if the lpfc/qla
> drivers do detect that there is only a virtual function forwarded to the
> guest and skips the card initialisation or if it should be done differently.
> 
> Within AIX you do not really notice if the HBA is a physical provided or a
> virtualized, as they are always represented as fcp (fibre channel port).
> 
> From the guest OS perspective there is no difference to a physical adapter
> on AIX side.

The Linux lpfc driver has PCI SR-IOV support.  You can pass through the virtual function to the guest and get the behavior you want (in theory).  I have never tried it, so I don't know exactly how to do it.
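
Passing such a virtual function through would use libvirt's ordinary PCI hostdev syntax; as a sketch, the PCI address below is a placeholder for wherever the lpfc virtual function appears on a given host:

```xml
<!-- Guest XML fragment: PCI passthrough of an SR-IOV virtual function.
     The address is a placeholder; substitute the lpfc VF's
     domain/bus/slot/function as reported by lspci on the host. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```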

I'll need to think about your other questions regarding limitations on the SCSI commands that can be sent through QEMU.  I discussed an alternative software NPIV approach with Paolo Bonzini that may be closer to the virtual Fibre Channel that POWER and Hyper-V offer; it wouldn't go through QEMU's emulated SCSI target.

Comment 12 Paolo Bonzini 2016-12-15 11:52:57 UTC
I'm recycling this bug for fc_host support in QEMU; a separate bug is needed for the kernel.

Bug 1349117 is now the tracker.

Comment 13 Paolo Bonzini 2016-12-15 14:18:40 UTC
New command line option for QEMU:

-device virtio-scsi-pci,
   {primary,secondary}_{wwnn,wwpn}=0x1234,fc_host=off|primary|secondary
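
Expanded into a full invocation, this proposed syntax might look like the sketch below. Note that this option set was a design proposal within this bug; it is not confirmed to exist in any released QEMU, and the WWN values are placeholders:

```
# Hypothetical invocation of the proposed fc_host option (not a released
# QEMU feature); primary_* values are placeholder WWNs.
qemu-system-x86_64 \
    -device virtio-scsi-pci,primary_wwnn=0x2000000000000001,primary_wwpn=0x1000000000000001,fc_host=primary \
    ...
```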

Comment 17 Fam Zheng 2018-05-16 09:10:49 UTC
Paolo, could you answer that?

Comment 18 Paolo Bonzini 2018-05-18 13:14:46 UTC
I think this bug is more or less replaced by bug 1553682 and bug 1553685.

Comment 19 Ademar Reis 2018-06-21 16:02:35 UTC
(In reply to Paolo Bonzini from comment #18)
> I think this bug is more or less replaced by bug 1553682 and bug 1553685.

Looks like that's the case indeed, and we need to clean up some of these BZs, so I'm closing this one.

