Bug 594661

Summary: The example commands for "Find the devices with virsh" in section "11.2. Using SR-IOV" may not be correct for RHEL 6
Product: Red Hat Enterprise Linux 6
Component: doc-Virtualization_Administration_Guide
Reporter: Justin Clift <justin>
Assignee: Christopher Curran <ccurran>
QA Contact: ecs-bugs
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: low
Version: 6.0
CC: jskeoch
Target Milestone: rc
Keywords: Documentation
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2010-08-31 04:23:42 UTC

Description Justin Clift 2010-05-21 09:29:05 UTC
Description of problem:

In RHEL 6 (beta) x86_64, the example steps for "11.2 Using SR-IOV" don't seem to work as intended (using the same hardware being demonstrated - Intel 1GbE server adapters):

  http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6-Beta/html/Virtualization/sect-Para-virtualized_Windows_Drivers_Guide-How_SR_IOV_Libvirt_Works.html

The existing example has:

  6. Find the devices with virsh
  The libvirt service must find the device to add a device to a guest. Use the
  virsh nodedev-list command to list available host devices.

  # virsh nodedev-list | grep 8086
  pci_8086_10c9
  pci_8086_10c9_0
  pci_8086_10ca
  pci_8086_10ca_0
  [output truncated]

On the RHEL 6 beta (x86_64) hosts here, virsh lists the physical and virtual igb cards differently.

  # lspci | grep 82576
  0b:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
  0b:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
  0b:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.6 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:10.7 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:11.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:11.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:11.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:11.3 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:11.4 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  0b:11.5 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
  # lspci -n | grep 0b:00.0
  0b:00.0 0200: 8086:10c9 (rev 01)
  # lspci -n | grep 0b:10.0
  0b:10.0 0200: 8086:10ca (rev 01)
  # virsh nodedev-list | grep 8086
  #

None of the adapters are shown when searching for "8086".  Instead, grepping for the PCI address given above works (with ":" and "." replaced by underscores):

  # virsh nodedev-list | grep 0b_00_0
  pci_0000_0b_00_0
  # virsh nodedev-list | grep 0b_10_0
  pci_0000_0b_10_0
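
The mapping from lspci address to node device name can be sketched in shell (assuming PCI domain 0000, as in the output above):

```shell
# Convert an lspci short address (bus:slot.func) into the name format
# used by virsh nodedev-list on RHEL 6: pci_<domain>_<bus>_<slot>_<func>.
# The domain is assumed to be 0000.
addr="0b:10.0"
name="pci_0000_$(echo "$addr" | tr ':.' '__')"
echo "$name"   # prints pci_0000_0b_10_0
```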
  # virsh nodedev-dumpxml pci_0000_0b_00_0
  <device>
    <name>pci_0000_0b_00_0</name>
    <parent>pci_0000_00_01_0</parent>
    <driver>
      <name>igb</name>
    </driver>
    <capability type='pci'>
      <domain>0</domain>
      <bus>11</bus>
      <slot>0</slot>
      <function>0</function>
      <product id='0x10c9'>Intel Corporation</product>
      <vendor id='0x8086'>82576 Gigabit Network Connection</vendor>
    </capability>
  </device>


  # virsh nodedev-dumpxml pci_0000_0b_10_0
  <device>
    <name>pci_0000_0b_10_0</name>
    <parent>pci_0000_00_01_0</parent>
    <driver>
      <name>igbvf</name>
    </driver>
    <capability type='pci'>
      <domain>0</domain>
      <bus>11</bus>
      <slot>16</slot>
      <function>0</function>
      <product id='0x10ca'>Intel Corporation</product>
      <vendor id='0x8086'>82576 Virtual Function</vendor>
    </capability>
  </device>


  #

Then, the example for "8.  Detach the Virtual Functions" detaches first the physical and then the virtual device:

  # virsh nodedev-dettach pci_8086_10ca
  Device pci_8086_10ca dettached
  # virsh nodedev-dettach pci_8086_10ca_0
  Device pci_8086_10ca_0 dettached

In RHEL 6 this fails when the virtual device is detached after the physical one, because detaching the physical device with virsh appears to automatically detach its corresponding virtual functions as well:

  # virsh nodedev-dettach pci_0000_0b_00_0
  Device pci_0000_0b_00_0 dettached

  # virsh nodedev-dettach pci_0000_0b_10_0
  error: Could not find matching device 'pci_0000_0b_10_0'
  error: Node device not found

  #

However, reversing this (detaching the virtual, then physical) works.

After a reboot to reset things:

  # virsh nodedev-dettach pci_0000_0b_10_0
  Device pci_0000_0b_10_0 dettached

  # virsh nodedev-dettach pci_0000_0b_00_0
  Device pci_0000_0b_00_0 dettached

  #

As it turns out, detaching the physical device prevents the virtual one from being attached to a guest.  After detaching the physical device as above, attempting to start the VM gives:

  # virsh start test0
  error: Failed to start domain test0
  error: this function is not supported by the hypervisor: Failed to read product/vendor ID for 0000:0b:10.0

  #

Instead, if the physical is left attached, and only the virtual one detached, then the VM starts fine and the device is seen inside the VM.
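
For reference, the VF detached above was assigned to the guest with a PCI <hostdev> entry along these lines (a sketch of the libvirt hostdev format; the address values correspond to bus 11 = 0x0b, slot 16 = 0x10, function 0 from the dumpxml output above):

```xml
<hostdev mode='subsystem' type='pci' managed='no'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x10' function='0x0'/>
  </source>
</hostdev>
```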

From after a reboot again, to reset the state of things (starting with both physical and virtual attached):

  # virsh list --all
   Id Name                 State
  ----------------------------------
    - test0                shut off

  # virsh start test0
  error: Failed to start domain test0
  error: internal error unable to start guest: 18:40:02.375: debug : qemuSecurityDACSetProcessLabel:411 : Dropping privileges of VM to 0:0
char device redirected to /dev/pts/2
  Failed to assign device "hostdev0" : Device or resource busy
  Failed to deassign device "hostdev0" : Invalid argument
  Error initializing device pci-assign

  (then detach the virtual)
  # virsh nodedev-dettach pci_0000_0b_10_0
  Device pci_0000_0b_10_0 dettached

  # virsh start test0
  Domain test0 started

  #

Looks good.  An lspci from this virtual machine showing the attached virtual device:

  00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
  00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
  00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
  00:01.2 USB Controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
  00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
  00:02.0 VGA compatible controller: Cirrus Logic GD 5446
  00:03.0 RAM memory: Qumranet, Inc. Virtio memory balloon
  00:04.0 SCSI storage controller: Qumranet, Inc. Virtio block device
  00:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 20)
  00:06.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)


Version-Release number of selected component (if applicable):

RHEL 6 beta Virtualisation Docs online (April 2010):

  http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6-Beta/html/Virtualization/sect-Para-virtualized_Windows_Drivers_Guide-How_SR_IOV_Libvirt_Works.html


How reproducible:

Every time.

  
Actual results:

The instructions as written don't function as intended in RHEL 6.


Expected results:

Written examples that work when followed in RHEL 6.


Additional info:

Comment 2 Christopher Curran 2010-05-24 01:09:58 UTC
This is good information. I'll update the chapter.

Chris

Comment 3 RHEL Program Management 2010-06-07 15:56:10 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 5 Christopher Curran 2010-07-07 01:58:22 UTC
Modified in Build 35. Thanks again for the feedback.

Chris