Bug 1337490 - Hot-plugs into root-port and downstream-port fail
Summary: Hot-plugs into root-port and downstream-port fail
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Laine Stump
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-19 10:11 UTC by Yang Yang
Modified: 2016-11-03 18:45 UTC (History)
7 users

Fixed In Version: libvirt-2.0.0-9.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 18:45:33 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2577 0 normal SHIPPED_LIVE Moderate: libvirt security, bug fix, and enhancement update 2016-11-03 12:07:06 UTC

Description Yang Yang 2016-05-19 10:11:42 UTC
Description of problem:
Under q35, hot-plugging a device into a downstream port fails with an error like the following:

# virsh attach-disk vm1-q35 /mnt/nfs2/virtio2.img vdc --targetbus virtio --subdriver qcow2 --address pci:0.6.0.0
error: Failed to attach disk
error: internal error: PCI bus is not compatible with the device at 0000:06:00.0. Device requires a standard PCI slot, which is not provided by bus 0000:06

# virsh attach-disk vm1-q35 /mnt/nfs2/virtio2.img vdc --targetbus virtio --subdriver qcow2 --address pci:0.6.0.0 --print-xml
<disk type='file'>
  <driver type='qcow2'/>
  <source file='/mnt/nfs2/virtio2.img'/>
  <target dev='vdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</disk>

However, if I add the above XML to the domain definition and then start the domain, it works without error.

The same error occurs when hot-plugging a scsi controller or nic into a downstream port.

Version-Release number of selected component (if applicable):
libvirt-1.3.4-1.el7.x86_64
qemu-kvm-rhev-2.6.0-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. start a vm with a couple of downstream ports
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'/>
<controller type='pci' index='2' model='pci-bridge'/>
<controller type='pci' index='3' model='pcie-root-port'/>
<controller type='pci' index='4' model='pcie-switch-upstream-port'/>
<controller type='pci' index='5' model='pcie-switch-downstream-port'/>
<controller type='pci' index='6' model='pcie-switch-downstream-port'/>
......snip.....
<controller type='pci' index='17' model='pcie-switch-downstream-port'/>
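As background for the reproduction step (an illustrative sketch, not part of the report): each `<controller>`'s `index` is the PCI bus number that devices attached to it use, and virsh's `--address pci:domain.bus.slot.function` argument refers to that bus. A minimal Python sketch, with a trimmed-down controller list and a hypothetical helper name, shows how the downstream ports above map to virsh address strings:

```python
import xml.etree.ElementTree as ET

# Trimmed-down controller list from step 1, wrapped in a <devices> element
# so it parses standalone (hypothetical wrapper).
domain_xml = """
<devices>
  <controller type='pci' index='5' model='pcie-switch-downstream-port'/>
  <controller type='pci' index='6' model='pcie-switch-downstream-port'/>
</devices>
"""

def hotpluggable_addresses(xml_text, domain=0, slot=0, function=0):
    """Return virsh --address strings for each downstream port:
    the controller's index becomes the bus number of the plugged device."""
    root = ET.fromstring(xml_text)
    addrs = []
    for c in root.iter('controller'):
        if c.get('model') == 'pcie-switch-downstream-port':
            bus = int(c.get('index'))
            addrs.append(f"pci:{domain}.{bus}.{slot}.{function}")
    return addrs

print(hotpluggable_addresses(domain_xml))  # ['pci:0.5.0.0', 'pci:0.6.0.0']
```

This is why step 2 below targets `pci:0.6.0.0`: bus 6 is the downstream port with `index='6'`.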

2. hotplug virtio disk into downstream port
# virsh attach-disk vm1-q35 /mnt/nfs2/virtio2.img vdc --targetbus virtio --subdriver qcow2 --address pci:0.6.0.0
error: Failed to attach disk
error: internal error: PCI bus is not compatible with the device at 0000:06:00.0. Device requires a standard PCI slot, which is not provided by bus 0000:06

3. edit domain xml, add following xml, then destroy/start vm
#virsh edit vm1-q35
<disk type='file'>
  <driver type='qcow2'/>
  <source file='/mnt/nfs2/virtio2.img'/>
  <target dev='vdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</disk>

# virsh destroy vm1-q35; virsh start vm1-q35
Domain vm1-q35 destroyed

Domain vm1-q35 started

4. check xml
# virsh dumpxml vm1-q35 | grep vdc -a6
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/nfs2/virtio2.img'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </disk>
    
5. attempt to hotplug scsi controller into downstream port
# cat scsi-controller.xml
<controller type='scsi' index='1'>
      <address type='pci' domain='0x0000' bus='7' slot='0x0' function='0x0'/>
    </controller>

# virsh attach-device vm1-q35 scsi-controller.xml
error: Failed to attach device from scsi-controller.xml
error: internal error: PCI bus is not compatible with the device at 0000:07:00.0. Device requires a standard PCI slot, which is not provided by bus 0000:07

6. attempt to hotplug nic into downstream port
# cat nic.xml 
<interface type='network'>
      <source network='default' bridge='virbr0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x0' function='0x0'/>
    </interface>

# virsh attach-device vm1-q35 nic.xml 
error: Failed to attach device from nic.xml
error: internal error: PCI bus is not compatible with the device at 0000:07:00.0. Device requires a standard PCI slot, which is not provided by bus 0000:07

Actual results:
Hot-plugging devices into a downstream port fails

Expected results:
Hot-plugging devices into a downstream port succeeds

Additional info:
2016-05-19 08:30:17.672+0000: 31896: error : virDomainPCIAddressFlagsCompatible:127 : internal error: PCI bus is not compatible with the device at 0000:07:00.0. Device requires a standard PCI slot, which is not provided by bus 0000:07

Comment 1 Yang Yang 2016-08-05 01:46:45 UTC
It happens as well when hot-plugging devices into a root-port.

Steps are as follows:
1. start a vm with 1 root-port
<controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x1f' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='ioh3420'/>
      <target chassis='3' port='0x10'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>

2. hot-plug 1 nic into root-port

# cat nic.xml 
<interface type='network'>
      <source network='default' bridge='virbr0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='3' slot='0' function='0x0'/>
    </interface>

# virsh attach-device vm1-q35 nic.xml 
error: Failed to attach device from nic.xml
error: internal error: PCI bus is not compatible with the device at 0000:03:00.0. Device requires a standard PCI slot, which is not provided by bus 0000:03

Comment 2 Laine Stump 2016-09-08 16:29:52 UTC
I see the problem:

1) All of the hotplug functions set flags to indicate the device needs a bus with legacy ("standard") PCI slots. Then they call virDomainPCIAddressEnsureAddr()


2) virDomainPCIAddressEnsureAddr() checks for a "compatible" bus type, but sets the "address is from Config" flag, which has the effect of accepting either PCI or PCIe buses. (i.e. so far we're doing okay). Then it calls...

3) virDomainPCIAddressReserveSlot(), but that function doesn't have an argument for "address is from Config"; it just knows we want a "legacy PCI slot". It then calls:

4) virDomainPCIAddressReserveAddr(), which *does* have a "address is from Config" argument, but since virDomainPCIAddressReserveSlot() doesn't know what the callers wanted, it assumes "address is internally auto-generated by libvirt". Because of this, our request for "legacy PCI" is interpreted very strictly, and the pcie-root-port fails the validation.

All of this code will be changing soon, but a simple fix is possible.
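The call chain above can be sketched in miniature (Python used purely for illustration; the real code is C inside libvirt, and every name here is a simplified stand-in, not the actual API). The middle function drops the "from config" flag, so the low-level check falls back to the strict interpretation and rejects a PCIe bus for a device requesting a legacy PCI slot:

```python
# Simplified stand-ins for libvirt's bus types and connect flags (hypothetical names).
PCI, PCIE = 'pci', 'pcie'

def reserve_addr(bus_type, want_legacy_pci, from_config):
    # With from_config=True the check is lenient: a manually assigned
    # address may land on either a conventional PCI or a PCIe bus.
    if from_config:
        return bus_type in (PCI, PCIE)
    # Auto-generated addresses are checked strictly against the request.
    return bus_type == PCI if want_legacy_pci else bus_type == PCIE

def reserve_slot_buggy(bus_type, want_legacy_pci):
    # Bug: no from_config parameter, so the caller's intent is lost and
    # reserve_addr() assumes the address was auto-generated (strict mode).
    return reserve_addr(bus_type, want_legacy_pci, from_config=False)

def reserve_slot_fixed(bus_type, want_legacy_pci, from_config):
    # Fix: forward the flag down to reserve_addr().
    return reserve_addr(bus_type, want_legacy_pci, from_config)

# Hotplugging a legacy-PCI device into a pcie-root-port (a PCIe bus):
print(reserve_slot_buggy(PCIE, want_legacy_pci=True))
print(reserve_slot_fixed(PCIE, want_legacy_pci=True, from_config=True))
```

The buggy path prints False (the hotplug is rejected, matching the "PCI bus is not compatible" error), while the fixed path prints True, which is the shape of the one-line fix described in comment 3.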

Comment 3 Laine Stump 2016-09-08 17:49:35 UTC
The fix is a single line, and has no potential for causing new problems. I will have the patch posted/reviewed/pushed upstream and backported within the next day (I just want to test it first :-)

The lack of this fix makes it impossible to hotplug devices into a PCI Express port on Q35. Since hotplug into legacy PCI slots also doesn't work on Q35, not having the fix means that hotplug of PCI devices won't work at all on Q35, so I'm proposing this as a blocker. If it's more appropriate as a 0-day or something, that's fine too.

Comment 4 Laine Stump 2016-09-12 20:35:05 UTC
Fix pushed upstream:

commit b87703cf79559157404667628802d7fe8f9f19a6
Author: Laine Stump <laine>
Date:   Fri Sep 9 15:26:34 2016 -0400

    conf: allow hotplugging "legacy PCI" device to manually addressed PCIe slot

Comment 7 Yang Yang 2016-09-18 10:25:22 UTC
Verified with libvirt-2.0.0-9.el7.x86_64 and qemu-kvm-rhev-2.6.0-25.el7.x86_64

1. Hotplug/hot-unplug virtio disk into/from root-port
#virsh dumpxml vm1-q35
<controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x1f' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='ioh3420'/>
      <target chassis='3' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>

# virsh attach-disk vm1-q35 /var/lib/libvirt/images/test.img vdb --subdriver qcow2 --address pci:0.3.0.0
Disk attached successfully

# virsh dumpxml vm1-q35 | grep vdb -a6
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/test.img'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>

check the disk in guest, it is usable
# mkfs.xfs /dev/vdb -f
meta-data=/dev/vdb               isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/vdb /mnt
# echo hello > /mnt/hello
# cat /mnt/hello 
hello
# umount /mnt

# virsh detach-disk vm1-q35 vdb
Disk detached successfully

check domain xml, vdb vanishes

2. Hot-plug/Hot-unplug nic into/from root-port
# virsh attach-device vm1-q35 nic.xml 
Device attached successfully

# cat nic.xml 
<interface type='network'>
      <source network='default' bridge='virbr0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='3' slot='0' function='0x0'/>
    </interface>

# virsh dumpxml vm1-q35 | grep inter
<interface type='network'>
      <mac address='52:54:00:3a:81:9a'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='e1000'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>

check in guest
# ifconfig
enp1s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet x.x.x.x  netmask 255.255.255.0  broadcast x.x.x.x
        inet6 fe80::5054:ff:fe3a:819a  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:3a:81:9a  txqueuelen 1000  (Ethernet)
        RX packets 9  bytes 1670 (1.6 KiB)
        RX errors 296  dropped 0  overruns 0  frame 296
        TX packets 10  bytes 2378 (2.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# virsh detach-device vm1-q35 nic.xml 
Device detached successfully

check in domain xml, nic vanishes

3. Hot-plug/Hot-unplug virtio disk into/from downstream port
# virsh dumpxml vm1-q35
<controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x1f' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='ioh3420'/>
      <target chassis='3' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pcie-switch-upstream-port'>
      <model name='x3130-upstream'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='5' model='pcie-switch-downstream-port'>
      <model name='xio3130-downstream'/>
      <target chassis='5' port='0x0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>

# virsh attach-disk vm1-q35 /var/lib/libvirt/images/test.img vdb --subdriver qcow2 --address pci:0.5.0.0
Disk attached successfully

# virsh dumpxml vm1-q35 | grep vdb -a6
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/test.img'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>

***The disk cannot be detected in the guest. Checked with qemu QE, who thinks it's caused by BZ1365613***

# virsh detach-disk vm1-q35 vdb
Disk detached successfully

check domain xml, vdb vanishes

4. Hot-plug/Hot-unplug nic into/from downstream port
# cat nic.xml 
<interface type='network'>
      <mac address='52:54:00:3a:81:9a'/>
      <source network='default' bridge='virbr0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </interface>

# virsh attach-device vm1-q35 nic.xml 
Device attached successfully

# ifconfig
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet x.x.x.x  netmask 255.255.255.0  broadcast x.x.x.x
        inet6 fe80::5054:ff:fe3a:819a  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:3a:81:9a  txqueuelen 1000  (Ethernet)
        RX packets 22  bytes 3443 (3.3 KiB)
        RX errors 7  dropped 0  overruns 0  frame 7
        TX packets 40  bytes 4948 (4.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# virsh detach-device vm1-q35 nic.xml 
Device detached successfully

check domain xml, nic vanishes

Got similar test results under OVMF.

Comment 8 Yang Yang 2016-09-18 10:36:07 UTC
Laine,

I attempted to hot-plug a nic into a pcie-root-port which is attached to a pcie-expander-bus (e.g. pcie-pxb --> pcie-root-port), but I cannot detect the nic in the guest. I then hot-unplugged the nic; the virsh command returns without error, but the nic is not removed from the domain xml. A virtio disk has the same problem.

I also attempted to hot-plug a nic into a downstream-port which is attached to a pcie-expander-bus (e.g. pcie-pxb --> pcie-root-port --> upstream-port --> downstream-port). The nic cannot be detected in the guest either, and after hot-unplugging, the nic xml still exists in the domain xml. A virtio disk has the same problem.

Steps are as follows:
1. start vm with following xml
<controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='3' model='pcie-expander-bus'>
      <model name='pxb-pcie'/>
      <target busNr='100'>
        <node>1</node>
      </target>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='ioh3420'/>
      <target chassis='4' port='0x0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='direct'>
      <mac address='52:54:00:65:18:d8'/>
      <source dev='eno1' mode='bridge'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>

2. hot-plug nic into root-port
# cat nic.xml 
<interface type='network'>
      <mac address='52:54:00:3a:81:9a'/>
      <source network='default' bridge='virbr0'/>
      <model type='e1000'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>

# virsh attach-device vm1-q35-Sec nic.xml 
Device attached successfully

check domain xml, there are 2 nics
# virsh dumpxml vm1-q35-Sec | grep inter -a6
      
    <interface type='direct'>
      <mac address='52:54:00:65:18:d8'/>
      <source dev='eno1' mode='bridge'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:3a:81:9a'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='e1000'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>
   
Checked in the guest: only 1 nic is found; the hot-plugged nic cannot be detected.
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.66.4.171  netmask 255.255.252.0  broadcast 10.66.7.255
        inet6 fe80::5054:ff:fe65:18d8  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:65:18:d8  txqueuelen 1000  (Ethernet)
        RX packets 133  bytes 10172 (9.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15  bytes 1986 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

3. hot-unplug the nic
# virsh detach-device vm1-q35-Sec nic.xml 
Device detached successfully

4. checked in domain xml. The nic xml still exists in domain xml

# virsh dumpxml vm1-q35-Sec | grep inter -a6
     
    <interface type='direct'>
      <mac address='52:54:00:65:18:d8'/>
      <source dev='eno1' mode='bridge'/>
      <target dev='macvtap0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:3a:81:9a'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='e1000'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>

Comment 9 Laine Stump 2016-09-18 18:23:46 UTC
libvirt is allowing the hotplug into the given port, and passing the command through to qemu. So if the guest isn't recognizing it, it's a problem with the combination of pcie-expander-bus + pcie-root-port + hotplug, either in the guest OS or in qemu.

(BTW, it's normal for the device to still be listed in the libvirt XML after an unplug in a situation like this - because the guest never saw the device in the first place, it hasn't done [whatever it is that guests do to respond to a hotplug request], so qemu hasn't been notified by the guest that it's finished unplugging the device, and thus qemu hasn't notified libvirt with a DEVICE_DELETED event, so libvirt still shows the device as in-use by the guest)

Marcel, do you have any advice or ideas?

(in the meantime, this isn't a separate problem from what this BZ was filed for, so you should open a new BZ saying that when you hotplug to a pcie-root-port connected to a pcie-expander-bus, the hotplug is successful but the guest doesn't see the new device).

Comment 10 Yang Yang 2016-09-19 02:44:07 UTC
Thanks Laine.

Opened qemu bug 1377160 to track the problem of pcie-pxb + pcie-root-port + hot-plug.

According to comment #7, moving this to VERIFIED status.

Comment 11 Laine Stump 2016-09-28 15:11:47 UTC
Marcel answered my question in Bug 1377160, so I'm clearing needinfo.

Comment 13 errata-xmlrpc 2016-11-03 18:45:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html
