Bug 1012804 - IRQs of passthru MSI nic device are not correctly distributed on RHEL5.10 (32-bit) guest
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assigned To: Alex Williamson
QA Contact: Virtualization Bugs
Reported: 2013-09-27 03:44 EDT by huiqingding
Modified: 2014-06-17 23:38 EDT
CC List: 8 users

Fixed In Version: qemu-kvm-1.5.3-17.el7
Doc Type: Bug Fix
Last Closed: 2014-06-13 06:55:15 EDT
Type: Bug


Attachments: None
Description huiqingding 2013-09-27 03:44:43 EDT
Description of problem:
Pass through a NIC that only supports MSI, flood-ping an external box, and modify smp_affinity: the IRQs are only ever distributed to CPU0, regardless of the affinity mask.

Version-Release number of selected component (if applicable):
The host kernel is kernel-3.10.0-23.el7.x86_64
The version of qemu-kvm is qemu-kvm-rhev-1.5.3-6.el7.x86_64
The guest kernel is kernel-2.6.18-371.el5PAE

How reproducible:
100%

Steps to Reproduce:
1. Boot a guest and passthru a 82579LM Gigabit nic card which only supports MSI.
/usr/libexec/qemu-kvm -M pc -cpu SandyBridge -enable-kvm -m 2048 -smp 4,sockets=2,cores=2,threads=1 -name rhel5.10-32 -uuid 6afa5f93-2d4f-420f-81c6-e5fdddbd1c83 -drive file=/home/RHEL-Server-5-10.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,serial=40c061dd-5d60-4fc5-865f-55db700407f0,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -device pci-assign,host=00:19.0 -vnc :1  -monitor stdio
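
Optionally, to confirm on the host that the card really is MSI-only before assigning it, a quick check (assuming the device is at 00:19.0 as in the command line above):
#lspci -vvv -s 00:19.0 | grep -i msi
An MSI-only device shows a Message Signalled Interrupts capability but no MSI-X line.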

2. Stop irqbalance on guest
service irqbalance stop

3. On guest, flood ping remote box
#ping -f 10.66.5.4

4. On remote box, flood ping guest
#ping -f 10.66.110.242

5. Set smp_affinity to be 1
#echo 1 > /proc/irq/XX/smp_affinity
Check the interrupts number of each vcpu.
#cat /proc/interrupts

6. Set smp_affinity to be 2
#echo 2 > /proc/irq/XX/smp_affinity
Check the interrupts number of each vcpu.
#cat /proc/interrupts

7. Set smp_affinity to be 4
#echo 4 > /proc/irq/XX/smp_affinity
Check the interrupts number of each vcpu.
#cat /proc/interrupts
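
For reference, smp_affinity is a hexadecimal CPU bitmask (bit n selects CPU n), which is why echoing 1, 2 and 4 targets CPU0, CPU1 and CPU2 respectively. To watch where the interrupts actually land after each change, something like the following works (mirroring the grep used in the verification below; -d highlights the counters that move):
#watch -d -n 1 "grep -i 'cpu\|eth0' /proc/interrupts"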


Actual results:
In step 6 and step 7, the IRQs are still only distributed to CPU0.

Expected results:
Step6, the IRQs should be distributed to CPU1.
Step7, the IRQs should be distributed to CPU2.

Additional info:
Comment 2 huiqingding 2013-09-27 03:57:41 EDT
I found Bug 919761 is similar to this problem, so I set the component of this bug to "qemu-kvm". If the component is not correct, please correct it. Thanks.
Comment 3 Alex Williamson 2013-12-19 16:54:35 EST
This appears to be a duplicate of bug 1025477; the only difference is the RHEL5 guest.  Moving to ON_QA for testing.  Note that pci-assign is not supported on RHEL7; vfio-pci should be used for verification.
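
For verification with vfio-pci, a minimal host-side rebinding sketch, assuming the NIC is still at host address 00:19.0 and that 8086 1502 is its vendor/device ID (it can be confirmed with lspci -n -s 00:19.0):
# modprobe vfio-pci
# echo 0000:00:19.0 > /sys/bus/pci/devices/0000:00:19.0/driver/unbind
# echo 8086 1502 > /sys/bus/pci/drivers/vfio-pci/new_id
The guest is then started with -device vfio-pci,host=00:19.0 in place of -device pci-assign,host=00:19.0.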
Comment 5 Chao Yang 2014-01-21 04:07:58 EST
Verified as passed in a freshly installed rhel5.10 i686 guest with qemu-kvm-1.5.3-38.el7.x86_64.

Actual Result:

# cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3       
  0:     114865      12074      12061      12060    IO-APIC-edge  timer
  1:        118         16          2         14    IO-APIC-edge  i8042
  6:          0          0          3          1    IO-APIC-edge  floppy
  8:          0          0          0          1    IO-APIC-edge  rtc
  9:          0          0          0          0   IO-APIC-level  acpi
 12:        739         26         29         29    IO-APIC-edge  i8042
 15:        618         54        418         62    IO-APIC-edge  ide1
177:          0          0          0          0       PCI-MSI-X  virtio0-config
185:       6399       1555          0          0       PCI-MSI-X  virtio0-requests
193:        215          0          0        143         PCI-MSI  eth0

# cat /proc/irq/193/smp_affinity 
00000001

# cat /proc/interrupts | grep -i 'cpu\|eth0'
           CPU0       CPU1       CPU2       CPU3       
193:      14859          0          0        143         PCI-MSI  eth0

# echo 2 > /proc/irq/193/smp_affinity 
# cat /proc/irq/193/smp_affinity
00000002
# cat /proc/interrupts | grep -i 'cpu\|eth0'
           CPU0       CPU1       CPU2       CPU3       
193:      69089      26035          0        143         PCI-MSI  eth0
# cat /proc/interrupts | grep -i 'cpu\|eth0'
           CPU0       CPU1       CPU2       CPU3       
193:      69089      30103          0        143         PCI-MSI  eth0


# echo 4 > /proc/irq/193/smp_affinity 
# cat /proc/irq/193/smp_affinity
00000004
# cat /proc/interrupts | grep -i 'cpu\|eth0'
           CPU0       CPU1       CPU2       CPU3       
193:     215344      67496      40597        143         PCI-MSI  eth0
# cat /proc/interrupts | grep -i 'cpu\|eth0'
           CPU0       CPU1       CPU2       CPU3       
193:     215344      67496      46630        143         PCI-MSI  eth0


So, this issue has been fixed.
Comment 7 Ludek Smid 2014-06-13 06:55:15 EDT
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.
