(In reply to xuhan from comment #0)
> step 3:
> 42: 13 13 34 193847 PCI-MSI-edge
> ens5-TxRx-0
> 43: 5 7 7 5 PCI-MSI-edge ens5
> 8 <-- smp_affinity
smp_affinity is a bitmask, so 8 (binary 1000) means CPU3 is the target for the interrupt.
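As an illustration of the bitmask encoding (a sketch, not part of the report), this loop decodes an smp_affinity value into the CPUs it selects — each set bit N selects CPUN:

```shell
# Decode an smp_affinity bitmask into CPU numbers.
# 8 == binary 1000, so only bit 3 is set, i.e. CPU3.
mask=8
cpu=0
while [ "$mask" -gt 0 ]; do
    if [ $((mask & 1)) -eq 1 ]; then
        echo "CPU$cpu"
    fi
    mask=$((mask >> 1))
    cpu=$((cpu + 1))
done
# With mask=8 this prints: CPU3
```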
> 42: 13 13 34 975747 PCI-MSI-edge
> ens5-TxRx-0
> 43: 5 7 7 5 PCI-MSI-edge ens5
> 8 <-- smp_affinity
Tada, only CPU3's interrupt count increased.
> # cat /proc/irq/42/affinity_hint
> 0
It seems you're assuming affinity_hint should show something other than 0. What do you expect it to show, and what does it show on bare metal?
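For context (an assumed helper, not from the report): a convenient way to compare the two values is to list smp_affinity next to affinity_hint for every IRQ. IRQ_ROOT defaults to /proc/irq but can be pointed at a saved copy of the tree for offline inspection:

```shell
# List smp_affinity and affinity_hint side by side for each IRQ.
# IRQ_ROOT is an assumed knob so the loop can run against a snapshot
# of /proc/irq instead of the live tree.
IRQ_ROOT=${IRQ_ROOT:-/proc/irq}
for d in "$IRQ_ROOT"/[0-9]*; do
    irq=${d##*/}
    printf '%s: smp_affinity=%s affinity_hint=%s\n' \
        "$irq" \
        "$(cat "$d/smp_affinity")" \
        "$(cat "$d/affinity_hint" 2>/dev/null || echo '-')"
done
```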
> Expected results:
> irqbalance service could work properly.
I don't see how it's not working; please double-check the results and state exactly where it fails.
Description of problem:
irqbalance service does not work properly with 82599EB PF/VF.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-1.5.3-10.el7.x86_64
kernel-3.10.0-40.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Boot the guest with an 82599EB PF/VF:
# /usr/libexec/qemu-kvm -nodefaults -M pc -m 2G -cpu Nehalem \
    -smp 4,cores=2,threads=2,sockets=1 -boot menu=on -monitor stdio -vga qxl \
    -spice disable-ticketing,port=5931 \
    -drive file=/home/vfio-RHEL7.0-64.qcow2_v3,id=guest-img,if=none,cache=none,aio=native \
    -device virtio-blk-pci,scsi=off,drive=guest-img,id=os-disk,bootindex=1 \
    -device virtio-balloon-pci,id=balloon \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -qmp tcp:0:5555,server,nowait -serial unix:/tmp/guest-sock,server,nowait \
    -device vfio-pci,host=05:10.0,id=vf0
2. Start irqbalance:
# service irqbalance start
3. Check interrupts and smp_affinity on the guest:
# cat /proc/interrupts | grep ens5; \
  cat /proc/irq/42/smp_affinity; \
  sleep 120; \
  cat /proc/interrupts | grep ens5; \
  cat /proc/irq/42/smp_affinity

Actual results:
step 2:
# service irqbalance status
Redirecting to /bin/systemctl status irqbalance.service
irqbalance.service - irqbalance daemon
   Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled)
   Active: active (running) since Sun 2013-11-03 22:49:52 MST; 21min ago
 Main PID: 603 (irqbalance)
   CGroup: /system.slice/irqbalance.service
           └─603 /usr/sbin/irqbalance --foreground
Nov 03 22:49:52 localhost.localdomain systemd[1]: Started irqbalance daemon.
Nov 03 23:00:15 localhost.localdomain systemd[1]: Started irqbalance daemon.
Nov 03 23:01:49 localhost.localdomain systemd[1]: Started irqbalance daemon.

step 3:
 42:  13  13  34  193847  PCI-MSI-edge  ens5-TxRx-0
 43:   5   7   7       5  PCI-MSI-edge  ens5
8 <-- smp_affinity
 42:  13  13  34  975747  PCI-MSI-edge  ens5-TxRx-0
 43:   5   7   7       5  PCI-MSI-edge  ens5
8 <-- smp_affinity
# cat /proc/irq/42/affinity_hint
0

Expected results:
irqbalance service works properly.
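Step 3 above compares two /proc/interrupts snapshots by eye. A small helper (assumed, not part of the report) makes the per-CPU delta explicit; it hard-codes columns 2-5 because the guest in this report has 4 vCPUs:

```shell
# Given "before" and "after" lines for the same IRQ from /proc/interrupts,
# print how much each vCPU's counter grew between the snapshots.
irq_delta() {
    awk -v b="$1" -v a="$2" 'BEGIN {
        split(b, B); split(a, A);
        # field 1 is "IRQ:", fields 2-5 are the 4 per-CPU counters
        for (i = 2; i <= 5; i++)
            printf "CPU%d: +%d\n", i - 2, A[i] - B[i];
    }'
}

# Using the two snapshots from the report:
irq_delta \
  "42: 13 13 34 193847 PCI-MSI-edge ens5-TxRx-0" \
  "42: 13 13 34 975747 PCI-MSI-edge ens5-TxRx-0"
# last line printed: CPU3: +781900  (only CPU3, matching smp_affinity=8)
```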
Additional info:
# lspci -vvv -s 00:05.0 | grep -i MSI
	Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
	Capabilities: [a0] Express (v0) Endpoint, MSI 00