Bug 1384374
| Summary: | L2 network latency has higher 'Max latency value' with vhostuser mq in KVM-RT environment | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Pei Zhang <pezhang> |
| Component: | kernel-rt | Assignee: | pagupta |
| kernel-rt sub component: | KVM | QA Contact: | Virtualization Bugs <virt-bugs> |
| Status: | CLOSED NOTABUG | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | bhu, chayang, hhuang, juzhang, knoel, michen, pezhang, virt-maint, xfu |
| Version: | 7.3 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-12-06 04:47:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description: Pei Zhang, 2016-10-13 08:05:36 UTC
Comment 7 (pagupta):

Hello Pei Zhang,

In the configuration I can see the OVS PMD threads are set to run on isolated physical cores with mask 15554 (cores 2, 4, 6, 8, 10, 12, 14, 16). Could you please check that the PMD threads are pinned to separate isolated cores and that FIFO priority is assigned, as described below?

In the host: identify the OVS PMD threads, pin them to (different) isolated cores, and assign FIFO:95 to them. You can run top as shown below to identify the PMD threads; they are the threads taking 100% of a CPU, with names like "pmd54".

# top -d1 -H               (to identify the PMD threads)
# taskset -cp CORE-NR TID  (to pin a PMD thread to core CORE-NR)
# chrt -fp 95 TID          (to assign priority FIFO:95)

If the step above does not help, please check which NUMA node the physical NICs are bound to, and also get the command-line parameters of Open vSwitch:

# hwloc-ls -v
# ps -ef | grep -i ovs-vswitchd

Best regards,
Pankaj

Comment 8 (Pei Zhang):

(In reply to pagupta from comment #7)

Pankaj, thanks for your suggestions. The PMD threads are pinned to separate isolated cores, but I had not assigned priority FIFO:95 on all 8 of those cores; that is why the max latency was high. I re-tested with these steps and the latency is now much lower:

2q-rt, running 12 hours: min=10.325, avg=12.448, max=75.781 (us)

Best regards,
Pei

Comment 9 (pagupta):

Hello Pei Zhang,

Thanks for the confirmation. So it now matches, or betters, the non-RT results in comment 0:

// snippet from the original issue
          Min(us)  Avg(us)  Max(us)
2q-rt:    10.322   11.375   6185.621
2q-nonrt: 10.212   11.695   1276.902

Thanks,
Pankaj

Comment (pagupta):

Closing this BZ as per comment 8. If you feel the issue persists, please reopen it.

Thanks,
Pankaj

Comment (Pei Zhang):

(In reply to pagupta from comment #9)

Pankaj, I finished the 24-hour testing and the max latency stays low:

2q-rt, running 24 hours: min=10.274, avg=13.149, max=74.046 (us)

So I agree with you, it's not a bug. Thank you.

Best regards,
Pei
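For anyone reproducing the tuning, below is a minimal shell sketch of the procedure from comment 7: identify the OVS-DPDK PMD threads, pin each to its own isolated core, raise them to FIFO:95, and check NIC NUMA placement. The core list and the interface name "p1p1" are illustrative placeholders, not values taken from this bug.

```sh
#!/bin/bash
# Sketch only: pin each OVS PMD thread ("pmdNN", typically at ~100% CPU)
# to one isolated core and give it SCHED_FIFO priority 95, per comment 7.
# PMD_CORES is a placeholder; substitute your own isolated-core list
# (here, the cores behind mask 15554). Run as root.
PMD_CORES=(2 4 6 8 10 12 14 16)

i=0
for tid in $(ps -eLo tid,comm | awk '$2 ~ /^pmd/ {print $1}'); do
    taskset -cp "${PMD_CORES[$i]}" "$tid"   # pin PMD thread to one core
    chrt -fp 95 "$tid"                      # assign priority FIFO:95
    i=$((i + 1))
done

# If latency is still high, check which NUMA node the NIC sits on
# (-1 means the platform does not expose it). "p1p1" is a placeholder.
cat /sys/class/net/p1p1/device/numa_node
```

The sysfs numa_node check is a lighter-weight alternative to walking the full `hwloc-ls -v` topology when you only need the NIC's node.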