Bug 1332563 - tuned-profiles-nfv: accommodate new ktimersoftd thread
Summary: tuned-profiles-nfv: accommodate new ktimersoftd thread
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: tuned
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jaroslav Škarvada
QA Contact: Tereza Cerna
URL:
Whiteboard:
Depends On:
Blocks: kvm-rt-tuned 1273048 1400961 1440663
 
Reported: 2016-05-03 13:20 UTC by Luiz Capitulino
Modified: 2017-08-01 12:32 UTC
CC: 20 users

Fixed In Version: tuned-2.8.0-1.el7
Doc Type: Enhancement
Doc Text:
The priority of the "ktimersoftd" and "ksoftirqd" kernel threads has been increased, which improves Real Time kernel performance when using the tuned service.
Clone Of:
Clones: 1440663
Environment:
Last Closed: 2017-08-01 12:32:51 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2102 0 normal SHIPPED_LIVE tuned bug fix and enhancement update 2017-08-01 16:07:33 UTC

Description Luiz Capitulino 2016-05-03 13:20:32 UTC
Description of problem:

The RHEL7.3 RT kernel has a new per-CPU kernel thread called ktimersoftd. This thread has fifo:1 priority, which is the same priority we're assigning to KVM's vCPU threads.

This is what the KVM-RT per-CPU threads look like today:

   167  FF  99  [posixcputmr/15] *
   164  FF  99  [migration/15] *
   163  FF   3  [rcuc/15] *
   166  FF   2  [ksoftirqd/15] *
  3272  FF   1  qemu-kvm *
   165  FF   1  [ktimersoftd/15] *

Maybe we should bump rcuc, ksoftirqd, and ktimersoftd by 1.

Version-Release number of selected component (if applicable): 3.10.0-382.rt56.261.el7.x86_64
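
For reference, a per-CPU listing like the one above can be produced with something along these lines (just a sketch, the exact command used for the listing isn't recorded here; a tool like tuna would also work):

# ps -eLo pid,class,rtprio,comm,psr | awk '$5 == 15'

The awk filter keeps only the threads whose last-used CPU (psr) is the isolated core of interest, here CPU 15.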

Comment 1 Luiz Capitulino 2016-05-09 14:25:13 UTC
I've bumped the RT prio for per-cpu threads in the following way (in comparison to what I posted in the description):

   167  FF  99  [posixcputmr/15] *
   164  FF  99  [migration/15] *
   163  FF   4  [rcuc/15] *
   166  FF   3  [ksoftirqd/15] *
   165  FF   2  [ktimersoftd/15] *

And the qemu-kvm thread keeps fifo:1.

Before this change I was seeing spikes of max=40us in the multiple-VMs test case. After this change, I got max=21us (which is considered good). This seems to indicate that the change is good, but more testing is necessary.

However, I don't know what the relationship between ksoftirqd and ktimersoftd is, so I don't know whether, for example, they should have the same priority.

Comment 2 Clark Williams 2016-05-11 15:02:35 UTC
Luiz,

We should take a look at who is raising the timer softirq. If it is something that's part of our KVM-RT stack, we may want the timer softirq to have a higher priority than the default ksoftirqd thread.

Comment 3 Luiz Capitulino 2016-05-11 15:13:45 UTC
Oh, I raised it myself by hand :)

The KVM-RT tuned profile is responsible for the settings I showed in the description (rcuc fifo:3 and ksoftirqd fifo:2). The priorities in comment 1 were set by hand for testing.
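
The exact commands aren't recorded here, but setting the priorities by hand would look something like this (illustrative sketch only, using the PIDs from the comment 1 listing):

# chrt -f -p 4 163   # rcuc/15
# chrt -f -p 3 166   # ksoftirqd/15
# chrt -f -p 2 165   # ktimersoftd/15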

I can test with ktimersoftd > ksoftirqd and, if that works fine, I can post a patch for the KVM-RT profile to make this the default.

Comment 4 Beth Uptagrafft 2016-06-15 17:48:25 UTC
Luiz, any updates on this BZ?

Comment 5 Luiz Capitulino 2016-06-15 18:04:42 UTC
Not yet. What's pending here is a 24-hour test run to confirm that raising the ktimersoftd thread priority won't cause any regressions.

I haven't done this yet for three reasons:

 - I was triggering bug 1328890 in my test run (this is now fixed)
 - I'm triggering a new issue where my VMs lose networking (I'm debugging this now)
 - PTO time (I may have more to come)

So, as the last two items are still in progress, I may not have an update for the next week or so.

Comment 6 Luiz Capitulino 2016-08-31 14:55:50 UTC
Jaroslav,

Before I start, let me say that it's totally my fault that this BZ fell through the cracks. But as it turns out, we need it for 7.3.

We have to make a change to the realtime-virtual-host profile so that the kernel thread priorities on an isolated core will look like this:

   135  FF  99  [posixcputmr/13] *
   132  FF  99  [migration/13] *
   131  FF   4  [rcuc/13] *
   133  FF   3  [ktimersoftd/13] *
   134  FF   2  [ksoftirqd/13] *

This is needed to accommodate the new ktimersoftd thread; otherwise its default priority will conflict with the vCPU thread priority on an isolated core.

Here's the change we have to make:

--- tuned.conf.orig     2016-08-31 08:21:50.978757302 -0400
+++ tuned.conf  2016-08-31 09:34:55.618890005 -0400
@@ -33,10 +33,13 @@ isolated_cores_expanded=${f:cpulist_unpa
 group.ksoftirqd=0:f:2:*:ksoftirqd.*
 
 # for i in `pgrep rcuc` ; do grep Cpus_allowed_list /proc/$i/status ; done
-group.rcuc=0:f:3:*:rcuc.*
+group.rcuc=0:f:4:*:rcuc.*
 
 # for i in `pgrep rcub` ; do grep Cpus_allowed_list /proc/$i/status ; done
-group.rcub=0:f:3:*:rcub.*
+group.rcub=0:f:4:*:rcub.*
+
+# for i in `pgrep ktimersoftd` ; do grep Cpus_allowed_list /proc/$i/status ; done
+group.ktimersoftd=0:f:3:*:ktimersoftd.*
 
 [script]
 script=script.sh
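
For context, my reading of the group.* lines in the tuned scheduler plugin (worth double-checking against the tuned documentation) is:

  group.<name> = <rule_prio>:<sched>:<rt_prio>:<affinity>:<regex>

where <rule_prio> is the order in which the rules are applied, <sched> is the scheduling policy (f = SCHED_FIFO, r = SCHED_RR, o = SCHED_OTHER), <rt_prio> is the real-time priority to assign, <affinity> is the CPU affinity mask ("*" means leave it unchanged), and <regex> is matched against the thread name. So group.ktimersoftd=0:f:3:*:ktimersoftd.* makes every thread matching ktimersoftd.* SCHED_FIFO with priority 3, which gives the priority layout shown above.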

Comment 15 Luiz Capitulino 2016-09-12 18:29:23 UTC
There are two ways to reproduce this bug:

1. Simply check the ktimersoftd and ksoftirqd kernel threads priority

On current (not-fixed) profile:

# ps axo pid,class,rtprio,comm | grep ktimersoft
     4 FF       1 ktimersoftd/0
     ...

# ps axo pid,class,rtprio,comm | grep softirq
     3 FF       2 ksoftirqd/0
     ...

Here we see that both ksoftirqd and ktimersoftd are SCHED_FIFO tasks, but ksoftirqd has a greater RT priority than ktimersoftd. What we want is this (fixed-profile output):

# ps axo pid,class,rtprio,comm | grep ktimersoft
     4 FF       3 ktimersoftd/0
     ...

# ps axo pid,class,rtprio,comm | grep softirq
     3 FF       2 ksoftirqd/0

2. Try to reproduce one of the issues we think are possible

This will be a bit hard to do, and will require you to set up a system for KVM-RT. So let's only do this if it turns out to be really necessary.

Comment 17 Jaroslav Škarvada 2017-03-21 09:48:00 UTC
Upstream commit fixing the problem:
https://github.com/redhat-performance/tuned/commit/3ca7cfceb155104b73144826af35e42a363b7072

Available for preliminary testing in tuned-*-2.7.1-1.20170321git3ca7cfce.el7 from:
https://jskarvad.fedorapeople.org/tuned/devel/repo/

Comment 19 Luiz Capitulino 2017-03-21 14:51:48 UTC
(In reply to Jaroslav Škarvada from comment #17)

> Available for preliminary testing in tuned-*-2.7.1-1.20170321git3ca7cfce.el7
> from:
> https://jskarvad.fedorapeople.org/tuned/devel/repo/

Works as expected, also passed short duration tests.

Comment 22 Tereza Cerna 2017-04-12 09:03:02 UTC
This is only a sanity check that tuned can set the priority of a process named ktimersoftd.


I did these steps:

# tuned-adm profile realtime-virtual-host

# cat ktimersoftd 
#!/bin/bash
while true
do
sleep 3600
done

# chmod a+rx ktimersoftd

# ./ktimersoftd &
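
Optionally, the policy and priority of the started process can also be cross-checked directly, e.g.:

# pgrep -x ktimersoftd | xargs -n1 chrt -p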


=========================================
Verified in:
    tuned-2.8.0-1.el7.noarch
    tuned-profiles-nfv-2.8.0-1.el7.noarch
PASS
=========================================

# ps axo pid,class,rtprio,comm | grep ktimersoft
16976 FF       3 ktimersoftd

# ps axo pid,class,rtprio,comm | grep softirq
    3 FF       2 ksoftirqd/0
   13 FF       2 ksoftirqd/1
   17 FF       2 ksoftirqd/2
   21 FF       2 ksoftirqd/3


=========================================
Reproduced in:
    tuned-2.7.1-3.el7_3.1.noarch
    tuned-profiles-nfv-2.7.1-3.el7_3.1.noarch
FAIL
=========================================

# ps axo pid,class,rtprio,comm | grep ktimersoft
17361 TS       - ktimersoftd

# ps axo pid,class,rtprio,comm | grep softirq
    3 FF       2 ksoftirqd/0
   13 FF       2 ksoftirqd/1
   17 FF       2 ksoftirqd/2
   21 FF       2 ksoftirqd/3

Comment 24 Pei Zhang 2017-04-12 10:18:21 UTC
Hi Luiz,

Before QE verifies the functionality, I'd like to confirm the testing method.

I noticed you mentioned the max latency of multiple VMs in comment 1, so is this the check point for this bug? If so, it seems it will be hard to reproduce: when I did the latency testing without this fix, using tuned-2.7.1-5.20170314git92d558b8.el7.noarch, the max latency was already < 20us.

Running for 12 hours and booting 4 VMs at the same time, their latency values are:
min(us)  avg(us)  max(us)
00005    00006    00011
00005    00006    00012
00005    00006    00012
00005    00006    00012


Could you please share more details or methods about how to verify the functionality? Thanks.



Best Regards,
Pei

Comment 25 Luiz Capitulino 2017-04-12 19:39:21 UTC
Pei,

I'm not sure what I said in comment 1 makes sense. Before the fix for this issue, ktimersoftd had the same priority as the vCPU thread in the host. This can have two negative implications:

1. If ktimersoftd becomes runnable, it won't execute until the vCPU thread relinquishes the CPU, which could be forever. This can lead to a bad system state.

2. If ktimersoftd becomes runnable while the vCPU thread has relinquished the CPU, and the vCPU thread then becomes runnable again, the vCPU thread will have to wait for the ktimersoftd thread to block.

Item 2 could be the spike I was seeing, but I never confirmed it, and it's an extremely hard-to-reproduce scenario.

The way I recommend verifying this BZ is just to check that ktimersoftd has the expected priority, which is what Tereza did and which is listed in comment 6.

Comment 26 Pei Zhang 2017-04-13 00:47:05 UTC
(In reply to Luiz Capitulino from comment #25)
> Pei,
> 
> I'm not sure what I said in comment 1 makes sense. Before the fix for
> this issue, ktimersoftd had the same priority as the vCPU thread in the
> host. This can have two negative implications:
> 
> 1. If ktimersoftd becomes runnable, it won't execute until the vCPU thread
> relinquishes the CPU, which could be forever. This can lead to a bad system
> state.
> 
> 2. If ktimersoftd becomes runnable while the vCPU thread has relinquished
> the CPU, and the vCPU thread then becomes runnable again, the vCPU thread
> will have to wait for the ktimersoftd thread to block.
> 
> Item 2 could be the spike I was seeing, but I never confirmed it, and it's
> an extremely hard-to-reproduce scenario.
> 
> The way I recommend verifying this BZ is just to check that ktimersoftd has
> the expected priority, which is what Tereza did and which is listed in
> comment 6.

OK. Thanks Luiz for your confirmation about verification of this bug.

Comment 29 Tereza Cerna 2017-04-26 10:56:05 UTC
Tested manually (see comment 22) and by the automated test case /CoreOS/tuned/Regression/create-new-nfv-profiles.

 
Verified in:
    tuned-2.8.0-2.el7.noarch
    tuned-profiles-nfv-host-2.8.0-2.el7.noarch
    tuned-profiles-nfv-guest-2.8.0-2.el7.noarch
PASS

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: [   LOG    ] :: Set priority of ktimersoftd process [BZ#1332563, BZ#1440663]
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

:: [   PASS   ] :: Command 'chmod +x ktimersoftd' (Expected 0, got 0)
:: [   PASS   ] :: Command './ktimersoftd &' (Expected 0, got 0)
:: [ 06:43:38 ] :: Priority of ktimersoft process is '3'
:: [ 06:43:38 ] :: Priority of ksoftirqd process is '2'
:: [   PASS   ] :: Ksoftirqd process shoud have bigger priority than ktimersoft process. (Assert: "3" should be greater than "2")
:: [   PASS   ] :: Command 'kill -9 143272' (Expected 0, got 0)
:: [  BEGIN   ] :: Running 'killall sleep'
:: [   PASS   ] :: Command 'killall sleep' (Expected 0, got 0)




Reproduced in:
    tuned-2.7.1-3.el7_3.1.noarch
    tuned-profiles-nfv-2.7.1-3.el7_3.1.noarch
FAIL

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: [   LOG    ] :: Set priority of ktimersoftd process [BZ#1332563, BZ#1440663]
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

:: [   PASS   ] :: Command 'chmod +x ktimersoftd' (Expected 0, got 0)
:: [   PASS   ] :: Command './ktimersoftd &' (Expected 0, got 0)
:: [ 06:48:47 ] :: Priority of ktimersoft process is '-'
:: [ 06:48:47 ] :: Priority of ksoftirqd process is '2'
/usr/share/beakerlib/testing.sh: line 289: [: -: integer expression expected
:: [   FAIL   ] :: Ksoftirqd process shoud have bigger priority than ktimersoft process. (Assert: "-" should be greater than "2")
:: [   PASS   ] :: Command 'kill -9 145964' (Expected 0, got 0)
:: [   PASS   ] :: Command 'killall sleep' (Expected 0, got 0)

Comment 30 errata-xmlrpc 2017-08-01 12:32:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2102

