Bug 1500894

Summary: sched/rt: Simplify the IPI rt balancing logic
Product: Red Hat Enterprise Linux 7
Component: kernel-rt
kernel-rt sub component: Memory Management
Reporter: Clark Williams <williams>
Assignee: Clark Williams <williams>
QA Contact: Jiri Kastner <jkastner>
Status: CLOSED ERRATA
Severity: medium
Priority: high
CC: bhu, lgoncalv
Version: 7.5
Target Milestone: rc
Hardware: x86_64
OS: Linux
Doc Type: No Doc Update
Type: Bug
Last Closed: 2018-04-10 08:55:02 UTC
Bug Blocks: 1442258
Attachments:
  Upstream patch for IPI rt balancing logic
  [PATCH 1/3] Revert "sched/rt: Avoid sending an IPI to a CPU already
  [PATCH 2/3] Revert "sched/rt: Have the schedule IPI irq_work run in
  [PATCH 3/3] sched/rt: Simplify the IPI rt balancing logic

Description Clark Williams 2017-10-11 16:58:03 UTC
Created attachment 1337315 [details]
Upstream patch for IPI rt balancing logic

Description of problem:

From: "Steven Rostedt (Red Hat)" <rostedt>

When a CPU lowers its priority (schedules out a high priority task for a
lower priority one), a check is made to see whether any other CPU has
overloaded RT tasks (more than one). The rto_mask is checked to determine
this, and if so, the CPU will request to pull one of those tasks to itself,
provided the non-running RT task has a higher priority than the new priority
of the next task to run on the current CPU.

When dealing with a large number of CPUs, the original pull logic suffered
from heavy lock contention on a single CPU's run queue, which caused large
latencies across all CPUs. This happened when only one CPU had overloaded
RT tasks while a bunch of other CPUs were lowering their priority. To
solve this issue, commit b6366f048e0c ("sched/rt: Use IPI to trigger RT task
push migration instead of pulling") changed the way a pull is requested.
Instead of grabbing the lock of the overloaded CPU's run queue, it simply
sent an IPI to that CPU to do the work.

Although the IPI logic worked very well in removing the large latency build
up, it could still suffer from a large number of IPIs being sent to a single
CPU. On an 80 CPU box, I measured over 200us of processing IPIs. Worse yet,
when I tested this on a 120 CPU box, with a stress test that had lots of
RT tasks scheduling on all CPUs, it actually triggered the hard lockup
detector! One CPU had so many IPIs sent to it, and due to the restart
mechanism that is triggered when the source run queue has a priority status
change, the CPU spent minutes(!) processing the IPIs.

Thinking about this further, I realized there's no reason for each run queue
to send its own IPI. All CPUs with overloaded tasks must be scanned
regardless of whether one or many CPUs are lowering their priority, because
there is currently no way to find the CPU with the highest priority task
that can be scheduled onto one of these CPUs. Therefore only one IPI really
needs to be sent around at a time.

This greatly simplifies the code!

The new approach is to give each root domain its own irq work, as the
rto_mask is per root domain. The root domain has the following fields
attached to it (a condensed sketch of the resulting structure follows the
list):

  rto_push_work  - the irq work to process each CPU set in rto_mask
  rto_lock       - the lock to protect some of the other rto fields
  rto_loop_start - an atomic that keeps contention down on rto_lock;
                   the first CPU scheduling in a lower priority task
                   is the one to kick off the process
  rto_loop_next  - an atomic that gets incremented for each CPU that
                   schedules in a lower priority task
  rto_loop       - a variable protected by rto_lock that is used to
                   compare against rto_loop_next
  rto_cpu        - the CPU to send the next IPI to, also protected by
                   the rto_lock
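
A condensed sketch of how these fields sit in struct root_domain, based on
the upstream patch (other members omitted; the exact layout in the attached
RHEL-RT backport may differ):

  /* Sketch: rto_* members added to struct root_domain (other members omitted). */
  struct root_domain {
          /* ... existing members (refcount, span, rto_mask, cpupri, ...) ... */

          /* The irq work that walks the CPUs set in rto_mask */
          struct irq_work  rto_push_work;
          /* Protects rto_loop and rto_cpu */
          raw_spinlock_t   rto_lock;
          /* Updated and read only with rto_lock held */
          int              rto_loop;
          int              rto_cpu;
          /* Updated without the lock */
          atomic_t         rto_loop_next;
          atomic_t         rto_loop_start;
  };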

When a CPU schedules in a lower priority task and wants to make sure
overloaded CPUs know about it, it increments rto_loop_next. Then it
atomically sets rto_loop_start with a cmpxchg. If the old value is not "0",
it is done, as another CPU is already kicking off the IPI loop. If the old
value is "0", it takes the rto_lock to synchronize with a possible IPI
being sent around to the overloaded CPUs.
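
That cmpxchg handshake is what the rto_start_trylock()/rto_start_unlock()
helpers credited to Peter Zijlstra at the end of this description
encapsulate. A minimal sketch, following the upstream patch:

  /* Sketch of the helpers guarding the start of the IPI loop. */
  static inline bool rto_start_trylock(atomic_t *v)
  {
          /* Returns true only for the CPU that flips the value 0 -> 1 */
          return !atomic_cmpxchg_acquire(v, 0, 1);
  }

  static inline void rto_start_unlock(atomic_t *v)
  {
          atomic_set_release(v, 0);
  }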

If rto_cpu is greater than or equal to nr_cpu_ids, then there's either no
IPI being sent around, or one is about to finish. In that case rto_cpu is
set to the first CPU in rto_mask and an IPI is sent to that CPU. If there
are no CPUs set in rto_mask, then there's nothing to be done.
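
Taken together, the initiating path looks roughly like the sketch below,
modeled on the upstream tell_cpu_to_push(). Note that the final version of
the patch uses -1 rather than nr_cpu_ids as the "no IPI in flight" marker
for rto_cpu (see the changelog notes further down); rto_next_cpu(), sketched
later, returns the next CPU set in rto_mask or -1:

  /* Sketch of the initiating path, modeled on the upstream tell_cpu_to_push(). */
  static void tell_cpu_to_push(struct rq *rq)
  {
          int cpu = -1;

          /* Make sure an in-flight IPI loop notices this priority change */
          atomic_inc(&rq->rd->rto_loop_next);

          /* Only one CPU at a time may start the loop */
          if (!rto_start_trylock(&rq->rd->rto_loop_start))
                  return;

          raw_spin_lock(&rq->rd->rto_lock);

          /*
           * rto_cpu >= 0 means an IPI is already circulating; the bump of
           * rto_loop_next above is enough to keep it going. Otherwise pick
           * the first CPU in rto_mask and start a new loop.
           */
          if (rq->rd->rto_cpu < 0)
                  cpu = rto_next_cpu(rq);

          raw_spin_unlock(&rq->rd->rto_lock);

          rto_start_unlock(&rq->rd->rto_loop_start);

          if (cpu >= 0)
                  irq_work_queue_on(&rq->rd->rto_push_work, cpu);
  }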

When a CPU receives the IPI, it will first try to push any RT tasks that
are queued on the CPU but cannot run because a higher priority RT task is
currently running on that CPU.

Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it
finds one, it simply sends an IPI to that CPU and the process continues.

If there are no more CPUs in the rto_mask, then rto_loop is compared with
rto_loop_next. If they match, everything is done and the process is over. If
they do not match, then a CPU scheduled in a lower priority task while the
IPI was being passed around, and the process needs to start again: the first
CPU in rto_mask is sent the IPI.
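
The receiving side corresponds to the irq work handler and the
rto_next_cpu() walk. A condensed sketch following the upstream patch;
push_rt_tasks() and has_pushable_tasks() are the scheduler's existing RT
push-side helpers:

  /* Sketch of the IPI side: push local pushable RT tasks, then pass the IPI on. */
  static void rto_push_irq_work_func(struct irq_work *work)
  {
          struct rq *rq = this_rq();
          int cpu;

          /* Push any queued-but-not-running RT tasks off this CPU */
          if (has_pushable_tasks(rq)) {
                  raw_spin_lock(&rq->lock);
                  push_rt_tasks(rq);
                  raw_spin_unlock(&rq->lock);
          }

          raw_spin_lock(&rq->rd->rto_lock);
          cpu = rto_next_cpu(rq);        /* next CPU in rto_mask, or -1 */
          raw_spin_unlock(&rq->rd->rto_lock);

          if (cpu >= 0)
                  irq_work_queue_on(&rq->rd->rto_push_work, cpu);
  }

  /* Sketch of rto_next_cpu(); called with rto_lock held. */
  static int rto_next_cpu(struct rq *rq)
  {
          struct root_domain *rd = rq->rd;
          int cpu, next;

          for (;;) {
                  /* With rto_cpu == -1 this behaves like cpumask_first() */
                  cpu = cpumask_next(rd->rto_cpu, rd->rto_mask);
                  rd->rto_cpu = cpu;
                  if (cpu < nr_cpu_ids)
                          return cpu;

                  /* End of the mask: stop unless someone bumped rto_loop_next */
                  rd->rto_cpu = -1;
                  next = atomic_read_acquire(&rd->rto_loop_next);
                  if (rd->rto_loop == next)
                          break;
                  rd->rto_loop = next;   /* restart the walk from the beginning */
          }

          return -1;
  }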

This change removes the duplication of work in the IPI logic, and greatly
lowers the latency caused by the IPIs. This removed the lockup happening on
the 120 CPU machine. It also simplifies the code tremendously. What else
could anyone ask for?

Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and
supplying me with the rto_start_trylock() and rto_start_unlock() helper
functions.

Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home
Signed-off-by: Steven Rostedt (VMware) <rostedt>
---

Changes since v2:

  - Used -1 for rto_cpu when no IPI is circulating. Makes it a little cleaner
     (suggested by Peter Zijlstra)

  - Use of atomic_cmpxchg_acquire() in rto_start_trylock()
     (suggested by Peter Zijlstra)

  - Use of this_rq() instead of cpu_rq(smp_processor_id())
     (suggested by Peter Zijlstra)

Comment 2 Beth Uptagrafft 2017-10-11 18:53:43 UTC
These patches improve latencies seen on systems with more than two sockets.

Comment 3 Clark Williams 2017-10-20 21:14:07 UTC
Created attachment 1341423 [details]
[PATCH 1/3] Revert "sched/rt: Avoid sending an IPI to a CPU already

First of two reverts to be able to apply new IPI RT balancing patch

Comment 4 Clark Williams 2017-10-20 21:15:01 UTC
Created attachment 1341424 [details]
[PATCH 2/3] Revert "sched/rt: Have the schedule IPI irq_work run in

Second of two reverts required to apply new IPI RT balancing patch

Comment 5 Clark Williams 2017-10-20 21:15:57 UTC
Created attachment 1341425 [details]
[PATCH 3/3] sched/rt: Simplify the IPI rt balancing logic

New IPI RT Balancing logic

Comment 6 Clark Williams 2017-10-20 21:17:47 UTC
The above patches (#c3, #c4 and #c5) were applied to:

      kernel-rt-3.10.0-742.rt56.671.el7

The patches build and boot.

Comment 12 errata-xmlrpc 2018-04-10 08:55:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:0676