Bug 607691 - UV - File allocation tends to allocate on alternate nodes
Status: CLOSED DUPLICATE of bug 593154
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 6.0
Assigned To: George Beshers
QA Contact: Red Hat Kernel QE team
Blocks: 555548
Reported: 2010-06-24 11:15 EDT by George Beshers
Modified: 2010-07-26 12:43 EDT
CC: 3 users
Doc Type: Bug Fix
Last Closed: 2010-07-16 11:30:21 EDT
Attachments: None
Description George Beshers 2010-06-24 11:15:21 EDT
Description of problem:
This is from the upstream patch; I have not yet had a chance to
integrate and test it.


We have observed several workloads running on multi-node systems where
memory is assigned unevenly across the nodes in the system. There are
numerous reasons for this but one is the round-robin rotor in
cpuset_mem_spread_node().

For example, a simple test that writes a multi-page file will allocate pages
on nodes 0 2 4 6 ... Odd nodes are skipped.  (Sometimes it allocates on
odd nodes & skips even nodes).

An example is shown below. The program "lfile" writes a file consisting of
10 pages. The program then mmaps the file & uses get_mempolicy(...,
MPOL_F_NODE) to determine the nodes where the file pages were allocated.
The output is shown below:

	# ./lfile
	 allocated on nodes: 2 4 6 0 1 2 6 0 2



There is a single rotor that is used for allocating both file pages & slab
pages.  Writing the file allocates both a data page & a slab page
(buffer_head).  This advances the RR rotor 2 nodes for each page
allocated.

A quick test confirms this is the cause of the uneven allocation:

	# echo 0 >/dev/cpuset/memory_spread_slab
	# ./lfile
	 allocated on nodes: 6 7 8 9 0 1 2 3 4 5


This patch introduces a second rotor that is used for slab allocations.


Signed-off-by: Jack Steiner <steiner@sgi.com>


---
 include/linux/cpuset.h |    6 ++++++
 include/linux/sched.h  |    1 +
 kernel/cpuset.c        |   20 ++++++++++++++++----
 mm/slab.c              |    2 +-
 4 files changed, 24 insertions(+), 5 deletions(-)

Index: linux/include/linux/cpuset.h
===================================================================
--- linux.orig/include/linux/cpuset.h	2010-04-26 14:03:40.000000000 -0500
+++ linux/include/linux/cpuset.h	2010-04-26 15:05:02.574948748 -0500
@@ -69,6 +69,7 @@ extern void cpuset_task_status_allowed(s
 					struct task_struct *task);
 
 extern int cpuset_mem_spread_node(void);
+extern int cpuset_slab_spread_node(void);
 
 static inline int cpuset_do_page_mem_spread(void)
 {
@@ -158,6 +159,11 @@ static inline int cpuset_mem_spread_node
 {
 	return 0;
 }
+
+static inline int cpuset_slab_spread_node(void)
+{
+	return 0;
+}
 
 static inline int cpuset_do_page_mem_spread(void)
 {
Index: linux/include/linux/sched.h
===================================================================
--- linux.orig/include/linux/sched.h	2010-04-26 14:03:40.000000000 -0500
+++ linux/include/linux/sched.h	2010-04-26 15:04:38.208227585 -0500
@@ -1421,6 +1421,7 @@ struct task_struct {
 #ifdef CONFIG_CPUSETS
 	nodemask_t mems_allowed;	/* Protected by alloc_lock */
 	int cpuset_mem_spread_rotor;
+	int cpuset_slab_spread_rotor;
 #endif
 #ifdef CONFIG_CGROUPS
 	/* Control Group info protected by css_set_lock */
Index: linux/kernel/cpuset.c
===================================================================
--- linux.orig/kernel/cpuset.c	2010-04-26 14:03:40.000000000 -0500
+++ linux/kernel/cpuset.c	2010-04-26 15:04:38.246928404 -0500
@@ -2427,7 +2427,8 @@ void cpuset_unlock(void)
 }
 
 /**
- * cpuset_mem_spread_node() - On which node to begin search for a page
+ * cpuset_mem_spread_node() - On which node to begin search for a file page
+ * cpuset_slab_spread_node() - On which node to begin search for a slab page
  *
  * If a task is marked PF_SPREAD_PAGE or PF_SPREAD_SLAB (as for
  * tasks in a cpuset with is_spread_page or is_spread_slab set),
@@ -2452,16 +2453,27 @@ void cpuset_unlock(void)
  * See kmem_cache_alloc_node().
  */
 
-int cpuset_mem_spread_node(void)
+static int cpuset_spread_node(int *rotor)
 {
 	int node;
 
-	node = next_node(current->cpuset_mem_spread_rotor, current->mems_allowed);
+	node = next_node(*rotor, current->mems_allowed);
 	if (node == MAX_NUMNODES)
 		node = first_node(current->mems_allowed);
-	current->cpuset_mem_spread_rotor = node;
+	*rotor = node;
 	return node;
 }
+
+int cpuset_mem_spread_node(void)
+{
+	return cpuset_spread_node(&current->cpuset_mem_spread_rotor);
+}
+
+int cpuset_slab_spread_node(void)
+{
+	return cpuset_spread_node(&current->cpuset_slab_spread_rotor);
+}
+
 EXPORT_SYMBOL_GPL(cpuset_mem_spread_node);
 
 /**
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c	2010-04-26 14:03:40.000000000 -0500
+++ linux/mm/slab.c	2010-04-26 15:05:34.343755521 -0500
@@ -3242,7 +3242,7 @@ static void *alternate_node_alloc(struct
 		return NULL;
 	nid_alloc = nid_here = numa_node_id();
 	if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
-		nid_alloc = cpuset_mem_spread_node();
+		nid_alloc = cpuset_slab_spread_node();
 	else if (current->mempolicy)
 		nid_alloc = slab_node(current->mempolicy);
 	if (nid_alloc != nid_here)


Comment 2 RHEL Product and Program Management 2010-06-24 11:33:01 EDT
This request was evaluated by Red Hat Product Management for inclusion in a
Red Hat Enterprise Linux major release. Product Management has requested
further review of this request by Red Hat Engineering for potential inclusion
in a Red Hat Enterprise Linux major release. This request is not yet
committed for inclusion.
Comment 4 Marizol Martinez 2010-07-08 11:28:39 EDT
George -- Reminder: You mentioned this may already be in a .4? kernel. Please verify and update this BZ. Thanks!
Comment 6 Marizol Martinez 2010-07-16 11:30:21 EDT

*** This bug has been marked as a duplicate of bug 593154 ***
Comment 7 George Beshers 2010-07-26 12:43:27 EDT
I have verified this is in 2.6.32-52.

George
