Bug 607691 - UV - File allocation tends to allocate on alternate nodes
Summary: UV - File allocation tends to allocate on alternate nodes
Keywords:
Status: CLOSED DUPLICATE of bug 593154
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 6.0
Assignee: George Beshers
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks: 555548
 
Reported: 2010-06-24 15:15 UTC by George Beshers
Modified: 2010-07-26 16:43 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-07-16 15:30:21 UTC
Target Upstream Version:
Embargoed:



Description George Beshers 2010-06-24 15:15:21 UTC
Description of problem:
The following is from the upstream patch; I have not had a chance to
integrate and test it yet.


We have observed several workloads running on multi-node systems where
memory is assigned unevenly across the nodes in the system. There are
numerous reasons for this but one is the round-robin rotor in
cpuset_mem_spread_node().

For example, a simple test that writes a multi-page file will allocate pages
on nodes 0 2 4 6 ... Odd nodes are skipped.  (Sometimes it allocates on
odd nodes & skips even nodes).

An example is shown below. The program "lfile" writes a file consisting of
10 pages. The program then mmaps the file & uses get_mempolicy(...,
MPOL_F_NODE) to determine the nodes where the file pages were allocated.
The output is shown below:

	# ./lfile
	 allocated on nodes: 2 4 6 0 1 2 6 0 2
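
The "lfile" source is not attached to this bug, so the program below is a
minimal reconstruction, not the reporter's code: the file name, page count,
and error handling are illustrative, and it assumes libnuma's <numaif.h>
(from numactl-devel) for get_mempolicy() with MPOL_F_NODE | MPOL_F_ADDR,
the flag combination that reports the node backing a given address.
Build with: gcc lfile.c -lnuma -o lfile

#include <fcntl.h>
#include <numaif.h>	/* get_mempolicy(), MPOL_F_NODE, MPOL_F_ADDR */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES	10

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	char *buf = malloc(psz);
	char *map;
	int fd, i;

	memset(buf, 'x', psz);
	fd = open("/tmp/lfile.dat", O_CREAT | O_TRUNC | O_RDWR, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Each page-sized write allocates a page-cache page (and, with
	 * memory_spread_slab enabled, a buffer_head slab object too). */
	for (i = 0; i < NPAGES; i++)
		write(fd, buf, psz);

	map = mmap(NULL, NPAGES * psz, PROT_READ, MAP_SHARED, fd, 0);
	printf(" allocated on nodes:");
	for (i = 0; i < NPAGES; i++) {
		int node = -1;

		/* ask which node backs the page at this address */
		if (get_mempolicy(&node, NULL, 0, map + i * psz,
				  MPOL_F_NODE | MPOL_F_ADDR) == 0)
			printf(" %d", node);
	}
	printf("\n");
	munmap(map, NPAGES * psz);
	close(fd);
	return 0;
}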



There is a single rotor that is used for allocating both file pages & slab
pages.  Writing the file allocates both a data page & a slab page
(buffer_head).  This advances the RR rotor 2 nodes for each page
allocated.

A quick check seems to confirm that this is the cause of the uneven
allocation:

	# echo 0 >/dev/cpuset/memory_spread_slab
	# ./lfile
	 allocated on nodes: 6 7 8 9 0 1 2 3 4 5
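
To see why a single shared rotor skips alternate nodes, here is a toy
user-space model of the mechanism (not from the bug; it assumes 8 nodes,
all in mems_allowed, and exactly one page-cache page plus one buffer_head
slab allocation per page written):

#include <stdio.h>

#define NODES	8		/* assumed node count, all allowed */

static int rotor;		/* stands in for cpuset_mem_spread_rotor */

static int spread_node(void)
{
	rotor = (rotor + 1) % NODES;	/* next_node() with wraparound */
	return rotor;
}

int main(void)
{
	int page;

	printf(" data pages on nodes:");
	for (page = 0; page < 10; page++) {
		printf(" %d", spread_node());	/* page-cache page */
		spread_node();			/* buffer_head slab page */
	}
	printf("\n");	/* prints: 1 3 5 7 1 3 5 7 1 3 -- even nodes skipped */
	return 0;
}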


This patch introduces a second rotor that is used for slab allocations.


Signed-off-by: Jack Steiner <steiner>


---
 include/linux/cpuset.h |    6 ++++++
 include/linux/sched.h  |    1 +
 kernel/cpuset.c        |   20 ++++++++++++++++----
 mm/slab.c              |    2 +-
 4 files changed, 24 insertions(+), 5 deletions(-)

Index: linux/include/linux/cpuset.h
===================================================================
--- linux.orig/include/linux/cpuset.h	2010-04-26 14:03:40.000000000 -0500
+++ linux/include/linux/cpuset.h	2010-04-26 15:05:02.574948748 -0500
@@ -69,6 +69,7 @@ extern void cpuset_task_status_allowed(s
 					struct task_struct *task);
 
 extern int cpuset_mem_spread_node(void);
+extern int cpuset_slab_spread_node(void);
 
 static inline int cpuset_do_page_mem_spread(void)
 {
@@ -158,6 +159,11 @@ static inline int cpuset_mem_spread_node
 {
 	return 0;
 }
+
+static inline int cpuset_slab_spread_node(void)
+{
+	return 0;
+}
 
 static inline int cpuset_do_page_mem_spread(void)
 {
Index: linux/include/linux/sched.h
===================================================================
--- linux.orig/include/linux/sched.h	2010-04-26 14:03:40.000000000 -0500
+++ linux/include/linux/sched.h	2010-04-26 15:04:38.208227585 -0500
@@ -1421,6 +1421,7 @@ struct task_struct {
 #ifdef CONFIG_CPUSETS
 	nodemask_t mems_allowed;	/* Protected by alloc_lock */
 	int cpuset_mem_spread_rotor;
+	int cpuset_slab_spread_rotor;
 #endif
 #ifdef CONFIG_CGROUPS
 	/* Control Group info protected by css_set_lock */
Index: linux/kernel/cpuset.c
===================================================================
--- linux.orig/kernel/cpuset.c	2010-04-26 14:03:40.000000000 -0500
+++ linux/kernel/cpuset.c	2010-04-26 15:04:38.246928404 -0500
@@ -2427,7 +2427,8 @@ void cpuset_unlock(void)
 }
 
 /**
- * cpuset_mem_spread_node() - On which node to begin search for a page
+ * cpuset_mem_spread_node() - On which node to begin search for a file page
+ * cpuset_slab_spread_node() - On which node to begin search for a slab page
  *
  * If a task is marked PF_SPREAD_PAGE or PF_SPREAD_SLAB (as for
  * tasks in a cpuset with is_spread_page or is_spread_slab set),
@@ -2452,16 +2453,27 @@ void cpuset_unlock(void)
  * See kmem_cache_alloc_node().
  */
 
-int cpuset_mem_spread_node(void)
+static int cpuset_spread_node(int *rotor)
 {
 	int node;
 
-	node = next_node(current->cpuset_mem_spread_rotor, current->mems_allowed);
+	node = next_node(*rotor, current->mems_allowed);
 	if (node == MAX_NUMNODES)
 		node = first_node(current->mems_allowed);
-	current->cpuset_mem_spread_rotor = node;
+	*rotor = node;
 	return node;
 }
+
+int cpuset_mem_spread_node(void)
+{
+	return cpuset_spread_node(&current->cpuset_mem_spread_rotor);
+}
+
+int cpuset_slab_spread_node(void)
+{
+	return cpuset_spread_node(&current->cpuset_slab_spread_rotor);
+}
+
 EXPORT_SYMBOL_GPL(cpuset_mem_spread_node);
 
 /**
Index: linux/mm/slab.c
===================================================================
--- linux.orig/mm/slab.c	2010-04-26 14:03:40.000000000 -0500
+++ linux/mm/slab.c	2010-04-26 15:05:34.343755521 -0500
@@ -3242,7 +3242,7 @@ static void *alternate_node_alloc(struct
 		return NULL;
 	nid_alloc = nid_here = numa_node_id();
 	if (cpuset_do_slab_mem_spread() && (cachep->flags & SLAB_MEM_SPREAD))
-		nid_alloc = cpuset_mem_spread_node();
+		nid_alloc = cpuset_slab_spread_node();
 	else if (current->mempolicy)
 		nid_alloc = slab_node(current->mempolicy);
 	if (nid_alloc != nid_here)



Comment 2 RHEL Program Management 2010-06-24 15:33:01 UTC
This request was evaluated by Red Hat Product Management for inclusion in a
Red Hat Enterprise Linux major release.  Product Management has requested
further review of this request by Red Hat Engineering for potential
inclusion in a Red Hat Enterprise Linux major release.  This request is not
yet committed for inclusion.

Comment 4 Marizol Martinez 2010-07-08 15:28:39 UTC
George -- Reminder: You mentioned this may already be in a .4? kernel. Please verify and update this BZ. Thanks!

Comment 6 Marizol Martinez 2010-07-16 15:30:21 UTC

*** This bug has been marked as a duplicate of bug 593154 ***

Comment 7 George Beshers 2010-07-26 16:43:27 UTC
I have verified this is in 2.6.32-52.

George

