Bug 842230 - glusterfs: page allocation issue with swapper
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: core
Version: pre-release
Hardware: x86_64 Linux
Priority: medium
Severity: high
Assigned To: Raghavendra Bhat
Depends On:
Blocks:
Reported: 2012-07-23 04:14 EDT by Saurabh
Modified: 2016-01-19 01:10 EST
CC List: 3 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:27:15 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2012-07-23 04:14:11 EDT
Description of problem:

The issue may be related to bug 842206, but the backtrace is different, hence opening a new bug.

swapper: page allocation failure. order:1, mode:0x20
Pid: 0, comm: swapper Not tainted 2.6.32-220.23.1.el6.x86_64 #1
Call Trace:
 <IRQ>  [<ffffffff8112415f>] ? __alloc_pages_nodemask+0x77f/0x940
 [<ffffffff8115e152>] ? kmem_getpages+0x62/0x170
 [<ffffffff8115ed6a>] ? fallback_alloc+0x1ba/0x270
 [<ffffffff8115eae9>] ? ____cache_alloc_node+0x99/0x160
 [<ffffffff8115f8cb>] ? kmem_cache_alloc+0x11b/0x190
 [<ffffffff8141fcf8>] ? sk_prot_alloc+0x48/0x1c0
 [<ffffffff8141ff82>] ? sk_clone+0x22/0x2e0
 [<ffffffff8146d256>] ? inet_csk_clone+0x16/0xd0
 [<ffffffff81486143>] ? tcp_create_openreq_child+0x23/0x450
 [<ffffffff81483b2d>] ? tcp_v4_syn_recv_sock+0x4d/0x2a0
 [<ffffffff81485f01>] ? tcp_check_req+0x201/0x420
 [<ffffffff8148354b>] ? tcp_v4_do_rcv+0x35b/0x430
 [<ffffffffa0384557>] ? ipv4_confirm+0x87/0x1d0 [nf_conntrack_ipv4]
 [<ffffffff81484cc1>] ? tcp_v4_rcv+0x4e1/0x860
 [<ffffffff81462940>] ? ip_local_deliver_finish+0x0/0x2d0
 [<ffffffff81462a1d>] ? ip_local_deliver_finish+0xdd/0x2d0
 [<ffffffff81462ca8>] ? ip_local_deliver+0x98/0xa0
 [<ffffffff8146216d>] ? ip_rcv_finish+0x12d/0x440
 [<ffffffff814626f5>] ? ip_rcv+0x275/0x350
 [<ffffffff8142c6ab>] ? __netif_receive_skb+0x49b/0x6f0
 [<ffffffff8142e768>] ? netif_receive_skb+0x58/0x60
 [<ffffffffa01553ad>] ? virtnet_poll+0x5dd/0x8d0 [virtio_net]
 [<ffffffff8142c6ab>] ? __netif_receive_skb+0x49b/0x6f0
 [<ffffffff81431013>] ? net_rx_action+0x103/0x2f0
 [<ffffffffa01541b9>] ? skb_recv_done+0x39/0x40 [virtio_net]
 [<ffffffff81072291>] ? __do_softirq+0xc1/0x1d0
 [<ffffffff810d9740>] ? handle_IRQ_event+0x60/0x170
 [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
 [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
 [<ffffffff81072075>] ? irq_exit+0x85/0x90
 [<ffffffff814f5515>] ? do_IRQ+0x75/0xf0
 [<ffffffff8100ba53>] ? ret_from_intr+0x0/0x11
 <EOI>  [<ffffffff810375eb>] ? native_safe_halt+0xb/0x10
 [<ffffffff810147dd>] ? default_idle+0x4d/0xb0
 [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110
 [<ffffffff814e6736>] ? start_secondary+0x202/0x245
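For context, the trace above shows an order-1 GFP_ATOMIC allocation (mode:0x20) failing in sk_prot_alloc() while the kernel accepts a TCP connection in softirq context, where the allocator cannot sleep or reclaim memory. A minimal sketch of how such a failure is commonly inspected and mitigated on the affected host follows; the min_free_kbytes value is purely illustrative and not a recommendation from this bug:

# Check how many contiguous free blocks of each order remain per zone;
# an order-1 failure means contiguous 8 KiB chunks have run out.
cat /proc/buddyinfo

# A commonly used mitigation is to enlarge the reserve kept for atomic
# allocations. The value below is only an example; tune for the system.
sysctl vm.min_free_kbytes
sysctl -w vm.min_free_kbytes=65536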


Version-Release number of selected component (if applicable):

[root@localhost ~]# glusterfs -V
glusterfs 3.3.0 built on Jul 19 2012 14:08:45
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

How reproducible:
Happens quite frequently on the test machine.

Steps to Reproduce:
1. Send REST API requests in parallel (a hypothetical sketch follows below).
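For illustration only, parallel REST traffic of the kind mentioned in step 1 could be generated roughly as follows; the endpoint URL, account, container, object names, and token are hypothetical placeholders, not values taken from this report:

# Hypothetical parallel REST load against an object-storage style endpoint.
# $TOKEN, the URL, and /tmp/testfile are placeholders only.
for i in $(seq 1 100); do
  curl -s -X PUT \
       -H "X-Auth-Token: $TOKEN" \
       -T /tmp/testfile \
       "http://server:8080/v1/AUTH_test/container/object$i" &
done
wait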
  
Actual results:


Expected results:


Additional info:
Comment 1 Amar Tumballi 2012-07-25 00:50:07 EDT
This doesn't look like an obvious glusterfs issue. We will keep it open and work on fixing leaks as and when we see them, and re-run the tests in the same environment after we find some.
Comment 2 Amar Tumballi 2012-10-22 00:05:56 EDT
We will keep it open and see whether we are able to reproduce the issue during RHS 2.0 (updates) or RHS 2.1 testing. If it is not seen by the GA date of RHS 2.1, we will close the bug.
Comment 3 Amar Tumballi 2012-11-29 06:13:08 EST
Saurabh, we are not seeing this happen any more in our longevity testing. Please re-open if it is seen again.
