Bug 856457 - glusterfs: page fault issue
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Assigned To: Raghavendra Bhat
QA Contact: Saurabh
Depends On:
Blocks:
Reported: 2012-09-12 01:03 EDT by Saurabh
Modified: 2016-01-19 01:10 EST (History)
CC: 6 users

See Also:
Fixed In Version: glusterfs-3.3.0.5rhs-36
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-23 18:33:23 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2012-09-12 01:03:24 EDT
Description of problem:

Volume configuration:
Volume Name: test
Type: Distributed-Replicate
Volume ID: 91cefe12-6a34-4857-8c88-454f2982936d
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:

glusterfsd: page allocation failure. order:1, mode:0x20
Pid: 8311, comm: glusterfsd Not tainted 2.6.32-220.23.1.el6.x86_64 #1
Call Trace:
 <IRQ>  [<ffffffff8112415f>] ? __alloc_pages_nodemask+0x77f/0x940
 [<ffffffff8142cf00>] ? dev_hard_start_xmit+0x200/0x3f0
 [<ffffffff8115e152>] ? kmem_getpages+0x62/0x170
 [<ffffffff8115ed6a>] ? fallback_alloc+0x1ba/0x270
 [<ffffffff8115eae9>] ? ____cache_alloc_node+0x99/0x160
 [<ffffffff8115f8cb>] ? kmem_cache_alloc+0x11b/0x190
 [<ffffffff8141fcf8>] ? sk_prot_alloc+0x48/0x1c0
 [<ffffffff8141ff82>] ? sk_clone+0x22/0x2e0
 [<ffffffff8146d256>] ? inet_csk_clone+0x16/0xd0
 [<ffffffff81486143>] ? tcp_create_openreq_child+0x23/0x450
 [<ffffffff81483b2d>] ? tcp_v4_syn_recv_sock+0x4d/0x2a0
 [<ffffffff81485f01>] ? tcp_check_req+0x201/0x420
 [<ffffffff8148354b>] ? tcp_v4_do_rcv+0x35b/0x430
 [<ffffffff81484cc1>] ? tcp_v4_rcv+0x4e1/0x860
 [<ffffffff81462940>] ? ip_local_deliver_finish+0x0/0x2d0
 [<ffffffff81462940>] ? ip_local_deliver_finish+0x0/0x2d0
 [<ffffffff81462a1d>] ? ip_local_deliver_finish+0xdd/0x2d0
 [<ffffffff81462ca8>] ? ip_local_deliver+0x98/0xa0
 [<ffffffff8146216d>] ? ip_rcv_finish+0x12d/0x440
 [<ffffffff8103758c>] ? kvm_clock_read+0x1c/0x20
 [<ffffffff814626f5>] ? ip_rcv+0x275/0x350
 [<ffffffff8142c6ab>] ? __netif_receive_skb+0x49b/0x6f0
 [<ffffffff8142e768>] ? netif_receive_skb+0x58/0x60
 [<ffffffffa020c3ad>] ? virtnet_poll+0x5dd/0x8d0 [virtio_net]
 [<ffffffff81431013>] ? net_rx_action+0x103/0x2f0
 [<ffffffffa020b1b9>] ? skb_recv_done+0x39/0x40 [virtio_net]
 [<ffffffff81072291>] ? __do_softirq+0xc1/0x1d0
 [<ffffffff810d9740>] ? handle_IRQ_event+0x60/0x170
 [<ffffffff810722ea>] ? __do_softirq+0x11a/0x1d0
 [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
 [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
 [<ffffffff81072075>] ? irq_exit+0x85/0x90
 [<ffffffff814f5515>] ? do_IRQ+0x75/0xf0
 [<ffffffff8100ba53>] ? ret_from_intr+0x0/0x11
 <EOI>  [<ffffffff8105663f>] ? finish_task_switch+0x4f/0xe0
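The key line in the trace above is `page allocation failure. order:1, mode:0x20`: an order-1 (two contiguous pages) request made from softirq context during TCP connection setup. A minimal sketch of how to decode that `mode` value, using flag bits from the 2.6.32-era `include/linux/gfp.h` (the `decode_gfp` helper is hypothetical, not a tool from the report):

```python
# Hypothetical helper: decode a gfp_mask as printed in
# "page allocation failure" messages, using 2.6.32-era flag bits
# from include/linux/gfp.h of that kernel line.
GFP_FLAGS_2_6_32 = {
    0x01: "__GFP_DMA",
    0x02: "__GFP_HIGHMEM",
    0x04: "__GFP_DMA32",
    0x08: "__GFP_MOVABLE",
    0x10: "__GFP_WAIT",
    0x20: "__GFP_HIGH",
    0x40: "__GFP_IO",
    0x80: "__GFP_FS",
}

def decode_gfp(mask: int) -> list[str]:
    """Return the names of the flag bits set in a gfp mask."""
    return [name for bit, name in sorted(GFP_FLAGS_2_6_32.items())
            if mask & bit]

# mode:0x20 is __GFP_HIGH alone, which on 2.6.32 is GFP_ATOMIC:
# the allocator may not sleep, so it cannot reclaim memory, and an
# order-1 request can fail under fragmentation even when free
# memory exists.
print(decode_gfp(0x20))  # -> ['__GFP_HIGH']
```

This is consistent with the failure occurring inside `sk_clone`/`tcp_v4_syn_recv_sock` in interrupt context, where only atomic allocations are possible.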

Version-Release number of selected component (if applicable):
[root@localhost ~]# glusterfs -V
glusterfs 3.3.0rhs built on Aug 17 2012 07:06:58
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


How reproducible:
Happened while executing longevity tests.

Steps to reproduce:
Keep executing different REST API calls against a UFO (Unified File and Object) setup for a long time.
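The reproduction step above can be sketched as a longevity-test driver that cycles Swift-style REST operations (container PUT, object PUT/GET/DELETE) against a UFO endpoint. The endpoint URL and cycle count are placeholders, not values from the original report; the script prints the request sequence as a dry run, and each line can be fed to curl with an auth token for a real soak test:

```shell
#!/bin/sh
# Hypothetical soak-test driver: emit a cycle of Swift-style REST
# operations against a UFO endpoint. ENDPOINT is a placeholder;
# pipe each printed line into curl for a real run.
ENDPOINT="http://ufo-host:8080/v1/AUTH_test"

gen_ops() {                 # $1 = number of cycles
    i=0
    while [ "$i" -lt "$1" ]; do
        echo "PUT $ENDPOINT/c$i"           # create container
        echo "PUT $ENDPOINT/c$i/obj"       # upload object
        echo "GET $ENDPOINT/c$i/obj"       # read it back
        echo "DELETE $ENDPOINT/c$i/obj"    # delete it
        i=$((i + 1))
    done
}

gen_ops 3
```

Raising the cycle count and running this for hours approximates the "long time" condition under which the allocation failure was observed.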

Additional info:

[root@localhost ~]# cat /etc/issue
Red Hat Storage release 2.0 for On-Premise
Kernel \r on an \m
Comment 2 Amar Tumballi 2012-10-11 03:12:48 EDT
I see no GlusterFS stack messages here; will keep it open. I suspect this has to do with a low RAM size?
Comment 3 Amar Tumballi 2012-11-29 06:01:51 EST
Not seen in a while in our testing. Saurabh, please re-assign if you see it in your testing. Some patches have gone in to fix memory leaks, which may have taken care of this.
Comment 4 Raghavendra Bhat 2013-01-11 04:33:00 EST
It has not been seen in recent times. As mentioned in the previous comment, some fixes have gone in for memory leaks. Please reopen if found again.
Comment 6 Scott Haines 2013-09-23 18:33:23 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
