Bug 1415632
| Summary: | Systemic setup brick process got killed due to Out Of Memory (OOM) | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | core | Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED DUPLICATE | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.2 | CC: | amukherj, moagrawa, nchilaka, rcyriac, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-01-24 15:29:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Nag Pavan Chilakam
2017-01-23 10:04:47 UTC
Can we please check if this is not a duplicate of BZ 1413351?

---

Hi,

Is it possible to share a statedump of the glustershd process from the other nodes in the cluster, if it is up?

Regards
Mohit Agrawal

---

(In reply to Mohit Agrawal from comment #4)
> Hi,
>
> Is it possible to share statedump of glustershd process from other nodes in
> cluster if it is up?
>
> Regards
> Mohit Agrawal

Please also share the statedumps of the brick processes that are running.

Regards
Mohit Agrawal

---

Hi Mohit,

The statedumps of the bricks are available on the volume. You can mount the volume and visit the path below:

/mnt/systemic/logs/rhs-client11.lab.eng.blr.redhat.com/statedumps

It has the dumps of all bricks of all nodes. Given that this is systemic testing, it may be highly difficult to point to one statedump.

---

Hi,

Thank you for sharing the location of the statedumps. I have checked the current brick process statedump in /var/run/gluster; with respect to the dict leak it is similar to bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1411329, and the current memory consumption of the brick process is also high:

```
# ps -aef | grep glusterfsd
root      4154     1 75 Jan18 ?  4-10:47:51 /usr/sbin/glusterfsd -s 10.70.35.20 --volfile-id systemic.10.70.35.20.rhs-brick1-systemic -p /var/lib/glusterd/vols/systemic/run/10.70.35.20-rhs-brick1-systemic.pid -S /var/run/gluster/ecb70f1c5d8ade863b15ff45abb9d46c.socket --brick-name /rhs/brick1/systemic -l /var/log/glusterfs/bricks/rhs-brick1-systemic.log --xlator-option *-posix.glusterd-uuid=c5954ab2-6283-4e02-a6f6-1fa6cd943f49 --brick-port 49152 --xlator-option systemic-server.listen-port=49152
root     24472 24391  0 17:56 pts/4 00:00:00 grep --color=auto glusterfsd

[root@dhcp35-20 gluster]# pmap -x 4154 | grep "total"
total kB        21724380 14717652 14714608
```

The dict leak issue is already fixed via bugzilla 1411329.

Regards
Mohit Agrawal
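For readers reproducing this kind of triage: a glusterfs statedump is plain text made of `[section]` headers and `key=value` lines, where the `... memusage` sections carry per-allocation-type `size=` and `num_allocs=` counters. Totalling those counters per section makes the biggest allocators (such as a leaking dict) stand out. The following is a minimal, hypothetical sketch under that assumption; the sample section name is illustrative, not taken from this bug's actual dumps.

```python
import re
from collections import defaultdict

def summarize_statedump(text):
    """Sum size= and num_allocs= per '[... memusage]' section of a
    glusterfs statedump, largest byte total first."""
    totals = defaultdict(lambda: [0, 0])  # section -> [bytes, allocs]
    section = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            # only memusage sections carry the allocation counters we want
            section = line[1:-1] if line.endswith("memusage]") else None
        elif section:
            m = re.match(r"(size|num_allocs)=(\d+)", line)
            if m:
                idx = 0 if m.group(1) == "size" else 1
                totals[section][idx] += int(m.group(2))
    return sorted(totals.items(), key=lambda kv: kv[1][0], reverse=True)

# illustrative statedump fragment (section name is made up)
sample = """\
[protocol/server.systemic-server - usage-type gf_common_mt_char memusage]
size=1048576
num_allocs=4096
[global.callpool]
size=33
"""
for name, (nbytes, allocs) in summarize_statedump(sample):
    print(f"{nbytes:>12} bytes  {allocs:>8} allocs  {name}")
```

Sorting descending by bytes means the first line printed is always the heaviest allocator, which is usually the quickest way to confirm a suspected leak across the many dumps a systemic setup produces.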