Bug 971687 - quota: Segmentation fault (core dumped)
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: high  Severity: high
Assigned To: vpshastry
QA Contact: Saurabh
Depends On:
Blocks:
Reported: 2013-06-07 03:03 EDT by Saurabh
Modified: 2016-01-19 01:11 EST (History)
6 users

See Also:
Fixed In Version: glusterfs-3.4.0.12rhs.beta1-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-09-23 18:39:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-06-07 03:03:05 EDT
Description of problem:
The quota list command crashes with a segmentation fault.

Version-Release number of selected component (if applicable):

[root@rhsauto033 ~]# rpm -qa | grep glusterfs
glusterfs-3.4rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4rhs-1.el6rhs.x86_64
glusterfs-server-3.4rhs-1.el6rhs.x86_64
[root@rhsauto033 ~]# 

How reproducible:
Consistently; the crash is seen when exercising the new quota implementation.

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume and start it. Enable quota and set limit-usage for path "/" with a size of 1GB.
2. gluster volume quota dist-rep soft-limit / 80%
3. gluster volume quota dist-rep list
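
The steps above can be run end to end as the following sketch (hostnames and brick paths are taken from the volume status output in comment 2, with the .lab.eng.blr.redhat.com suffix omitted for brevity; it assumes a 4-node trusted pool is already peered):

    gluster volume create dist-rep replica 2 \
        rhsauto033:/rhs/brick1/d1 rhsauto034:/rhs/brick1/d1 \
        rhsauto035:/rhs/brick1/d1 rhsauto039:/rhs/brick1/d1 \
        rhsauto033:/rhs/brick2/d1 rhsauto034:/rhs/brick2/d1 \
        rhsauto035:/rhs/brick2/d1 rhsauto039:/rhs/brick2/d1 \
        rhsauto033:/rhs/brick3/d1 rhsauto034:/rhs/brick3/d1 \
        rhsauto035:/rhs/brick3/d1 rhsauto039:/rhs/brick3/d1
    gluster volume start dist-rep
    gluster volume quota dist-rep enable
    gluster volume quota dist-rep limit-usage / 1GB
    gluster volume quota dist-rep soft-limit / 80%
    gluster volume quota dist-rep list    # segfaults here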


Actual results:

[root@rhsauto033 ~]# gluster volume quota dist-rep soft-limit / 80%
soft limit set on /

[root@rhsauto033 ~]# gluster volume quota dist-rep list
	path		  limit_set (soft/hard)		     size
----------------------------------------------------------------------------------
Segmentation fault (core dumped)

Expected results:

The list command should print the quota report; no command execution should segfault.

Additional info:
Comment 2 Saurabh 2013-06-07 03:10:48 EDT

    Sosreports are collected here:
    http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/971687/

    The core file is not in /var/log/core; it is in /root.
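
    To pull a backtrace from that core, a typical gdb invocation would be the following (the core file name is illustrative, not the actual name on the machine; the crashing process is the gluster CLI binary):

        gdb /usr/sbin/gluster /root/<corefile> -batch -ex 'bt' -ex 'thread apply all bt'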


    [root@rhsauto033 ~]# gluster volume status
    Status of volume: dist-rep
    Gluster process						Port	Online	Pid
    ------------------------------------------------------------------------------
    Brick rhsauto033.lab.eng.blr.redhat.com:/rhs/brick1/d1	N/A	N	2841
    Brick rhsauto034.lab.eng.blr.redhat.com:/rhs/brick1/d1	N/A	N	2840
    Brick rhsauto035.lab.eng.blr.redhat.com:/rhs/brick1/d1	N/A	N	2779
    Brick rhsauto039.lab.eng.blr.redhat.com:/rhs/brick1/d1	N/A	N	4777
    Brick rhsauto033.lab.eng.blr.redhat.com:/rhs/brick2/d1	N/A	N	2850
    Brick rhsauto034.lab.eng.blr.redhat.com:/rhs/brick2/d1	N/A	N	2849
    Brick rhsauto035.lab.eng.blr.redhat.com:/rhs/brick2/d1	N/A	N	2788
    Brick rhsauto039.lab.eng.blr.redhat.com:/rhs/brick2/d1	N/A	N	4786
    Brick rhsauto033.lab.eng.blr.redhat.com:/rhs/brick3/d1	N/A	N	2859
    Brick rhsauto034.lab.eng.blr.redhat.com:/rhs/brick3/d1	N/A	N	2858
    Brick rhsauto035.lab.eng.blr.redhat.com:/rhs/brick3/d1	N/A	N	2797
    Brick rhsauto039.lab.eng.blr.redhat.com:/rhs/brick3/d1	N/A	N	4795
    NFS Server on localhost					2049	Y	2870
    Self-heal Daemon on localhost				N/A	N	N/A
    NFS Server on e70786de-676a-41fc-8db3-008e314a2da8	2049	Y	4805
    Self-heal Daemon on e70786de-676a-41fc-8db3-008e314a2da8	N/A	N	N/A
    NFS Server on f5e410cc-68cb-4bed-ba5b-e5af95114787	2049	Y	2868
    Self-heal Daemon on f5e410cc-68cb-4bed-ba5b-e5af95114787	N/A	N	N/A
    NFS Server on d5c90985-5cf3-41d2-a6e1-977eb5a99ecf	2049	Y	2811
    Self-heal Daemon on d5c90985-5cf3-41d2-a6e1-977eb5a99ecf	N/A	N	N/A
     
    There are no active volume tasks
    Volume dist is not started
Comment 3 Scott Haines 2013-09-23 18:39:49 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
