Bug 764886 - (GLUSTER-3154) glusterd fails to restart
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.2.1
Hardware: x86_64 Linux
Priority: medium Severity: medium
Assigned To: Junaid
Reported: 2011-07-12 01:35 EDT by Saurabh
Modified: 2013-08-06 18:37 EDT
CC: 5 users

Doc Type: Bug Fix
Verified Versions: release-3.2

Description Saurabh 2011-07-12 01:35:06 EDT
We have a setup with a distribute volume with two bricks.

The issue is that glusterd fails to restart.

The data set contains a directory with nested directories under it, and the leaf directory has a quota limit set to 1MB. Altogether it is around 400-500 directories.
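
For reference, a minimal sketch of how such a setup might be reproduced (server names, brick paths, mount point, and directory names here are assumptions, not details from this report):

# Hypothetical repro sketch; adjust names and paths to the actual setup.
gluster volume create dist1 server1:/export/brick1 server2:/export/brick2
gluster volume start dist1
gluster volume quota dist1 enable

# Mount the volume and create a nested directory tree (~400-500 dirs).
mount -t glusterfs server1:/dist1 /mnt/dist1
mkdir -p /mnt/dist1/dir0/dir1/dir2/dir3    # ...continue nesting as described

# Set a 1MB limit on the leaf directory (path is relative to the volume root).
gluster volume quota dist1 limit-usage /dir0/dir1/dir2/dir3 1MB

# Restart glusterd to trigger the reported failure.
killall glusterd && glusterd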

The log file says this:
[2011-07-11 20:41:10.132789] E [glusterd-store.c:990:glusterd_store_handle_retrieve] 0-glusterd: Unable to retrieve store handle for /etc/glusterd/vols/dist1/bricks/(null), error: No such file or directory
[2011-07-11 20:41:10.132800] E [glusterd-store.c:1634:glusterd_store_retrieve_volumes] 0-: Unable to restore volume: dist1
[2011-07-11 20:41:10.132816] E [xlator.c:1390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2011-07-11 20:41:10.132826] E [graph.c:331:glusterfs_graph_init] 0-management: initializing translator failed
[2011-07-11 20:41:10.132833] E [graph.c:503:glusterfs_graph_activate] 0-graph: init failed
[2011-07-11 20:41:10.132966] W [glusterfsd.c:712:cleanup_and_exit] (-->/root/quota/inst/sbin/glusterd(main+0x3c2) [0x405262] (-->/root/quota/inst/sbin/glusterd(glusterfs_volumes_init+0x18b) [0x40426b] (-->/root/quota/inst/sbin/glusterd(glusterfs_process_volfp+0x17a) [0x4040ca]))) 0-: received signum (0), shutting down
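
The "(null)" in the store path suggests that glusterd could not recover the brick entries it had previously written. Assuming the on-disk store layout implied by the log paths (a diagnostic guess, not a step taken from this report), the store files can be inspected directly:

# Inspect the volume's on-disk store (paths taken from the log above).
ls -l /etc/glusterd/vols/dist1/bricks/
cat /etc/glusterd/vols/dist1/info
# A truncated or corrupted value line here would explain the NULL brick
# name seen on restore.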
Comment 1 Anand Avati 2011-07-12 22:09:32 EDT
PATCH: http://patches.gluster.com/patch/7854 in release-3.2 (mgmt/glusterd: write complete string contained in value in to the volinfo file.)
Comment 2 Vijay Bellur 2011-07-14 05:47:54 EDT
Re-opening as this requires more changes.
Comment 3 Anand Avati 2011-07-24 11:44:17 EDT
CHANGE: http://review.gluster.com/71 (Change-Id: I64d7e143eb4b0fda76a9b97134d0233763a1679a) merged in release-3.2 by Anand Avati (avati@gluster.com)
Comment 4 Anand Avati 2011-07-29 01:18:10 EDT
CHANGE: http://review.gluster.com/8 (Change-Id: I06173a4cf22e12bc543f8ff2d151078333b500e1) merged in master by Anand Avati (avati@gluster.com)
Comment 5 Raghavendra Bhat 2011-08-23 02:25:41 EDT
Created around 600 nested directories named dir0/dir1/dir2/dir3...... and set the quota limit on the leaf directory. Killed glusterd and restarted it. glusterd started and was able to execute further gluster commands such as quota remove.
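
A sketch of that verification flow (the volume name comes from the report; the mount point and exact quota value are assumptions):

# Create ~600 nested directories level by level, recording the leaf path.
cd /mnt/dist1
leaf=""
for i in $(seq 0 599); do
  leaf="$leaf/dir$i"
  mkdir "dir$i" && cd "dir$i"
done
cd /mnt/dist1

# Set the quota limit on the leaf, then restart glusterd.
gluster volume quota dist1 limit-usage "$leaf" 1MB
killall glusterd && glusterd

# Confirm glusterd came back and still accepts quota commands.
gluster volume quota dist1 remove "$leaf"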
