Bug 810829 - glusterd crashes on restart
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: krishnan parthasarathi
Duplicates: 810883
Depends On:
Blocks: 817967
Reported: 2012-04-09 06:33 EDT by Shwetha Panduranga
Modified: 2015-11-03 18:04 EST
CC List: 2 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:45:08 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
Backtrace of core (34.19 KB, application/octet-stream), 2012-04-09 06:33 EDT, Shwetha Panduranga
glusterd log file (1.77 MB, text/x-log), 2012-04-09 06:34 EDT, Shwetha Panduranga

Description Shwetha Panduranga 2012-04-09 06:33:03 EDT
Created attachment 576163
Backtrace of core

Description of problem:
glusterd crashed when restarted. 

[04/09/12 - 20:50:05 root@APP-SERVER1 glusterfs]# gluster volume info
 
Volume Name: dstore
Type: Replicate
Volume ID: c0925823-327d-4222-9f8a-a2bbc3b6de96
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.2.36:/export2/dstore1
Brick2: 192.168.2.37:/export2/dstore1
 
Volume Name: dstore1
Type: Distribute
Volume ID: 825363db-4bb9-4dce-9c2f-9222b0e4e27f
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.2.36:/export2/dstore2


Version-Release number of selected component (if applicable):
mainline

How reproducible:


Steps to Reproduce:
1. Create a distribute-replicate volume, with brick1 and brick3 from machine1 and brick2 and brick4 from machine2; machine1 and machine2 are in the same trusted storage pool. Start volume1.
2. Create a distribute volume, with brick1 from machine1 and brick2 from machine2. Start volume2.
3. Restart machine2.
4. Start glusterd (see the CLI sketch below).
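
The steps map onto the gluster CLI roughly as follows. This is a sketch, not the reporter's exact commands: the hostnames machine1/machine2 and the brick paths /export/brick* are placeholders.

# On machine1; machine2 has already been probed into the trusted
# storage pool (gluster peer probe machine2).

# Step 1: 2x2 distribute-replicate volume; consecutive bricks form
# a replica pair, so each pair spans both machines.
gluster volume create volume1 replica 2 \
    machine1:/export/brick1 machine2:/export/brick2 \
    machine1:/export/brick3 machine2:/export/brick4
gluster volume start volume1

# Step 2: plain distribute volume across both machines.
gluster volume create volume2 \
    machine1:/export/brick5 machine2:/export/brick6
gluster volume start volume2

# Steps 3-4: reboot machine2, then start glusterd on it; on the
# affected version, glusterd crashed at this point.
service glusterd start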
  
Actual results:
glusterd crashed
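
A backtrace like the one attached can be extracted from the resulting core with gdb; a minimal sketch, assuming the glusterd binary at /usr/sbin/glusterd and a hypothetical core file path:

# /path/to/core is a placeholder; the real location depends on the
# system's core_pattern setting.
gdb --batch -ex 'thread apply all bt full' /usr/sbin/glusterd /path/to/core > glusterd-backtrace.txt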

Additional info: volume status when glusterd was restarted.
-----------------

[04/09/12 - 20:35:03 root@APP-SERVER3 ~]# gluster volume status

Status of volume: dstore
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.2.36:/export2/dstore1			24014	N	N/A
Brick 192.168.2.37:/export2/dstore1			24011	Y	10795
NFS Server on localhost					38467	Y	10926
Self-heal Daemon on localhost				N/A	Y	10932
NFS Server on 192.168.2.36				38467	Y	2005
Self-heal Daemon on 192.168.2.36			N/A	Y	2014
NFS Server on 192.168.2.35				38467	Y	2923
Self-heal Daemon on 192.168.2.35			N/A	Y	2929
NFS Server on 192.168.2.34				38467	Y	14439
Self-heal Daemon on 192.168.2.34			N/A	Y	14445

Status of volume: dstore1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.2.36:/export2/dstore2			24015	N	N/A
NFS Server on localhost					38467	Y	10926
NFS Server on 192.168.2.35				38467	Y	2923
NFS Server on 192.168.2.36				38467	Y	2005
NFS Server on 192.168.2.34				38467	Y	14439
Comment 1 Shwetha Panduranga 2012-04-09 06:34:02 EDT
Created attachment 576165
glusterd log file
Comment 2 Anand Avati 2012-04-13 01:19:28 EDT
CHANGE: http://review.gluster.com/3109 (glusterd: Removed 'unprotected' concurrent access of priv->volumes on glusterd restart) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 3 krishnan parthasarathi 2012-04-13 04:21:08 EDT
*** Bug 810883 has been marked as a duplicate of this bug. ***
Comment 4 Shwetha Panduranga 2012-04-25 05:49:12 EDT
Verified the bug on 3.3.0qa38. The bug is fixed.
