Bug 810829 - glusterd crashes on restart
Summary: glusterd crashes on restart
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact:
URL:
Whiteboard:
Duplicates: 810883
Depends On:
Blocks: 817967
 
Reported: 2012-04-09 10:33 UTC by Shwetha Panduranga
Modified: 2015-11-03 23:04 UTC

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:45:08 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Backtrace of core (34.19 KB, application/octet-stream)
2012-04-09 10:33 UTC, Shwetha Panduranga
glusterd log file (1.77 MB, text/x-log)
2012-04-09 10:34 UTC, Shwetha Panduranga

Description Shwetha Panduranga 2012-04-09 10:33:03 UTC
Created attachment 576163 [details]
Backtrace of core

Description of problem:
glusterd crashed when it was restarted.

[04/09/12 - 20:50:05 root@APP-SERVER1 glusterfs]# gluster volume info
 
Volume Name: dstore
Type: Replicate
Volume ID: c0925823-327d-4222-9f8a-a2bbc3b6de96
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.2.36:/export2/dstore1
Brick2: 192.168.2.37:/export2/dstore1
 
Volume Name: dstore1
Type: Distribute
Volume ID: 825363db-4bb9-4dce-9c2f-9222b0e4e27f
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.2.36:/export2/dstore2


Version-Release number of selected component (if applicable):
mainline

How reproducible:


Steps to Reproduce:
1. Create a distribute-replicate volume with brick1 and brick3 from machine1, and brick2 and brick4 from machine2. machine1 and machine2 are part of the trusted storage pool. Start volume1.
2. Create a distribute volume with brick1 from machine1 and brick2 from machine2. Start volume2.
3. Restart machine2.
4. Start glusterd.
  
Actual results:
glusterd crashed

Additional Info: volume status when glusterd was restarted.
-----------------

[04/09/12 - 20:35:03 root@APP-SERVER3 ~]# gluster volume status

Status of volume: dstore
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.2.36:/export2/dstore1			24014	N	N/A
Brick 192.168.2.37:/export2/dstore1			24011	Y	10795
NFS Server on localhost					38467	Y	10926
Self-heal Daemon on localhost				N/A	Y	10932
NFS Server on 192.168.2.36				38467	Y	2005
Self-heal Daemon on 192.168.2.36			N/A	Y	2014
NFS Server on 192.168.2.35				38467	Y	2923
Self-heal Daemon on 192.168.2.35			N/A	Y	2929
NFS Server on 192.168.2.34				38467	Y	14439
Self-heal Daemon on 192.168.2.34			N/A	Y	14445

Status of volume: dstore1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.2.36:/export2/dstore2			24015	N	N/A
NFS Server on localhost					38467	Y	10926
NFS Server on 192.168.2.35				38467	Y	2923
NFS Server on 192.168.2.36				38467	Y	2005
NFS Server on 192.168.2.34				38467	Y	14439

Comment 1 Shwetha Panduranga 2012-04-09 10:34:02 UTC
Created attachment 576165 [details]
glusterd log file

Comment 2 Anand Avati 2012-04-13 05:19:28 UTC
CHANGE: http://review.gluster.com/3109 (glusterd: Removed 'unprotected' concurrent access of priv->volumes on glusterd restart) merged in master by Vijay Bellur (vijay)

Comment 3 krishnan parthasarathi 2012-04-13 08:21:08 UTC
*** Bug 810883 has been marked as a duplicate of this bug. ***

Comment 4 Shwetha Panduranga 2012-04-25 09:49:12 UTC
Verified the bug on 3.3.0qa38. The bug is fixed.

