Bug 923205 - glusterfsd fails to sign in if the first sign-in attempt fails
Summary: glusterfsd fails to sign in if the first sign-in attempt fails
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-03-19 12:29 UTC by krishnan parthasarathi
Modified: 2015-11-03 23:06 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-22 15:46:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description krishnan parthasarathi 2013-03-19 12:29:11 UTC
Description of problem:

The brick process is running, but the output of the gluster volume status command shows the port for the brick as "N/A".

Brick log messages:
=====================
[2013-03-19 05:08:10.099163] W [socket.c:1512:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (::1:24007)
[2013-03-19 05:08:10.099373] E [rpc-clnt.c:371:saved_frames_unwind] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x333160fad8] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0) [0x333160f790] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe) [0x333160f1fe]))) 0-glusterfs: forced unwinding frame type(Gluster Portmap) op(SIGNIN(4)) called at 2013-03-19 05:08:10.099147 (xid=0x2x)
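
The second log line shows the in-flight SIGNIN request to glusterd (the Gluster Portmap program on port 24007) being force-unwound when the transport disconnects. If glusterfsd attempts the sign-in only once at startup, an unwound first attempt is never retried and the brick's port never gets registered. Below is a minimal sketch of the retry idea, assuming the fix is to re-send SIGNIN on every reconnect; the event names, pmap_signin(), and mgmt_rpc_notify() are illustrative stand-ins, not the actual glusterfsd/rpc-clnt symbols.

/* Hypothetical sketch: retry the portmap SIGNIN on every reconnect
 * instead of attempting it only once at startup. */

#include <stdbool.h>
#include <stdio.h>

enum rpc_event { RPC_EVT_CONNECT, RPC_EVT_DISCONNECT };

static bool signin_done = false;   /* set once a SIGNIN reply arrives */

static void
pmap_signin (void)
{
        /* stand-in for sending the Gluster Portmap SIGNIN request */
        printf ("sending SIGNIN\n");
}

/* Connection-state callback, in the spirit of rpc_clnt_notify(). */
static void
mgmt_rpc_notify (enum rpc_event event)
{
        switch (event) {
        case RPC_EVT_CONNECT:
                /* Retry on every (re)connect until a reply lands. The
                 * reported bug: SIGNIN was sent only on the first
                 * connect, so a frame unwound by a disconnect was
                 * never retried and the brick's port stayed
                 * unregistered. */
                if (!signin_done)
                        pmap_signin ();
                break;
        case RPC_EVT_DISCONNECT:
                /* saved_frames_unwind() discards the in-flight SIGNIN
                 * here; signin_done stays false, so the next connect
                 * triggers another attempt. */
                break;
        }
}

int
main (void)
{
        mgmt_rpc_notify (RPC_EVT_CONNECT);     /* first attempt */
        mgmt_rpc_notify (RPC_EVT_DISCONNECT);  /* frame unwound */
        mgmt_rpc_notify (RPC_EVT_CONNECT);     /* retried after reconnect */
        return 0;
}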


Version-Release number of selected component (if applicable):
[root@rhsauto023:~] Mar-19-2013 04:47:47 $ gluster --version
glusterfs 3.3.0.6rhs built on Mar 17 2013 12:55:38

[root@rhsauto023:~] Mar-19-2013 02:42:28 $ rpm -qa | grep gluster
glusterfs-server-3.3.0.6rhs-4.el6rhs.x86_64


How reproducible:
-

Steps to Reproduce:
1. Create a 1 x 2 replicate volume (2 storage nodes, 1 brick on each node).

2. Create a FUSE mount; create files and directories from the mount point.

3. Take brick2 offline.

4. Try to remove the online brick. (Refer to bug 923135 for the failure.)

5. The brick removal fails and the volume type changes to distribute.

6. Restart the volume.

7. The brick2 process does not start:

[root@rhsauto023:~] Mar-19-2013 05:00:38 $ gluster v status 
Status of volume: vol-rep-2
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhsauto023:/export2/brick0			24010	Y	31673
Brick rhsauto024:/export2/brick1			24010	N	N/A
NFS Server on localhost					38467	Y	32418
NFS Server on rhsauto024				38467	N	31670


8. Restart glusterd on all storage nodes.

  
Actual results:

[root@rhsauto023:~] Mar-19-2013 05:08:20 $ gluster v status
 
Status of volume: vol-rep-2
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick rhsauto023:/export2/brick0			24010	Y	31673
Brick rhsauto024:/export2/brick1			N/A	Y	757
NFS Server on localhost					38467	Y	2869
NFS Server on rhsauto024				38467	Y	824

Expected results:
When the brick process is running, it should be listening on a port, and gluster volume status should report that port.

Additional info:
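
For context, the Port column in gluster volume status is filled from glusterd's in-memory portmap, which is populated only by a successful SIGNIN from the brick; glusterd does not probe the running process. That is why a live brick can show "N/A". A toy sketch of that lookup follows, with a hypothetical pmap table standing in for glusterd's real portmap (not the actual glusterd data structures or API):

#include <stdio.h>
#include <string.h>

/* Toy portmap: in glusterd this table is populated by each brick's
 * SIGNIN request; a brick whose SIGNIN was unwound has no entry. */
struct pmap_entry { const char *brick; int port; };

static struct pmap_entry pmap[] = {
        { "/export2/brick0", 24010 },
        /* brick1's SIGNIN never completed, so it never registered */
};

static int
pmap_port_for (const char *brick)
{
        for (size_t i = 0; i < sizeof (pmap) / sizeof (pmap[0]); i++)
                if (strcmp (pmap[i].brick, brick) == 0)
                        return pmap[i].port;
        return -1;   /* unknown to glusterd */
}

int
main (void)
{
        const char *bricks[] = { "/export2/brick0", "/export2/brick1" };

        for (int i = 0; i < 2; i++) {
                int port = pmap_port_for (bricks[i]);
                if (port > 0)
                        printf ("%-20s %d\n", bricks[i], port);
                else
                        /* brick may be up and listening, but glusterd
                         * has nothing to report without a SIGNIN */
                        printf ("%-20s N/A\n", bricks[i]);
        }
        return 0;
}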

Comment 1 krishnan parthasarathi 2013-03-19 12:30:38 UTC
This bug was found in 3.3.0. I created this bug against mainline since the code path involved is present on upstream master as well; ignore the distracting references to 3.3.0.

Comment 2 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
Because of the large number of bugs filed against it, "mainline" as a version is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

