Bug 1277944 - "Transport endpoint not connected" in heal info though hot tier bricks are up
Summary: "Transport endpoint not connected" in heal info though hot tier bricks are up
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: rjoseph
QA Contact: Bhaskarakiran
URL:
Whiteboard:
Depends On:
Blocks: 1191480 1294794 1294797 1467513
 
Reported: 2015-11-04 11:58 UTC by Bhaskarakiran
Modified: 2017-07-04 06:07 UTC (History)
8 users

Fixed In Version: glusterfs-3.7.5-15
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1294794
Environment:
Last Closed: 2016-03-01 05:51:11 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0193 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 update 2 2016-03-01 10:20:36 UTC

Description Bhaskarakiran 2015-11-04 11:58:45 UTC
Description of problem:
=======================
gluster volume heal info output shows "Transport endpoint is not connected" for some of the bricks even though they are up. Files and directories created on the volume were verified to be present on those bricks. This is an EC volume with a 2x2 distributed-replicate hot tier attached.

[root@transformers ~]# gluster v heal vol1 info
Brick ninja:/rhs/brick2/vol1-tier4
Number of entries: 0

Brick vertigo:/rhs/brick2/vol1-tier3
Number of entries: 0

Brick ninja:/rhs/brick1/vol1-tier2
Status: Transport endpoint is not connected

Brick vertigo:/rhs/brick1/vol1-tier1
Status: Transport endpoint is not connected

Brick transformers:/rhs/brick1/b1
Number of entries: 0

Brick interstellar:/rhs/brick1/b2
Number of entries: 0

Brick transformers:/rhs/brick2/b3
Number of entries: 0

Brick interstellar:/rhs/brick2/b4
Number of entries: 0

Brick transformers:/rhs/brick3/b5
Number of entries: 0

Brick interstellar:/rhs/brick3/b6
Number of entries: 0

Brick transformers:/rhs/brick4/b7
Number of entries: 0

Brick interstellar:/rhs/brick4/b8
Number of entries: 0

Brick transformers:/rhs/brick5/b9
Number of entries: 0

Brick interstellar:/rhs/brick5/b10
Number of entries: 0

Brick transformers:/rhs/brick6/b11
Number of entries: 0

Brick interstellar:/rhs/brick6/b12
Number of entries: 0

[root@transformers ~]# 

Brick vertigo:/rhs/brick1/vol1-tier1 :

[root@vertigo ~]# ls -ltr /rhs/brick1/vol1-tier1/
total 9765628
-rw-r--r--. 2 root root 10000002053 Nov  4 15:09 file
drwxr-xr-x. 2 root root           6 Nov  4 15:28 d1
drwxr-xr-x. 2 root root           6 Nov  4 15:32 dir1
drwxr-xr-x. 2 root root           6 Nov  4 15:32 dir2
[root@vertigo ~]# 


Brick ninja:/rhs/brick1/vol1-tier2 :

[root@ninja ~]# ls -ltr /rhs/brick1/vol1-tier2/
total 9765628
-rw-r--r--. 2 root root 10000002053 Nov  4 15:09 file
drwxr-xr-x. 2 root root           6 Nov  4 15:28 d1
drwxr-xr-x. 2 root root           6 Nov  4 15:28 dir1
drwxr-xr-x. 2 root root           6 Nov  4 15:29 dir2
[root@ninja ~]# 


[root@transformers ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick ninja:/rhs/brick2/vol1-tier4          49153     0          Y       18692
Brick vertigo:/rhs/brick2/vol1-tier3        49153     0          Y       17088
Brick ninja:/rhs/brick1/vol1-tier2          49152     0          Y       18674
Brick vertigo:/rhs/brick1/vol1-tier1        49152     0          Y       17070
Cold Bricks:
Brick transformers:/rhs/brick1/b1           49175     0          Y       45094
Brick interstellar:/rhs/brick1/b2           49173     0          Y       64974
Brick transformers:/rhs/brick2/b3           49176     0          Y       45114
Brick interstellar:/rhs/brick2/b4           49174     0          Y       64992
Brick transformers:/rhs/brick3/b5           49177     0          Y       34340
Brick interstellar:/rhs/brick3/b6           49175     0          Y       54414
Brick transformers:/rhs/brick4/b7           49178     0          Y       34358
Brick interstellar:/rhs/brick4/b8           49176     0          Y       54432
Brick transformers:/rhs/brick5/b9           49179     0          Y       34376
Brick interstellar:/rhs/brick5/b10          49177     0          Y       54450
Brick transformers:/rhs/brick6/b11          49180     0          Y       34394
Brick interstellar:/rhs/brick6/b12          49178     0          Y       54468
Snapshot Daemon on localhost                49181     0          Y       34470
NFS Server on localhost                     2049      0          Y       45133
Self-heal Daemon on localhost               N/A       N/A        Y       45141
Quota Daemon on localhost                   N/A       N/A        Y       45151
Snapshot Daemon on ninja                    N/A       N/A        N       N/A  
NFS Server on ninja                         N/A       N/A        N       N/A  
Self-heal Daemon on ninja                   N/A       N/A        Y       28021
Quota Daemon on ninja                       N/A       N/A        Y       28033
Snapshot Daemon on vertigo                  N/A       N/A        N       N/A  
NFS Server on vertigo                       N/A       N/A        N       N/A  
Self-heal Daemon on vertigo                 N/A       N/A        Y       19514
Quota Daemon on vertigo                     N/A       N/A        Y       19525
Snapshot Daemon on interstellar.lab.eng.blr
.redhat.com                                 49179     0          Y       54544
NFS Server on interstellar.lab.eng.blr.redh
at.com                                      2049      0          Y       65011
Self-heal Daemon on interstellar.lab.eng.bl
r.redhat.com                                N/A       N/A        Y       65019
Quota Daemon on interstellar.lab.eng.blr.re
dhat.com                                    N/A       N/A        Y       65029
 
Task Status of Volume vol1
------------------------------------------------------------------------------
Task                 : Tier migration      
ID                   : ad7d8117-ada5-46e7-abdb-971634cee1fe
Status               : in progress         
 
[root@transformers ~]# 

Version-Release number of selected component (if applicable):
=============================================================
3.7.5-5

[root@transformers ~]# gluster --version
glusterfs 3.7.5 built on Oct 29 2015 10:11:53
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@transformers ~]# 

How reproducible:
=================
100%

Steps to Reproduce:
1. Create a 1x(8+4) EC volume and attach a 2x2 dist-rep hot tier
2. Bring down 4 of the EC bricks and create some files of 1GB each
3. Bring the bricks back up
4. List heal info
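The reproduction steps above can be sketched as a shell session. The host names, brick paths, and mount point below are hypothetical stand-ins, not the exact cluster from this report; this is an outline under those assumptions, not a turnkey script (it requires a running gluster trusted storage pool).

```shell
#!/bin/sh
# Sketch of the reproduction steps; "server1"/"server2"/"hot1"/"hot2"
# and all brick paths are hypothetical.

# 1. Create and start a 1x(8+4) disperse (EC) volume (12 bricks),
#    then attach a 2x2 distributed-replicate hot tier (4 bricks).
gluster volume create vol1 disperse 12 redundancy 4 \
  server{1,2}:/rhs/brick{1..6}/ec
gluster volume start vol1
gluster volume attach-tier vol1 replica 2 \
  hot{1,2}:/rhs/brick{1,2}/tier

# 2. Bring down 4 of the EC bricks (PIDs come from the status output),
#    then create some ~1GB files from a client mount.
gluster volume status vol1            # note the PIDs of 4 EC bricks
kill <brick-pid-1> <brick-pid-2> <brick-pid-3> <brick-pid-4>
mount -t glusterfs server1:/vol1 /mnt/vol1
dd if=/dev/zero of=/mnt/vol1/file1 bs=1M count=1024

# 3. Bring the downed bricks back up.
gluster volume start vol1 force

# 4. List heal info; every brick that is up should report entry
#    counts, not "Transport endpoint is not connected".
gluster volume heal vol1 info
```

Note that in the bug as reported, the misleading status appeared on the hot-tier bricks even though `gluster volume status` showed them online.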

Actual results:
===============
Transport endpoint not connected

Expected results:
=================
heal info should show the correct status for all bricks that are up.


Additional info:

Comment 2 Pranith Kumar K 2015-12-24 10:03:41 UTC
Bhaskar,
     Could you provide sosreports for this bug?

Pranith

Comment 11 Avra Sengupta 2016-01-07 14:47:59 UTC
Master URL : http://review.gluster.org/#/c/13118/
Release 3.7 URL : http://review.gluster.org/#/c/13119/
RHGS 3.1.2 URL : https://code.engineering.redhat.com/gerrit/#/c/65079/

Comment 12 Bhaskarakiran 2016-01-12 13:26:25 UTC
Verified this on the latest build, 3.7.5-15, and tried a couple of tweaks but did not hit the reported problem. Marking this as verified.

Comment 14 errata-xmlrpc 2016-03-01 05:51:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

