Bug 991333 - fuse mount logging information about trying to connect to bricks which are successfully removed from volume.
Summary: fuse mount logging information about trying to connect to bricks which are successfully removed from volume.
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-02 07:38 UTC by spandura
Modified: 2015-12-03 17:15 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:15:18 UTC
Embargoed:


Attachments
SOS Reports (5.63 MB, application/x-gzip)
2013-08-02 07:41 UTC, spandura

Description spandura 2013-08-02 07:38:25 UTC
Description of problem:
========================
Even after bricks are successfully removed from the volume, the mount log still reports:

1. Failure to get the port number for the removed bricks' remote subvolumes.

2. That the client process will keep trying to connect to glusterd until the brick's port is available.

Output from the fuse mount log:
================================
[2013-08-02 07:20:18.645379] E [client-handshake.c:1741:client_query_portmap_cbk] 0-vol_dis_rep-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2013-08-02 07:20:18.645499] I [client.c:2103:client_rpc_notify] 0-vol_dis_rep-client-0: disconnected from 10.70.34.119:24007. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-08-02 07:20:21.651015] E [client-handshake.c:1741:client_query_portmap_cbk] 0-vol_dis_rep-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2013-08-02 07:20:21.651119] I [client.c:2103:client_rpc_notify] 0-vol_dis_rep-client-1: disconnected from 10.70.34.118:24007. Client process will keep trying to connect to glusterd until brick's port is available. 

This information is confusing. Even after the mount process receives the new volfile, why does it still refer to the old bricks?
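
One way to cross-check this (a minimal sketch; the volfile path, the grepped option names, and the use of 'gluster volume status' are assumptions based on the standard glusterd layout, not commands taken from this report):

# On a server node: confirm the removed bricks are absent from the
# regenerated fuse volfile that clients fetch after remove-brick commit
grep -E 'remote-host|remote-subvolume' /var/lib/glusterd/vols/vol_dis_rep/vol_dis_rep-fuse.vol

# And confirm only the remaining bricks are listed for the volume
gluster volume status vol_dis_rep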

Version-Release number of selected component (if applicable):
=============================================================
root@darrel [Aug-02-2013-12:55:34] >rpm -qa | grep glusterfs
glusterfs-fuse-3.4.0.14rhs-1.el6_4.x86_64
glusterfs-3.4.0.14rhs-1.el6_4.x86_64
glusterfs-debuginfo-3.4.0.14rhs-1.el6_4.x86_64

How reproducible:
=================
Often

Steps to Reproduce:
====================
1. Create a 3x2 distributed-replicate volume and start it.

2. Create a fuse mount. Create a file and open it for editing.

3. Remove the bricks of the replicate subvolume that holds the file.

4. Once the remove-brick operation reports completion (per remove-brick status), commit the remove-brick operation.

5. The mount receives the new volfile. Even after the successful remove-brick operation, the mount process logs reconnect attempts to the removed bricks (a command sketch follows this list).
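
A minimal command sketch of the above steps (the volume name and brick paths are taken from this report; the mount point and mount host are hypothetical, and the syntax is the generic gluster CLI rather than the reporter's exact session):

# On a server node: create and start the 3x2 distributed-replicate volume
gluster volume create vol_dis_rep replica 2 \
    king:/rhs/bricks/b0 hicks:/rhs/bricks/b1 \
    king:/rhs/bricks/b2 hicks:/rhs/bricks/b3 \
    king:/rhs/bricks/b4 hicks:/rhs/bricks/b5
gluster volume start vol_dis_rep

# On the client: fuse-mount the volume (mount point is hypothetical)
mount -t glusterfs king:/vol_dis_rep /mnt/vol_dis_rep

# On a server node: remove one replica pair, wait for completion, then commit
gluster volume remove-brick vol_dis_rep king:/rhs/bricks/b0 hicks:/rhs/bricks/b1 start
gluster volume remove-brick vol_dis_rep king:/rhs/bricks/b0 hicks:/rhs/bricks/b1 status
gluster volume remove-brick vol_dis_rep king:/rhs/bricks/b0 hicks:/rhs/bricks/b1 commit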

Comment 1 spandura 2013-08-02 07:41:03 UTC
Created attachment 781872 [details]
SOS Reports

Comment 2 spandura 2013-08-02 07:41:53 UTC
Volume info before removing the bricks:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
root@king [Aug-02-2013-11:57:27] >gluster v info
 
Volume Name: vol_dis_rep
Type: Distributed-Replicate
Volume ID: 00173faf-7c6e-403c-bc64-2bf644cfb293
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: king:/rhs/bricks/b0
Brick2: hicks:/rhs/bricks/b1
Brick3: king:/rhs/bricks/b2
Brick4: hicks:/rhs/bricks/b3
Brick5: king:/rhs/bricks/b4
Brick6: hicks:/rhs/bricks/b5

Volume info after removing the bricks:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

root@king [Aug-02-2013-13:10:34] >gluster v info
 
Volume Name: vol_dis_rep
Type: Distributed-Replicate
Volume ID: 00173faf-7c6e-403c-bc64-2bf644cfb293
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: king:/rhs/bricks/b2
Brick2: hicks:/rhs/bricks/b3
Brick3: king:/rhs/bricks/b4
Brick4: hicks:/rhs/bricks/b5

Comment 4 Vivek Agarwal 2015-12-03 17:15:18 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which you requested review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

