Bug 1057450

Summary: [RHSC] Remove brick failed when RHS nodes in the cluster have multiple hostnames
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Shruti Sampat <ssampat>
Component: rhsc    Assignee: Sahina Bose <sabose>
Status: CLOSED EOL QA Contact: Shruti Sampat <ssampat>
Severity: unspecified Docs Contact:
Priority: medium    
Version: 2.1    CC: asriram, dpati, knarra, mmahoney, mmccune, nlevinki, rhs-bugs, sabose, sdharane
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Known Issue
Doc Text:
Brick operations such as adding and removing a brick from the Red Hat Storage Console fail when the Red Hat Storage nodes in the cluster have multiple FQDNs (Fully Qualified Domain Names). Workaround: A host with multiple interfaces should map to the same FQDN for both the Red Hat Storage Console and gluster peer probe.
Story Points: ---
Clone Of: Environment:
Last Closed: 2015-12-03 17:17:58 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1049994    
Bug Blocks: 1035040    

Description Shruti Sampat 2014-01-24 07:01:22 UTC
Description of problem:
--------------------------

Installed two RHS servers (say host1 and host2) and added a second network interface to each, so that each host had two IP addresses. Added the following to /etc/hosts on both RHS servers and on the management server - 

10.70.37.117 host1a
10.70.37.71 host1b
10.70.37.158 host2a
10.70.37.118 host2b

On host1, run 'gluster peer probe host2a'.

On host1 - 

[root@rhs glusterfs_latest]# gluster peer status
Number of Peers: 1

Hostname: host2a
Uuid: 72110b4b-b4af-41f9-ba5f-69051d5279d9
State: Peer in Cluster (Connected)

On host2 - 

[root@rhs glusterfs_latest]# gluster peer status
Number of Peers: 1

Hostname: 10.70.37.117
Uuid: d225e373-5036-4d81-b7b1-2019fec278f3
State: Peer in Cluster (Connected)
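
Note the asymmetry in how the peers record each other: host1 knows its peer as host2a, while host2 knows its peer only by the IP address 10.70.37.117, which /etc/hosts maps to host1a rather than host1b. As a rough way to confirm which name glusterd has stored for a peer (not part of the original reproduction; it assumes the standard glusterd on-disk layout), the peer file can be inspected directly - 

[root@rhs glusterfs_latest]# cat /var/lib/glusterd/peers/d225e373-5036-4d81-b7b1-2019fec278f3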

Create a volume as follows on one of the servers - 

# gluster v create test_vol host2a:/tmp/brick1 host1a:/tmp/brick2 force

[root@rhs glusterfs_latest]# gluster volume info
 
Volume Name: test_vol
Type: Distribute
Volume ID: 44903fed-0335-4fa2-a287-00e5959d4dab
Status: Stopped
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: host2a:/tmp/brick1
Brick2: host1a:/tmp/brick2


Import the cluster to RHSC using 'host1b' in the address field.

The cluster is imported, but the Bricks subtab of the volume test_vol shows the following bricks in the UI - 

Server     Brick Directory
---------------------------
host2a     /tmp/brick1
host1b     /tmp/brick2

Try to remove the brick host1b:/tmp/brick2 from the UI by unchecking the migrate data option.

Remove brick fails with the following message - 

Error while executing action Remove Gluster Volume Bricks: Volume remove brick force failed
error: Commit failed on host2a. Please check log file for details.
return code: 17
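
The exact command issued by the Console is not captured here, but the failure can presumably be reproduced from the gluster CLI by addressing the brick with the name the Console uses (host1b) rather than the name recorded at volume creation (host1a), roughly - 

# gluster volume remove-brick test_vol host1b:/tmp/brick2 force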

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.2-0.33.el6rhs 

How reproducible:
Always

Steps to Reproduce:
As explained above.


Actual results:
Remove brick failed.

Expected results:
Remove brick should have worked.

Additional info:
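A minimal sketch of the documented workaround (the FQDNs below are illustrative, not taken from the original setup): give each host a single FQDN, map only that name in /etc/hosts on the RHS nodes and on the management server, and use the same name for both gluster peer probe and the Console address field - 

10.70.37.117 host1.example.com
10.70.37.158 host2.example.com

# gluster peer probe host2.example.com

Then add the hosts to the Console as host1.example.com and host2.example.com as well.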

Comment 3 Shalaka 2014-02-11 10:22:07 UTC
Please review the edited Doc Text and sign off.

Comment 4 Sahina Bose 2014-02-18 08:25:56 UTC
Looks good

Comment 5 Sahina Bose 2014-09-03 15:24:41 UTC
This depends on multiple IP address support from RHS, and on being able to differentiate the data and management networks.

Comment 6 Vivek Agarwal 2015-12-03 17:17:58 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.