Bug 1057450 - [RHSC] Remove brick failed when RHS nodes in the cluster have multiple hostnames
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Assigned To: Sahina Bose
QA Contact: Shruti Sampat
Depends On: 1049994
Blocks: 1035040
Reported: 2014-01-24 02:01 EST by Shruti Sampat
Modified: 2015-12-03 12:17 EST
CC: 9 users

Doc Type: Known Issue
Doc Text:
Brick operations such as adding or removing a brick from the Red Hat Storage Console fail when the Red Hat Storage nodes in the cluster have multiple FQDNs (Fully Qualified Domain Names). Workaround: a host with multiple interfaces should map to the same FQDN for both the Red Hat Storage Console and the gluster peer probe (see the sketch below the summary fields).
Last Closed: 2015-12-03 12:17:58 EST
Type: Bug


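A minimal sketch of the Doc Text workaround above, reusing the addresses from this report (the single FQDN per host is hypothetical): each host maps to exactly one FQDN in /etc/hosts on every RHS node and on the management server, and that same name is used for both the peer probe and the Console.

# /etc/hosts on every RHS node and on the management server
# (one FQDN per host, regardless of how many interfaces it has)
10.70.37.117 host1.example.com
10.70.37.158 host2.example.com

# probe with that FQDN, and import the cluster into the Console
# using the same name in the address field
gluster peer probe host2.example.com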
Description Shruti Sampat 2014-01-24 02:01:22 EST
Description of problem:
--------------------------

Installed two RHS servers (say host1 and host2) and added two network interfaces to each, so each host had two IP addresses. Added the following to /etc/hosts on both RHS servers and on the management server - 

10.70.37.117 host1a
10.70.37.71 host1b
10.70.37.158 host2a
10.70.37.118 host2b

On host1, run 'gluster peer probe host2a'.

On host1 - 

[root@rhs glusterfs_latest]# gluster peer s
Number of Peers: 1

Hostname: host2a
Uuid: 72110b4b-b4af-41f9-ba5f-69051d5279d9
State: Peer in Cluster (Connected)

On host2 - 

[root@rhs glusterfs_latest]# gluster peer s
Number of Peers: 1

Hostname: 10.70.37.117
Uuid: d225e373-5036-4d81-b7b1-2019fec278f3
State: Peer in Cluster (Connected)

(Note that host2 knows host1 only by its IP address, since host1 was never probed back by name - the two peers already disagree on host1's identity, which is at the root of this bug.)

Create a volume as follows on one of the servers - 

# gluster v create test_vol host2a:/tmp/brick1 host1a:/tmp/brick2 force

[root@rhs glusterfs_latest]# gluster v i
 
Volume Name: test_vol
Type: Distribute
Volume ID: 44903fed-0335-4fa2-a287-00e5959d4dab
Status: Stopped
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: host2a:/tmp/brick1
Brick2: host1a:/tmp/brick2


Import the cluster to RHSC using 'host1b' in the address field.

The cluster is imported, but the Bricks subtab of the volume test_vol shows the following bricks in the UI - 

Server     Brick Directory
---------------------------
host2a     /tmp/brick1
host1b     /tmp/brick2

Try to remove the brick host1b:/tmp/brick2 from the UI, with the migrate data option unchecked.

Remove brick fails with the following message - 

Error while executing action Remove Gluster Volume Bricks: Volume remove brick force failed
error: Commit failed on host2a. Please check log file for details.
return code: 17
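For reference, a sketch of the failing operation from the gluster CLI (the exact command the Console issues is an assumption; this is the standard CLI form for removing a brick without migrating data):

# run from any server in the cluster; --mode=script skips the
# interactive confirmation prompt
gluster --mode=script volume remove-brick test_vol host1b:/tmp/brick2 force

# expected to fail the same way, because host2 knows host1 only as
# 10.70.37.117 / host1a, never as host1b

Retrying with the name the peer actually has recorded (host1a in this setup) would be expected to succeed, which points at the hostname mapping rather than at the brick itself.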

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.2-0.33.el6rhs 

How reproducible:
Always

Steps to Reproduce:
1. Set up two RHS servers, each with two network interfaces and a separate hostname mapped to each IP in /etc/hosts.
2. From one server, peer probe the other by one of its hostnames ('gluster peer probe host2a').
3. Create a volume with bricks addressed by those hostnames.
4. Import the cluster into RHSC using a different hostname of one of the servers (host1b) in the address field.
5. From the UI, remove a brick with the migrate data option unchecked.


Actual results:
Remove brick fails with a 'Commit failed' error.

Expected results:
Remove brick should succeed.

Additional info:
Comment 3 Shalaka 2014-02-11 05:22:07 EST
Please review the edited Doc Text and sign off.
Comment 4 Sahina Bose 2014-02-18 03:25:56 EST
Looks good.
Comment 5 Sahina Bose 2014-09-03 11:24:41 EDT
This depends on multiple IP support from RHS, and on being able to differentiate the data and management networks.
Comment 6 Vivek Agarwal 2015-12-03 12:17:58 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
