Bug 1881823 - add-brick: Getting an error message while adding a brick from a different node to the volume.
Summary: add-brick: Getting an error message while adding a brick from a different node to the volume.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cli
Version: rhgs-3.5
Hardware: x86_64
OS: All
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Sheetal Pamecha
QA Contact: Arthy Loganathan
URL:
Whiteboard:
Depends On:
Blocks: 1763124
 
Reported: 2020-09-23 07:09 UTC by Arthy Loganathan
Modified: 2020-12-17 04:52 UTC
CC List: 8 users

Fixed In Version: glusterfs-6.0-46
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 04:51:53 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:5603 - Last Updated: 2020-12-17 04:52:15 UTC

Description Arthy Loganathan 2020-09-23 07:09:01 UTC
Getting the following error message while adding a brick from a different node to the volume.

"volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. "

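A quick way to confirm that the existing replica bricks and the brick being added really are on different hosts is to list the brick layout before retrying. A minimal check, using the volume name and prompt from the transcript under Additional info:

[root@dhcp47-141 ~]# gluster vol info vol2 | grep -E '^Brick[0-9]'
Brick1: 10.70.47.141:/bricks/brick2/vol2_brick2
Brick2: 10.70.47.41:/bricks/brick2/vol2_brick2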

Version-Release number of selected component (if applicable):
glusterfs-server-6.0-45.el8rhgs.x86_64


Is this issue reproducible? If yes, share more details:
Always

Steps to Reproduce:
1. Create a replica 2 volume.
2. Add a brick from a different node to convert it to a replica 3 volume (a command sketch follows).
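A minimal sketch of these two steps, with placeholder host names and brick paths (the actual commands used are in the transcript under Additional info):

# gluster vol create testvol replica 2 server1:/bricks/b1/testvol server2:/bricks/b1/testvol
# gluster vol start testvol
# gluster vol add-brick testvol replica 3 server3:/bricks/b1/testvol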

Actual results:
Getting an error message.
"volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior. "
However, with the force option, add-brick is successful.
 
Expected results:
The error message should not be seen, as the brick being added is from a different node.
 
 

Additional info:

[root@dhcp47-141 ~]# gluster vol create vol2 replica 2 10.70.47.141:/bricks/brick2/vol2_brick2 10.70.47.41:/bricks/brick2/vol2_brick2
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this.
Do you still want to continue?
 (y/n) y
volume create: vol2: success: please start the volume to access data
[root@dhcp47-141 ~]# gluster vol start vol2
volume start: vol2: success
[root@dhcp47-141 ~]# gluster vol info vol2
 
Volume Name: vol2
Type: Replicate
Volume ID: dba4e3a0-b82a-41a6-b55e-9c4dac678718
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.47.141:/bricks/brick2/vol2_brick2
Brick2: 10.70.47.41:/bricks/brick2/vol2_brick2
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


[root@dhcp47-141 ~]# gluster vol add-brick vol2 replica 3 10.70.47.178:/bricks/brick2/vol2_brick2
volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have best fault tolerant configuration. Use 'force' at the end of the command if you want to override this behavior.
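The force workaround mentioned above is not captured in this transcript; presumably it is the same command with 'force' appended, as the error text suggests:

[root@dhcp47-141 ~]# gluster vol add-brick vol2 replica 3 10.70.47.178:/bricks/brick2/vol2_brick2 force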

Comment 2 Arthy Loganathan 2020-09-23 07:32:10 UTC
This bug is not a blocker for the release; with the force option, add-brick is successful.

Comment 10 Arthy Loganathan 2020-10-28 11:21:45 UTC
[root@dhcp46-157 ~]# gluster vol create vol8 replica 2 10.70.46.157:/bricks/brick5/vol7_brick2 10.70.46.56:/bricks/brick5/vol7_brick2 
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this.
Do you still want to continue?
 (y/n) y
volume create: vol8: success: please start the volume to access data
[root@dhcp46-157 ~]# gluster vol start vol8
volume start: vol8: success
[root@dhcp46-157 ~]# 
[root@dhcp46-157 ~]# gluster vol add-brick vol8 replica 3 10.70.47.142:/bricks/brick0/vol8_brick2
volume add-brick: success

[root@dhcp46-157 ~]# gluster vol info vol8
 
Volume Name: vol8
Type: Replicate
Volume ID: 2dc220ad-6ea0-42e6-be6e-298990ea9e87
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.157:/bricks/brick5/vol7_brick2
Brick2: 10.70.46.56:/bricks/brick5/vol7_brick2
Brick3: 10.70.47.142:/bricks/brick0/vol8_brick2
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.brick-multiplex: off
[root@dhcp46-157 ~]# 

The error message is no longer seen when the newly added brick is from a different node.

Verified the fix in:
glusterfs-server-6.0-46.el7rhgs.x86_64
glusterfs-server-6.0-46.el8rhgs.x86_64
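For reference, the installed build on each node can be confirmed with a standard rpm query (the output shown is what would be expected on the el8 system above):

[root@dhcp46-157 ~]# rpm -q glusterfs-server
glusterfs-server-6.0-46.el8rhgs.x86_64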

Comment 12 errata-xmlrpc 2020-12-17 04:51:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

