Bug 1763124
Summary: | Clearly state that replica 2 is deprecated in the CLI warning when a user tries to create a replica 2 volume | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> | |
Component: | cli | Assignee: | Srijan Sivakumar <ssivakum> | |
Status: | CLOSED ERRATA | QA Contact: | Arthy Loganathan <aloganat> | |
Severity: | medium | Docs Contact: | ||
Priority: | urgent | |||
Version: | rhgs-3.5 | CC: | musoni, pprakash, puebele, ravishankar, rhs-bugs, rkothiya, saraut, sheggodu, storage-qa-internal | |
Target Milestone: | --- | |||
Target Release: | RHGS 3.5.z Batch Update 3 | |||
Hardware: | Unspecified | |||
OS: | Unspecified | |||
Whiteboard: | ||||
Fixed In Version: | glusterfs-6.0-38 | Doc Type: | No Doc Update | |
Doc Text: | Story Points: | --- | ||
Clone Of: | ||||
: | 1763134 (view as bug list) | Environment: | ||
Last Closed: | 2020-12-17 04:50:17 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | 1881823 | |||
Bug Blocks: | 1763134 |
Description
Nag Pavan Chilakam
2019-10-18 10:04:14 UTC
upstream patch: https://review.gluster.org/#/c/glusterfs/+/23565

Test plan for verifying this bug: https://docs.google.com/document/d/1QmBmM7mwOy22q2fl9vf3XXwzeUWckq1M5pkeIIbkb4M/edit

Performed the following tests to verify this bug. The warning message is shown in the CLI as expected.

X1 to X2:
---------

```
[root@dhcp46-157 ~]# gluster v add-brick vol1 replica 2 10.70.46.56:/bricks/brick0/vol1_brick0
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume add-brick: success
```

x2 replica 2 remove-brick:
--------------------------

```
[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.46.56:/bricks/brick0/vol1_brick0 start
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick start: failed: number of bricks provided (1) is not valid. need at least 2 (or 2xN)
```

2*2 to 1*2:
-----------

```
[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.142:/bricks/brick0/vol1_brick0 10.70.47.175:/bricks/brick0/vol1_brick0 start
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick start: success
ID: 04a64564-f544-4432-ad07-d2e87fab12d0

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.142:/bricks/brick0/vol1_brick0 10.70.47.175:/bricks/brick0/vol1_brick0 status
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
        Node  Rebalanced-files  size    scanned  failures  skipped  status     run time in h:m:s
------------  ----------------  ------  -------  --------  -------  ---------  -----------------
10.70.47.142                 0  0Bytes        0         0        0  completed  0:00:00
10.70.47.175                 0  0Bytes        0         0        0  completed  0:00:00

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.142:/bricks/brick0/vol1_brick0 10.70.47.175:/bricks/brick0/vol1_brick0 commit
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
```

1*2 to 2*2:
-----------

```
[root@dhcp46-157 ~]# gluster v add-brick vol1 replica 2 10.70.47.142:/bricks/brick0/vol1_brick0 10.70.47.175:/bricks/brick0/vol1_brick0
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume add-brick: success
```

1*2 to 1*3:
-----------

```
Volume Name: vol1
Type: Replicate
Volume ID: e21d94cd-b693-4dd8-b432-286e2df5f34a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.46.157:/bricks/brick0/vol1_brick0
Brick2: 10.70.46.56:/bricks/brick0/vol1_brick0
Options Reconfigured:
performance.client-io-threads: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

[root@dhcp46-157 ~]# gluster v add-brick vol1 replica 3 10.70.47.175:/bricks/brick0/vol1_brick0
```

Converting from *3 to arbiter:
------------------------------

```
[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0 start
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick start: failed: Migration of data is not needed when reducing replica count. Use the 'force' option

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success

[root@dhcp46-157 ~]# gluster v add-brick vol1 replica 3 arbiter 1 10.70.47.175:/bricks/brick0/vol1_brick0
volume add-brick: success
```

Converting from arbiter to replica 2 and then to *3:
----------------------------------------------------

```
[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.46.56:/bricks/brick0/vol1_brick0 start
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick start: failed: Remove arbiter brick(s) only when converting from arbiter to replica 2 subvolume.

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0 start
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick start: failed: Migration of data is not needed when reducing replica count. Use the 'force' option

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success

[root@dhcp46-157 ~]# gluster vol info
Volume Name: vol1
Type: Replicate
Volume ID: e21d94cd-b693-4dd8-b432-286e2df5f34a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.70.46.157:/bricks/brick0/vol1_brick0
Brick2: 10.70.46.56:/bricks/brick0/vol1_brick0
Options Reconfigured:
performance.client-io-threads: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on

[root@dhcp46-157 ~]# gluster v add-brick vol1 replica 3 10.70.47.175:/bricks/brick0/vol1_brick0 force
volume add-brick: success

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
Usage:
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0 start
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick start: failed: Migration of data is not needed when reducing replica count. Use the 'force' option

[root@dhcp46-157 ~]# gluster v remove-brick vol1 replica 2 10.70.47.175:/bricks/brick0/vol1_brick0 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
```

Verified the fix in:
glusterfs-server-6.0-46.el8rhgs.x86_64
glusterfs-server-6.0-46.el7rhgs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
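Because the fix adds an interactive (y/n) prompt to add-brick and remove-brick, any automation that creates replica 2 volumes now has to answer it from a pipe. Below is a minimal sketch of that pattern; the `gluster` function here is a stand-in mock (so the snippet runs without a cluster), not the real CLI. Note the real CLI also has a `--mode=script` option for suppressing confirmations in scripts; whether it bypasses this particular deprecation prompt should be verified on the target build.

```shell
#!/bin/sh
# Stand-in for the gluster CLI: prints the deprecation warning and reads
# the y/n answer from stdin, mimicking the transcripts above. This is a
# mock for illustration only; the real command is the gluster binary.
gluster() {
    echo "Support for replica 2 volumes stands deprecated as they are prone to split-brain."
    printf 'Do you still want to continue? (y/n) '
    read -r answer
    if [ "$answer" = "y" ]; then
        echo "volume add-brick: success"
    else
        echo "volume add-brick: aborted"
    fi
}

# Pipe the confirmation in, as a test harness or provisioning script would.
echo y | gluster v add-brick vol1 replica 2 host:/bricks/brick0/vol1_brick0
```

The same `echo y | ...` pattern applies to the remove-brick start/commit prompts shown in the verification transcripts.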