Bug 1881823
| Summary: | add-brick: Getting an error message while adding a brick from different node to the volume. | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Arthy Loganathan <aloganat> |
| Component: | cli | Assignee: | Sheetal Pamecha <spamecha> |
| Status: | CLOSED ERRATA | QA Contact: | Arthy Loganathan <aloganat> |
| Severity: | high | Docs Contact: | |
| Priority: | urgent | ||
| Version: | rhgs-3.5 | CC: | nchilaka, pprakash, puebele, rhs-bugs, rkothiya, sajmoham, sheggodu, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Regression, ZStream |
| Target Release: | RHGS 3.5.z Batch Update 3 | ||
| Hardware: | x86_64 | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | glusterfs-6.0-46 | Doc Type: | No Doc Update |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-12-17 04:51:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1763124 | ||
|
Description
Arthy Loganathan
2020-09-23 07:09:01 UTC
This bug is not a blocker for the release; with the force option, add-brick is successful.

```
[root@dhcp46-157 ~]# gluster vol create vol8 replica 2 10.70.46.157:/bricks/brick5/vol7_brick2 10.70.46.56:/bricks/brick5/vol7_brick2
Support for replica 2 volumes stands deprecated as they are prone to split-brain. Use Arbiter or Replica 3 to avoid this. Do you still want to continue? (y/n) y
volume create: vol8: success: please start the volume to access data
[root@dhcp46-157 ~]# gluster vol start vol8
volume start: vol8: success
[root@dhcp46-157 ~]# gluster vol add-brick vol8 replica 3 10.70.47.142:/bricks/brick0/vol8_brick2
volume add-brick: success
[root@dhcp46-157 ~]# gluster vol info vol8

Volume Name: vol8
Type: Replicate
Volume ID: 2dc220ad-6ea0-42e6-be6e-298990ea9e87
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.157:/bricks/brick5/vol7_brick2
Brick2: 10.70.46.56:/bricks/brick5/vol7_brick2
Brick3: 10.70.47.142:/bricks/brick0/vol8_brick2
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.brick-multiplex: off
```

The error message is no longer seen when the newly added brick is from a different node.

Verified the fix in:

- glusterfs-server-6.0-46.el7rhgs.x86_64
- glusterfs-server-6.0-46.el8rhgs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
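The report states that the workaround for the affected builds was to add the brick with the force option, but does not show the invocation. A sketch of that workaround, reusing the host and brick path from the session above (the exact command line is assumed, not captured output from this report):

```shell
# Workaround sketch for affected builds: append "force" to bypass the
# erroneous check when the new brick lives on a different node.
# Host and brick path are reused from the session in this report.
gluster volume add-brick vol8 replica 3 10.70.47.142:/bricks/brick0/vol8_brick2 force
```

This requires a running Gluster trusted storage pool, so it is an operational command rather than something runnable in isolation; on the fixed version (glusterfs-6.0-46) the plain add-brick succeeds without force, as shown above.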