Bug 1815987
Summary: | [RHEL-8.1] Volume creation via gdeploy is failing. | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Mugdha Soni <musoni>
Component: | gdeploy | Assignee: | Prajith <pkesavap>
Status: | CLOSED ERRATA | QA Contact: | Mugdha Soni <musoni>
Severity: | high | Docs Contact: |
Priority: | urgent | |
Version: | rhgs-3.5 | CC: | godas, pprakash, puebele, rhs-bugs, sabose, storage-qa-internal
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 3.5.z Batch Update 2 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | gdeploy-3.0.0-5 | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-06-16 05:56:04 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Mugdha Soni
2020-03-23 05:09:05 UTC
## Tested with the following:

1. Red Hat Enterprise Linux release 8.2 (Ootpa)
2. gdeploy-3.0.0-5.el8rhgs.noarch
3. glusterfs-server-6.0-32.el8rhgs.x86_64
4. ansible-2.9.6-1.el8ae.noarch

------------------------------------------------------------------------------

1. Created pure replicate volume:

[root@dhcp47-62 gdeploy]# cat volume_rep.conf
[hosts]
10.70.47.62
10.70.47.31
10.70.46.85

[volume]
action=create
volname=rep1
replica=yes
replica_count=3
brick_dirs=/glus/brick1/b1,/glus/brick1/b1,/glus/brick1/b1
force=yes

------------------------------------------------------------------------------

[root@dhcp47-62 gdeploy]# gluster volume info

Volume Name: rep1
Type: Replicate
Volume ID: cb797f37-30ab-4d42-a353-7d5dbc41f1a6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.62:/glus/brick1/b1
Brick2: 10.70.47.31:/glus/brick1/b1
Brick3: 10.70.46.85:/glus/brick1/b1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

------------------------------------------------------------------------------

[root@dhcp47-62 gdeploy]# gluster volume status
Status of volume: rep1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.62:/glus/brick1/b1           49152     0          Y       16771
Brick 10.70.47.31:/glus/brick1/b1           49152     0          Y       13873
Brick 10.70.46.85:/glus/brick1/b1           49152     0          Y       13453
Self-heal Daemon on localhost               N/A       N/A        Y       16792
Self-heal Daemon on 10.70.47.31             N/A       N/A        Y       13894
Self-heal Daemon on 10.70.46.85             N/A       N/A        Y       13474

Task Status of Volume rep1
------------------------------------------------------------------------------
There are no active volume tasks
------------------------------------------------------------------------------

2. Created distributed-replicate volume:

[root@dhcp47-62 gdeploy]# cat volume.conf
[hosts]
10.70.47.62
10.70.47.31
10.70.46.85

[volume]
action=create
volname=rep2
replica=yes
replica_count=3
brick_dirs=/glus/brick1/b1,/glus/brick2/b2,/glus/brick3/b3
force=yes

[root@dhcp47-62 gdeploy]# gluster v list
rep2

[root@dhcp47-62 gdeploy]# gluster volume info

Volume Name: rep2
Type: Distributed-Replicate
Volume ID: 7821f221-16ca-4e57-867b-60488c2b1e80
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.47.62:/glus/brick1/b1
Brick2: 10.70.47.31:/glus/brick1/b1
Brick3: 10.70.46.85:/glus/brick1/b1
Brick4: 10.70.47.62:/glus/brick2/b2
Brick5: 10.70.47.31:/glus/brick2/b2
Brick6: 10.70.46.85:/glus/brick2/b2
Brick7: 10.70.47.62:/glus/brick3/b3
Brick8: 10.70.47.31:/glus/brick3/b3
Brick9: 10.70.46.85:/glus/brick3/b3
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

[root@dhcp47-62 gdeploy]# gluster volume status
Status of volume: rep2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.62:/glus/brick1/b1           49152     0          Y       15896
Brick 10.70.47.31:/glus/brick1/b1           49152     0          Y       13510
Brick 10.70.46.85:/glus/brick1/b1           49152     0          Y       13087
Brick 10.70.47.62:/glus/brick2/b2           49153     0          Y       15916
Brick 10.70.47.31:/glus/brick2/b2           49153     0          Y       13530
Brick 10.70.46.85:/glus/brick2/b2           49153     0          Y       13107
Brick 10.70.47.62:/glus/brick3/b3           49154     0          Y       15936
Brick 10.70.47.31:/glus/brick3/b3           49154     0          Y       13550
Brick 10.70.46.85:/glus/brick3/b3           49154     0          Y       13127
Self-heal Daemon on localhost               N/A       N/A        Y       15957
Self-heal Daemon on 10.70.46.85             N/A       N/A        Y       13148
Self-heal Daemon on 10.70.47.31             N/A       N/A        Y       13571

Task Status of Volume rep2

Volume creation via gdeploy is successful. Hence moving the bug to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2577
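The difference between the two verification runs comes down to how the `[hosts]` list and `brick_dirs` combine into a brick list: rep1 repeats the same directory, so gdeploy produces a single replica set (1 x 3 = 3), while rep2 lists three distinct directories and gets one replica set per directory (3 x 3 = 9). A minimal sketch of that pairing, assuming d-major ordering as seen in the `gluster volume info` output above (this is an illustration of the observed behavior, not gdeploy's actual implementation):

```python
# Sketch: expand a gdeploy-style [volume] section into per-host bricks.
# expand_bricks is a hypothetical helper, not part of gdeploy's API.

def expand_bricks(hosts, brick_dirs, replica_count):
    # Deduplicate brick_dirs, preserving order: rep1 listed
    # /glus/brick1/b1 three times but ended up with one 1 x 3 replica set.
    distinct = []
    for d in brick_dirs:
        if d not in distinct:
            distinct.append(d)
    # Each distinct directory becomes one replica set spanning all hosts,
    # matching the Brick1..BrickN order shown by `gluster volume info`.
    bricks = [f"{h}:{d}" for d in distinct for h in hosts]
    dist_count = len(bricks) // replica_count
    return bricks, f"{dist_count} x {replica_count} = {len(bricks)}"

hosts = ["10.70.47.62", "10.70.47.31", "10.70.46.85"]

# rep1: same directory on every host -> pure Replicate
rep1_bricks, rep1_layout = expand_bricks(hosts, ["/glus/brick1/b1"] * 3, 3)

# rep2: three distinct directories -> Distributed-Replicate
rep2_bricks, rep2_layout = expand_bricks(
    hosts, ["/glus/brick1/b1", "/glus/brick2/b2", "/glus/brick3/b3"], 3)
```

Under these assumptions rep1 yields a "1 x 3 = 3" layout and rep2 a "3 x 3 = 9" layout, matching the "Number of Bricks" lines in the verified output.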