Bug 1815987 - [RHEL-8.1] Volume creation via gdeploy is failing.
Summary: [RHEL-8.1] Volume creation via gdeploy is failing.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Prajith
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-23 05:09 UTC by Mugdha Soni
Modified: 2020-06-16 05:56 UTC
CC List: 6 users

Fixed In Version: gdeploy-3.0.0-5
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-16 05:56:04 UTC
Embargoed:




Links:
Red Hat Product Errata RHEA-2020:2577 (last updated 2020-06-16 05:56:20 UTC)

Description Mugdha Soni 2020-03-23 05:09:05 UTC
Description of problem:
----------------------------------------------------------------
On RHEL 8.1, installed
ansible-2.9.6-1.el8ae.noarch
gdeploy-3.0.0-4.el8rhgs.noarch
sshpass-1.06-3.el8ae.x86_64

and ran a conf file to create a replica 3 volume. The run failed with "AttributeError: module 'yaml' has no attribute 'FullLoader'".

As a cross-check, created the replica 3 volume manually and then started it via gdeploy; that worked fine. The error is therefore hit only during volume creation via gdeploy.


Version-Release number of selected component:
---------------------------------------------------------------
gdeploy-3.0.0-4.el8rhgs.noarch

glusterfs-fuse-6.0-31.el8rhgs.x86_64
glusterfs-server-6.0-31.el8rhgs.x86_64
glusterfs-libs-6.0-31.el8rhgs.x86_64
glusterfs-6.0-31.el8rhgs.x86_64
glusterfs-cli-6.0-31.el8rhgs.x86_64
glusterfs-api-6.0-31.el8rhgs.x86_64
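
For reference, yaml.FullLoader only exists in PyYAML >= 5.1, so it is worth recording which PyYAML the gdeploy host has. A generic check (assuming the system PyYAML comes from the python3-pyyaml package; this is not output captured from this setup):

python3 -c 'import yaml; print(yaml.__version__)'
rpm -q python3-pyyaml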



How reproducible:
----------------------------------------------------------------
4/4


Steps to Reproduce:
-----------------------------------------------------------------
1. On a RHEL 8.1 machine, installed gdeploy, ansible, and sshpass.
2. Created the conf file volume.conf:
    
[hosts]
10.70.47.62
10.70.47.31
10.70.46.85

[volume]
action=create
volname=vol1
replica=yes
replica_count=3
force=yes

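3. Ran gdeploy against the conf file (the same command shown under Actual results):

   gdeploy -c volume.conf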

Actual results:
-----------------------------------------------------------------
[root@dhcp47-62 gdeploy]#  gdeploy -c volume.conf
Traceback (most recent call last):
  File "/usr/bin/gdeploy", line 211, in <module>
    main(sys.argv[1:])
  File "/usr/bin/gdeploy", line 200, in main
    call_features()
  File "/usr/lib/python3.6/site-packages/gdeploylib/call_features.py", line 39, in call_features
    list(map(get_feature_dir, Global.sections))
  File "/usr/lib/python3.6/site-packages/gdeploylib/call_features.py", line 95, in get_feature_dir
    section_dict, yml = feature_call(section_dict)
  File "/usr/lib/python3.6/site-packages/gdeployfeatures/volume/volume.py", line 19, in volume_create
    section_dict = get_common_brick_dirs(section_dict)
  File "/usr/lib/python3.6/site-packages/gdeployfeatures/volume/volume.py", line 108, in get_common_brick_dirs
    ret = read_brick_dir_from_file(Global.group_file)
  File "/usr/lib/python3.6/site-packages/gdeployfeatures/volume/volume.py", line 131, in read_brick_dir_from_file
    if helpers.is_present_in_yaml(filename, 'mountpoints'):
  File "/usr/lib/python3.6/site-packages/gdeploylib/helpers.py", line 57, in is_present_in_yaml
    doc = self.read_yaml(filename)
  File "/usr/lib/python3.6/site-packages/gdeploylib/helpers.py", line 68, in read_yaml
    return yaml.load(f, Loader=yaml.FullLoader)
AttributeError: module 'yaml' has no attribute 'FullLoader'
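
yaml.FullLoader was introduced in PyYAML 5.1; with an older PyYAML (as shipped on this RHEL 8.1 host), the unconditional Loader=yaml.FullLoader argument in gdeploylib/helpers.py raises exactly this AttributeError. A minimal sketch of the usual compatibility pattern, not necessarily the actual change shipped in gdeploy-3.0.0-5:

import yaml

def read_yaml(filename):
    # PyYAML >= 5.1 provides FullLoader; older releases do not,
    # so fall back to SafeLoader there.
    loader = getattr(yaml, "FullLoader", yaml.SafeLoader)
    with open(filename) as f:
        return yaml.load(f, Loader=loader)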

Expected results:
------------------------------------------------------------------
Gdeploy should successfully create the volume.


Additional info:
-------------------------------------------------------------------
After creating the volume manually, tried starting it through gdeploy. The conf file used and the results are below.

Conf file used :-

[root@dhcp47-62 gdeploy]# cat vol_start.conf 
[hosts]
10.70.47.62
10.70.47.31
10.70.46.85

[volume]
action=start
volname=10.70.47.62:vol1

[root@dhcp47-62 gdeploy]# gdeploy -c vol_start.conf

PLAY [master] *****************************************************************************************************************************************************************************************************

TASK [Starts a volume] ********************************************************************************************************************************************************************************************
changed: [10.70.47.62]

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.62                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Comment 4 Mugdha Soni 2020-04-06 11:28:01 UTC
##Tested with the following :-

1.Red Hat Enterprise Linux release 8.2 (Ootpa)
2.gdeploy-3.0.0-5.el8rhgs.noarch
3.glusterfs-server-6.0-32.el8rhgs.x86_64
4.ansible-2.9.6-1.el8ae.noarch

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1.Created Pure Replicate Volume :-

[root@dhcp47-62 gdeploy]# cat volume_rep.conf
[hosts]
10.70.47.62
10.70.47.31
10.70.46.85

[volume]
action=create
volname=rep1
replica=yes
replica_count=3
brick_dirs=/glus/brick1/b1,/glus/brick1/b1,/glus/brick1/b1
force=yes
------------------------------------------------------------------------------
[root@dhcp47-62 gdeploy]# gluster volume info
 
Volume Name: rep1
Type: Replicate
Volume ID: cb797f37-30ab-4d42-a353-7d5dbc41f1a6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.62:/glus/brick1/b1
Brick2: 10.70.47.31:/glus/brick1/b1
Brick3: 10.70.46.85:/glus/brick1/b1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
-------------------------------------------------------------------------------
[root@dhcp47-62 gdeploy]# gluster volume status
Status of volume: rep1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.62:/glus/brick1/b1           49152     0          Y       16771
Brick 10.70.47.31:/glus/brick1/b1           49152     0          Y       13873
Brick 10.70.46.85:/glus/brick1/b1           49152     0          Y       13453
Self-heal Daemon on localhost               N/A       N/A        Y       16792
Self-heal Daemon on 10.70.47.31             N/A       N/A        Y       13894
Self-heal Daemon on 10.70.46.85             N/A       N/A        Y       13474
 
Task Status of Volume rep1
------------------------------------------------------------------------------
There are no active volume tasks

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2. Creating a distributed-replicate volume :-
 [root@dhcp47-62 gdeploy]# cat volume.conf
[hosts]
10.70.47.62
10.70.47.31
10.70.46.85

[volume]
action=create
volname=rep2
replica=yes
replica_count=3
brick_dirs=/glus/brick1/b1,/glus/brick2/b2,/glus/brick3/b3
force=yes

[root@dhcp47-62 gdeploy]# gluster v list
rep2

[root@dhcp47-62 gdeploy]# gluster volume info
Volume Name: rep2
Type: Distributed-Replicate
Volume ID: 7821f221-16ca-4e57-867b-60488c2b1e80
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.70.47.62:/glus/brick1/b1
Brick2: 10.70.47.31:/glus/brick1/b1
Brick3: 10.70.46.85:/glus/brick1/b1
Brick4: 10.70.47.62:/glus/brick2/b2
Brick5: 10.70.47.31:/glus/brick2/b2
Brick6: 10.70.46.85:/glus/brick2/b2
Brick7: 10.70.47.62:/glus/brick3/b3
Brick8: 10.70.47.31:/glus/brick3/b3
Brick9: 10.70.46.85:/glus/brick3/b3
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

[root@dhcp47-62 gdeploy]# gluster volume status
Status of volume: rep2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.62:/glus/brick1/b1           49152     0          Y       15896
Brick 10.70.47.31:/glus/brick1/b1           49152     0          Y       13510
Brick 10.70.46.85:/glus/brick1/b1           49152     0          Y       13087
Brick 10.70.47.62:/glus/brick2/b2           49153     0          Y       15916
Brick 10.70.47.31:/glus/brick2/b2           49153     0          Y       13530
Brick 10.70.46.85:/glus/brick2/b2           49153     0          Y       13107
Brick 10.70.47.62:/glus/brick3/b3           49154     0          Y       15936
Brick 10.70.47.31:/glus/brick3/b3           49154     0          Y       13550
Brick 10.70.46.85:/glus/brick3/b3           49154     0          Y       13127
Self-heal Daemon on localhost               N/A       N/A        Y       15957
Self-heal Daemon on 10.70.46.85             N/A       N/A        Y       13148
Self-heal Daemon on 10.70.47.31             N/A       N/A        Y       13571
 
Task Status of Volume rep2



Volume creation via gdeploy is successful.
Hence, moving the bug to Verified state.

Comment 6 errata-xmlrpc 2020-06-16 05:56:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2577

