Bug 1805164 - [RHEL-8.1] gdeploy is throwing AttributeError: module 'yaml' has no attribute 'FullLoader' when creating bricks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Prajith
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-20 11:57 UTC by Bala Konda Reddy M
Modified: 2020-06-16 05:56 UTC
CC: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-16 05:56:04 UTC
Target Upstream Version:




Links:
Red Hat Product Errata RHEA-2020:2577 (last updated 2020-06-16 05:56:20 UTC)

Description Bala Konda Reddy M 2020-02-20 11:57:51 UTC
Description of problem:
On a RHEL 8.1 machine with ansible (ansible-2.9.5-1.el8ae.noarch) and
sshpass (sshpass-1.06-3.el8ae.x86_64) installed, running "gdeploy -c <conf file>" throws "AttributeError: module 'yaml' has no attribute 'FullLoader'".


Version-Release number of selected component (if applicable):
gdeploy-3.0.0-2.noarch

How reproducible:
Every time

Steps to Reproduce:
1. On a RHEL 8.1 machine, install gdeploy, ansible, and sshpass
2. Create a config file and run "gdeploy -c gdep.conf"

Actual results:
[root@dhcp37-134 ~]# gdeploy -c gdep.conf 
Traceback (most recent call last):
  File "/usr/bin/gdeploy", line 211, in <module>
    main(sys.argv[1:])
  File "/usr/bin/gdeploy", line 199, in main
    call_core_functions()
  File "/usr/lib/python3.6/site-packages/gdeploycore/core_function_caller.py", line 31, in call_core_functions
    BackendSetup()
  File "/usr/lib/python3.6/site-packages/gdeploycore/backend_setup.py", line 47, in __init__
    self.write_sections()
  File "/usr/lib/python3.6/site-packages/gdeploycore/backend_setup.py", line 71, in write_sections
    self.new_backend_setup(hosts)
  File "/usr/lib/python3.6/site-packages/gdeploycore/backend_setup.py", line 105, in new_backend_setup
    self.call_gen_methods()
  File "/usr/lib/python3.6/site-packages/gdeploycore/backend_setup.py", line 159, in call_gen_methods
    self.perf_spec_data_write()
  File "/usr/lib/python3.6/site-packages/gdeploylib/helpers.py", line 479, in perf_spec_data_write
    self.create_var_files(perf, False, Global.group_file)
  File "/usr/lib/python3.6/site-packages/gdeploylib/yaml_writer.py", line 45, in create_var_files
    self.create_yaml_dict(key, value, filename, keep_format)
  File "/usr/lib/python3.6/site-packages/gdeploylib/yaml_writer.py", line 54, in create_yaml_dict
    self.write_yaml(data_dict, keep_format, filename)
  File "/usr/lib/python3.6/site-packages/gdeploylib/yaml_writer.py", line 63, in write_yaml
    list_doc = yaml.load(f, Loader=yaml.FullLoader) or {}
AttributeError: module 'yaml' has no attribute 'FullLoader'


Expected results:
gdeploy should succeed in creating bricks

Additional info:
The installed PyYAML version is:
>>> import yaml
>>> yaml.__version__
'3.12'

Installing the latest PyYAML via pip:

pip install -U PyYAML
-bash: pip: command not found
[root@dhcp37-134 ~]# pip3 install -U PyYAML
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting PyYAML
  Downloading https://files.pythonhosted.org/packages/3d/d9/ea9816aea31beeadccd03f1f8b625ecf8f645bd66744484d162d84803ce5/PyYAML-5.3.tar.gz (268kB)
    100% |████████████████████████████████| 276kB 704kB/s 
Installing collected packages: PyYAML
  Running setup.py install for PyYAML ... done
Successfully installed PyYAML-5.3
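For context, yaml.FullLoader only exists in PyYAML >= 5.1, so the call at yaml_writer.py line 63 fails on the 3.12 release shipped here. A backward-compatible guard could look like the sketch below (illustrative only, not the shipped fix; the helper name load_yaml_compat is made up):

```python
import yaml

def load_yaml_compat(stream):
    """Load YAML with FullLoader on PyYAML >= 5.1, falling back to
    safe_load on older releases (e.g. 3.12) where FullLoader is absent."""
    if hasattr(yaml, "FullLoader"):
        return yaml.load(stream, Loader=yaml.FullLoader) or {}
    return yaml.safe_load(stream) or {}
```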

Comment 3 Prajith 2020-02-22 09:29:21 UTC
Could you try installing "python3-pyyaml-5.1.2-3.el8.x86_64" instead of the pip-installed PyYAML?

Comment 7 Prajith 2020-02-25 11:07:35 UTC
Hi Prasanth,

Can you try the following and let me know if you are still facing the issue?

In your .conf or playbook, change:

devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi
to:
devices=/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
in the .conf or playbook

Basically, instead of commas, can you try separating the devices with spaces?

Comment 8 Prasanth 2020-02-26 07:01:40 UTC
(In reply to Prajith from comment #7)
Please make use of the needinfo flag in BZ for similar requests in the future so that they get proper attention from the QA contact.

@Bala, @Manisha, can you give this a try and report back?

Comment 10 Prajith 2020-02-26 15:01:05 UTC
The problem is that the config-file parser does not accept multiple values for a single key.

For now, the workaround would be to create a script that generates a config file containing a [backend-reset] section for each device.
 
I will open an issue on GitHub and post the link here as well.
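The per-device workaround described above could be scripted along these lines (purely illustrative; the section and key names follow the conf format used elsewhere in this bug, and the function name is made up):

```python
def generate_reset_conf(hosts, devices):
    """Emit a gdeploy-style conf with one [backend-reset] section per
    device, so the parser never sees multiple values for a single key."""
    lines = ["[hosts]"] + list(hosts) + [""]
    for dev in devices:
        lines += ["[backend-reset]", "devices=" + dev, ""]
    return "\n".join(lines)

print(generate_reset_conf(["10.70.47.39"], ["/dev/sdb", "/dev/sdc"]))
```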

Comment 11 Prajith 2020-02-27 06:55:28 UTC
I have opened an upstream issue: https://github.com/gluster/gdeploy/issues/544. Right now the suggested fix from my side is to edit the conf file for each device. I will look through the code and update this bug with the fix once I find the error.

Comment 19 Mugdha Soni 2020-03-11 11:41:07 UTC
Hi Bala,

I tested with the following:

1. [root@dhcp47-39 ~]# rpm -qa | grep ansible
ansible-2.9.5-1.el8ae.noarch

2. [root@dhcp47-39 ~]# rpm -qa | grep gdeploy
gdeploy-3.0.0-3.el8rhgs.noarch

3. [root@dhcp47-39 ~]# rpm -qa | grep glusterfs
glusterfs-client-xlators-6.0-30.1.el8rhgs.x86_64
glusterfs-6.0-30.1.el8rhgs.x86_64
glusterfs-cli-6.0-30.1.el8rhgs.x86_64
glusterfs-fuse-6.0-30.1.el8rhgs.x86_64
glusterfs-server-6.0-30.1.el8rhgs.x86_64
glusterfs-libs-6.0-30.1.el8rhgs.x86_64
glusterfs-api-6.0-30.1.el8rhgs.x86_64
----------------------------------------------------------------------
##conf file used is##

[hosts]
10.70.47.39
10.70.47.94
10.70.47.151

[backend-setup]
devices=/dev/sdb,/dev/sdc
vgs=vg1,vg2
pools=gfs_pool1,gfs_pool2
lvs=lv1,lv2
mountpoints=/bricks/brick1,/bricks/brick2


------------------------------------------------------------------------
[root@dhcp47-39 gdeploy]# gdeploy -c ackend.conf 

PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Clean up filesystem signature] ******************************************************************************************************************************************************************************
skipping: [10.70.47.39] => (item=/dev/sdb) 
skipping: [10.70.47.39] => (item=/dev/sdc) 
skipping: [10.70.47.94] => (item=/dev/sdb) 
skipping: [10.70.47.94] => (item=/dev/sdc) 
skipping: [10.70.47.151] => (item=/dev/sdb) 
skipping: [10.70.47.151] => (item=/dev/sdc) 

TASK [Create Physical Volume] *************************************************************************************************************************************************************************************
changed: [10.70.47.94] => (item=/dev/sdb)
changed: [10.70.47.151] => (item=/dev/sdb)
changed: [10.70.47.39] => (item=/dev/sdb)
changed: [10.70.47.94] => (item=/dev/sdc)
changed: [10.70.47.151] => (item=/dev/sdc)
changed: [10.70.47.39] => (item=/dev/sdc)

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.151               : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
10.70.47.39                : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
10.70.47.94                : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   


PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Create volume group on the disks] ***************************************************************************************************************************************************************************
changed: [10.70.47.94] => (item={'brick': '/dev/sdb', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'brick': '/dev/sdb', 'vg': 'vg1'})
changed: [10.70.47.39] => (item={'brick': '/dev/sdb', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'brick': '/dev/sdc', 'vg': 'vg2'})
[WARNING]: The value 0 (type int) in a string field was converted to '0' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
[WARNING]: The value 256 (type int) in a string field was converted to '256' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
changed: [10.70.47.39] => (item={'brick': '/dev/sdc', 'vg': 'vg2'})
changed: [10.70.47.94] => (item={'brick': '/dev/sdc', 'vg': 'vg2'})

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.151               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.39                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.94                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Create logical volume named metadata] ***********************************************************************************************************************************************************************
changed: [10.70.47.94] => (item=vg1)
changed: [10.70.47.39] => (item=vg1)
changed: [10.70.47.151] => (item=vg1)
changed: [10.70.47.151] => (item=vg2)
[WARNING]: The value 0 (type int) in a string field was converted to '0' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
changed: [10.70.47.94] => (item=vg2)
changed: [10.70.47.39] => (item=vg2)

TASK [create data LV that has a size which is a multiple of stripe width] *****************************************************************************************************************************************
changed: [10.70.47.151] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.94] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.39] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.94] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.39] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})

TASK [Convert the logical volume] *********************************************************************************************************************************************************************************
changed: [10.70.47.94] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.39] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.94] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.39] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})

TASK [create stripe-aligned thin volume] **************************************************************************************************************************************************************************
changed: [10.70.47.94] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.39] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.151] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.94] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.39] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})

TASK [Change the attributes of the logical volume] ****************************************************************************************************************************************************************
changed: [10.70.47.151] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.94] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.39] => (item={'lv': 'lv1', 'pool': 'gfs_pool1', 'vg': 'vg1'})
changed: [10.70.47.94] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.151] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})
changed: [10.70.47.39] => (item={'lv': 'lv2', 'pool': 'gfs_pool2', 'vg': 'vg2'})

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.151               : ok=5    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.39                : ok=5    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.94                : ok=5    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Create an xfs filesystem] ***********************************************************************************************************************************************************************************
changed: [10.70.47.151] => (item=/dev/vg1/lv1)
changed: [10.70.47.94] => (item=/dev/vg1/lv1)
changed: [10.70.47.39] => (item=/dev/vg1/lv1)
changed: [10.70.47.94] => (item=/dev/vg2/lv2)
changed: [10.70.47.151] => (item=/dev/vg2/lv2)
changed: [10.70.47.39] => (item=/dev/vg2/lv2)

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.151               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.39                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.94                : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Create the mountpoints, skips if present] *******************************************************************************************************************************************************************
changed: [10.70.47.94] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.151] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.39] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.39] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.151] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.94] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})

TASK [Set mount options for VDO] **********************************************************************************************************************************************************************************
skipping: [10.70.47.39]
skipping: [10.70.47.94]
skipping: [10.70.47.151]

TASK [Mount the vdo disks (if any)] *******************************************************************************************************************************************************************************
skipping: [10.70.47.39] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'}) 
skipping: [10.70.47.39] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'}) 
skipping: [10.70.47.94] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'}) 
skipping: [10.70.47.94] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'}) 
skipping: [10.70.47.151] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'}) 
skipping: [10.70.47.151] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'}) 

TASK [Mount the disks (non-vdo)] **********************************************************************************************************************************************************************************
changed: [10.70.47.151] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.39] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.94] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.151] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.39] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.94] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.151               : ok=2    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
10.70.47.39                : ok=2    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   
10.70.47.94                : ok=2    changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0   


PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Set SELinux labels on the bricks] ***************************************************************************************************************************************************************************
changed: [10.70.47.94] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.151] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.39] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.94] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.151] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.39] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})

TASK [Restore the SELinux context] ********************************************************************************************************************************************************************************
changed: [10.70.47.94] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.39] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.151] => (item={'device': '/dev/vg1/lv1', 'path': '/bricks/brick1'})
changed: [10.70.47.94] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.39] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})
changed: [10.70.47.151] => (item={'device': '/dev/vg2/lv2', 'path': '/bricks/brick2'})

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.47.151               : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.39                : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.70.47.94                : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Error: No sections found in config file. Exiting!

-----------------------------------------------------------------------------------------------------------------------------

[root@dhcp47-39 bricks]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          900M     0  900M   0% /dev
tmpfs                             915M     0  915M   0% /dev/shm
tmpfs                             915M   17M  899M   2% /run
tmpfs                             915M     0  915M   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp47--39-root   14G  2.1G   12G  16% /
/dev/sda1                        1014M  214M  801M  22% /boot
tmpfs                             183M     0  183M   0% /run/user/0
/dev/mapper/vg1-lv1                10G  104M  9.9G   2% /bricks/brick1
/dev/mapper/vg2-lv2                10G  104M  9.9G   2% /bricks/brick2

[root@dhcp47-39 ~]# ls -ldZ /bricks/brick*
drwxr-xr-x. 2 root root system_u:object_r:glusterd_brick_t:s0 6 Mar 11 16:20 /bricks/brick1
drwxr-xr-x. 2 root root system_u:object_r:glusterd_brick_t:s0 6 Mar 11 16:20 /bricks/brick2



Bala, please verify and let me know if anything else needs to be done for the verification of this bug.

Thanks,
Mugdha Soni

Comment 21 Mugdha Soni 2020-03-12 06:28:03 UTC
Based on comment #20, moving this bug to the verified state.

Comment 25 errata-xmlrpc 2020-06-16 05:56:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2577

