Bug 1329071

Summary: using gdeploy, unable to reset the pvs, vgs, lvs created
Product: Red Hat Gluster Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Severity: medium
Priority: unspecified
Status: CLOSED NOTABUG
Reporter: SATHEESARAN <sasundar>
Assignee: Nandaja Varma <nvarma>
QA Contact: Anush Shetty <ashetty>
CC: nandaja.varma, nvarma, rhinduja, sasundar, smohan, surs
Doc Type: Bug Fix
Type: Bug
Environment: RHEV RHGS HCI, RHEL 7.2, RHEV 3.6.4
Last Closed: 2016-04-21 12:24:05 UTC
Attachments: console log file while performing brick-reset using gdeploy

Description SATHEESARAN 2016-04-21 06:38:14 UTC
Description of problem:
-----------------------
gdeploy provides the [brick-reset] section to unmount the bricks and delete the LVs, VGs, and PVs. When this section is used, gdeploy throws an error and does not delete the LVs, VGs, and PVs.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gdeploy-2.0-5

How reproducible:
-----------------
Always/Consistent

Steps to Reproduce:
-------------------
1. Create a pv and vg using gdeploy
2. Try deleting the vgs and pvs using gdeploy

Actual results:
---------------
vgs & pvs do not get cleaned up, and gdeploy throws an error

Expected results:
-----------------
vgs & pvs should be deleted successfully

Additional info:
----------------

1. I created the pvs & vgs with the following conf:
[hosts]
host1.example.com

[pv]
action=create
devices=vdb

[vg]
action=create
vgname=RHS_vg1
pvname=/dev/vdb

2. Tried deleting the vgs & pvs using the following conf:
[hosts]
host1.example.com

[brick-reset]
vgs=RHS_vg1
pvs=/dev/vdb
mountpoints=/rhs/brick1
unmount=yes

3. Errors seen:
TASK [Cleans up backend] *******************************************************
fatal: [dhcp37-128.lab.eng.blr.redhat.com]: FAILED! => MODULE FAILURE (changed: false); module_stdout:

Traceback (most recent call last):
  File "/root/.ansible/tmp/ansible-tmp-1461220387.32-69638506000404/backend_reset", line 2127, in <module>
    BackendReset(module)
  File "/root/.ansible/tmp/ansible-tmp-1461220387.32-69638506000404/backend_reset", line 2001, in __init__
    self.remove_lvs()
  File "/root/.ansible/tmp/ansible-tmp-1461220387.32-69638506000404/backend_reset", line 2044, in remove_lvs
    self.get_lvs()
  File "/root/.ansible/tmp/ansible-tmp-1461220387.32-69638506000404/backend_reset", line 2080, in get_lvs
    self.format_lvnames()
  File "/root/.ansible/tmp/ansible-tmp-1461220387.32-69638506000404/backend_reset", line 2098, in format_lvnames
    formatted_lvname = [True for lv in self.lvs if lv.startswith(
TypeError: 'NoneType' object is not iterable
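The traceback points at format_lvnames iterating over self.lvs when it is None, i.e. the earlier LV-discovery step produced no list. A minimal sketch of the failing pattern and a defensive guard, using a hypothetical standalone function that mirrors the module's method name (not the actual gdeploy code):

```python
# Sketch of the failure in backend_reset's format_lvnames():
# iterating over lvs when LV discovery returned None instead of a list.
# Function name and signature are illustrative, not gdeploy's actual API.

def format_lvnames(lvs, vgname):
    # Original pattern crashes with "TypeError: 'NoneType' object is not
    # iterable" when lvs is None:
    #   [True for lv in lvs if lv.startswith(vgname)]
    # Defensive version: treat None as "no LVs to clean up".
    return [lv for lv in (lvs or []) if lv.startswith(vgname)]

print(format_lvnames(None, "RHS_vg1"))                         # []
print(format_lvnames(["RHS_vg1-lv1", "other-lv"], "RHS_vg1"))  # ['RHS_vg1-lv1']
```

As the later comments show, the real trigger here was an unsupported Ansible version rather than a gdeploy code bug, but the guard illustrates why the traceback ends where it does.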

Comment 1 SATHEESARAN 2016-04-21 06:41:31 UTC
Following are some observations:
1. In the [brick-reset] section, when only vgs & pvs are given, gdeploy asks for mountpoints.

2. In the [brick-reset] section, when vgs, pvs & mountpoints are all given, gdeploy throws the error.

Attached the console log file too.

Comment 2 SATHEESARAN 2016-04-21 06:42:07 UTC
Created attachment 1149311 [details]
console log file while performing brick-reset using gdeploy

Comment 3 Sachidananda Urs 2016-04-21 09:54:55 UTC
sas, this error is due to the Ansible version.
You have installed the latest 2.0; we do not support 2.0, we are on
version 1.9.

Downgrading should fix the issue. Please check and close the bug.

Comment 4 SATHEESARAN 2016-04-21 10:39:26 UTC
Sac,

I have tested with Ansible 1.9 and it works.

1. Does this mean gdeploy features may or may not work across Ansible updates/upgrades?
2. Does Ansible not provide backward compatibility?

Comment 5 Nandaja Varma 2016-04-21 10:50:55 UTC
Hey Sas,

A lot of Ansible features changed in the 2.* releases, and many of them are not backward compatible. gdeploy not working with a higher version of Ansible is an issue that was found earlier. We had a discussion about this with Sean Murphy and Bill Nottingham, and decided to continue developing gdeploy against Ansible 1.9*, since upgrading to the newest version would practically mean reworking some of the modules from scratch, and a lot more testing of gdeploy would also be needed.

Comment 6 Sachidananda Urs 2016-04-21 11:19:48 UTC
sas if that answers your question, please move the bug to verified.

Comment 7 SATHEESARAN 2016-04-21 12:23:05 UTC
(In reply to Nandaja Varma from comment #5)
> Hey Sas,
> 
> A lot of Ansible features changed by the 2.* releases and a lot of them
> aren't backward compatible. Gdeploy not working with a higher version of
> Ansible was an issue that was found earlier. We had a discussion about this
> with Sean Murphy and Bill Nottingham. We decided that we will continue
> development of gdeploy using 1.9* version of Ansible as upgrading it to the
> newest version would mean that we practically need to work on some of the
> modules from scratch and also a lot more testing needs to be done on gdeploy.

Nandaja,

Thanks for the information.

We may need this information documented in our admin guide, recommending that users stay on Ansible 1.9*.

I will open a doc bug for the same; please provide the required information.

Comment 8 SATHEESARAN 2016-04-21 12:24:05 UTC
(In reply to Sachidananda Urs from comment #6)
> sas if that answers your question, please move the bug to verified.

Sac, 
Thanks. Comment 5 makes sense; I am closing this bug.