Bug 1480567 - backend-setup fails while changing the attributes of the logical volume
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: 3.3
Hardware: x86_64 Linux
Priority: unspecified  Severity: medium
Target Milestone: ---
Target Release: RHGS 3.3.1
Assigned To: Sachidananda Urs
QA Contact: SATHEESARAN
Keywords: ZStream
Depends On:
Blocks: 1475688
Reported: 2017-08-11 08:04 EDT by SATHEESARAN
Modified: 2017-11-28 22:27 EST (History)
CC: 6 users

See Also:
Fixed In Version: gdeploy-2.0.2-16
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1494041
Environment:
Last Closed: 2017-11-28 22:27:19 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
console output while creating bricks (14.03 KB, application/octet-stream)
2017-08-11 08:06 EDT, SATHEESARAN
gdeploy configuration file used (135 bytes, application/octet-stream)
2017-08-11 08:09 EDT, SATHEESARAN

Description SATHEESARAN 2017-08-11 08:04:55 EDT
Description of problem:
------------------------
While creating Gluster bricks using the backend-setup module of gdeploy, I observed that gdeploy failed at the step that changes the attributes of the logical volume.

Note:
This happens consistently on RHEL 7.4 and works fine on RHEL 7.3.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHEL 7.4
gdeploy-2.0.2-14

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Create bricks using the backend-setup module of gdeploy on an RHGS node based on RHEL 7.4

Actual results:
---------------
gdeploy throws an error while changing the attributes of the logical volume

Expected results:
------------------
All backend-setup steps should complete successfully without failures


Additional info:
-----------------
Here is the conf file used to create bricks:

[hosts]
host1.example.com
host2.example.com
host3.example.com

[backend-setup]
devices=vdb,vdc

Here is the error snippet:
<snip>
TASK [Change the attributes of the logical volume] *****************************
failed: [dhcp37-77.lab.eng.blr.redhat.com] (item={u'lv': u'GLUSTER_lv1', u'pool': u'GLUSTER_pool1', u'vg': u'GLUSTER_vg1'}) => {"failed": true, "item": {"lv": "GLUSTER_lv1", "pool": "GLUSTER_pool1", "vg": "GLUSTER_vg1"}, "msg": "  \"GLUSTER_vg1/GLUSTER_vg1/GLUSTER_pool1\": Invalid path for Logical Volume.\n", "rc": 5}
failed: [dhcp37-214.lab.eng.blr.redhat.com] (item={u'lv': u'GLUSTER_lv1', u'pool': u'GLUSTER_pool1', u'vg': u'GLUSTER_vg1'}) => {"failed": true, "item": {"lv": "GLUSTER_lv1", "pool": "GLUSTER_pool1", "vg": "GLUSTER_vg1"}, "msg": "  \"GLUSTER_vg1/GLUSTER_vg1/GLUSTER_pool1\": Invalid path for Logical Volume.\n", "rc": 5}
failed: [dhcp37-209.lab.eng.blr.redhat.com] (item={u'lv': u'GLUSTER_lv1', u'pool': u'GLUSTER_pool1', u'vg': u'GLUSTER_vg1'}) => {"failed": true, "item": {"lv": "GLUSTER_lv1", "pool": "GLUSTER_pool1", "vg": "GLUSTER_vg1"}, "msg": "  \"GLUSTER_vg1/GLUSTER_vg1/GLUSTER_pool1\": Invalid path for Logical Volume.\n", "rc": 5}
...ignoring
failed: [dhcp37-77.lab.eng.blr.redhat.com] (item={u'lv': u'GLUSTER_lv2', u'pool': u'GLUSTER_pool2', u'vg': u'GLUSTER_vg2'}) => {"failed": true, "item": {"lv": "GLUSTER_lv2", "pool": "GLUSTER_pool2", "vg": "GLUSTER_vg2"}, "msg": "  \"GLUSTER_vg2/GLUSTER_vg2/GLUSTER_pool2\": Invalid path for Logical Volume.\n", "rc": 5}
...ignoring
failed: [dhcp37-214.lab.eng.blr.redhat.com] (item={u'lv': u'GLUSTER_lv2', u'pool': u'GLUSTER_pool2', u'vg': u'GLUSTER_vg2'}) => {"failed": true, "item": {"lv": "GLUSTER_lv2", "pool": "GLUSTER_pool2", "vg": "GLUSTER_vg2"}, "msg": "  \"GLUSTER_vg2/GLUSTER_vg2/GLUSTER_pool2\": Invalid path for Logical Volume.\n", "rc": 5}
...ignoring
failed: [dhcp37-209.lab.eng.blr.redhat.com] (item={u'lv': u'GLUSTER_lv2', u'pool': u'GLUSTER_pool2', u'vg': u'GLUSTER_vg2'}) => {"failed": true, "item": {"lv": "GLUSTER_lv2", "pool": "GLUSTER_pool2", "vg": "GLUSTER_vg2"}, "msg": "  \"GLUSTER_vg2/GLUSTER_vg2/GLUSTER_pool2\": Invalid path for Logical Volume.\n", "rc": 5}

</snip>
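The repeated volume-group segment in "GLUSTER_vg1/GLUSTER_vg1/GLUSTER_pool1" suggests the task prefixed the VG onto an LV name that already contained it, while lvchange expects a single VG/LV path. A minimal sketch of the malformed vs. expected path (the variable names here are illustrative, not taken from the playbook):

```shell
vg="GLUSTER_vg1"
pool="GLUSTER_pool1"

# What the failing task appears to pass to lvchange: the VG appears twice,
# which LVM rejects with "Invalid path for Logical Volume".
buggy_path="${vg}/${vg}/${pool}"

# What lvchange expects: a plain VG/LV path, e.g.
#   lvchange --zero n "${vg}/${pool}"
fixed_path="${vg}/${pool}"

echo "buggy: ${buggy_path}"
echo "fixed: ${fixed_path}"
```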
Comment 1 SATHEESARAN 2017-08-11 08:06:44 EDT
Created attachment 1312106 [details]
console output while creating bricks
Comment 2 SATHEESARAN 2017-08-11 08:09:04 EDT
Created attachment 1312108 [details]
gdeploy configuration file used
Comment 3 SATHEESARAN 2017-08-11 08:10:55 EDT
With the testing that I did, I could not estimate the consequence of this issue.
Sac will investigate what failed, why it failed, and its consequences.
Comment 5 SATHEESARAN 2017-09-21 07:39:27 EDT
I'm seeing the same problem with the RHHI product while setting up lvmcache with the help of gdeploy.

Here is the configuration file:

[hosts]
host1
host2
host3

[lv]
action=setup-cache
ssd=sdc
vgname=vg1
poolname=thinpool1
cache_lv=lvcache
cache_lvsize=10GB

While executing the above configuration file, gdeploy fails with the following error:
# gdeploy -c lvmcache.conf 

PLAY [gluster_servers] ********************************************************************************************************************************************************************************************

TASK [Setup SSD for caching | Create the physical volume] *********************************************************************************************************************************************************
changed: [10.70.36.73] => (item=/dev/sdc)
changed: [10.70.36.75] => (item=/dev/sdc)
changed: [10.70.36.74] => (item=/dev/sdc)

TASK [Setup SSD for caching | Extend the Volume Group] ************************************************************************************************************************************************************
changed: [10.70.36.75] => (item=/dev/sdc)
changed: [10.70.36.73] => (item=/dev/sdc)
changed: [10.70.36.74] => (item=/dev/sdc)

TASK [Setup SSD for caching | Change the attributes of the logical volume] ****************************************************************************************************************************************
fatal: [10.70.36.73]: FAILED! => {"changed": false, "failed": true, "msg": "  \"gluster_vg_sdb/gluster_vg_sdb/gluster_thinpool_sdb\": Invalid path for Logical Volume.\n", "rc": 5}
fatal: [10.70.36.74]: FAILED! => {"changed": false, "failed": true, "msg": "  \"gluster_vg_sdb/gluster_vg_sdb/gluster_thinpool_sdb\": Invalid path for Logical Volume.\n", "rc": 5}
fatal: [10.70.36.75]: FAILED! => {"changed": false, "failed": true, "msg": "  \"gluster_vg_sdb/gluster_vg_sdb/gluster_thinpool_sdb\": Invalid path for Logical Volume.\n", "rc": 5}
	to retry, use: --limit @/tmp/tmpSxgQ0n/cache_setup.retry

PLAY RECAP ********************************************************************************************************************************************************************************************************
10.70.36.73                : ok=2    changed=2    unreachable=0    failed=1   
10.70.36.74                : ok=2    changed=2    unreachable=0    failed=1   
10.70.36.75                : ok=2    changed=2    unreachable=0    failed=1   

Ignoring errors...
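For context, a dry-run sketch that prints (rather than executes) the LVM commands an [lv] setup-cache section like the one above roughly corresponds to. The exact flags gdeploy passes are an assumption here, not taken from its source:

```shell
# Dry-run sketch (illustrative only): echo the approximate LVM command
# sequence for the setup-cache configuration above.
ssd=sdc; vg=vg1; pool=thinpool1; cache_lv=lvcache; size=10G
cmds=""

# Print each command and record it; swap echo for real execution when ready.
run() { echo "+ $*"; cmds="${cmds}$*"$'\n'; }

run pvcreate "/dev/${ssd}"
run vgextend "${vg}" "/dev/${ssd}"
run lvcreate -L "${size}" -n "${cache_lv}" "${vg}" "/dev/${ssd}"
run lvconvert --yes --type cache-pool "${vg}/${cache_lv}"
run lvconvert --yes --type cache --cachepool "${vg}/${cache_lv}" "${vg}/${pool}"
```

Note that every LVM path above is a single VG/LV pair; the failing lvchange task in this bug produced a VG/VG/pool path instead, which LVM rejects.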


Re-proposing this bug for RHGS 3.3.1, as this is needed for RHHI 1.1
Comment 6 Sachidananda Urs 2017-09-22 03:27:57 EDT
Commit: https://github.com/gluster/gdeploy/commit/be2c9ddbecb47c fixes the issue.
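To illustrate the shape of the fix (this sketch is hypothetical and may differ from the actual change in the commit above), a helper that strips a duplicated VG prefix before building the VG/LV path:

```shell
# Illustrative only: normalize an LV path so the VG appears exactly once.
lv_path() {
  local vg="$1" lv="$2"
  lv="${lv#"${vg}"/}"          # strip a leading "vg/" if already present
  printf '%s/%s\n' "${vg}" "${lv}"
}

lv_path GLUSTER_vg1 GLUSTER_vg1/GLUSTER_pool1   # -> GLUSTER_vg1/GLUSTER_pool1
lv_path GLUSTER_vg1 GLUSTER_pool1               # -> GLUSTER_vg1/GLUSTER_pool1
```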
Comment 7 SATHEESARAN 2017-09-22 12:03:03 EDT
Verified with gdeploy-2.0.2-16.el7rhgs and this issue is not seen.

The configuration file used is:

[hosts]
host1

[backend-setup]
devices=vdb,vdc

Observed that the bricks are created successfully and mounted as expected.


Note:
Not marking this bug as **VERIFIED** as the gdeploy errata is not yet available.
Once the gdeploy errata is available with this specified build, I will move this bug to VERIFIED state.
Comment 8 SATHEESARAN 2017-10-06 03:07:06 EDT
The build gdeploy-2.0.2-16.el7rhgs is now available with the errata; per comment 7, marking this bug as VERIFIED.
Comment 11 errata-xmlrpc 2017-11-28 22:27:19 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3274
