Bug 1720261 - oVirt 4.3.4 hyperconverged deployment error during wizard setup
Summary: oVirt 4.3.4 hyperconverged deployment error during wizard setup
Keywords:
Status: CLOSED DUPLICATE of bug 1728225
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: Plugins.Gluster
Version: 2.3.10
Hardware: x86_64
OS: Linux
Priority: medium
Severity: low
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Prajith
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks: 1728225 1769811
 
Reported: 2019-06-13 14:14 UTC by Adrian Quintero
Modified: 2020-03-05 09:32 UTC (History)
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-05 09:32:39 UTC
oVirt Team: Gluster
Embargoed:
sbonazzo: ovirt-4.4?


Attachments
Deployment file /etc/ansible/hc_wizard_inventory.yml (5.95 KB, text/plain)
2019-06-13 14:14 UTC, Adrian Quintero

Description Adrian Quintero 2019-06-13 14:14:26 UTC
Created attachment 1580311 [details]
Deployment file /etc/ansible/hc_wizard_inventory.yml

Description of problem:
While doing a hyperconverged setup and trying to use "Configure LV Cache" on my SSD disk (/dev/sdf), the deployment fails when going through the web UI. If I don't use the LV cache SSD disk, the setup succeeds. I retested with 4.3.3 and everything worked fine, so I am reverting to 4.3.3 unless you know of a workaround.

Version-Release number of selected component (if applicable):
oVirt node 4.3.4

How reproducible:


Steps to Reproduce:
1.Follow the hyperconverged setup wizard from within the first node, i.e. https://vmm10.mydomain.com:9090/
2.Select "Hosted Engine" >  Select "Hyperconverged (Configure Gluster storage and oVirt hosted engine)"
3.For the "Gluster Configuration" select "Run Gluster Wizard"
4.-For the Hosts section:
   Fill in the FQDNs for each host (in my case: vmm10.mydomain.com, vmm11.mydomain.com, vmm12.mydomain.com)
5.-For the FQDN section:
   Select "Use same hostnames as in previous step"
6.-For the "Packages" section: <leave empty>
7.-For the "Volumes" Section:
   All volume types as "Replicate":
   a) engine
   b) vmstore1
   c) data1
   d) data2
8.-For the "Bricks" Section:
   Raid type = JBOD
   a) engine    /dev/sdb 500GB  (thinp = no)  /gluster_bricks/engine
   b) vmstore1  /dev/sdc 2700GB (thinp = yes) /gluster_bricks/vmstore1
   c) data1     /dev/sdd 2700GB (thinp = yes) /gluster_bricks/data1
   d) data2     /dev/sdd 2700GB (thinp = yes) /gluster_bricks/data2
9.-From the "Review" Section (Generated Ansible inventory : /etc/ansible/hc_wizard_inventory.yml):
   Select "Deploy"



Actual results:
Error:
TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) => {"ansible_loop_var": "item", "changed": false, "err": "  Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "270G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

 PLAY RECAP *********************************************************************
 vmm10.mydomain.com           : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
 vmm11.mydomain.com           : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
 vmm12.mydomain.com           : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0


Expected results:
"Succefully Deployed Gluster"
"Continue to Hosted Engine Deployment"

Additional info:
/etc/ansible/hc_wizard_inventory.yml file attached.

Comment 1 Sahina Bose 2019-07-19 06:35:59 UTC
Is there a workaround?

Comment 2 Sachidananda Urs 2019-07-19 09:20:36 UTC
(In reply to Sahina Bose from comment #1)
> Is there a workaround?

Yes. As explained in this issue: https://github.com/gluster/gluster-ansible/issues/71
Since Ansible 2.8, the behavior for extending a volume group has changed.

In this case, when setting up the cache, the user has to provide the existing disk of the VG as well: instead of just /dev/sdf, provide '/dev/sdb,/dev/sdf'.
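
A minimal sketch of that change in the inventory's cache section (again assuming the gluster_infra_cache_vars variable name from gluster-ansible; only cachedisk changes, the other cache variables stay as generated by the wizard):

   gluster_infra_cache_vars:
     - vgname: gluster_vg_sdb
       cachedisk: /dev/sdb,/dev/sdf   # existing PV of the VG plus the SSD cache disk
       # remaining cache* variables unchanged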

Comment 3 Sachidananda Urs 2019-07-19 14:41:26 UTC
This is the upstream report I had filed in the Ansible project: https://github.com/ansible/ansible/issues/56501
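
To illustrate the behavior change (a hedged sketch, not the actual task from gluster.infra/roles/backend_setup): from Ansible 2.8 on, the lvg module treats the pvs list as the complete set of physical volumes for the VG, so listing only the cache SSD makes it try to remove /dev/sdb from gluster_vg_sdb, which fails because that PV is still in use.

   # Illustration only; gluster_vg_sdb already exists on /dev/sdb.
   - name: Extend volume group with the cache SSD (fails on Ansible >= 2.8)
     lvg:
       vg: gluster_vg_sdb
       pvs: /dev/sdf            # VG gets reduced to this list -> "/dev/sdb still in use"
       state: present

   - name: Extend volume group with the cache SSD (workaround)
     lvg:
       vg: gluster_vg_sdb
       pvs: /dev/sdb,/dev/sdf   # keep the existing PV in the list
       state: present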

Comment 4 Adrian Quintero 2019-07-22 21:10:37 UTC
Just wanted to mention that for now I have tested using the workaround recommended by Sachidananda and we were able to move forward.

thanks.

Comment 6 Gobinda Das 2019-11-20 12:04:15 UTC
This is already taken care of in 4.3.6, so closing this.

Comment 7 Gobinda Das 2019-12-30 10:20:40 UTC
Reopening, as the issue shows up with a different sector size.

Comment 8 Gobinda Das 2020-03-05 09:32:39 UTC
This issue will be fixed via https://bugzilla.redhat.com/show_bug.cgi?id=1728225, so closing this bug.

*** This bug has been marked as a duplicate of bug 1728225 ***

