Created attachment 1580311 [details]
Deployment file /etc/ansible/hc_wizard_inventory.yml

Description of problem:
While doing a hyperconverged setup and using "Configure LV Cache" on my SSD disk (/dev/sdf), the deployment fails when going through the web UI. If I don't use the LV cache SSD disk, the setup succeeds. Thought you might want to know; for now I retested with 4.3.3 and everything worked fine, so I am reverting to 4.3.3 unless you know of a workaround.

Version-Release number of selected component (if applicable):
oVirt Node 4.3.4

How reproducible:

Steps to Reproduce:
1. Follow the hyperconverged setup wizard from within the first node, i.e. https://vmm10.mydomain.com:9090/
2. Select "Hosted Engine" > "Hyperconverged (Configure Gluster storage and oVirt hosted engine)".
3. In the "Gluster Configuration" step, select "Run Gluster Wizard".
4. Hosts section: fill in the FQDNs for each host (in my case: vmm10.mydomain.com, vmm11.mydomain.com, vmm12.mydomain.com).
5. FQDN section: select "Use same hostnames as in previous step".
6. Packages section: <leave empty>
7. Volumes section: all volume types as "Replicate":
   a) engine
   b) vmstore1
   c) data1
   d) data2
8. Bricks section: RAID type = JBOD
   a) engine    /dev/sdb   500GB   (thinp = no)    /gluster_bricks/engine
   b) vmstore1  /dev/sdc   2700GB  (thinp = yes)   /gluster_bricks/vmstore1
   c) data1     /dev/sdd   2700GB  (thinp = yes)   /gluster_bricks/data1
   d) data2     /dev/sdd   2700GB  (thinp = yes)   /gluster_bricks/data2
9. In the "Review" section (generated Ansible inventory: /etc/ansible/hc_wizard_inventory.yml), select "Deploy".

Actual results:
Error:

TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "270G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

PLAY RECAP *********************************************************************
vmm10.mydomain.com         : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
vmm11.mydomain.com         : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0
vmm12.mydomain.com         : ok=13   changed=4    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0

Expected results:
"Successfully Deployed Gluster"
"Continue to Hosted Engine Deployment"

Additional info:
/etc/ansible/hc_wizard_inventory.yml file attached.
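The attached inventory is not reproduced here, but based on the item dictionaries in the failed task above, the cache section that the wizard generated should look roughly like the sketch below. The gluster_infra_cache_vars variable name is an assumption from the gluster.infra backend_setup role; the keys and values are taken from the vmm10 entry in the error output.

  gluster_infra_cache_vars:            # assumed variable name
    - vgname: gluster_vg_sdb
      cachedisk: /dev/sdf              # only the SSD is listed, not the existing PV /dev/sdb
      cachethinpoolname: gluster_thinpool_gluster_vg_sdb
      cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
      cachelvsize: 270G
      cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
      cachemetalvsize: 30G
      cachemode: writethrough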
Is there a workaround?
(In reply to Sahina Bose from comment #1)
> Is there a workaround?

Yes. As explained in this issue: https://github.com/gluster/gluster-ansible/issues/71

Since Ansible 2.8, the behavior for extending a volume group has changed. The user now has to provide the existing disk for the VG as well when setting up the cache: instead of just /dev/sdf, provide '/dev/sdb,/dev/sdf'.
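In inventory terms, a minimal sketch of the workaround (assuming the same cache layout as in the generated hc_wizard_inventory.yml shown above; only the cachedisk value changes):

  gluster_infra_cache_vars:
    - vgname: gluster_vg_sdb
      cachedisk: '/dev/sdb,/dev/sdf'   # existing PV plus the cache SSD, not just /dev/sdf
      # remaining cache keys (cachelvname, cachelvsize, etc.) stay unchanged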
This is the upstream report I had filed in the Ansible project: https://github.com/ansible/ansible/issues/56501
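To illustrate what changed, here is a minimal sketch of the underlying "Extend volume group" step, assuming the role drives the stock lvg module; the VG and disk names are taken from this report.

  # On Ansible >= 2.8, pvs is treated as the complete list of PVs for the VG,
  # so listing only the SSD makes the module try to remove /dev/sdb ("still in use"):
  - lvg:
      vg: gluster_vg_sdb
      pvs: /dev/sdf

  # Listing the existing PV together with the cache SSD extends the VG as intended:
  - lvg:
      vg: gluster_vg_sdb
      pvs: /dev/sdb,/dev/sdf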
Just wanted to mention that for now I have tested using the workaround recommended by Sachidananda, and we were able to move forward. Thanks.
This is already taken care of in 4.3.6, so closing this.
Reopening, as the issue is seen with a different sector size.
This issue will be fixed with https://bugzilla.redhat.com/show_bug.cgi?id=1728225, so closing this bug.

*** This bug has been marked as a duplicate of bug 1728225 ***