+++ This bug was initially created as a clone of Bug #1547619 +++

Description of problem:
In the cockpit UI, the gdeploy conf should use a uniform volume name. Currently the VDO, PV, and VG sections all reference the same device. Since the VDO volume is brought up on the device first, PV creation on that same device fails, reporting that the device is already mounted.

Version-Release number of selected component (if applicable):
cockpit-ovirt-dashboard-0.11.11-0.1.el7ev.noarch
kmod-kvdo-6.1.0.146-13.el7.x86_64
vdo-6.1.0.146-16.x86_64
gdeploy-2.0.2-22.el7rhgs.noarch

How reproducible:
Every time

Steps to Reproduce:
1. Go to the Cockpit UI.
2. Select the gluster deployment and go to the end (preview).
3. In the gdeploy conf file, observe that the vdo and pv sections use the same device.

Actual results:
PV creation fails because the device is already mounted.

Expected results:
Volume creation should not fail.

Additional info:
Snippet from the gdeploy run:

PLAY [gluster_servers] *********************************************************

TASK [Create VDO with specified size] ******************************************
changed: [10.70.36.243] => (item={u'disk': u'/dev/sdb', u'logicalsize': u'200000', u'name': u'engine'})
changed: [10.70.36.241] => (item={u'disk': u'/dev/sdb', u'logicalsize': u'200000', u'name': u'engine'})
changed: [10.70.36.242] => (item={u'disk': u'/dev/sdb', u'logicalsize': u'200000', u'name': u'engine'})
<..>
<..>

PLAY [gluster_servers] *********************************************************

TASK [Clean up filesystem signature] *******************************************
skipping: [10.70.36.241] => (item=/dev/sdb)
skipping: [10.70.36.242] => (item=/dev/sdb)
skipping: [10.70.36.243] => (item=/dev/sdb)

TASK [Create Physical Volume] **************************************************
failed: [10.70.36.241] (item=/dev/sdb) => {"changed": false, "failed_when_result": true, "item": "/dev/sdb", "msg": " Can't open /dev/sdb exclusively. Mounted filesystem?\n", "rc": 5}
failed: [10.70.36.243] (item=/dev/sdb) => {"changed": false, "failed_when_result": true, "item": "/dev/sdb", "msg": " Can't open /dev/sdb exclusively. Mounted filesystem?\n", "rc": 5}
failed: [10.70.36.242] (item=/dev/sdb) => {"changed": false, "failed_when_result": true, "item": "/dev/sdb", "msg": " Can't open /dev/sdb exclusively. Mounted filesystem?\n", "rc": 5}
	to retry, use: --limit @/tmp/tmp9hqOHr/pvcreate.retry

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-02-21 11:04:43 EST ---

This bug is automatically being proposed for the current release of Red Hat Hyperconverged Infrastructure (RHHI) under active development, by setting the release flag 'rhhi-2.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag.
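The clash described above is visible in the generated conf itself: a [vdo*] section and a [pv*] section both claim the same raw disk. A minimal, illustrative sketch (not part of gdeploy) that detects this misconfiguration by parsing a gdeploy-style conf could look like the following; the section and key names follow the snippets in this report, everything else is an assumption:

```python
# Hypothetical checker for the device clash in this bug: flag any raw
# device listed in both a [vdo*] and a [pv*] section of a gdeploy conf.
import configparser

BUGGY_CONF = """
[vdo1]
action=create
devices=/dev/sdb
names=engine
logicalsize=200000

[pv1]
action=create
devices=/dev/sdb
ignore_pv_errors=no
"""

def find_device_clashes(conf_text):
    """Return the set of devices claimed by both a vdo and a pv section."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    vdo_devs, pv_devs = set(), set()
    for section in cp.sections():
        devs = {d.strip() for d in cp[section].get("devices", "").split(",")
                if d.strip()}
        if section.startswith("vdo"):
            vdo_devs |= devs
        elif section.startswith("pv"):
            pv_devs |= devs
    return vdo_devs & pv_devs

# Any non-empty result means pvcreate will race the mounted VDO volume.
print(find_device_clashes(BUGGY_CONF))
```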
In this case, the PV and VG components should use the VDO volume's name and not the device name directly.
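Since VDO exposes each created volume under /dev/mapper/&lt;name&gt;, the fix amounts to deriving the PV/VG device from the VDO volume name rather than reusing the raw disk. A small hedged sketch of that mapping (the function name is illustrative, not gdeploy's actual code):

```python
# Illustrative sketch of the fix: the raw disk stays with the [vdo*]
# section, while the [pv*]/[vg*] sections reference the device-mapper
# path that VDO exposes for the named volume.
def vdo_backed_device(vdo_name):
    """Path the PV and VG sections should reference for a VDO volume."""
    return "/dev/mapper/%s" % vdo_name

print(vdo_backed_device("vdo_sdb"))  # matches the verified conf below
```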
Verified the bug successfully in the component versions below.

Component versions:
gdeploy-2.0.2-23.el7rhgs.noarch
cockpit-ovirt-dashboard-0.11.20-1.el7ev.noarch

Steps:
1. Install VDO.
2. Go to the cockpit UI and check the compression and deduplication tab.
3. Proceed to the review tab.
4. Check that the PV and VG sections carry the appropriate parameters, as below.

Result:

[vdo1:10.70.45.29]
action=create
devices=sdb
names=vdo_sdb
logicalsize=200000G
blockmapcachesize=128M
readcache=enabled
readcachesize=20M
emulate512=enabled
writepolicy=sync
ignore_vdo_errors=no
slabsize=32G

[pv1:10.70.45.29]
action=create
devices=/dev/mapper/vdo_sdb
ignore_pv_errors=no

[vg1:10.70.45.29]
action=create
vgname=gluster_vg_sdb
pvname=/dev/mapper/vdo_sdb
ignore_vg_errors=no
This bugzilla is included in oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.