+++ This bug was initially created as a clone of Bug #1686258 +++

Description of problem:
------------------------
When multiple thinp volumes are carved out of the same disk, the generated inventory file contains multiple thinpool entries.

Version-Release number of selected component:
-----------------------------------------------
rhvh-4.3.0.5-0.20190305
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch

How reproducible:
-------------------
Every time

Steps to Reproduce:
--------------------
1. In the gluster deployment, create multiple thinp volumes on the same disk.

Actual results:
-------------------
Multiple thinpool entries are present in the inventory file.

Expected results:
---------------------
A single thinpool entry should be present under gluster_infra_thinpools when thinp volumes are carved out of the same disk (a sketch of the expected layout follows at the end of this comment).

Additional info:
--------------------
Here is a snippet of the gluster infra thinpool section:

gluster_infra_thick_lvs:
  - vgname: gluster_vg_sdb
    lvname: gluster_lv_engine
    size: 100G
  - vgname: gluster_vg_sdc
    lvname: gluster_lv_data
    size: 4000G
  - vgname: gluster_vg_sdc
    lvname: gluster_lv_vmstore
    size: 2000G

gluster_infra_thinpools:
  - vgname: gluster_vg_sdd
    thinpoolname: gluster_thinpool_gluster_vg_sdd
    poolmetadatasize: 3
  - vgname: gluster_vg_sdd
    thinpoolname: gluster_thinpool_gluster_vg_sdd
    poolmetadatasize: 3
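For illustration, the expected layout would carry a single thinpool entry for the shared disk, with every thin LV under gluster_infra_lv_logicalvols pointing at that one pool (the logicalvols structure follows the format of the generated vars file shown later in this bug). This is only a sketch of the expectation; the lvnames, lvsizes and poolmetadatasize below are placeholders, not output from an actual run:

gluster_infra_thinpools:
  - vgname: gluster_vg_sdd
    thinpoolname: gluster_thinpool_gluster_vg_sdd
    poolmetadatasize: 3

gluster_infra_lv_logicalvols:
  # lvnames and sizes below are illustrative placeholders
  - vgname: gluster_vg_sdd
    thinpool: gluster_thinpool_gluster_vg_sdd
    lvname: gluster_lv_vol1
    lvsize: 500G
  - vgname: gluster_vg_sdd
    thinpool: gluster_thinpool_gluster_vg_sdd
    lvname: gluster_lv_vol2
    lvsize: 500G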
Parth,
Can you please take a look?
(In reply to Gobinda Das from comment #1)
> Parth,
> Can you please take a look?

Sure!
While regression testing, I ran into a test scenario:

1. No dedupe & compression on the brick, so the brick is a thin LV
2. Create multiple bricks on the same disk
3. One of them is an arbiter

In this scenario, the gluster-ansible variables end up like this:

<snip>
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 1
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 16
</snip>

So, as per the original intent behind this bug, creation of a dedicated thinpool is attempted for each brick on the same disk. In this case the thinpool 'gluster_thinpool_gluster_vg_sdc' is created with a poolmetadatasize of 1, which looks too small for the other thin LV, and LVM complains about it and fails.

This bug is now considered a blocker from the QE perspective.
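What we would expect instead is a single consolidated entry for the shared VG, with the pool metadata sized to satisfy the larger of the two requirements. The snippet below is only a sketch of that expectation; the exact poolmetadatasize the fixed code should pick is an assumption here, not verified output:

gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 16   # assumption: sized for the larger of the two requested values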
Tested with cockpit-ovirt-dashboard-0.12.6.

When multiple thin LVs are created on the same disk, only one thinpool is created.

Snip from the generated ansible vars file:
--------------------------------------
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 5G

gluster_infra_lv_logicalvols:
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_data
    lvsize: 300G
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_vmstore
    lvsize: 600G
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_newvol
    lvsize: 900G
This bugzilla is included in oVirt 4.3.3 release, published on April 16th 2019.

Since the problem described in this bug report should be resolved in oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.