Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1686259

Summary: Creation of multiple bricks, one of them being arbiter, on the same disk without dedupe & compression, results in failure
Product: [oVirt] cockpit-ovirt
Component: gluster-ansible
Version: 0.12.5
Target Milestone: ovirt-4.3.3
Target Release: 0.12.6
Hardware: x86_64
OS: Linux
Reporter: Mugdha Soni <musoni>
Assignee: Gobinda Das <godas>
QA Contact: SATHEESARAN <sasundar>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: high
CC: bugs, dparth, irosenzw, rcyriac, rhs-bugs, sabose, sankarshan, sasundar
Flags: sasundar: ovirt-4.3?, sasundar: blocker?, godas: devel_ack+
Fixed In Version: cockpit-ovirt-0.12.6
Clone Of: 1686258
Bug Blocks: 1686258
oVirt Team: Gluster
Last Closed: 2019-04-16 13:58:32 UTC

Description Mugdha Soni 2019-03-07 06:05:57 UTC
+++ This bug was initially created as a clone of Bug #1686258 +++

Description of problem:
------------------------
When multiple thinp (thin-provisioned) volumes are carved out of the same disk, the generated inventory file contains multiple thinpool entries for the same volume group.

Version-Release number of selected component:
-----------------------------------------------
rhvh-4.3.0.5-0.20190305
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch


How reproducible:
-------------------
Every time


Steps to Reproduce:
--------------------
1. During gluster deployment, create multiple thinp volumes on the same disk.


Actual results:
-------------------
Multiple thinpool entries for the same volume group are present in the inventory file.

Expected results:
---------------------
A single thinpool entry should be present under gluster_infra_thinpools when thinp volumes are carved out of the same disk.

Additional info:
--------------------
Here is a snippet of the generated thinpool section; note the duplicate gluster_vg_sdd entries:

gluster_infra_thick_lvs:
  - vgname: gluster_vg_sdb
    lvname: gluster_lv_engine
    size: 100G
  - vgname: gluster_vg_sdc
    lvname: gluster_lv_data
    size: 4000G
  - vgname: gluster_vg_sdc
    lvname: gluster_lv_vmstore
    size: 2000G
gluster_infra_thinpools:
  - vgname: gluster_vg_sdd
    thinpoolname: gluster_thinpool_gluster_vg_sdd
    poolmetadatasize: 3
  - vgname: gluster_vg_sdd
    thinpoolname: gluster_thinpool_gluster_vg_sdd
    poolmetadatasize: 3
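The fix amounts to collapsing duplicate thinpool entries that target the same volume group before the vars file is emitted. A minimal Python sketch of the intended merge semantics (illustrative only; the function name and merge policy are assumptions, not the actual cockpit-ovirt code, which generates these vars in JavaScript):

```python
def dedupe_thinpools(thinpools):
    """Collapse duplicate thinpool entries that target the same VG/pool.

    Illustrative sketch: when several bricks on the same disk each request
    a thinpool, keep a single entry per (vgname, thinpoolname) pair and
    take the larger poolmetadatasize so no thin LV is starved of metadata.
    """
    merged = {}
    for pool in thinpools:
        key = (pool["vgname"], pool["thinpoolname"])
        if key in merged:
            merged[key]["poolmetadatasize"] = max(
                merged[key]["poolmetadatasize"], pool["poolmetadatasize"]
            )
        else:
            merged[key] = dict(pool)
    return list(merged.values())
```

Applied to the snippet above, the two gluster_vg_sdd entries would collapse into one.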

Comment 1 Gobinda Das 2019-03-07 06:33:04 UTC
Parth,
 Can you please take a look?

Comment 2 Parth Dhanjal 2019-03-07 06:36:32 UTC
(In reply to Gobinda Das from comment #1)
> Parth,
>  Can you please take a look?

Sure!

Comment 3 SATHEESARAN 2019-03-20 08:01:16 UTC
While regression testing, I ran into a test scenario:
1. No dedupe & compression on the brick, so the brick is a thin LV
2. Create multiple bricks on the same disk
3. One of them is an arbiter brick

In this scenario, the gluster ansible playbook ends up like this:
<snip>
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 1
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 16

</snip>

So, as per the original intent behind the bug, creation of a dedicated thinpool is attempted for each brick on the same disk. In this case, the thinpool 'gluster_thinpool_gluster_vg_sdc' is created with a poolmetadatasize of 1, which is too small for the other thin LV; LVM complains about it and the deployment fails.

This bug is now considered a blocker from the QE perspective.

Comment 4 SATHEESARAN 2019-03-29 18:50:17 UTC
Tested with cockpit-ovirt-dashboard-0.12.6

When multiple thin LVs are created on the same disk,
only one thinpool is created.

Snip from generated ansible vars file
--------------------------------------
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 5G
gluster_infra_lv_logicalvols:
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_data
    lvsize: 300G
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_vmstore
    lvsize: 600G
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_newvol
    lvsize: 900G

Comment 5 Sandro Bonazzola 2019-04-16 13:58:32 UTC
This bugzilla is included in the oVirt 4.3.3 release, published on April 16th 2019.

Since the problem described in this bug report should be
resolved in the oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.