Bug 1686259 - Creation of multiple bricks, one of them being arbiter, on the same disk without dedupe & compression, results in failure
Summary: Creation of multiple bricks, one of them being arbiter, on the same disk witho...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: cockpit-ovirt
Classification: oVirt
Component: gluster-ansible
Version: 0.12.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-4.3.3
Target Release: 0.12.6
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1686258
 
Reported: 2019-03-07 06:05 UTC by Mugdha Soni
Modified: 2019-04-16 13:58 UTC
CC List: 8 users

Fixed In Version: cockpit-ovirt-0.12.6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1686258
Environment:
Last Closed: 2019-04-16 13:58:32 UTC
oVirt Team: Gluster
Embargoed:
sasundar: ovirt-4.3?
sasundar: blocker?
godas: devel_ack+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 98772 0 master MERGED Adding single thinpool entry per device 2019-03-22 11:01:04 UTC
oVirt gerrit 98778 0 ovirt-4.3 MERGED Adding single thinpool entry per device 2019-03-22 13:27:50 UTC

Description Mugdha Soni 2019-03-07 06:05:57 UTC
+++ This bug was initially created as a clone of Bug #1686258 +++

Description of problem:
------------------------
When multiple thin-provisioned (thinp) volumes are carved out of the same disk, the generated inventory file contains multiple thinpool entries for that disk.

Version-Release number of selected component :
-----------------------------------------------
rhvh-4.3.0.5-0.20190305
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch


How reproducible:
-------------------
Every time


Steps to Reproduce:
--------------------
1. In the gluster deployment wizard, create multiple thinp volumes on the same disk.


Actual results:
-------------------
Multiple thinpool entries are present in the inventory file.

Expected results:
---------------------
A single thinpool entry should be present under gluster_infra_thinpools when thinp volumes are carved out of the same disk.

Additional info:
--------------------
Here is a snippet of the thinpool section of the generated inventory file:

 gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
        - vgname: gluster_vg_sdc
          lvname: gluster_lv_data
          size: 4000G
        - vgname: gluster_vg_sdc
          lvname: gluster_lv_vmstore
          size: 2000G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 3
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 3
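The merged patches ("Adding single thinpool entry per device") collapse such duplicates so each device contributes exactly one thinpool entry. A minimal Python sketch of that deduplication, treating the entries as plain dicts as in the snippet above (illustrative only; this is not the actual cockpit-ovirt code, and `dedupe_thinpools` is a hypothetical helper name):

```python
# Illustrative sketch only -- not the actual cockpit-ovirt fix.
# Collapse the generated thinpool list so each volume group (device)
# appears exactly once, keeping the first entry seen for that group.
def dedupe_thinpools(thinpools):
    seen = {}
    for entry in thinpools:
        vg = entry["vgname"]
        if vg not in seen:
            seen[vg] = dict(entry)
    return list(seen.values())


# The duplicated entries from the snippet above.
pools = [
    {"vgname": "gluster_vg_sdd",
     "thinpoolname": "gluster_thinpool_gluster_vg_sdd",
     "poolmetadatasize": 3},
    {"vgname": "gluster_vg_sdd",
     "thinpoolname": "gluster_thinpool_gluster_vg_sdd",
     "poolmetadatasize": 3},
]
print(dedupe_thinpools(pools))  # a single entry for gluster_vg_sdd
```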

Comment 1 Gobinda Das 2019-03-07 06:33:04 UTC
Parth,
 Can you please take a look?

Comment 2 Parth Dhanjal 2019-03-07 06:36:32 UTC
(In reply to Gobinda Das from comment #1)
> Parth,
>  Can you please take a look?

Sure!

Comment 3 SATHEESARAN 2019-03-20 08:01:16 UTC
While regression testing, I ran into a test scenario:
1. No dedupe & compression on the brick, so the brick is a thin LV
2. Create multiple bricks on the same disk
3. One of them is an arbiter brick

In this scenario, the gluster ansible playbook ends up like this:
<snip>
gluster_infra_thinpools:
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 1
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 16

</snip>

So, as per the original problem behind this bug, creation of a dedicated thinpool
is attempted for each brick on the same disk. In this case, the thinpool 'gluster_thinpool_gluster_vg_sdc'
is first created with a poolmetadatasize of 1, which is too small for the other thin LV,
and LVM complains about it and fails.

This bug is now considered as a blocker from QE perspective
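One way to handle the mismatched metadata sizes above is, when collapsing entries per device, to keep the largest requested poolmetadatasize so the shared pool is never sized for the smallest brick alone. A hedged Python sketch of that idea (`merge_thinpools` is a hypothetical helper, not taken from the actual patch):

```python
# Hypothetical helper, not the actual patch: merge thinpool entries per
# volume group and keep the largest requested metadata size, so the pool
# metadata is not sized for only the first (smallest) brick.
def merge_thinpools(thinpools):
    merged = {}
    for entry in thinpools:
        vg = entry["vgname"]
        if vg in merged:
            merged[vg]["poolmetadatasize"] = max(
                merged[vg]["poolmetadatasize"], entry["poolmetadatasize"])
        else:
            merged[vg] = dict(entry)
    return list(merged.values())


# The conflicting entries from the snip above (sizes 1 and 16).
entries = [
    {"vgname": "gluster_vg_sdc",
     "thinpoolname": "gluster_thinpool_gluster_vg_sdc",
     "poolmetadatasize": 1},
    {"vgname": "gluster_vg_sdc",
     "thinpoolname": "gluster_thinpool_gluster_vg_sdc",
     "poolmetadatasize": 16},
]
print(merge_thinpools(entries))  # one entry, poolmetadatasize 16
```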

Comment 4 SATHEESARAN 2019-03-29 18:50:17 UTC
Tested with cockpit-ovirt-dashboard-0.12.6

When multiple thin LVs are created on the same disk,
only one thinpool is created.

Snip from generated ansible vars file
--------------------------------------
    gluster_infra_thinpools:
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 5G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_data
          lvsize: 300G
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_vmstore
          lvsize: 600G
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_newvol
          lvsize: 900G
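The fixed output above can be sanity-checked mechanically: every volume group should now appear at most once under gluster_infra_thinpools. A small illustrative check (not part of the product; `thinpool_vgs_unique` is a hypothetical name):

```python
from collections import Counter

# Illustrative verification, not part of cockpit-ovirt: confirm that a
# generated thinpool list contains exactly one entry per volume group.
def thinpool_vgs_unique(thinpools):
    counts = Counter(entry["vgname"] for entry in thinpools)
    return all(count == 1 for count in counts.values())


# The single entry from the fixed vars file above.
fixed = [
    {"vgname": "gluster_vg_sdc",
     "thinpoolname": "gluster_thinpool_gluster_vg_sdc",
     "poolmetadatasize": "5G"},
]
print(thinpool_vgs_unique(fixed))  # True
```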

Comment 5 Sandro Bonazzola 2019-04-16 13:58:32 UTC
This bugzilla is included in the oVirt 4.3.3 release, published on April 16th 2019.

Since the problem described in this bug report should be
resolved in the oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

