Bug 1686259
| Summary: | Creation of multiple bricks, one of them being arbiter, on the same disk without dedupe & compression, results in failure | | |
|---|---|---|---|
| Product: | [oVirt] cockpit-ovirt | Reporter: | Mugdha Soni <musoni> |
| Component: | gluster-ansible | Assignee: | Gobinda Das <godas> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 0.12.5 | CC: | bugs, dparth, irosenzw, rcyriac, rhs-bugs, sabose, sankarshan, sasundar |
| Target Milestone: | ovirt-4.3.3 | Flags: | sasundar: ovirt-4.3? sasundar: blocker? godas: devel_ack+ |
| Target Release: | 0.12.6 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | cockpit-ovirt-0.12.6 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1686258 | Environment: | |
| Last Closed: | 2019-04-16 13:58:32 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1686258 | | |
Description

Mugdha Soni 2019-03-07 06:05:57 UTC

Parth,
Can you please take a look?

(In reply to Gobinda Das from comment #1)
> Parth,
> Can you please take a look?

Sure!

While regression testing, ran into a test scenario:
1. No dedupe & compression on the brick, so the brick is a thin LV.
2. Create multiple bricks on the same disk.
3. One of them is an arbiter brick.
In this scenario, the gluster ansible playbook ends up like this:
<snip>
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 1
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 16
</snip>
As per the original intent behind the bug, creation of a dedicated thinpool is attempted for each brick on the same disk. In this case, both entries name the same thinpool, 'gluster_thinpool_gluster_vg_sdc', so it is created with a poolmetadatasize of 1, which is too small for the other thin LV; LVM complains about it and the deployment fails.
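Once duplicate entries are collapsed, the shared thinpool should be declared only once, sized for all of its thin LVs. A minimal sketch of the expected structure, reusing the names from the snippet above (the poolmetadatasize value here is illustrative, not taken from the fix):
<snip>
# One thinpool entry per VG, shared by every thin LV on the disk.
# The metadata size below is illustrative; pick a value large enough
# for all thin LVs that will live in the pool.
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 16G
</snip>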
This bug is now considered a blocker from the QE perspective.
Tested with cockpit-ovirt-dashboard-0.12.6.
When multiple thin LVs are created on the same disk, only one thinpool is created.
Snip from generated ansible vars file
--------------------------------------
gluster_infra_thinpools:
  - vgname: gluster_vg_sdc
    thinpoolname: gluster_thinpool_gluster_vg_sdc
    poolmetadatasize: 5G

gluster_infra_lv_logicalvols:
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_data
    lvsize: 300G
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_vmstore
    lvsize: 600G
  - vgname: gluster_vg_sdc
    thinpool: gluster_thinpool_gluster_vg_sdc
    lvname: gluster_lv_newvol
    lvsize: 900G
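For reference, a vars file like the one above is consumed by the gluster-ansible infra role. A minimal playbook sketch, assuming the generated vars are saved as gluster_vars.yml, the host group is hc_nodes, and the role is installed as gluster.infra (all three names are assumptions for illustration, not taken from this bug):
<snip>
# Sketch: feed the generated variables to the gluster-ansible infra role.
# The vars file name, host group, and role name below are assumed for
# illustration; this bug report does not specify them.
- hosts: hc_nodes
  remote_user: root
  vars_files:
    - gluster_vars.yml
  roles:
    - gluster.infra
</snip>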
This bugzilla is included in the oVirt 4.3.3 release, published on April 16th 2019. Since the problem described in this bug report should be resolved in the oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.