Bug 1634551 - [gluster-ansible] Update the VDO systemd unit file to support dm-thin pool on top of VDO volumes
Summary: [gluster-ansible] Update the VDO systemd unit file to support dm-thin pool on...
Keywords:
Status: CLOSED DUPLICATE of bug 1693653
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhi-1.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Duplicates: 1690608 (view as bug list)
Depends On:
Blocks:
 
Reported: 2018-10-01 08:54 UTC by SATHEESARAN
Modified: 2019-06-26 07:56 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-05-07 14:33:50 UTC
Embargoed:



Description SATHEESARAN 2018-10-01 08:54:15 UTC
Description of problem:
-----------------------
dm-thinpool disables discards when configured on top of VDO. Because of this, users cannot reclaim space when a VDO volume is used underneath a thinpool.
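
One way to confirm the symptom on an affected node, as a sketch, is to check the thin-pool target status and the kernel log:

# Status flags include either 'discard_passdown' or 'no_discard_passdown'
dmsetup status --target thin-pool
dmesg | grep -i 'discard passdown'    # dm-thin logs a warning when it disables passdown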

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI 2.0

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a VDO volume
2. Create a thinpool on top of VDO volume
3. Create thin LVs, format them with XFS, and use them as gluster bricks
4. Create a gluster volume and use it to store VM images (using RHV)
5. Try to perform a discard on the FUSE-mounted filesystem (see the sketch below)
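
A minimal sketch of steps 1-5; the device name, sizes, and hostnames are illustrative assumptions, not values from this bug:

# Assumed values throughout: /dev/sdb, LV sizes, hostnames host1-3
vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=1T
pvcreate /dev/mapper/vdo1
vgcreate vg_bricks /dev/mapper/vdo1
lvcreate -L 500G --thinpool tp_bricks vg_bricks
lvcreate -V 400G --thin -n lv_brick1 vg_bricks/tp_bricks
mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1
mkdir -p /gluster/brick1
mount -o discard /dev/vg_bricks/lv_brick1 /gluster/brick1
gluster volume create vmstore replica 3 host{1,2,3}:/gluster/brick1/b1
gluster volume start vmstore
mount -t glusterfs host1:/vmstore /mnt/vmstore
fstrim -v /mnt/vmstore    # step 5: issue discards; the space is never reclaimed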

Actual results:
---------------
Discard will never reclaim the discarded space

Expected results:
-----------------
The discarded storage space can be reclaimed, improving storage utilization

Additional info:
------------------
There is a RHEL bug - BZ 1600156 - tracking the original issue; this bug is the RHHI side of it.

Comment 1 SATHEESARAN 2018-10-25 06:45:57 UTC
Removing the devel ack as this bug is not targeted for RHHI 2.0

Comment 2 Sahina Bose 2018-12-18 07:05:56 UTC
If this bug is fixed, should we remove the restriction on creating VDO volumes with dm-thinp?

Comment 3 Gobinda Das 2019-01-03 10:56:38 UTC
The dependent RHEL bug is in MODIFIED state. We need to test with the RHEL bug fix before removing the restriction; otherwise the complete flow will break.

Comment 4 Sahina Bose 2019-02-18 06:36:11 UTC
(In reply to Gobinda Das from comment #3)
> The dependent RHEL bug is in MODIFIED state. We need to test with the RHEL
> bug fix before removing the restriction; otherwise the complete flow will
> break.

The bug is closed WONTFIX, with a workaround. So what are the next steps for RHHI?

Comment 5 Gobinda Das 2019-02-18 07:09:11 UTC
I think the issue will be fixed in RHEL 8.0, and we will have to test with RHEL 8.0 before removing the restriction on VDO volumes from the RHHI deployment side.

Comment 6 Guillaume Pavese 2019-02-19 03:32:48 UTC
As I understand it, configuring LVCache on RHHI is only possible with thin LVs.
Thin LVs are also necessary to enable volume snapshots, and they are currently not possible on VDO volumes (which are force-created on thick LVs by the oVirt deployment setup).

Does the above comment mean that it will not be possible to configure thin LVs for VDO volumes with oVirt 4.3 / CentOS 7.6 either?
We have been postponing rolling out oVirt 4.2 in production because of these two major drawbacks.


Please advise if you think RHEL 8 will be necessary to bring VDO volumes on par with normal volumes (snapshots and LVCache). In that case we will deploy to production as soon as possible without VDO.

Comment 7 SATHEESARAN 2019-03-19 19:37:51 UTC
*** Bug 1690608 has been marked as a duplicate of this bug. ***

Comment 8 SATHEESARAN 2019-03-19 19:39:30 UTC
(In reply to Gobinda Das from comment #5)
> I think the issue will be fixed in RHEL 8.0, and we will have to test with
> RHEL 8.0 before removing the restriction on VDO volumes from the RHHI
> deployment side.

RHEL 8 support in RHV is going to arrive very late. To support thinp devices on top of VDO devices, there is a simple workaround: raise the VDO max_discard_sectors setting.

This can be added as part of gluster-ansible, which would fix the issue.
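
For reference, a minimal runtime sketch of that workaround (non-persistent; it must be reapplied after every reboot or kvdo module reload):

modprobe -a uds kvdo                         # ensure the /sys/kvdo tree exists
echo 4096 > /sys/kvdo/max_discard_sectors    # 4096 sectors of 512 B = 2 MiB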

Comment 9 SATHEESARAN 2019-03-19 19:40:11 UTC
The fix is not yet posted; updating the bug status accordingly.

Comment 10 SATHEESARAN 2019-03-20 08:02:51 UTC
Updated the VDO systemd unit file as per https://access.redhat.com/solutions/3562021:


[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Load the UDS index and kvdo kernel modules before starting any VDO volume
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
# Raise the maximum discard size (4096 sectors = 2 MiB) so that discards from
# a dm-thin pool stacked on top of VDO are passed down instead of disabled
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
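
To apply the updated unit, reload systemd and restart the service on a node where the VDO volumes can safely be restarted, then verify the value (vdo.service is the unit name shipped by the vdo package on RHEL 7):

systemctl daemon-reload
systemctl restart vdo.service
cat /sys/kvdo/max_discard_sectors    # should now print 4096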

Comment 11 Sahina Bose 2019-05-07 14:33:50 UTC
Addressed via Bug 1693653

*** This bug has been marked as a duplicate of bug 1693653 ***

