Bug 1734386

Summary: Update the VDO systemd service file to support thinp bricks
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: rhhi
Assignee: Sahina Bose <sabose>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhhiv-1.6
CC: amukherj, godas, guillaume.pavese, pasik, rhs-bugs, sabose, sasundar, surs
Target Milestone: ---
Keywords: ZStream
Target Release: RHHI-V 1.6.z Async Update
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text: Thinly provisioned bricks are now supported on top of Virtual Data Optimizer (VDO) devices.
Story Points: ---
Clone Of: 1690606
Environment:
Last Closed: 2019-10-03 12:24:01 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1690606
Bug Blocks:

Description SATHEESARAN 2019-07-30 12:17:50 UTC
+++ This bug was initially created as a clone of Bug #1690606 +++

Description of problem:
------------------------
The current problem with VDO is that when a thinpool is created on top of a VDO device, discards do not work, because the discard size of the thinpool is misaligned with the maximum discard size of the VDO device.

Updating the VDO max_discard_sectors value solves this problem, so this bug is to update the VDO systemd service file and restart the VDO service at the start of the RHHI deployment.
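
For reference, the equivalent change can be made by hand on a running host. This is only a hedged sketch: the sysfs path and the value 4096 are taken from the unit file quoted in the comments below (KCS 3562021), and the restart step is an assumption based on the limit being applied when VDO volumes are started.

# Load the modules so the sysfs knob exists
modprobe -a uds kvdo

# Current limit, in 512-byte sectors
cat /sys/kvdo/max_discard_sectors

# Raise the limit to 4096 sectors (4096 * 512 bytes = 2 MiB)
echo 4096 > /sys/kvdo/max_discard_sectors

# The new limit is picked up when VDO volumes are (re)started
systemctl restart vdo.service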

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI-V 1.6 & RHVH 4.3
RHEL 7.6+
gluster-ansible-roles-1.0.4.4
vdo-6.1.1.125-3.el7.x86_64
kmod-kvdo-6.1.1.125-5.el7.x86_64

How reproducible:
----------------
Not Applicable

Steps to Reproduce:
--------------------
Not Applicable

Actual results:
----------------
The vdo.service systemd unit file does not include the change to the vdo max_discard_sectors setting.

Expected results:
-----------------
Drop in the updated VDO systemd unit file as described in the KCS article [1]

[1] - https://access.redhat.com/solutions/3562021

--- Additional comment from SATHEESARAN on 2019-03-19 19:32:38 UTC ---

Updated VDO systemd unit file as per https://access.redhat.com/solutions/3562021


[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe -a uds 
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target

--- Additional comment from SATHEESARAN on 2019-03-19 19:33:16 UTC ---

Once this issue is fixed, cockpit-ovirt should allow creation of thinp bricks on top of VDO devices
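
For context, a thinp brick on top of a VDO device looks roughly like the sketch below. This is a hedged illustration only: the device, VG, LV, size, and chunk-size values are placeholders, not the values cockpit-ovirt or gluster-ansible actually use.

# Create the VDO volume on the raw disk (names/sizes are examples only)
vdo create --name=vdo_sdb --device=/dev/sdb --vdoLogicalSize=3T

# LVM thinpool and thin LV on top of the VDO device
pvcreate /dev/mapper/vdo_sdb
vgcreate gluster_vg_sdb /dev/mapper/vdo_sdb
lvcreate -L 500G --chunksize 2M --zero n --thinpool gluster_thinpool_sdb gluster_vg_sdb
lvcreate -V 500G --thin -n gluster_lv_data gluster_vg_sdb/gluster_thinpool_sdb

# Brick filesystem; discards reach the VDO device only once max_discard_sectors is raised
mkfs.xfs -f -i size=512 /dev/gluster_vg_sdb/gluster_lv_data
mkdir -p /gluster_bricks/data
mount /dev/gluster_vg_sdb/gluster_lv_data /gluster_bricks/data
fstrim -v /gluster_bricks/data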

--- Additional comment from SATHEESARAN on 2019-03-28 12:13:23 UTC ---

This bug is a must-fix for the forthcoming RHHI-V release; providing qa_ack for the same

--- Additional comment from Sachidananda Urs on 2019-03-29 10:00:00 UTC ---

https://github.com/gluster/gluster-ansible-infra/pull/56

--- Additional comment from errata-xmlrpc on 2019-05-08 13:34:22 UTC ---

Bug report changed to ON_QA status by Errata System.
A QE request has been submitted for advisory RHEA-2019:41946-01
https://errata.devel.redhat.com/advisory/41946

--- Additional comment from SATHEESARAN on 2019-06-14 14:03:30 UTC ---

When testing with gluster-ansible-infra-1.0.4-2, a couple of lines are missing from the VDO systemd unit file.
Because of this, the uds and kvdo kernel modules are not loaded, and VDO fails to start

Missing lines in vdo.service
<snip>
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
</snip>

Full content of VDO systemd unit file:
----------------------------------------
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
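
A quick way to check on a test host whether the deployed unit file content actually takes effect (hedged sketch, assuming vdo.service is the unit shown above):

systemctl daemon-reload
systemctl restart vdo.service

# Both modules should now be loaded via the ExecStartPre lines
lsmod | grep -E '^(kvdo|uds)'

# Should print 4096 once the echo line has run
cat /sys/kvdo/max_discard_sectors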

--- Additional comment from Atin Mukherjee on 2019-06-16 14:39:38 UTC ---

Need an estimate on how soon this fix can land upstream. This is to assess the current state against the development freeze milestone.

--- Additional comment from Sachidananda Urs on 2019-06-18 04:18:27 UTC ---

(In reply to Atin Mukherjee from comment #7)
> Need an estimate on how soon this fix can land upstream. This is to assess
> the current state against the development freeze milestone.

This will be done by today (18/Jun) EOD.

--- Additional comment from Sachidananda Urs on 2019-06-18 05:51:12 UTC ---

https://github.com/gluster/gluster-ansible-infra/pull/66 resolves the issue.

--- Additional comment from SATHEESARAN on 2019-06-26 02:43:38 UTC ---

Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build: glusterfs-6.0-6) and ansible 2.8.1-1, with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

The required lines are now added to the vdo.service systemd unit file:

<snip>

[root@ ~]# cat /etc/systemd/system/multi-user.target.wants/vdo.service 
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
#BEGIN ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
#END ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
</snip>

--- Additional comment from Sunil Kumar Acharya on 2019-06-26 04:16:58 UTC ---

Please update the doc text.

Comment 2 errata-xmlrpc 2019-10-03 12:24:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963