Bug 1690606 - Update the VDO systemd service file to support thinp bricks
Summary: Update the VDO systemd service file to support thinp bricks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Async Update
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1734386
 
Reported: 2019-03-19 19:31 UTC by SATHEESARAN
Modified: 2019-10-03 07:58 UTC (History)
7 users

Fixed In Version: gluster-ansible-infra-1.0.4-3
Doc Type: Enhancement
Doc Text:
Thinly provisioned bricks are now supported on top of Virtual Disk Optimization (VDO) devices.
Clone Of:
Cloned To: 1690608 1734386
Environment:
Last Closed: 2019-10-03 07:58:12 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:2557 (Last Updated: 2019-10-03 07:58:34 UTC)

Description SATHEESARAN 2019-03-19 19:31:43 UTC
Description of problem:
------------------------
The current problem with VDO is that when a thinpool is created on top of VDO devices, discards are not possible because of the misaligned discard size between the thinpool and VDO.

Updating VDO's max_discard_sectors solves this problem. This bug therefore tracks updating the VDO service file and restarting the VDO service at the start of the RHHI deployment.
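For reference, the value 4096 written to max_discard_sectors later in this report is expressed in 512-byte sectors (an assumption based on the kernel's usual unit for *_sectors tunables); a quick shell sanity check of the resulting discard granularity:

```shell
# max_discard_sectors counts 512-byte sectors (assumed unit), so 4096
# sectors works out to a 2 MiB discard granularity.
sectors=4096
bytes=$((sectors * 512))
mib=$((bytes / 1024 / 1024))
echo "${sectors} sectors = ${bytes} bytes = ${mib} MiB"
```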

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI-V 1.6 & RHVH 4.3
RHEL 7.6+
gluster-ansible-roles-1.0.4.4
vdo-6.1.1.125-3.el7.x86_64
kmod-kvdo-6.1.1.125-5.el7.x86_64

How reproducible:
----------------
Not Applicable

Steps to Reproduce:
--------------------
Not Applicable

Actual results:
----------------
The vdo.service systemd unit file does not set the VDO max_discard_sectors value.

Expected results:
-----------------
Drop in the updated VDO systemd unit file as described in the KCS article [1].

[1] - https://access.redhat.com/solutions/3562021

Comment 1 SATHEESARAN 2019-03-19 19:32:38 UTC
Updated VDO systemd unit file as per https://access.redhat.com/solutions/3562021


[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe -a uds 
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
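As an aside, the same three ExecStartPre lines could also be delivered as a systemd drop-in rather than by replacing the whole unit; a minimal sketch (the drop-in path is illustrative, not taken from the report):

```ini
# /etc/systemd/system/vdo.service.d/99-discard.conf (hypothetical path)
[Service]
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
```

ExecStartPre= lines in a drop-in append to those in the main unit, so only the additions need to be shipped; a `systemctl daemon-reload` is required afterwards.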

Comment 2 SATHEESARAN 2019-03-19 19:33:16 UTC
Once this issue is fixed, cockpit-ovirt should allow creation of thinp bricks on top of VDO devices.

Comment 4 Sachidananda Urs 2019-03-29 10:00:00 UTC
https://github.com/gluster/gluster-ansible-infra/pull/56
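The managed-block markers visible in comment 10 suggest the fix lands via Ansible's blockinfile module; a hypothetical sketch of such a task (the task name, insertafter pattern, and marker text are assumptions, not the actual PR content):

```yaml
- name: Add modprobe and max_discard_sectors steps to vdo.service
  blockinfile:
    path: /etc/systemd/system/vdo.service
    insertafter: '^RemainAfterExit=yes'
    marker: '#{mark} ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE'
    block: |
      ExecStartPre=/sbin/modprobe -a uds
      ExecStartPre=/sbin/modprobe -a kvdo
      ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
```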

Comment 6 SATHEESARAN 2019-06-14 14:03:30 UTC
When testing with gluster-ansible-infra-1.0.4-2, a couple of parameters are missing from the VDO systemd unit file.
Because of this, the uds and kvdo kernel modules are not loaded, and VDO fails to start.

Missing lines in vdo.service
<snip>
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
</snip>
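A quick way to confirm this failure mode is to check whether the modules are loaded (a sketch; it reads /proc/modules directly so it works even without lsmod in PATH):

```shell
# Report whether the uds and kvdo kernel modules are currently loaded.
for mod in uds kvdo; do
    if grep -q "^${mod} " /proc/modules 2>/dev/null; then
        echo "${mod}: loaded"
    else
        echo "${mod}: not loaded"
    fi
done
```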

Full content of VDO systemd unit file:
----------------------------------------
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target

Comment 9 Sachidananda Urs 2019-06-18 05:51:12 UTC
https://github.com/gluster/gluster-ansible-infra/pull/66 resolves the issue.

Comment 10 SATHEESARAN 2019-06-26 02:43:38 UTC
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 ( interim build - glusterfs-6.0-6 ) with ansible 2.8.1-1
with:
gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

The required values are now added to the vdo.service systemd unit file.

<snip>

[root@ ~]# cat /etc/systemd/system/multi-user.target.wants/vdo.service 
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
#BEGIN ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
#END ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
</snip>

Comment 14 errata-xmlrpc 2019-10-03 07:58:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2557

