Description of problem:
------------------------
When a thinpool is created on top of a VDO device, discards fail because the discard sizes of the thinpool and VDO are misaligned. Updating VDO's max_discard_sectors resolves the problem. This bug should therefore update the VDO systemd unit file and restart the VDO service at the start of the RHHI deployment.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI-V 1.6 & RHVH 4.3
RHEL 7.6+
gluster-ansible-roles-1.0.4.4
vdo-6.1.1.125-3.el7.x86_64
kmod-kvdo-6.1.1.125-5.el7.x86_64

How reproducible:
----------------
Not Applicable

Steps to Reproduce:
--------------------
Not Applicable

Actual results:
----------------
The systemd unit file for the vdo service does not set max_discard_sectors.

Expected results:
-----------------
Drop in the updated VDO systemd unit file as mentioned in the KCS article [1].

[1] - https://access.redhat.com/solutions/3562021
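For background, one rough way to see the mismatch is to compare the discard limits the kernel reports for the stacked devices with VDO's global limit. A minimal sketch, assuming a VDO volume named vdo_sdb (the device name here is only an example):

<snip>
# Per-device discard limits as the kernel exposes them
lsblk -o NAME,DISC-GRAN,DISC-MAX /dev/mapper/vdo_sdb

# Same information straight from sysfs for a given dm device
cat /sys/block/dm-*/queue/discard_max_bytes

# VDO's global discard limit, in 512-byte sectors
cat /sys/kvdo/max_discard_sectors
</snip>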
Updated VDO systemd unit file as per https://access.redhat.com/solutions/3562021:

[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
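If the updated unit file is dropped in manually rather than by the deployment tooling, systemd has to re-read it before the new ExecStartPre lines take effect. A minimal sketch, assuming the file was copied to /etc/systemd/system/vdo.service:

<snip>
systemctl daemon-reload
systemctl restart vdo.service

# Should print 4096 (i.e. 2 MiB expressed in 512-byte sectors)
cat /sys/kvdo/max_discard_sectors
</snip>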
Once this issue is fixed, cockpit-ovirt should allow the creation of thinp bricks on top of VDO devices.
https://github.com/gluster/gluster-ansible-infra/pull/56
When testing with gluster-ansible-infra-1.0.4-2, a couple of params are missing from the VDO systemd unit file. Because of this, the uds and kvdo kernel modules are not loaded, and VDO fails to start.

Missing lines in vdo.service:
<snip>
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
</snip>

Full content of VDO systemd unit file:
----------------------------------------
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
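For completeness, the failure mode above can be confirmed on an affected host, and loading the modules by hand serves as a temporary workaround until the unit file is fixed. A minimal sketch:

<snip>
# With the ExecStartPre modprobe lines missing, neither module shows up:
lsmod | grep -E '^(kvdo|uds)'

# Loading them manually lets the vdo service start:
modprobe -a uds kvdo
systemctl start vdo.service
</snip>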
https://github.com/gluster/gluster-ansible-infra/pull/66 resolves the issue.
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build - glusterfs-6.0-6) with ansible 2.8.1-1 and:

gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

The required values are now added to the vdo.service systemd unit file:

<snip>
[root@ ~]# cat /etc/systemd/system/multi-user.target.wants/vdo.service
[Unit]
Description=VDO volume services
After=systemd-remount-fs.service

[Service]
Type=oneshot
RemainAfterExit=yes
#BEGIN ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE
ExecStartPre=/sbin/modprobe -a uds
ExecStartPre=/sbin/modprobe -a kvdo
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
#END ANSIBLE MANAGED BLOCK - DO NOT EDIT THIS LINE
ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml
ExecStop=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml

[Install]
WantedBy=multi-user.target
</snip>
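For anyone re-verifying on a deployed host, both the ansible-managed block and the runtime value can be checked in one go. A minimal sketch:

<snip>
# Show the unit file systemd actually loaded, including the managed block
systemctl cat vdo.service

# Confirm the ExecStartPre took effect after the service started
cat /sys/kvdo/max_discard_sectors   # expected: 4096
</snip>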
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2557