Bug 1811991

Summary: [vdo] VDO creation needs new VDO option maxDiscardSize
Product: [oVirt] cockpit-ovirt
Reporter: SATHEESARAN <sasundar>
Component: gluster-ansible
Assignee: Gobinda Das <godas>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: unspecified
Version: 0.14.1
CC: bugs, godas, rhs-bugs
Target Milestone: ovirt-4.4.0
Flags: sbonazzo: ovirt-4.4?
       sasundar: blocker?
       sasundar: planning_ack?
       godas: devel_ack+
       sasundar: testing_ack+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: rhv-4.4.0-29
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1811988
Environment:
Last Closed: 2020-05-20 20:04:16 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1811988

Description SATHEESARAN 2020-03-10 10:47:13 UTC
Description of problem:
-----------------------
RHHI-V uses VDO for its deduplication and compression functionality, with thin LVs created on top of the VDO volume. In this combination we found a problem (BZ 1600156) where discards were not passed down, because VDO's maximum discard size was smaller than the thinpool's chunk size. To work around this, the RHHI deployment from cockpit made a custom change to the VDO systemd unit file, adding the following:
---
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
---
in /usr/lib/systemd/system/vdo.service.
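For reference, the same global override could be carried as a systemd drop-in instead of editing the packaged unit file (the drop-in path below is hypothetical; the fix tracked in this bug replaces the global knob with the per-volume option entirely):

```ini
# /etc/systemd/system/vdo.service.d/10-max-discard.conf  (hypothetical drop-in path)
[Service]
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
```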

However, per the VDO documentation, on kernels newer than 4.6 (i.e. the RHEL 8 kernel) and starting with VDO 6.2, VDO exposes the option 'maxDiscardSize' per VDO volume.

RHHI-V 1.8 will be using RHEL 8.2 hosts with RHV 4.4 and VDO 6.2.z.
The cockpit wizard needs to be updated to use this per-volume option, which is the reason this bug exists.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
kmod-kvdo-6.2.2.117-64.el8.x86_64
vdo-6.2.2.117-13.el8.x86_64
Kernel - 4.18.0-187.el8.x86_64
cockpit-ovirt-dashboard-0.14.1-1.el8ev.noarch

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Start an RHHI-V deployment with VDO enabled

Actual results:
---------------
systemd unit file for VDO (/usr/lib/systemd/system/vdo.service) has the following:
---
ExecStartPre=/bin/sh -c "echo 4096 > /sys/kvdo/max_discard_sectors"
---

Expected results:
-----------------
The per-volume VDO option 'maxDiscardSize' should be used instead.

Additional info:
----------------
The cockpit wizard should set the new VDO option at creation time:
add the option --maxDiscardSize=16M when each new VDO volume is created.

Comment 1 SATHEESARAN 2020-03-10 10:51:33 UTC
The following example shows the usage of the VDO option 'maxDiscardSize':

[root@ ~]# vdo create -n vdo_sdd --vdoLogicalSize=1T --vdoSlabSize=32G --device /dev/sdd --maxDiscardSize=16M
Creating VDO vdo_sdd
      The VDO volume can address 1 TB in 58 data slabs, each 32 GB.
      It can grow to address at most 256 TB of physical storage in 8192 slabs.
Starting VDO vdo_sdd
Starting compression on VDO vdo_sdd
VDO instance 2 volume is ready at /dev/mapper/vdo_sdd

[root@ ~]# vdo status -n vdo_sdd | grep -i 'discard size'
    Max discard size: 16M

[root@ ~]# dmsetup table | grep vdo_sdd
vdo_sdd: 0 2147483648 vdo V2 /dev/disk/by-id/scsi-3600304801a48610125f9a60819a7d829 488243200 4096 32768 16380 on auto vdo_sdd maxDiscard 4096 ack 1 bio 4 bioRotationInterval 64 cpu 2 hash 1 logical 1 physical 1
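The `maxDiscard 4096` field in the dmsetup table is consistent with the 16M setting, assuming it is counted in 4 KiB VDO blocks (the same 4096-byte block size shown in that table line):

```shell
# 4096 discard blocks x 4096 bytes per block, expressed in MiB
echo "$((4096 * 4096 / 1024 / 1024))M"   # prints "16M"
```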

Comment 3 SATHEESARAN 2020-04-20 05:17:41 UTC
Tested with gluster-ansible-infra-1.0.4-7.el8 and cockpit-ovirt-dashboard-0.14.3
When the role was executed, it no longer modified the VDO systemd unit file;
instead, the per-volume VDO option 'maxDiscardSize' was set to 16M.
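As a sketch, the per-volume setting could be expressed in the gluster-ansible inventory roughly as follows (the variable and key names here are illustrative assumptions, not verified against the gluster-ansible-infra role schema):

```yaml
# Hypothetical inventory fragment; key names are assumptions, not the
# verified gluster-ansible-infra schema.
gluster_infra_vdo:
  - name: vdo_sdd
    device: /dev/sdd
    logicalsize: 1T
    slabsize: 32G
    maxDiscardSize: 16M   # per-volume option replacing the unit-file hack
```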

Comment 4 Sandro Bonazzola 2020-05-20 20:04:16 UTC
This bug is included in the oVirt 4.4.0 release, published on May 20th 2020.

Since the problem described in this bug report should be
resolved in the oVirt 4.4.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.