Bug 1765912 - [downstream clone - 4.3.8] Align volume size to 4k block size in hsm module for file based storage
Summary: [downstream clone - 4.3.8] Align volume size to 4k block size in hsm module for file based storage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.3.8
Target Release: 4.3.8
Assignee: Vojtech Juranek
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1753235
Blocks:
 
Reported: 2019-10-27 10:12 UTC by RHV bug bot
Modified: 2020-02-13 15:25 UTC
CC List: 14 users

Fixed In Version: vdsm-4.30.40
Doc Type: Bug Fix
Doc Text:
Previously, VDSM v4.30.26 added support for 4K block size with file-based storage domains to Red Hat Virtualization 4.3. However, there were a few instances where 512B block size remained hard-coded. These instances impacted the support of 4K block size under certain conditions. This release fixes these issues.
Clone Of: 1753235
Environment:
Last Closed: 2020-02-13 15:25:15 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0499 0 None None None 2020-02-13 15:25:20 UTC
oVirt gerrit 103386 0 None MERGED storage: align volumes always to 4k 2020-05-21 13:46:58 UTC
oVirt gerrit 104125 0 None MERGED storage: align volumes always to 4k 2020-05-21 13:46:58 UTC

Description RHV bug bot 2019-10-27 10:12:19 UTC
+++ This bug is an upstream to downstream clone. The original bug is: +++
+++   bug 1753235 +++
======================================================================

Description of problem:

When switching to 4k support, we missed a couple of places where a 512B block size is still hardcoded. All these pieces of code have to be fixed or rewritten.

directio module:
* DirectFile.read()
* DirectFile.write()

misc module:
* readblock()
* _alignData()
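
A minimal sketch of the alignment rule involved, assuming hypothetical names (this is an illustration, not the actual vdsm patch): sizes are rounded up to a multiple of 4096 bytes instead of assuming 512B sectors.

    # Hypothetical Python illustration (vdsm is written in Python).
    BLOCK_SIZE_512 = 512
    BLOCK_SIZE_4K = 4096

    def align_up(size, block_size=BLOCK_SIZE_4K):
        """Round size in bytes up to the next multiple of block_size."""
        return (size + block_size - 1) // block_size * block_size

    assert align_up(1) == BLOCK_SIZE_4K
    assert align_up(BLOCK_SIZE_4K) == BLOCK_SIZE_4K
    assert align_up(BLOCK_SIZE_4K + 1) == 2 * BLOCK_SIZE_4K

Since 512 divides 4096, always aligning to 4k also stays correct on 512B storage, which matches the approach of the merged patches ("storage: align volumes always to 4k").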

(Originally by Vojtech Juranek)

Comment 1 RHV bug bot 2019-10-27 10:12:21 UTC
The included patch is needed for file-based storage, which should be supported
in 4.3.7; the other cases mentioned here are all used only in block storage,
which will not be supported in 4.3.7.

Let's handle only file-based storage in this bug, and file another bug for 4k
support in block storage, targeted to 4.4.

(Originally by Nir Soffer)

Comment 2 RHV bug bot 2019-10-27 10:12:23 UTC
RFE for 4k support in block SD: https://bugzilla.redhat.com/1753263

(Originally by Vojtech Juranek)

Comment 3 RHV bug bot 2019-10-27 10:12:25 UTC
Please provide a clear scenario of how to verify this bug.

(Originally by Avihai Efrat)

Comment 4 RHV bug bot 2019-10-27 10:12:26 UTC
There isn't any particular test for verifying this; from a QA point of view, verification should IMHO be that there is no regression in vdsm.

(Originally by Vojtech Juranek)

Comment 9 Avihai 2019-10-27 12:01:43 UTC
I will QA_ACK this once Sas also gives his OK to test it in an RHHI env as well, to see basic functionality.

Just to be clear: we only have the HW to test gluster with 4K enabled; for non-4K storage we will test regression on both gluster and NFS.

As I stated before, verification will take time, as we need to make sure that:

1) We run TierX regressions (with regular, non-4K storage) anyway when this fix is introduced downstream, and see that no issues appear.
2) The RHHI team consumes this fix and sees whether it affects RHHI operations (which are made on 4K-enabled storage).

Hi Sas,
Please ack that part (2) will be handled by your team so we can QA_ACK this bug.

Comment 10 SATHEESARAN 2019-10-27 12:34:29 UTC
(In reply to Avihai from comment #9)
> Hi Sas,
> Please ack that part (2) will be handled by your team so we can QA_ACK this
> bug.

Hi Avihai,

The RHHI-V QE team will need more time to run all the regression tests, and this
qualification is targeted only at RHV 4.3.8. We will start qualification with RHV 4.3.7
with a few resources, and this testing will spill over beyond RHV 4.3.7.

So, the 4KN feature qualification is NO_ACK wrt RHV 4.3.7.

Comment 11 SATHEESARAN 2019-10-27 17:14:16 UTC
4KN support with RHHI-V takes more time, as all the regression test cases need to be run on a 4KN setup.
It was decided in the RHHI-V program to target 4KN support with RHHI-V for RHV 4.3.8.

Hence, negating the ack for qualification of this bug for RHV 4.3.7.

Comment 12 RHV bug bot 2019-12-05 17:50:11 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3.z': '?'}', ]

For more info please contact: rhv-devops

Comment 13 RHV bug bot 2019-12-12 12:01:51 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3.z': '?'}', ]

For more info please contact: rhv-devops

Comment 17 SATHEESARAN 2020-01-28 08:01:37 UTC
Tested with RHV 4.3.8 + RHGS 3.5.1 (glusterfs-6.0-29.el7rhgs)

1. All the RHHI-V core cases were run with 4K disks as well as with 4K VDO devices.
2. Deployment and functionality work well, without any issues.

Comment 26 errata-xmlrpc 2020-02-13 15:25:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0499

