+++ This bug is an upstream to downstream clone. The original bug is: +++
+++ bug 1753235 +++
======================================================================

Description of problem:

When switching to 4k support, we missed a couple of places where there is still a hardcoded 512B block size. All these pieces of code have to be fixed or rewritten.

directio module:
* DirectFile.read()
* DirectFile.write()

misc module:
* readblock()
* _alignData()

(Originally by Vojtech Juranek)
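A minimal sketch (not the actual vdsm patch) of the kind of change the fix requires: buffer size and offset alignment parameterized by the storage block size instead of a hardcoded 512. The helper names and constants below are illustrative, not vdsm API.

```python
# Illustrative only: alignment helpers that take the block size as a
# parameter, so code paths work for both 512B and 4K storage.

BLOCK_SIZE_512 = 512
BLOCK_SIZE_4K = 4096


def round_up(size, block_size):
    """Round size up to the next multiple of block_size."""
    return (size + block_size - 1) // block_size * block_size


def is_aligned(value, block_size):
    """True if value (offset or length) is a multiple of block_size,
    as required for O_DIRECT I/O."""
    return value % block_size == 0
```

With a hardcoded 512, `round_up(100, 512)` would return 512, which is not a valid I/O size on 4K storage; parameterizing the block size lets the same code produce 4096 when the domain reports a 4K sector size.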
The included patch is needed for file-based storage, which should be supported in 4.3.7, but the other cases mentioned here are all used only in block storage, which will not be supported in 4.3.7. Let's handle only file-based storage in this bug, and file another bug for 4k support in block storage, targeted to 4.4. (Originally by Nir Soffer)
RFE for 4k support in block SD: https://bugzilla.redhat.com/1753263 (Originally by Vojtech Juranek)
Please provide a clear scenario of how to verify this bug. (Originally by Avihai Efrat)
There isn't any particular test for verifying it; from the QA point of view, verification should IMHO be that there is no regression in vdsm. (Originally by Vojtech Juranek)
I will QA_ACK this once Sas also gives his OK to test it in the RHHI environment, to see basic functionality.

Just to be clear: we only have the HW to test gluster with 4K enabled; for non-4K storage we will test for regressions on both gluster and NFS.

As I stated before, verification will take time as we need to make sure that:

1) We will run TierX regressions (with regular, non-4K storage) anyway when this fix is introduced downstream, and see that no issues appear.
2) The RHHI team will consume this fix and see whether it affects RHHI operations (which are made on 4K-enabled storage).

Hi Sas,
Please ack that part (2) will be handled by your team so we can QA_ACK this bug.
(In reply to Avihai from comment #9)
> Hi Sas,
> Please ack that this part(2) will be handled by your team so we can QA_ACK
> this bug.

Hi Avihai,

The RHHI-V QE team will need more time to run all the regressions, and this qualification is targeted only at RHV 4.3.8. We will start qualification with RHV 4.3.7 with few resources, and this testing will spill over beyond RHV 4.3.7. So, 4KN feature qualification is NO_ACK wrt RHV 4.3.7.
4KN support with RHHI-V takes more time, as all the regression test cases need to be run on a 4KN setup. It was decided in the RHHI-V program to target 4KN support with RHHI-V for RHV 4.3.8. Hence negating the ack for qualification of this bug for RHV 4.3.7.
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.3.z': '?'}', ] For more info please contact: rhv-devops
Tested with RHV 4.3.8 + RHGS 3.5.1 (glusterfs-6.0-29.el7rhgs):

1. All the RHHI-V core cases were run with 4K disks as well as with 4K VDO devices.
2. Deployment and functionality work well, without any issues.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0499