Bug 1765912
Summary: | [downstream clone - 4.3.8] Align volume size to 4k block size in hsm module for file based storage | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | RHV bug bot <rhv-bugzilla-bot>
Component: | vdsm | Assignee: | Vojtech Juranek <vjuranek>
Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | unspecified | CC: | aefrat, bugs, dfediuck, lsurette, lsvaty, nsoffer, pelauter, rdlugyhe, sasundar, sgoodman, srevivo, tnisan, vjuranek, ycui
Target Milestone: | ovirt-4.3.8 | Keywords: | ZStream
Target Release: | 4.3.8 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | vdsm-4.30.40 | Doc Type: | Bug Fix
Doc Text: | Previously, VDSM v4.30.26 added support for 4K block size with file-based storage domains to Red Hat Virtualization 4.3. However, there were a few instances where 512B block size remained hard-coded. These instances impacted the support of 4K block size under certain conditions. This release fixes these issues. | |
Story Points: | --- | |
Clone Of: | 1753235 | Environment: |
Last Closed: | 2020-02-13 15:25:15 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1753235 | |
Bug Blocks: | | |
Description
RHV bug bot
2019-10-27 10:12:19 UTC
The included patch is needed for file-based storage, which should be supported in 4.3.7, but the other cases mentioned here are all used only in block storage, which will not be supported in 4.3.7. Let's handle only file-based storage in this bug and file another bug for 4K support in block storage, targeted to 4.4.

(Originally by Nir Soffer)

RFE for 4K support in block SD: https://bugzilla.redhat.com/1753263

(Originally by Vojtech Juranek)

Please provide a clear scenario of how to verify this bug.

(Originally by Avihai Efrat)

There isn't any particular test for how to verify it. From the QA point of view, the verification, IMHO, should be that there isn't any regression in vdsm.

(Originally by Vojtech Juranek)

I will QA_ACK this once Sas also gives his OK to test it in an RHHI environment to see basic functionality. Just to be clear, we only have the hardware to test gluster with 4K enabled; for non-4K-enabled storage we will test regression on both gluster and NFS. As I stated before, verification will take time, as we need to make sure that:

1) We will run TierX regressions (with regular non-4K storage) anyway when this fix is introduced downstream, and see that no issues appear.
2) The RHHI team will consume this fix and see whether it affects RHHI operations (which are made on 4K-enabled storage).

Hi Sas, please ack that part (2) will be handled by your team so we can QA_ACK this bug.

(In reply to Avihai from comment #9)

Hi Avihai, the RHHI-V QE team will need more time to run all the regression tests, and this qualification is targeted only at RHV 4.3.8. We will start qualification with RHV 4.3.7 with few resources, and this testing will spill over beyond RHV 4.3.7. So, the 4KN feature qualification is NO_ACK with respect to RHV 4.3.7.

4KN support with RHHI-V takes more time, as all the regression test cases need to be run on a 4KN setup. It was decided in the RHHI-V program to target 4KN support with RHHI-V at RHV 4.3.8. Hence the ack for qualification of this bug is negated for RHV 4.3.7.

WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.3.z': '?'}']. For more info please contact rhv-devops.

Tested with RHV 4.3.8 + RHGS 3.5.1 (glusterfs-6.0-29.el7rhgs):
1. All the RHHI-V core cases were run with 4K disks as well as with 4K VDO devices.
2. Deployment and functionality work well, without any issues.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0499
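
The fix summarized in the Doc Text amounts to rounding volume sizes up to the storage domain's block size instead of assuming a hard-coded 512B. A minimal sketch of that alignment logic in Python (vdsm's language) is below; the function name and constants are illustrative and are not vdsm's actual API.

```python
# Illustrative sketch of block-size alignment; not vdsm's actual code.
BLOCK_SIZE_512 = 512   # legacy block size, previously hard-coded
BLOCK_SIZE_4K = 4096   # native 4K block size

def align_size(size_bytes, block_size=BLOCK_SIZE_4K):
    """Round size_bytes up to the next multiple of block_size."""
    if size_bytes < 0:
        raise ValueError("size must be non-negative")
    return (size_bytes + block_size - 1) // block_size * block_size

# A 512-aligned size that is not 4K-aligned gets rounded up:
print(align_size(512 * 3))   # -> 4096
print(align_size(4096 * 2))  # -> 8192 (already aligned, unchanged)
```

The point of the bug is that code which aligned to `BLOCK_SIZE_512` produced sizes like 1536 above, which are not valid on a 4K-native file-based storage domain.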