Bug 1748022

Summary: Enable gluster 4k support
Product: [oVirt] vdsm
Component: Core
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: unspecified
Version: 4.40.0
Target Milestone: ovirt-4.3.8
Target Release: 4.30.40
Hardware: Unspecified
OS: Unspecified
Reporter: Nir Soffer <nsoffer>
Assignee: Nir Soffer <nsoffer>
QA Contact: Shir Fishbain <sfishbai>
CC: aefrat, bugs, lsvaty, sasundar, tnisan
Flags: sbonazzo: ovirt-4.3?, sasundar: blocker?, sasundar: testing_ack+
Fixed In Version: vdsm-4.30.40
Last Closed: 2020-01-27 12:55:58 UTC
Type: Bug
oVirt Team: Storage
Bug Depends On: 1745443
Bug Blocks: 1784697

Description Nir Soffer 2019-09-02 13:42:07 UTC
Description of problem:

Support for Gluster 4k storage is disabled by default due to issues in qemu and
qemu-img, and the testing still required with gluster 4k storage.

Qemu now supports 4k storage, since version 2.12.0-33.el7_7.3
(see bug 1745443).

Currently, users have to install a drop-in configuration file on all hosts
to enable gluster 4k support:

    $ cat /etc/vdsm/vdsm.conf.d/gluster.conf
    [gluster]
    enable_4k_storage = true
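
For reference, a minimal sketch of installing this drop-in on a host and
restarting vdsm to pick it up (assuming vdsmd is managed by systemd, as on
EL7 hosts):

    $ mkdir -p /etc/vdsm/vdsm.conf.d
    $ cat > /etc/vdsm/vdsm.conf.d/gluster.conf << 'EOF'
    [gluster]
    enable_4k_storage = true
    EOF
    $ systemctl restart vdsmd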

Required changes:
- Require qemu-kvm-rhev >= 2.12.0-33.el7_7.3
- Enable gluster:enable_4k_storage when building for RHEL 7

For other platforms (RHEL 8.1, CentOS, Fedora) we will have to wait until a
fixed qemu version is available.
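
The qemu requirement would typically be expressed in the vdsm spec file; a
hypothetical fragment for illustration (not the actual patch):

    # Hypothetical vdsm.spec fragment gating the fixed qemu on RHEL 7
    %if 0%{?rhel} == 7
    Requires: qemu-kvm-rhev >= 2.12.0-33.el7_7.3
    %endif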

Comment 1 Nir Soffer 2019-09-18 17:11:32 UTC
We cannot enable gluster 4k support before bug 1751722 is fixed, since
the code detecting block size on gluster crashes the gluster fuse mount.

Comment 2 Nir Soffer 2019-09-26 10:20:48 UTC
We no longer depend on bug 1751722, since vdsm changed the way block size
is detected and is no longer affected by the gluster bug.

Comment 3 Avihai 2019-10-06 15:40:53 UTC
Nir, can you please provide a simple way to verify this bug?

Comment 4 Avihai 2019-10-27 13:14:19 UTC
As Sas stated here [1], this bug should not be merged into 4.3.7, as we will
only be able to test it properly in 4.3.8.
Please retarget this bug to 4.3.8.

Sas words:
"
The RHHI-V QE team will need more time to run all the regression tests, and
this qualification is targeted only at RHV 4.3.8. We will start qualification
with RHV 4.3.7 with few resources, and this testing will spill over beyond
RHV 4.3.7.
"

So, the 4KN feature qualification is NO_ACK with respect to RHV 4.3.7.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1765912#c10

Comment 5 Nir Soffer 2019-11-01 12:44:29 UTC
We are still missing the patch enabling 4k storage support for gluster;
moving back to POST.

Comment 6 Avihai 2019-12-05 14:41:30 UTC
So Nir, verification for this bug should be the following, right?

1) check that by default:

    $ cat /etc/vdsm/vdsm.conf.d/gluster.conf
    [gluster]
    enable_4k_storage = true

2) Also check the required changes:
- Require qemu-kvm-rhev >= 2.12.0-33.el7_7.3

Comment 7 RHV bug bot 2019-12-05 17:50:13 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Open patch attached]

For more info please contact: infra

Comment 8 Nir Soffer 2019-12-07 00:41:15 UTC
(In reply to Avihai from comment #6)
> So Nir, verification for this bug should be the following, right?
> 
> 1) check that by default:
> 
>     $ cat /etc/vdsm/vdsm.conf.d/gluster.conf
>     [gluster]
>     enable_4k_storage = true

No, this is a manual configuration that should not be needed in 4.3.8.

To check that 4k is enabled for gluster by default, run:

    vdsm-client Host getCapabilities

And check the output:

    "supported_block_size": {
        "FCP": [
            512
        ],
        "GLUSTERFS": [
            0,
            512,
            4096
        ],
        "ISCSI": [
            512
        ],
        "LOCALFS": [
            0,
            512,
            4096
        ],
        "NFS": [
            512
        ],
        "POSIXFS": [
            512
        ]
    },

However, this will work only after the following patch, which is still
pending, is merged:
https://gerrit.ovirt.org/c/105398/
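
A quick way to pull out just this field (a sketch, assuming jq is installed
on the host; vdsm-client prints its response as JSON):

    $ vdsm-client Host getCapabilities | jq '.supported_block_size'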

> 2) Also check the required changes:
> - Require qemu-kvm-rhev >= 2.12.0-33.el7_7.3

Yes.

Comment 10 SATHEESARAN 2020-01-07 09:08:29 UTC
Nir,

I see that all the patches are merged. Why is this bug still in MODIFIED state?
Has the build not happened yet?

Comment 11 Nir Soffer 2020-01-07 10:13:05 UTC
SATHEESARAN, yes, this is included in the last build from a few weeks ago
and should be ready for testing.

Comment 13 Shir Fishbain 2020-01-21 14:33:36 UTC
Verified
ovirt-engine-4.3.8.2-0.1.master.el7.noarch
vdsm-4.30.40-1.el7ev.x86_64

Ran vdsm-client Host getCapabilities and the output was:

   "supported_block_size": {
        "FCP": [
            512
        ], 
        "ISCSI": [
            512
        ], 
        "POSIXFS": [
            512
        ], 
        "GLUSTERFS": [
            0, 
            512, 
            4096
        ], 
        "LOCALFS": [
            0, 
            512, 
            4096
        ], 
        "NFS": [
            512
        ]
    }, 

Also checked the required changes:
qemu-kvm-rhev-2.12.0-33.el7_7.8.x86_64
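
For reference, the installed package version can be checked with a standard
RPM query:

    $ rpm -q qemu-kvm-rhev
    qemu-kvm-rhev-2.12.0-33.el7_7.8.x86_64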

Comment 14 Sandro Bonazzola 2020-01-27 12:55:58 UTC
This bugzilla is included in the oVirt 4.3.8 release, published on January 27th 2020.

Since the problem described in this bug report should be
resolved in the oVirt 4.3.8 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.