Bug 1784697

Summary: Hosted Engine deployment fails with 4K gluster storage domain
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: rhhi
Assignee: Gobinda Das <godas>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: rhgs-3.5
CC: godas, rhs-bugs, seamurph
Target Milestone: ---
Target Release: RHHI-V 1.7
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Cause: The default /etc/vdsm/vdsm.conf.d/gluster.conf was not created, which caused the deployment error.
Fix: VDSM now creates this file by default for gluster-based deployments and adds entries such as "[gluster] enable_4k_storage = true" (a sketch of the resulting file is shown after the header fields below).
Result: From this file, VDSM and the engine determine the block size of the disks; if the disks are 4K native (4Kn), the engine sends a block size of 4096 bytes during storage domain creation.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-02-13 15:57:23 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1748022    
Bug Blocks:    
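
Based on the Doc Text above, the fix has VDSM ship a drop-in configuration file for gluster-based deployments. A minimal sketch of what that file would contain, using only the path and key named in the Doc Text (exact contents may differ):

# /etc/vdsm/vdsm.conf.d/gluster.conf -- sketch based on the Doc Text, not a verbatim copy
[gluster]
enable_4k_storage = true

With this drop-in present, VDSM and the engine determine the real storage block size, so the engine can request 4096-byte storage domains instead of the 512 bytes seen in the failing createStorageDomain call in comment 1.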

Description SATHEESARAN 2019-12-18 06:11:37 UTC
Description of problem:
-----------------------
HE deployment with 4K gluster storage domain fails

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHEL 7.7 ( 3.10.0-1062.11.1.el7.x86_64 )
qemu-kvm-common-rhev-2.12.0-33.el7_7.4.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.4.x86_64
RHVH 4.3.8 (RHVH-4.3-20191212.0-RHVH-x86_64-dvd1.iso)

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Start RHHI-V deployment from cockpit, with a 4K device backing the gluster volume (a quick check for confirming the device is 4Kn is sketched after these steps)
2. After gluster deployment/configuration, continue to 'Hosted Engine' deployment with the newly created gluster volume
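
Whether the brick device really is 4K native can be confirmed from sysfs before starting the deployment. A minimal sketch, assuming the brick device is sdb (hypothetical device name, adjust to the host's layout):

# check_block_size.py -- illustration only; the device name is a placeholder
DEVICE = "sdb"  # hypothetical gluster brick device

with open("/sys/block/%s/queue/logical_block_size" % DEVICE) as f:
    logical = int(f.read().strip())
with open("/sys/block/%s/queue/physical_block_size" % DEVICE) as f:
    physical = int(f.read().strip())

# A 4Kn device reports 4096 for both values; a 512e device reports
# logical=512, physical=4096.
print("logical=%d physical=%d" % (logical, physical))

A 4Kn device shows 4096/4096; the failure in this bug is the engine still requesting 512 against such a device (see the vdsm.log excerpt in comment 1).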

Actual results:
---------------
HE deployment over 4K gluster storage domain fails

Expected results:
-----------------
HE deployment should be successful with 4K gluster storage domain


Additional info:

Comment 1 SATHEESARAN 2019-12-18 06:51:18 UTC
Error from vdsm.log

<snip>

2019-12-18 01:32:25,280+0000 INFO  (jsonrpc/7) [vdsm.api] FINISH createStorageDomain error=Block size does not match storage block size: 'block_size=512, storage_block_size=4096' from=::ffff:192.168.1.68,43658, flow_id=1a319448, task_id=f1091694-176d-4a0e-ae40-2c3d50b116ea (api:52)
2019-12-18 01:32:25,280+0000 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='f1091694-176d-4a0e-ae40-2c3d50b116ea') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2626, in createStorageDomain
    max_hosts=max_hosts)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 98, in create
    block_size, storage_block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 881, in _validate_storage_block_size
    raise se.StorageDomainBlockSizeMismatch(block_size, storage_block_size)
StorageDomainBlockSizeMismatch: Block size does not match storage block size: 'block_size=512, storage_block_size=4096'
2019-12-18 01:32:25,281+0000 INFO  (jsonrpc/7) [storage.TaskManager.Task] (Task='f1091694-176d-4a0e-ae40-2c3d50b116ea') aborting: Task is aborted: "Block size does not match storage block size: 'block_size=512, storage_block_size=4096'" - code 348 (task:1181)
2019-12-18 01:32:25,281+0000 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH createStorageDomain error=Block size does not match storage block size: 'block_size=512, storage_block_size=4096' (dispatcher:83)

</snip>
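
The traceback ends in sd.py's _validate_storage_block_size(). A minimal, self-contained sketch of the kind of check that raises this error (simplified illustration; only the names come from the traceback, the bodies are assumptions, not vdsm's actual code):

# Simplified illustration of the block-size validation seen in the traceback.

class StorageDomainBlockSizeMismatch(Exception):
    def __init__(self, block_size, storage_block_size):
        super(StorageDomainBlockSizeMismatch, self).__init__(
            "Block size does not match storage block size: "
            "'block_size=%s, storage_block_size=%s'"
            % (block_size, storage_block_size))


def validate_storage_block_size(block_size, storage_block_size):
    # The block size requested for the new domain must match what the
    # underlying storage actually reports, otherwise creation is refused.
    if block_size != storage_block_size:
        raise StorageDomainBlockSizeMismatch(block_size, storage_block_size)


# Reproduces the failure above: the engine asked for 512 while the
# 4K gluster brick reports 4096.
validate_storage_block_size(512, 4096)

With the gluster.conf drop-in described in the Doc Text in place, the engine requests 4096 for the 4Kn disks and this check passes.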

Comment 3 SATHEESARAN 2020-01-16 14:42:02 UTC
Tested with RHVH 4.3.8 (rhvh-4.3.8.1-0.20200115.0+1) and RHVM (4.3.8.2-0.4.el7)

RHHI-V deployment succeeded with 4K disks in place.

Comment 6 errata-xmlrpc 2020-02-13 15:57:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0508