Bug 1801359

Summary: ESXi Disk.DiskMaxIOSize considerations for iGW
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Documentation
Documentation sub component: Block Device Guide
Version: 4.1
Hardware: All
OS: Other
Status: NEW
Severity: medium
Priority: unspecified
Target Milestone: rc
Target Release: Backlog
Type: Bug
Reporter: Heðin <hmoller>
Assignee: Anjana Suparna Sriram <asriram>
QA Contact: Manohar Murthy <mmurthy>
CC: asriram, kdreyer, vereddy
Doc Type: If docs needed, set a value

Description Heðin 2020-02-10 17:44:59 UTC
Description of problem:
Documentation is needed that explains block sizes in relation to the iGW, to give a better picture of the data flow between an iGW client and the RBD image.
An example could be how changing the ESXi setting "Disk.DiskMaxIOSize" [1] from its default value of 32767 (KB) would affect iGW performance.
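
For reference, the current value can be inspected and changed with esxcli on the ESXi host; a sketch, where 512 is only an illustrative value (units are KB) and the right value depends on the environment:

  # show the current Disk.DiskMaxIOSize value
  esxcli system settings advanced list -o /Disk/DiskMaxIOSize
  # lower the maximum IO size the host will issue before splitting
  esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512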


Version-Release number of selected component (if applicable):
Supported versions of ESXi.

[1] https://kb.vmware.com/s/article/1003469

Comment 1 Heðin 2020-02-10 20:08:40 UTC
Loosely related, the following parameters could be added alongside Disk.DiskMaxIOSize (see the esxcli examples after this list):
ISCSI.MaxIoSizeKB 
Misc.APDTimeout
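
Assuming these are exposed through the same esxcli advanced-settings interface (the option paths below follow the usual dot-to-slash convention and should be verified against the ESXi release in use):

  # inspect the iSCSI maximum IO size and the all-paths-down timeout
  esxcli system settings advanced list -o /ISCSI/MaxIoSizeKB
  esxcli system settings advanced list -o /Misc/APDTimeout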

Also useful would be an explanation of which scenarios can trigger the iGW to reject requests, for example IO that becomes too slow because a deep-scrub is running on an NL device class that shares a LUN export with a much faster class. (This happens on 3.x; not sure about 4.x.)
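
For context, the device classes present in the cluster, and the CRUSH shadow hierarchies behind them, can be listed with, e.g.:

  # list defined device classes (hdd, ssd, nvme, custom classes such as NL)
  ceph osd crush class ls
  # show the CRUSH tree including per-class shadow trees
  ceph osd crush tree --show-shadow

(Illustrative only; the point is to identify setups where an NL/HDD class and a much faster class sit behind the same LUN export.)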