Bug 1801359 - ESXi Disk.DiskMaxIOSize considerations for iGW
Summary: ESXi Disk.DiskMaxIOSize considerations for iGW
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 4.1
Hardware: All
OS: Other
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: Backlog
Assignee: Anjana Suparna Sriram
QA Contact: Manohar Murthy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-10 17:44 UTC by Heðin
Modified: 2023-07-25 18:12 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-4373 (Private: 0, Priority: None, Status: None, Summary: None, Last Updated: 2022-05-24 08:23:06 UTC)

Description Heðin 2020-02-10 17:44:59 UTC
Description of problem:
Documentation explaining block sizes in relation to the iSCSI gateway (iGW) would give a better picture of the data flow between an iGW client and the backing RBD image.
An example could be how changing the ESXi setting "Disk.DiskMaxIOSize"[1] from its default value of 32767 affects iGW performance (a hedged query/update sketch is included at the end of this description).


Version-Release number of selected component (if applicable):
Supported versions of ESXi.

[1] https://kb.vmware.com/s/article/1003469
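
For the documentation, a minimal sketch of how Disk.DiskMaxIOSize could be queried and changed from a script, assuming the pyVmomi SDK and a reachable standalone ESXi host; the hostname, credentials, and the example value of 512 KB are illustrative placeholders, not a recommended setting:

# Hedged sketch: query and (optionally) lower Disk.DiskMaxIOSize on one ESXi host.
# Assumes pyVmomi is installed; the host, user, and password below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab convenience only; verify certs in production
si = SmartConnect(host="esxi.example.com", user="root", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                      # a direct ESXi connection exposes a single HostSystem
    adv = host.configManager.advancedOption  # vim.option.OptionManager

    current = adv.QueryOptions("Disk.DiskMaxIOSize")[0]
    print("Disk.DiskMaxIOSize =", current.value, "KB")

    # Illustrative only: lower the limit so larger guest I/Os are split before they reach the iGW.
    # Reuse the existing value's type so the server accepts the update.
    new_value = type(current.value)(512)
    adv.UpdateOptions(changedValue=[vim.option.OptionValue(key="Disk.DiskMaxIOSize",
                                                           value=new_value)])
    view.Destroy()
finally:
    Disconnect(si)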

Comment 1 Heðin 2020-02-10 20:08:40 UTC
Loosely related, the following parameters could be documented alongside Disk.DiskMaxIOSize:
ISCSI.MaxIoSizeKB
Misc.APDTimeout

Also useful would be an explanation of which scenarios can trigger the iGW to reject requests, such as I/O that is too slow because a deep-scrub is running on an NL (nearline) device class that shares a LUN export with a much faster device class (this happens on 3.x; not sure about 4.x).
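
Loosely, a read-only audit sketch for these settings, again assuming pyVmomi with placeholder connection details; it reports the three values per host, and the "approx. max I/O" line is my assumption that the iGW never sees an I/O larger than the smaller of Disk.DiskMaxIOSize and ISCSI.MaxIoSizeKB, not a documented formula:

# Hedged sketch: report iGW-relevant ESXi advanced settings for every host.
# Assumes pyVmomi; the vCenter/ESXi address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

SETTINGS = ["Disk.DiskMaxIOSize", "ISCSI.MaxIoSizeKB", "Misc.APDTimeout"]

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        adv = host.configManager.advancedOption
        values = {}
        for name in SETTINGS:
            try:
                values[name] = adv.QueryOptions(name)[0].value
            except vim.fault.InvalidName:
                values[name] = None          # setting not present on this ESXi release
        print(host.name, values)
        # Assumption: the largest I/O the iGW receives is bounded by both knobs (both in KB).
        if values["Disk.DiskMaxIOSize"] and values["ISCSI.MaxIoSizeKB"]:
            print("  approx. max I/O to the iGW (KB):",
                  min(values["Disk.DiskMaxIOSize"], values["ISCSI.MaxIoSizeKB"]))
    view.Destroy()
finally:
    Disconnect(si)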

