Bug 1384230 - iSCSI performance is slow on secondary (non-optimised) paths during failover
Summary: iSCSI performance is slow on secondary (non-optimised) paths during failover
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 2.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 2.1
Assignee: Mike Christie
QA Contact: Hemanth Kumar
URL:
Whiteboard:
Depends On:
Blocks: 1379890
 
Reported: 2016-10-12 21:28 UTC by Paul Cuzner
Modified: 2017-07-30 15:33 UTC (History)
4 users

Fixed In Version: ceph-iscsi-ansible-1.2-1.el7scon ceph-iscsi-config-1.2-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-22 19:32:40 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2815 normal SHIPPED_LIVE Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update 2017-03-22 02:06:33 UTC

Description Paul Cuzner 2016-10-12 21:28:43 UTC
Description of problem:
When a LUN's primary path (Active/Optimised) is lost because the owning gateway is down, client-side MPIO fails over correctly to the non-optimised paths. However, performance on those secondary paths is slow.

Version-Release number of selected component (if applicable):


How reproducible:
So far, all tests with Windows 2012r2 have shown this behaviour.

Steps to Reproduce:
1. Create a Ceph cluster with iSCSI gateways.
2. Define RBD images/LUNs to a client, ensuring each LUN has both AO (Active/Optimised) and ANO (Active/Non-Optimised) paths.
3. Run an I/O job (e.g. IOMeter).
4. Power off the gateway that owns the primary path and observe the IOPS profile.
5. Bring the gateway back online; IOPS will return to the prior level.
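The path states referred to in the steps above can be confirmed on the gateway before and after the failover. A sketch, assuming the gateway uses LIO and exposes its ALUA state through configfs (the backstore name "rbd.disk_1" and the exact configfs depth are assumptions for illustration):

```shell
# On each iSCSI gateway, read back the ALUA access state that
# ceph-iscsi-config applied to the target port groups.
# 0 = Active/Optimised, 1 = Active/Non-Optimised.
grep -H . /sys/kernel/config/target/core/*/rbd.disk_1/alua/*/alua_access_state

# On a Linux initiator the equivalent client-side check (assuming
# multipath-tools is installed) is:
multipath -ll
# Path groups print with their priority and state, e.g. the AO group
# as "status=active" and the ANO groups as "status=enabled".
```

On the Windows 2012r2 initiator used in the report, the per-path Active/Optimized vs Active/Unoptimized state can be inspected with the in-box `mpclaim -s -d <disk>` tool.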

Actual results:
During I/O over the ANO paths, IOPS dropped significantly.

Expected results:
I/O rates over the secondary (ANO) path(s) should be comparable to those over the AO path.


Additional info:
The issue is with the ALUA configuration defined by the ceph-iscsi-config RPM.
A patch for this issue is available for testing.
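The report does not name which ALUA setting the patch changes, so the following is illustrative only. One LIO tunable that is a known cause of exactly this symptom is `nonop_delay_msecs`, which injects a per-command delay on Active/Non-Optimised paths; the configfs path below (the `user_0` HBA and `rbd.disk_1` backstore names, and the `tpg_ano` group name) is hypothetical:

```shell
# Hypothetical configfs path to an ANO target port group on the gateway.
TPG=/sys/kernel/config/target/core/user_0/rbd.disk_1/alua/tpg_ano

# Delay (in ms) applied to every command serviced on an ANO path;
# a non-zero value throttles secondary-path I/O.
cat $TPG/nonop_delay_msecs

# Removing the artificial delay lets ANO-path I/O run at full rate.
echo 0 > $TPG/nonop_delay_msecs
```

A shipped fix would set this (or whichever setting is actually at fault) in the ALUA configuration that ceph-iscsi-config generates, rather than by hand.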

Comment 3 Ken Dreyer (Red Hat) 2016-10-14 23:14:09 UTC
(In reply to Paul Cuzner from comment #0)
> A patch is available for this issue to test

Where is this patch?

Comment 4 Paul Cuzner 2016-10-14 23:31:47 UTC
The patch is included in the 1.2 release, now available to QE.

Comment 6 Hemanth Kumar 2016-11-03 15:20:17 UTC
Compared performance on an Active-Optimised path versus an Active-Non-Optimised path, and I see no difference in I/O.

Moving to verified state.

Comment 8 errata-xmlrpc 2016-11-22 19:32:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2815.html

