Bug 1221868

Summary: Remove iscsi_helper calls from base iscsi driver
Product: Red Hat OpenStack
Reporter: Jaison Raju <jraju>
Component: openstack-cinder
Assignee: Gorka Eguileor <geguileo>
Status: CLOSED ERRATA
QA Contact: lkuchlan <lkuchlan>
Severity: medium
Docs Contact:
Priority: high
Version: 5.0 (RHEL 7)
CC: ddomingo, dmaley, eharney, geguileo, jraju, jschluet, nlevinki, scohen, sgordon, sgotliv, yeylon
Target Milestone: z5
Keywords: ZStream
Target Release: 5.0 (RHEL 7)
Flags: eharney: internal-review+
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: openstack-cinder-2014.1.4-6.el7ost
Doc Type: Bug Fix
Doc Text:
Previous versions of the iSCSI base driver contained target helper calls that were inappropriate for some drivers. These calls could cause attachment failures on some back ends. This update moves those target helper calls to the right place, thereby avoiding any unexpected attachment failures.
Story Points: ---
Clone Of:
: 1222108 1254756 (view as bug list)
Environment:
Last Closed: 2015-09-10 11:48:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1236633
Bug Blocks: 1222108, 1254756

Description Jaison Raju 2015-05-15 05:42:33 UTC
Description of problem:
Cinder fails to attach volumes while using the SolidFire storage subsystem on RHOS5.
The following exception is seen:
ERROR oslo.messaging._drivers.common [req-b62d1e2e-3fa2-49a2-afee-59c0c1c38132 beebbbee3bc944f0847dabd1c458bb2f 9968bb41a7114f6ba75f99e135cd6e3b - - -] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n    incoming.message))\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', '  File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 363, in create_volume\n    _run_flow()\n', '  File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 356, in _run_flow\n    flow_engine.run()\n', '  File "/usr/lib/python2.7/site-packages/taskflow/utils/lock_utils.py", line 51, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 118, in run\n    self._run()\n', '  File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 128, in _run\n    self._revert(misc.Failure())\n', '  File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/engine.py", line 81, in _revert\n    misc.Failure.reraise_if_any(failures.values())\n', '  File "/usr/lib/python2.7/site-packages/taskflow/utils/misc.py", line 487, in reraise_if_any\n    failures[0].reraise()\n', '  File "/usr/lib/python2.7/site-packages/taskflow/utils/misc.py", line 494, in reraise\n    six.reraise(*self._exc_info)\n', '  File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 36, in _execute_task\n    result = task.execute(**arguments)\n', '  File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 594, in execute\n    **volume_spec)\n', '  File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 556, in _create_from_image\n    image_id, image_location, image_service)\n', '  File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 470, in _copy_image_to_volume\n    raise exception.ImageCopyFailure(reason=ex)\n', "ImageCopyFailure: Failed to copy image to volume: Bad or unexpected response from the storage volume backend API: Unable to fetch connection information from backend: 'SolidFireDriver' object has no attribute 'target_helper'\n"]

This bug is fixed upstream by the following patch (fixed in Juno):
https://bugs.launchpad.net/cinder/+bug/1400804
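
For context, a minimal sketch of the failure mode (simplified, hypothetical class names, not the actual cinder code): before the fix, the shared iSCSI base driver referenced self.target_helper on a code path used by all subclasses, but a driver like SolidFire never sets that attribute, so the attach path dies with the AttributeError shown in the traceback above.

    # Hypothetical, simplified stand-ins -- not the real cinder classes.

    class BaseISCSIDriver(object):
        """Plays the role of the shared iSCSI base driver before the fix."""

        def initialize_connection(self, volume, connector):
            # Pre-fix behaviour: the target helper call sits in the base
            # class, so every subclass is assumed to have set
            # self.target_helper.
            self.target_helper.initialize_connection(volume, connector)
            return {'driver_volume_type': 'iscsi', 'data': {}}


    class FakeSolidFireDriver(BaseISCSIDriver):
        """Stands in for a back end that never assigns self.target_helper."""
        pass


    try:
        FakeSolidFireDriver().initialize_connection(volume=None, connector=None)
    except AttributeError as exc:
        # 'FakeSolidFireDriver' object has no attribute 'target_helper'
        print(exc)

The fix keeps the base class generic and moves the helper call into the drivers that actually manage targets through iscsi_helper (see the lvm.py override quoted in comment 11 below).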

Version-Release number of selected component (if applicable):
RHOS5

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:
Cinder fails to attach volumes.

Expected results:
Cinder is able to perform all actions on SolidFire storage.

Additional info:

Comment 7 Sergey Gotliv 2015-05-15 18:47:20 UTC
Gorka,

Please provide a link to your upstream and downstream patches.

Comment 8 Eric Harney 2015-06-30 19:02:21 UTC
The fix here introduced other LIO problems (see bug 1236633).  Will fix that up with this build.

Comment 9 Eric Harney 2015-06-30 19:16:40 UTC
*** Bug 1236633 has been marked as a duplicate of this bug. ***

Comment 11 nlevinki 2015-09-06 13:07:59 UTC
We don't have SolidFire storage, so the only thing I can do is verify that the fix is in:

1) vi /usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py

    def initialize_connection(self, volume, connector):
        """Initializes the connection and returns connection info."""

        # We have a special case for lioadm here, that's fine, we can
        # keep the call in the parent class (driver:ISCSIDriver) generic
        # and still use it throughout, just override and call super here
        # no duplication, same effect but doesn't break things
        # see bug: #1400804
        if self.configuration.iscsi_helper == 'lioadm':
            self.target_helper.initialize_connection(volume, connector)
        return super(LVMISCSIDriver, self).initialize_connection(volume,
                                                                  connector)

2) vi /usr/lib/python2.7/site-packages/cinder/volume/driver.py

    def get_target_helper(self, db):
        root_helper = utils.get_root_helper()
        # FIXME(jdg): These work because the driver will overide
        # but we need to move these to use self.configuraiton
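
As a convenience, the same check can be scripted instead of opening the files in vi. This is only a sketch and assumes the fixed openstack-cinder package is installed and importable on the node:

    # Print the installed LVMISCSIDriver.initialize_connection() so the
    # lioadm special case (and the bug #1400804 reference) can be confirmed
    # without SolidFire hardware.
    import inspect

    from cinder.volume.drivers.lvm import LVMISCSIDriver

    print(inspect.getsource(LVMISCSIDriver.initialize_connection))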

Comment 13 errata-xmlrpc 2015-09-10 11:48:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1758.html