Bug 1121702 - Nova instance unshelve breaks with LIO-attached volume from cinder
Summary: Nova instance unshelve breaks with LIO-attached volume from cinder
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 9.0 (Mitaka)
Assignee: Sergey Gotliv
QA Contact: Yogev Rabl
URL:
Whiteboard:
Duplicates: 1173681
Depends On:
Blocks:
 
Reported: 2014-07-21 16:11 UTC by Jon Bernard
Modified: 2016-04-26 17:31 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-04-03 10:51:46 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1409409 0 None None None Never
OpenStack gerrit 146333 0 None ABANDONED rtstool: Update credentials when target(LIO) already exists 2020-07-29 01:21:58 UTC

Description Jon Bernard 2014-07-21 16:11:30 UTC
Using this Nova branch to shelve and unshelve an instance:

https://review.openstack.org/#/c/84793/

Steps:

 1. Apply patch listed above
 2. Boot nova instance from volume
 3. Shelve active instance
 4. Unshelve instance
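
For reference, a minimal sketch of the same reproduction flow driven from python-novaclient; the credentials, flavor ID, volume ID, and server name are placeholders, not values from this report:

import os
from novaclient import client

# Placeholder credentials read from the usual OS_* environment variables
# (assumed to be set; not part of this bug report).
nova = client.Client('2',
                     os.environ['OS_USERNAME'],
                     os.environ['OS_PASSWORD'],
                     os.environ['OS_TENANT_NAME'],
                     os.environ['OS_AUTH_URL'])

# Boot from an existing Cinder volume (no image), then shelve/unshelve.
server = nova.servers.create(
    name='bfv-test',
    image=None,
    flavor='<flavor-id>',
    block_device_mapping_v2=[{'uuid': '<volume-id>',
                              'source_type': 'volume',
                              'destination_type': 'volume',
                              'boot_index': 0}])

nova.servers.shelve(server)
# With the LVM/LIO (lioadm) iSCSI backend, the failure shows up here.
nova.servers.unshelve(server)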

When you unshelve the instance it works with tgt, but not with LIO. The
iscsiadm commands below are the ones that fail:

-----

iscsiadm -m node -T iqn.2010-10.org.openstack:volume-355d008d-e5a1-49a8-b2d7-5d803cd07d04 -p 172.19.0.33:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-355d008d-e5a1-49a8-b2d7-5d803cd07d04 -p 172.19.0.33:3260 --op new
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-355d008d-e5a1-49a8-b2d7-5d803cd07d04 -p 172.19.0.33:3260 --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-355d008d-e5a1-49a8-b2d7-5d803cd07d04 -p 172.19.0.33:3260 --op update -n node.session.auth.username -v 7ZprMVkjakxRVmLE9ak2
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-355d008d-e5a1-49a8-b2d7-5d803cd07d04 -p 172.19.0.33:3260 --op update -n node.session.auth.password -v 57BYcd7nLP99JWUZCUsJ
iscsiadm -m session
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-355d008d-e5a1-49a8-b2d7-5d803cd07d04 -p 172.19.0.33:3260 --login

---------

When using LIO the login fails... any ideas why?

Comment 8 Sergey Gotliv 2015-01-06 12:34:32 UTC
It seems like this bug is reproducible in Juno. 

I see 2 issues around that login failure:

1. For some reason Nova uses the wrong CHAP credentials when trying to connect to the volume (I have to investigate why). The system log is full of:

[700490.524763] CHAP_N values do not match!

I checked the username using iscsiadm and targetcli and got two different results. The login succeeded after updating the iscsiadm credentials with the values taken from targetcli.

2. The following iSCSI login code in Nova (libvirt/volume.py) swallows exceptions, which makes debugging more complicated:

try:
    self._run_iscsiadm(iscsi_properties,
                       ("--login",),
                       check_exit_code=[0, 255])
except processutils.ProcessExecutionError as err:
    # as this might be one of many paths,
    # only set successful logins to startup automatically
    if err.exit_code in [15]:
        self._iscsiadm_update(iscsi_properties,
                              "node.startup",
                              "automatic")
        return
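
To illustrate that second point, the same block with the unexpected errors logged and re-raised would make failures like this visible immediately. This is only a sketch of the idea, not the actual Nova fix; it reuses the names from the snippet above, and LOG is assumed to be the module-level logger already used in volume.py:

try:
    self._run_iscsiadm(iscsi_properties,
                       ("--login",),
                       check_exit_code=[0, 255])
except processutils.ProcessExecutionError as err:
    if err.exit_code in [15]:
        # Session already exists; treat it as a successful login and
        # set the node to start up automatically.
        self._iscsiadm_update(iscsi_properties,
                              "node.startup",
                              "automatic")
        return
    # Any other exit code (e.g. an authentication failure) is a real
    # login problem: log it and propagate instead of swallowing it.
    LOG.error("iscsiadm login to %(iqn)s failed: %(err)s",
              {'iqn': iscsi_properties['target_iqn'], 'err': err})
    raise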

Comment 9 Sergey Gotliv 2015-01-06 12:38:32 UTC
Looks very similar to BZ#1173681. I suspect this is the same issue.

Comment 10 Sergey Gotliv 2015-01-08 14:21:40 UTC
This is a duplicate of BZ#1173681.

This code in rtstool.py doesn't update the CHAP credentials if the target already exists, but Cinder sends new credentials to Nova, so we end up with a mismatch between the credentials stored on the LIO target and the ones Nova uses to log in:

def create(backing_device, name, userid, password, initiator_iqns=None):
    try:
        rtsroot = rtslib.root.RTSRoot()
    except rtslib.utils.RTSLibError:
        print(_('Ensure that configfs is mounted at /sys/kernel/config.'))
        raise

    # Look to see if BlockStorageObject already exists
    for x in rtsroot.storage_objects:
        if x.name == name:
            # Already exists, use this one
            return
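
The abandoned gerrit change linked above (146333, "rtstool: Update credentials when target(LIO) already exists") went after exactly this early return. The shape of that kind of fix is to refresh the CHAP settings on the existing target instead of bailing out, roughly as follows; the helper name is hypothetical, not actual Cinder code:

    # Look to see if BlockStorageObject already exists
    for x in rtsroot.storage_objects:
        if x.name == name:
            # Target already exists: refresh the CHAP credentials so the
            # target side matches what Cinder just handed to Nova, instead
            # of returning with the stale ones.
            # _update_chap_credentials() is a hypothetical helper, not part
            # of the real rtstool.py.
            _update_chap_credentials(rtsroot, name, userid, password)
            return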

Comment 11 Sergey Gotliv 2015-01-08 14:23:15 UTC
*** Bug 1173681 has been marked as a duplicate of this bug. ***

Comment 15 Yogev Rabl 2015-06-28 14:33:16 UTC
Verification failed.
Cinder was set up with multiple back ends, Ceph and LVM. The following scenario was tried on the LVM back end alone:

1. Create a volume from an image
2. Launch an instance from the volume
3. Shelve the instance
4. Unshelve the instance

Unshelving the instance fails with the following error on the compute node:

 2015-06-28 17:28:55.748 7298 ERROR oslo_messaging.rpc.dispatcher [req-bb2c7012-7586-45b1-acc5-aa8fc4466303 773938c67b374ace8e1e380bffcdbcf2 81c3e0e0c9e549c686b66833969f4a50 - - -] Exception during message handling: 'NoneType' object has no attribute 'get'
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6839, in unshelve_instance
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     node=node)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     payload)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 327, in decorated_function
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 298, in decorated_function
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 377, in decorated_function
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 355, in decorated_function
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4441, in unshelve_instance
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     do_unshelve_instance()
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4440, in do_unshelve_instance
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     filter_properties, node)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4492, in _unshelve_instance
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     instance=instance)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4488, in _unshelve_instance
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     block_device_info=block_device_info)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2415, in spawn
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     block_device_info)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 625, in get_disk_info
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     instance=instance)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/blockinfo.py", line 232, in get_disk_bus_for_device_type
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher     disk_bus = image_meta.get('properties', {}).get(key)
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher AttributeError: 'NoneType' object has no attribute 'get'
2015-06-28 17:28:55.748 7298 TRACE oslo_messaging.rpc.dispatcher
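
The traceback ends in blockinfo.get_disk_bus_for_device_type dereferencing image_meta while it is None. A None-safe version of the failing line would avoid the AttributeError; this is shown only to illustrate where it breaks, not as the upstream fix:

    # None-safe variant of the line that raises in the traceback above.
    properties = image_meta.get('properties', {}) if image_meta else {}
    disk_bus = properties.get(key)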

Comment 19 Sergey Gotliv 2016-04-03 10:51:46 UTC
Fixed in https://review.openstack.org/#/c/148038/.

