Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1107733

Summary: nova boot from volume fails iscsi (iscsiadm: No session found.)
Product: Red Hat OpenStack Reporter: Giulio Fidente <gfidente>
Component: openstack-cinder    Assignee: Eric Harney <eharney>
Status: CLOSED NOTABUG QA Contact: Dafna Ron <dron>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 6.0 (Juno)    CC: aortega, ddomingo, derekh, eharney, jraju, pshaikh, scohen, sgotliv, yeylon, yrabl
Target Milestone: rc   
Target Release: 5.0 (RHEL 7)   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: openstack-cinder-2014.1-5.el7ost Doc Type: Bug Fix
Doc Text:
Previously, cinder-rtstool incorrectly required /etc/iscsi/initiatorname.iscsi to be present in order to create a LUN/ACL/portal successfully. This should not have been required since the Block Storage service will create the required ACLs dynamically at attach time anyway. With this update, cinder-rtstool no longer requires /etc/iscsi/initiatorname.iscsi to create a LUN/ACL/portal. As such, iscsi-initiator-utils no longer needs to be installed locally when using a remote Nova compute node.
Story Points: ---
Clone Of: Environment:
Last Closed: 2015-12-14 12:25:21 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
  nova boot from volume compute trace (flags: none)
  targetcli ls output (flags: none)

Description Giulio Fidente 2014-06-10 13:43:45 UTC
Created attachment 907266 [details]
nova boot from volume compute trace

Description of problem:
I'm unable to boot instances from volumes. The nova logs show that it is trying to log in to the remote iSCSI target, but nothing is listening on port 3260 on the cinder node.


Version-Release number of selected component (if applicable):
openstack-packstack-puppet-2014.1.1-0.22.dev1117.el7ost.noarch
openstack-packstack-2014.1.1-0.22.dev1117.el7ost.noarch

Comment 3 Giulio Fidente 2014-06-10 14:13:39 UTC
Created attachment 907285 [details]
targetcli ls output

I have two volumes set up, of which one is a clone

Comment 4 Eric Harney 2014-06-10 15:15:17 UTC
The problem is that cinder-rtstool attempts to create an ACL based on /etc/iscsi/initiatorname.iscsi when the volume is created.  If this fails because the file doesn't exist, the TPG is not enabled and the portal is not created.

This is not required, since ACLs are created dynamically at attach time, so simply removing this part of the code fixes things.



# diff -u /usr/bin/cinder-rtstool.orig /usr/bin/cinder-rtstool
--- /usr/bin/cinder-rtstool.orig	2014-06-10 11:06:04.032242578 -0400
+++ /usr/bin/cinder-rtstool	2014-06-10 11:09:20.080242578 -0400
@@ -67,18 +67,19 @@
                     initiator_name = m.group(1)
                     break
     except IOError:
-        raise RtstoolError(_('Could not open %s') % name_file)
+        pass
+        #raise RtstoolError(_('Could not open %s') % name_file)
 
-    if initiator_name == None:
-        raise RtstoolError(_('Could not read InitiatorName from %s') %
-                           name_file)
+    #if initiator_name == None:
+    #    raise RtstoolError(_('Could not read InitiatorName from %s') %
+    #                       name_file)
 
-    acl_new = rtslib.NodeACL(tpg_new, initiator_name, mode='create')
+    #acl_new = rtslib.NodeACL(tpg_new, initiator_name, mode='create')
 
-    acl_new.chap_userid = userid
-    acl_new.chap_password = password
+    #acl_new.chap_userid = userid
+    #acl_new.chap_password = password
 
-    rtslib.MappedLUN(acl_new, lun_new.lun, lun_new.lun)
+    #rtslib.MappedLUN(acl_new, lun_new.lun, lun_new.lun)
 
     if initiator_iqns:
         initiator_iqns = initiator_iqns.strip(' ')
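Functionally, the patch above reduces the initiator-name lookup to a best-effort read. A minimal, runnable Python sketch of the patched behavior (an assumed simplification for illustration, not the actual cinder-rtstool code):

```python
# Sketch of the patched behavior: the local InitiatorName becomes
# optional, because ACLs are created dynamically at attach time anyway.
import re

def read_initiator_name(name_file="/etc/iscsi/initiatorname.iscsi"):
    """Return the local InitiatorName, or None if the file is absent."""
    try:
        with open(name_file) as f:
            for line in f:
                m = re.match(r"InitiatorName=(.+)", line)
                if m:
                    return m.group(1).strip()
    except IOError:
        # iscsi-initiator-utils is not installed locally: no longer fatal
        pass
    return None
```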

Comment 5 Eric Harney 2014-06-11 13:39:39 UTC
This can be fixed by having the python-cinder package depend on iscsi-initiator-utils.  We need to do this anyway because Cinder's brick code leverages iscsiadm to attach volumes for things like migration.
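As a sketch, that packaging change would look something like the following hypothetical excerpt from the openstack-cinder spec file (illustrative only; the actual spec layout may differ):

```spec
# Hypothetical excerpt from openstack-cinder.spec: give the python-cinder
# subpackage an explicit runtime dependency so that iscsiadm (and
# /etc/iscsi/initiatorname.iscsi) is always present where Cinder runs.
%package -n python-cinder
Summary: OpenStack Block Storage Python libraries
Requires: iscsi-initiator-utils
```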

Comment 6 Sergey Gotliv 2014-06-11 15:56:02 UTC
Removing the link to the Gerrit patch because this is fixed in the spec file.

Comment 10 Yogev Rabl 2014-06-26 10:29:58 UTC
verified on:

python-cinder-2014.1-6.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
python-novaclient-2.17.0-2.el7ost.noarch
openstack-cinder-2014.1-6.el7ost.noarch
openstack-nova-common-2014.1-7.el7ost.noarch
python-cinderclient-1.0.9-1.el7ost.noarch
openstack-nova-compute-2014.1-7.el7ost.noarch
openstack-nova-conductor-2014.1-7.el7ost.noarch
openstack-nova-scheduler-2014.1-7.el7ost.noarch
openstack-nova-api-2014.1-7.el7ost.noarch
openstack-nova-cert-2014.1-7.el7ost.noarch
openstack-nova-novncproxy-2014.1-7.el7ost.noarch
python-nova-2014.1-7.el7ost.noarch
openstack-nova-console-2014.1-7.el7ost.noarch

Comment 13 errata-xmlrpc 2014-07-08 15:31:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0852.html

Comment 14 pshaikh@cisco.com 2015-09-30 21:13:35 UTC
Hi,

Facing the same issue on OSP 6. We have a two-node OpenStack setup: one node acts as both controller and compute, while the other acts as compute only. Both compute nodes run cinder-volume and host the volumes that are attached to the VMs running on that compute node.

On abrupt power reset of a compute node, instances move into the error state, with logs showing the error "iscsiadm: No session found."

Is the fix for this available in the Juno release of OpenStack?

On the compute node, we found that target.service was not enabled, but enabling it did not fix the issue.

Comment 15 pshaikh@cisco.com 2015-09-30 21:40:03 UTC
The workaround of creating /etc/iscsi/initiatorname.iscsi did not work. The file contained the correct initiator name, matching the ACL present on the target.

2015-09-29 16:56:50.778 2562 ERROR oslo.messaging.rpc.dispatcher [req-aeef313a-9fe1-4b74-821a-c7fa0400868f ] Exception during message handling: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9aeca3cb-c20f-4006-92bb-b2626d539990 -p XXX.XXX.XXX.XXX:3260 --rescan
Exit code: 21
Stdout: u''
Stderr: u'iscsiadm: No session found.\n'
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 309, in decorated_function
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 286, in decorated_function
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 359, in decorated_function
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 337, in decorated_function
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 325, in decorated_function
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2647, in start_instance
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     self._power_on(context, instance)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2635, in _power_on
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     block_device_info)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2581, in power_on
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     self._hard_reboot(context, instance, network_info, block_device_info)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2455, in _hard_reboot
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     write_to_disk=True)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4428, in _get_guest_xml
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     context)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4282, in _get_guest_config
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     flavor)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3606, in _get_guest_storage_config
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     cfg = self._connect_volume(connection_info, info)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1372, in _connect_volume
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return driver.connect_volume(connection_info, disk_info)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 272, in inner
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py", line 348, in connect_volume
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     self._run_iscsiadm(iscsi_properties, ("--rescan",))
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py", line 237, in _run_iscsiadm
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     check_exit_code=check_exit_code)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 194, in execute
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     return processutils.execute(*cmd, **kwargs)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 203, in execute
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher     cmd=sanitized_cmd)
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher ProcessExecutionError: Unexpected error while running command.
2015-09-29 16:56:50.778 2562 TRACE oslo.messaging.rpc.dispatcher Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9aeca3cb-c20f-4006-92bb-b2626d539990 -p XXX.XXX.XXX.XXX:3260 --rescan
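For context, the ProcessExecutionError above comes from a generic command wrapper that treats any unexpected exit code (here 21, iscsiadm's "No session found.") as fatal. An illustrative sketch of that pattern (not Nova's actual processutils code):

```python
# Sketch of a check_exit_code-style command wrapper like the one in the
# trace above: any exit code outside the allowed set raises an exception
# carrying the command, exit code, and stderr.
import subprocess

class ProcessExecutionError(Exception):
    def __init__(self, exit_code, stderr, cmd):
        self.exit_code = exit_code
        self.stderr = stderr
        self.cmd = cmd
        super().__init__("Unexpected error while running command.\n"
                         "Command: %s\nExit code: %s\nStderr: %r"
                         % (cmd, exit_code, stderr))

def run_command(cmd, check_exit_code=(0,)):
    """Run cmd and raise unless its exit code is in check_exit_code."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode not in check_exit_code:
        raise ProcessExecutionError(proc.returncode, proc.stderr,
                                    " ".join(cmd))
    return proc.stdout
```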

Comment 16 Jaison Raju 2015-12-08 14:58:19 UTC
Hi ,

I faced a similar issue on RHOS 6, although I was able to fix it via targetcli.
Did you set the auth details in the ACL?
For example:
# targetcli
/> ls
/> cd /iscsi/iqn.2010-10.org.openstack:volume-3f344318-c12e-4d92-a87c-02c025e6915a/tpg1/acls/iqn.1994-05.com.redhat:95c3136ac590/
/iscsi/iqn.20...:95c3136ac590> get auth

If no userid and password are set there, set them.
Check the node.session.auth.* variables in the /var/lib/iscsi/nodes/*/* files on the compute node.
Set the same credentials on the target with targetcli.
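The steps above can be sketched as an illustrative targetcli session (the IQN paths are the examples from this bug; the CHAP values are placeholders to be filled in from the compute node's node.session.auth.* settings):

```
# targetcli
/> cd /iscsi/iqn.2010-10.org.openstack:volume-3f344318-c12e-4d92-a87c-02c025e6915a/tpg1/acls/iqn.1994-05.com.redhat:95c3136ac590/
/iscsi/iqn.20...:95c3136ac590> set auth userid=<chap_userid> password=<chap_password>
/iscsi/iqn.20...:95c3136ac590> get auth
```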
https://wiki.archlinux.org/index.php/ISCSI_Target

Does that work ?

Regards,
Jaison R

Comment 17 Jaison Raju 2015-12-09 04:26:58 UTC
Hello Shaikh / Team, 

I raised a new bug for osp6 : https://bugzilla.redhat.com/show_bug.cgi?id=1289526
If this bug was re-opened for OSP 6, then I suggest we close it and
follow up on the new bug.

Regards,
Jaison R

Comment 18 Jaison Raju 2015-12-09 10:30:24 UTC
(In reply to Jaison Raju from comment #17)
> Hello Shaikh / Team, 
> 
> I raised a new bug for osp6 :
> https://bugzilla.redhat.com/show_bug.cgi?id=1289526
> I guess if this bug was re-opened for osp6 , then we can close it &
> follow up on the above bug .
> 
> Regards,
> Jaison R

Please disregard my previous messages. The bug above was created as a clone of a different bug.

Comment 19 Sergey Gotliv 2015-12-14 12:25:21 UTC
(In reply to pshaikh from comment #14)
> Hi,
> 
> Facing the  same issue on OSP6. Having 2 node OpenStack setup, with one node
> acting as both controller & compute while other node acting as compute, both
> compute nodes run "cinder-volume" and host the volumes which are to be
> attached to VMs running on that compute.
> 
> On abrupt power reset of a compute node, instances move in "error state"
> with log indicating error "iscsiadm (no sessions found)" error
> 
> Is the fix for this available in Juno release of OpenStack?
> 
> On compute node, we found that "target.service" was not enabled but enabling
> it did not fix the issue.

Jaison/pshaikh,

I am closing this bug because the issue was reported for, and closed in, RHEL OSP 5.0. If you experience the same problem in RHEL OSP 6.0, please open a new case for RHEL OSP 6.0 and attach the relevant logs from your environment. Please don't reopen this one!

Comment 20 Red Hat Bugzilla 2023-09-14 02:09:45 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.