Description of problem:
The volumes below are attached to the instance. The instance is live migrated from compute 0 to compute 1, after which the volumes are no longer usable. There are no ACLs for the compute node's initiator in controller 2's targetcli.

+--------------------------------------+--------+--------------+------+-----------------------+----------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type           | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-----------------------+----------+--------------------------------------+
| 0f76bd78-bd09-479f-a546-98a00ab148fe | in-use | test2-large  | 150  | CSvD_Redhat_Encrypted | false    | 25bf76d3-1a26-4c14-90ab-d545a420f71f |
| 6dd174e2-41b9-439b-85fe-c2b67ab91a5d | in-use | test1        | 10   | CSvD_Redhat_Encrypted | false    | 25bf76d3-1a26-4c14-90ab-d545a420f71f |
+--------------------------------------+--------+--------------+------+-----------------------+----------+--------------------------------------+

Version-Release number of selected component (if applicable):
openstack-cinder-2015.1.2-6.el7ost.noarch (built Thu Jan 14 14:56:19 2016, Red Hat, Inc., x86-021.build.eng.bos.redhat.com)
openstack-cinder-doc-2015.1.2-6.el7ost.noarch

How reproducible:


Steps to Reproduce:
1.
2.
3.
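The missing-ACL observation can be double-checked directly on the controller. A minimal sketch follows; `acl_present` is a hypothetical helper (not part of cinder) that reads a `targetcli ls` listing on stdin, so the matching logic itself needs no live target. IQNs are the ones from this report.

```shell
# Hypothetical helper: report whether an initiator IQN appears in a
# `targetcli ls .../acls` listing fed on stdin (fixed-string match).
acl_present() {
  if grep -qF -- "$1"; then echo "ACL present"; else echo "ACL missing"; fi
}

# On controller 2, the live check would be:
#   targetcli ls /iscsi/iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe/tpg1/acls \
#     | acl_present iqn.1994-05.com.redhat:d0c58fd0f44

# Offline demonstration against a listing with no ACLs:
printf 'o- acls ............ [ACLs: 0]\n' \
  | acl_present iqn.1994-05.com.redhat:d0c58fd0f44   # -> ACL missing
```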
Actual results:


Expected results:


Additional info:

### Logout and delete of the volume from compute 0 succeed:

2016-01-22 12:06:43.478 3268 DEBUG oslo_concurrency.processutils [req-1d95a5e4-baba-4f86-a29e-47646fe26546 cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe -p 10.192.65.46:3260 --logout" returned: 0 in 0.079s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:254
2016-01-22 12:06:43.479 3268 DEBUG nova.virt.libvirt.volume [req-1d95a5e4-baba-4f86-a29e-47646fe26546 cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] iscsiadm ('--logout',): stdout=Logging out of session [sid: 22, target: iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe, portal: 10.192.65.46,3260]
Logout of [sid: 22, target: iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe, portal: 10.192.65.46,3260] successful.
stderr= _run_iscsiadm /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py:365
2016-01-22 12:06:43.479 3268 DEBUG oslo_concurrency.processutils [req-1d95a5e4-baba-4f86-a29e-47646fe26546 cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe -p 10.192.65.46:3260 --op delete execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:223
2016-01-22 12:06:43.557 3268 DEBUG oslo_concurrency.processutils [req-1d95a5e4-baba-4f86-a29e-47646fe26546 cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe -p 10.192.65.46:3260 --op delete" returned: 0 in 0.078s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:254

#### The initiator is removed on controller 2:

2016-01-22 12:06:45.299 16951 DEBUG oslo_concurrency.processutils [req-69f06718-f399-4d59-8ed2-fb4d3457798c cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe iqn.1994-05.com.redhat:d0c58fd0f44 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:223
2016-01-22 12:06:45.476 16951 DEBUG oslo_concurrency.processutils [req-69f06718-f399-4d59-8ed2-fb4d3457798c cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete-initiator iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe iqn.1994-05.com.redhat:d0c58fd0f44" returned: 0 in 0.177s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:254

#### The initiator is added for compute 1 (controller 2 cinder/api.log):

2016-01-22 12:06:34.514 16727 INFO cinder.api.openstack.wsgi [req-c9fb567a-1cf8-4de7-893e-fa0839c5be5b cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] POST http://10.192.65.41:8776/v2/1491ffd8ee684d3188b8b976cacd3108/volumes/0f76bd78-bd09-479f-a546-98a00ab148fe/action
2016-01-22 12:06:34.515 16727 DEBUG cinder.api.openstack.wsgi [req-c9fb567a-1cf8-4de7-893e-fa0839c5be5b cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] Action body: {"os-initialize_connection": {"connector": {"ip": "192.0.2.7", "host": "overcloud-compute-1.localdomain", "initiator": "iqn.1994-05.com.redhat:d0c58fd0f44", "os_type": "linux2", "platform": "x86_64"}}} get_method /usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:1096
2016-01-22 12:06:34.572 16727 DEBUG cinder.volume.api [req-c9fb567a-1cf8-4de7-893e-fa0839c5be5b cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] initialize connection for volume-id: 0f76bd78-bd09-479f-a546-98a00ab148fe, and connector: {u'ip': u'192.0.2.7', u'host': u'overcloud-compute-1.localdomain', u'initiator': u'iqn.1994-05.com.redhat:d0c58fd0f44', u'os_type': u'linux2', u'platform': u'x86_64'}. initialize_connection /usr/lib/python2.7/site-packages/cinder/volume/api.py:567

## The volume appears to be connected to compute 1 after migration:

2016-01-22 12:06:35.924 3840 DEBUG keystoneclient.session [req-1d95a5e4-baba-4f86-a29e-47646fe26546 cc54807e8ae546c1a83f62f592f3c6dc 1491ffd8ee684d3188b8b976cacd3108 - - -] RESP: [200] content-length: 443 x-compute-request-id: req-c9fb567a-1cf8-4de7-893e-fa0839c5be5b connection: keep-alive date: Fri, 22 Jan 2016 17:06:35 GMT content-type: application/json x-openstack-request-id: req-c9fb567a-1cf8-4de7-893e-fa0839c5be5b
RESP BODY: {"connection_info": {"driver_volume_type": "iscsi", "data": {"auth_password": "T4JJJpCpc3h7iFGn", "target_discovered": false, "encrypted": true, "qos_specs": {}, "target_iqn": "iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe", "target_portal": "10.192.65.46:3260", "volume_id": "0f76bd78-bd09-479f-a546-98a00ab148fe", "target_lun": 0, "access_mode": "rw", "auth_username": "Y6ZdfQ4GG5fQKXxrPLbM", "auth_method": "CHAP"}}} _http_log_response /usr/lib/python2.7/site-packages/keystoneclient/session.py:224
2016-01-22 12:07:02.061 3840 DEBUG oslo_concurrency.processutils [req-ed5372cb-9cda-428c-a056-e2a55214cff4 - - - - -] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.192.65.46:3260-iscsi-iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe-lun-0" returned: 0 in 0.079s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:254

### Controller 2 /var/log/messages -- logins from the compute initiator are now rejected:

Jan 22 12:06:47 overcloud-controller-2 kernel: iSCSI Initiator Node: iqn.1994-05.com.redhat:d0c58fd0f44 is not authorized to access iSCSI target portal group: 1.
Jan 22 12:06:47 overcloud-controller-2 kernel: iSCSI Login negotiation failed.
Jan 22 12:06:48 overcloud-controller-2 kernel: iSCSI Initiator Node: iqn.1994-05.com.redhat:d0c58fd0f44 is not authorized to access iSCSI target portal group: 1.
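The kernel messages above show the compute initiator being refused, which is consistent with the `delete-initiator` call (12:06:45) landing after the ACL for the same initiator IQN was created for compute 1 (12:06:34) and removing it. As a workaround sketch, the ACL could be recreated by hand on the controller, assuming an LIO/targetcli backend. `restore_acl` is a hypothetical helper, not a cinder tool; the IQNs and CHAP credentials are the ones visible in this report's connection_info.

```shell
# Hypothetical helper: print the targetcli commands that would recreate the
# missing ACL and its CHAP auth; review the output, then pipe it to sh on
# controller 2 to apply.
restore_acl() {
  tgt="$1"; init="$2"; user="$3"; pass="$4"
  echo "targetcli /iscsi/${tgt}/tpg1/acls create ${init}"
  echo "targetcli /iscsi/${tgt}/tpg1/acls/${init} set auth userid=${user} password=${pass}"
}

# Target IQN, initiator IQN, and CHAP credentials as reported above:
restore_acl iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe \
            iqn.1994-05.com.redhat:d0c58fd0f44 \
            Y6ZdfQ4GG5fQKXxrPLbM T4JJJpCpc3h7iFGn
```

`cinder-rtstool` (used above for `delete-initiator`) also has an `add-initiator` subcommand that should achieve the same through rtslib; the targetcli form is shown only because it is easy to verify afterwards with `targetcli ls`.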
### First error seen for the volume:

2016-01-22 12:08:45.552 3840 ERROR nova.openstack.common.periodic_task [req-ed5372cb-9cda-428c-a056-e2a55214cff4 - - - - -] Error during ComputeManager.update_available_resource: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.192.65.46:3260-iscsi-iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe-lun-0
Exit code: 1
Stdout: u''
Stderr: u'blockdev: cannot open /dev/disk/by-path/ip-10.192.65.46:3260-iscsi-iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe-lun-0: No such device or address\n'
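The blockdev failure above is nova probing the by-path device node, which disappears once the target refuses the session. A small sketch of the compute-side check; `by_path` is a hypothetical helper that rebuilds the device name nova uses, and the portal, target IQN, and LUN are the ones from this report.

```shell
# Hypothetical helper: rebuild the /dev/disk/by-path name that nova's
# `blockdev --getsize64` call probes for an iSCSI attachment.
by_path() { printf '/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s\n' "$1" "$2" "$3"; }

portal=10.192.65.46:3260
tgt=iqn.2010-10.org.openstack:volume-0f76bd78-bd09-479f-a546-98a00ab148fe

by_path "$portal" "$tgt" 0

# On compute 1, once the ACL is back on the controller:
#   iscsiadm -m node -T "$tgt" -p "$portal" --login
#   blockdev --getsize64 "$(by_path "$portal" "$tgt" 0)"
```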
*** This bug has been marked as a duplicate of bug 1288423 ***