Bug 1268182
| Summary: | cinder spontaneously sets instance root device to 'available' | | |
| --- | --- | --- | --- |
| Product: | [Community] RDO | Reporter: | Jack Waterworth <jwaterwo> |
| Component: | openstack-cinder | Assignee: | Eric Harney <eharney> |
| Status: | CLOSED EOL | QA Contact: | nlevinki <nlevinki> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | Kilo | CC: | amoralej, eharney, srevivo |
| Target Milestone: | --- | | |
| Target Release: | Kilo | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-05-18 17:24:03 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Jack Waterworth
2015-10-02 03:04:25 UTC
This appears to be a problem with either targetcli or iscsiadm:
--------------------
Oct 1 22:55:58 bulldozer systemd: Starting Virtual Machine qemu-instance-000000e6.
Oct 1 22:55:58 bulldozer systemd-machined: New machine qemu-instance-000000e6.
Oct 1 22:55:58 bulldozer systemd: Started Virtual Machine qemu-instance-000000e6.
Oct 1 22:55:58 bulldozer kvm: 5 guests now active
Oct 1 22:56:01 bulldozer ovs-vsctl: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl --timeout=10 --oneline --format=json -- set Port qvo6a3af369-46 tag=1
Oct 1 22:56:02 bulldozer cinder-volume: 2015-10-01 22:56:02.043 3035 INFO cinder.volume.manager [req-27c0b1eb-acf4-4391-86c6-4b0d817b541f - - - - -] Updating volume status
Oct 1 22:56:19 bulldozer cinder-api: 2015-10-01 22:56:19.934 3014 INFO cinder.api.openstack.wsgi [req-da8d2dc6-d6eb-4f7a-b379-9453d87a1205 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] POST http://192.168.1.200:8776/v2/ce8f683ab7fa4cbba42a3e9bb084e6fe/volumes/0b08d268-4e90-41e3-aca3-20284f273028/action
Oct 1 22:56:20 bulldozer kernel: connection6:0: detected conn error (1020)
Oct 1 22:56:20 bulldozer cinder-api: 2015-10-01 22:56:20.751 3014 INFO cinder.api.openstack.wsgi [req-da8d2dc6-d6eb-4f7a-b379-9453d87a1205 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] http://192.168.1.200:8776/v2/ce8f683ab7fa4cbba42a3e9bb084e6fe/volumes/0b08d268-4e90-41e3-aca3-20284f273028/action returned with HTTP 202
Oct 1 22:56:20 bulldozer cinder-api: 2015-10-01 22:56:20.754 3014 INFO eventlet.wsgi.server [req-da8d2dc6-d6eb-4f7a-b379-9453d87a1205 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] 192.168.1.200 - - [01/Oct/2015 22:56:20] "POST /v2/ce8f683ab7fa4cbba42a3e9bb084e6fe/volumes/0b08d268-4e90-41e3-aca3-20284f273028/action HTTP/1.1" 202 211 0.825671
Oct 1 22:56:20 bulldozer cinder-api: 2015-10-01 22:56:20.762 3014 INFO cinder.api.openstack.wsgi [req-94d86858-2e6e-4233-98d3-dd28b5250519 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] POST http://192.168.1.200:8776/v2/ce8f683ab7fa4cbba42a3e9bb084e6fe/volumes/0b08d268-4e90-41e3-aca3-20284f273028/action
Oct 1 22:56:20 bulldozer iscsid: Kernel reported iSCSI connection 6:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Oct 1 22:56:21 bulldozer cinder-volume: 2015-10-01 22:56:21.401 3035 INFO cinder.volume.targets.lio [req-94d86858-2e6e-4233-98d3-dd28b5250519 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Removing iscsi_target: 0b08d268-4e90-41e3-aca3-20284f273028
Oct 1 22:56:21 bulldozer cinder-api: 2015-10-01 22:56:21.985 3014 INFO cinder.api.openstack.wsgi [req-94d86858-2e6e-4233-98d3-dd28b5250519 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] http://192.168.1.200:8776/v2/ce8f683ab7fa4cbba42a3e9bb084e6fe/volumes/0b08d268-4e90-41e3-aca3-20284f273028/action returned with HTTP 202
Oct 1 22:56:21 bulldozer cinder-api: 2015-10-01 22:56:21.987 3014 INFO eventlet.wsgi.server [req-94d86858-2e6e-4233-98d3-dd28b5250519 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] 192.168.1.200 - - [01/Oct/2015 22:56:21] "POST /v2/ce8f683ab7fa4cbba42a3e9bb084e6fe/volumes/0b08d268-4e90-41e3-aca3-20284f273028/action HTTP/1.1" 202 211 1.228033
Oct 1 22:56:22 bulldozer kernel: Unable to locate Target IQN: iqn.2010-10.org.openstack:volume-0b08d268-4e90-41e3-aca3-20284f273028 in Storage Node
Oct 1 22:56:22 bulldozer kernel: iSCSI Login negotiation failed.
Oct 1 22:56:22 bulldozer kernel: connection6:0: detected conn error (1020)
Oct 1 22:56:22 bulldozer iscsid: conn 0 login rejected: target error (03/01)
Oct 1 22:56:23 bulldozer iscsid: Kernel reported iSCSI connection 6:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (1)
Oct 1 22:56:25 bulldozer kernel: Unable to locate Target IQN: iqn.2010-10.org.openstack:volume-0b08d268-4e90-41e3-aca3-20284f273028 in Storage Node
Oct 1 22:56:25 bulldozer kernel: iSCSI Login negotiation failed.
Oct 1 22:56:25 bulldozer kernel: connection6:0: detected conn error (1020)
Oct 1 22:56:25 bulldozer iscsid: conn 0 login rejected: target error (03/01)
Oct 1 22:56:25 bulldozer iscsid: Kernel reported iSCSI connection 6:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (1)
Oct 1 22:56:28 bulldozer kernel: Unable to locate Target IQN: iqn.2010-10.org.openstack:volume-0b08d268-4e90-41e3-aca3-20284f273028 in Storage Node
Oct 1 22:56:28 bulldozer kernel: iSCSI Login negotiation failed.
Oct 1 22:56:28 bulldozer iscsid: conn 0 login rejected: target error (03/01)
--------------------
This is an all-in-one install; I'm not sure how we are getting a connection error.
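The repeated "Unable to locate Target IQN ... iSCSI Login negotiation failed" loop means the initiator keeps retrying logins to a target that no longer exists on the LIO side. One quick way to confirm this is to check whether the volume's IQN is still recorded in the LIO configuration snapshot. A minimal sketch, assuming the usual rtslib `saveconfig.json` layout with a top-level `targets` list whose entries carry `fabric` and `wwn` keys (the `sample` dict below is a trimmed, hypothetical stand-in for the real file):

```python
import json

def iscsi_target_iqns(saveconfig):
    """Return the set of iSCSI target IQNs recorded in a LIO saveconfig dict."""
    return {t["wwn"] for t in saveconfig.get("targets", [])
            if t.get("fabric") == "iscsi"}

# Hypothetical, trimmed-down saveconfig.json content for illustration only;
# on a real host you would json.load("/etc/target/saveconfig.json").
sample = json.loads("""
{"targets": [
  {"fabric": "iscsi",
   "wwn": "iqn.2010-10.org.openstack:volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f"}
]}
""")

problem_iqn = ("iqn.2010-10.org.openstack:"
               "volume-0b08d268-4e90-41e3-aca3-20284f273028")
print(problem_iqn in iscsi_target_iqns(sample))  # → False: no target, so logins fail
```

If the IQN is absent there but the initiator still has a session record, the login retries above are exactly what iscsid produces.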
2015-10-01 22:50:26.174 3035 INFO cinder.volume.targets.lio [req-3dd00cb0-2cfc-4a1e-b929-fee05bc0ee5e d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Creating iscsi_target for volume: volume-0b08d268-4e90-41e3-aca3-20284f273028
2015-10-01 22:50:33.651 3035 INFO cinder.volume.targets.lio [req-37421e06-6d11-4b0e-a248-23a55f7d17eb d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Removing iscsi_target: 0b08d268-4e90-41e3-aca3-20284f273028
2015-10-01 22:51:02.046 3035 INFO cinder.volume.manager [req-ee64df5b-45a8-4ae9-884f-3560819bffca - - - - -] Updating volume status
2015-10-01 22:52:02.051 3035 INFO cinder.volume.manager [req-1b656a5e-aabd-4643-a02c-2cf3a6805221 - - - - -] Updating volume status
2015-10-01 22:52:35.883 3035 ERROR cinder.volume.targets.lio [req-51ed89ec-991c-4570-8122-a2e7ae287c4a d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Failed to delete initiator iqn iqn.1994-05.com.redhat:d32a1588856 to target.
2015-10-01 22:52:35.887 3035 ERROR cinder.volume.manager [req-51ed89ec-991c-4570-8122-a2e7ae287c4a d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Unable to terminate volume connection: Failed to detach iSCSI target for volume 0b08d268-4e90-41e3-aca3-20284f273028.
2015-10-01 22:52:35.889 3035 ERROR oslo_messaging.rpc.dispatcher [req-51ed89ec-991c-4570-8122-a2e7ae287c4a d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Exception during message handling: Bad or unexpected response from the storage volume backend API: Unable to terminate volume connection: Failed to detach iSCSI target for volume 0b08d268-4e90-41e3-aca3-20284f273028.
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher     return f(*args, **kwargs)
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1174, in terminate_connection
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher     raise exception.VolumeBackendAPIException(data=err_msg)
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Unable to terminate volume connection: Failed to detach iSCSI target for volume 0b08d268-4e90-41e3-aca3-20284f273028.
2015-10-01 22:52:35.889 3035 TRACE oslo_messaging.rpc.dispatcher
2015-10-01 22:52:35.897 3035 ERROR oslo_messaging._drivers.common [req-51ed89ec-991c-4570-8122-a2e7ae287c4a d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Returning exception Bad or unexpected response from the storage volume backend API: Unable to terminate volume connection: Failed to detach iSCSI target for volume 0b08d268-4e90-41e3-aca3-20284f273028. to caller
2015-10-01 22:52:35.898 3035 ERROR oslo_messaging._drivers.common [req-51ed89ec-991c-4570-8122-a2e7ae287c4a d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply\n executor_callback))\n', ' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch\n executor_callback)\n', ' File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch\n result = func(ctxt, **new_args)\n', ' File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in wrapper\n return f(*args, **kwargs)\n', ' File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1174, in terminate_connection\n raise exception.VolumeBackendAPIException(data=err_msg)\n', 'VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Unable to terminate volume connection: Failed to detach iSCSI target for volume 0b08d268-4e90-41e3-aca3-20284f273028.\n']
2015-10-01 22:53:02.043 3035 INFO cinder.volume.manager [req-3880de1b-27ee-43f7-9d88-12c410657a11 - - - - -] Updating volume status
2015-10-01 22:54:02.043 3035 INFO cinder.volume.manager [req-c78e2144-fa34-4b93-a7cb-72fccee6e0c7 - - - - -] Updating volume status
2015-10-01 22:55:02.043 3035 INFO cinder.volume.manager [req-ae5e4c4a-2d53-4f2a-81e9-1fe5e91fc380 - - - - -] Updating volume status
2015-10-01 22:55:52.847 3035 INFO cinder.volume.targets.lio [req-a5940350-9c33-400d-99c1-1d126b29bd40 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Creating iscsi_target for volume: volume-0b08d268-4e90-41e3-aca3-20284f273028
2015-10-01 22:56:02.043 3035 INFO cinder.volume.manager [req-27c0b1eb-acf4-4391-86c6-4b0d817b541f - - - - -] Updating volume status
2015-10-01 22:56:21.401 3035 INFO cinder.volume.targets.lio [req-94d86858-2e6e-4233-98d3-dd28b5250519 d76c0705baca4e2eb3a24541f9f1fa39 ce8f683ab7fa4cbba42a3e9bb084e6fe - - -] Removing iscsi_target: 0b08d268-4e90-41e3-aca3-20284f273028

[root@bulldozer cinder(keystone_admin)]# targetcli ls
o- / .......................................................................... [...]
o- backstores ................................................................. [...]
| o- block .................................................. [Storage Objects: 4]
| | o- iqn.2010-10.org.openstack:volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f [/dev/cinder-volumes/volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f (3.0GiB) write-thru activated]
| | o- iqn.2010-10.org.openstack:volume-5a31c6e7-a5e1-423d-b3a4-06ff7c0bd248 [/dev/cinder-volumes/volume-5a31c6e7-a5e1-423d-b3a4-06ff7c0bd248 (100.0GiB) write-thru activated]
| | o- iqn.2010-10.org.openstack:volume-7d644152-08d1-4e82-b7f5-e597f860ca4b [/dev/cinder-volumes/volume-7d644152-08d1-4e82-b7f5-e597f860ca4b (10.0GiB) write-thru activated]
| | o- iqn.2010-10.org.openstack:volume-d9c8cf52-370c-4ee2-afbc-6bc742a4425b [/dev/cinder-volumes/volume-d9c8cf52-370c-4ee2-afbc-6bc742a4425b (3.0GiB) write-thru activated]
| o- fileio ................................................. [Storage Objects: 0]
| o- pscsi .................................................. [Storage Objects: 0]
| o- ramdisk ................................................ [Storage Objects: 0]
o- iscsi ............................................................ [Targets: 4]
| o- iqn.2010-10.org.openstack:volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f ............. [TPGs: 1]
| | o- tpg1 .......................................... [no-gen-acls, auth per-acl]
| | o- acls .......................................................... [ACLs: 1]
| | | o- iqn.1994-05.com.redhat:d32a1588856 ....... [1-way auth, Mapped LUNs: 1]
| | | o- mapped_lun0 ... [lun0 block/iqn.2010-10.org.openstack:volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f (rw)]
| | o- luns .......................................................... [LUNs: 1]
| | | o- lun0 [block/iqn.2010-10.org.openstack:volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f (/dev/cinder-volumes/volume-4b218153-d3e3-4674-b9e8-5fd3c0840a5f)]
| | o- portals .................................................... [Portals: 1]
| | o- 192.168.1.200:3260 ............................................... [OK]
| o- iqn.2010-10.org.openstack:volume-5a31c6e7-a5e1-423d-b3a4-06ff7c0bd248 ............. [TPGs: 1]
| | o- tpg1 .......................................... [no-gen-acls, auth per-acl]
| | o- acls .......................................................... [ACLs: 1]
| | | o- iqn.1994-05.com.redhat:d32a1588856 ....... [1-way auth, Mapped LUNs: 1]
| | | o- mapped_lun0 ... [lun0 block/iqn.2010-10.org.openstack:volume-5a31c6e7-a5e1-423d-b3a4-06ff7c0bd248 (rw)]
| | o- luns .......................................................... [LUNs: 1]
| | | o- lun0 [block/iqn.2010-10.org.openstack:volume-5a31c6e7-a5e1-423d-b3a4-06ff7c0bd248 (/dev/cinder-volumes/volume-5a31c6e7-a5e1-423d-b3a4-06ff7c0bd248)]
| | o- portals .................................................... [Portals: 1]
| | o- 192.168.1.200:3260 ............................................... [OK]
| o- iqn.2010-10.org.openstack:volume-7d644152-08d1-4e82-b7f5-e597f860ca4b ............. [TPGs: 1]
| | o- tpg1 .......................................... [no-gen-acls, auth per-acl]
| | o- acls .......................................................... [ACLs: 1]
| | | o- iqn.1994-05.com.redhat:d32a1588856 ....... [1-way auth, Mapped LUNs: 1]
| | | o- mapped_lun0 ... [lun0 block/iqn.2010-10.org.openstack:volume-7d644152-08d1-4e82-b7f5-e597f860ca4b (rw)]
| | o- luns .......................................................... [LUNs: 1]
| | | o- lun0 [block/iqn.2010-10.org.openstack:volume-7d644152-08d1-4e82-b7f5-e597f860ca4b (/dev/cinder-volumes/volume-7d644152-08d1-4e82-b7f5-e597f860ca4b)]
| | o- portals .................................................... [Portals: 1]
| | o- 192.168.1.200:3260 ............................................... [OK]
| o- iqn.2010-10.org.openstack:volume-d9c8cf52-370c-4ee2-afbc-6bc742a4425b ............. [TPGs: 1]
| o- tpg1 ............................................ [no-gen-acls, auth per-acl]
| o- acls ............................................................ [ACLs: 1]
| | o- iqn.1994-05.com.redhat:d32a1588856 ......... [1-way auth, Mapped LUNs: 1]
| | o- mapped_lun0 ... [lun0 block/iqn.2010-10.org.openstack:volume-d9c8cf52-370c-4ee2-afbc-6bc742a4425b (rw)]
| o- luns ............................................................ [LUNs: 1]
| | o- lun0 [block/iqn.2010-10.org.openstack:volume-d9c8cf52-370c-4ee2-afbc-6bc742a4425b (/dev/cinder-volumes/volume-d9c8cf52-370c-4ee2-afbc-6bc742a4425b)]
| o- portals ...................................................... [Portals: 1]
| o- 192.168.1.200:3260 ................................................. [OK]
o- loopback ......................................................... [Targets: 0]

This appears to be affecting only one volume. I created a new volume and was able to attach it to an instance without any issues. Additionally, I did not mention that this problematic volume was created in Horizon with the "boot from image (create new volume)" option. While waiting for the 'block device mapping' to complete, something failed: the instance went into an error state and the volume went into an available state. Attaching the volume to a different instance results in the behavior above. I'm trying to reproduce this in my environment, but I can't seem to get the volume to error out anymore.

This bug is against a version which has reached End of Life.
Please reopen if it is still relevant in a more recent version: http://releases.openstack.org/
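For reference, the traceback in the description boils down to a simple wrap-and-reraise pattern: a driver-level detach failure is converted by the volume manager into a VolumeBackendAPIException, which oslo.messaging then returns to the API caller. A minimal sketch of that path; the exception and function names mirror the log messages, but the bodies are illustrative stand-ins, not the actual Kilo code:

```python
class ISCSITargetDetachFailed(Exception):
    """Stand-in for the LIO helper's failure to remove an ACL/target."""

class VolumeBackendAPIException(Exception):
    """Stand-in for cinder.exception.VolumeBackendAPIException."""

def _detach_target(volume_id, target_present):
    # The LIO helper fails like this when the target or initiator ACL
    # it is asked to remove is already gone.
    if not target_present:
        raise ISCSITargetDetachFailed(
            "Failed to detach iSCSI target for volume %s." % volume_id)

def terminate_connection(volume_id, target_present):
    # Mirrors the frame at cinder/volume/manager.py line 1174 in the trace:
    # the driver error is wrapped and re-raised, and oslo.messaging then
    # serializes it back to the caller as the "Bad or unexpected response"
    # message seen in the cinder-api log.
    try:
        _detach_target(volume_id, target_present)
    except ISCSITargetDetachFailed as exc:
        raise VolumeBackendAPIException(
            "Bad or unexpected response from the storage volume backend "
            "API: Unable to terminate volume connection: %s" % exc)
```

This matches the observed sequence: the detach fails once the target is gone, the detach RPC errors out, yet the volume's state still ends up 'available', so the next attach attempt talks to a target that was never rebuilt.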