Bug 1042801
| Field | Value |
|---|---|
| Summary | [RHS-RHOS] Cinder volume migration fails to migrate from one glusterfs backend to another |
| Product | Red Hat OpenStack |
| Component | openstack-cinder |
| Version | 4.0 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | medium |
| Reporter | shilpa <smanjara> |
| Assignee | Eric Harney <eharney> |
| QA Contact | nlevinki <nlevinki> |
| CC | deepakcs, eharney, grajaiya, sclewis, scohen, sgotliv, vbellur, yeylon |
| Target Milestone | z2 |
| Target Release | 5.0 (RHEL 7) |
| Keywords | TestOnly, ZStream |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | openstack-cinder-2014.1-4.el7ost |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2014-11-03 08:37:06 UTC |
Comment 2
shilpa
2013-12-13 12:32:31 UTC
Setting "Corbett" for triage. Per the 12/13 triage, moving this bug to the RHOS product for analysis, since no gluster operation is performed in this test case. At a minimum, we need a fix similar to the one (linked) that the NFS driver just got upstream. Is this scheduled to be fixed in 5.0?

This needs at least two fixes:
1) A fix for LP 1238085 in Brick for GlusterFS (I have this staged and mostly working).
2) Changes in the GlusterFS driver around its use of name_id, as it does not yet handle volume UUIDs in a way that is compatible with volume migration.

Created attachment 872580 [details]
cinder migrate error log
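The name_id point above comes down to how the driver derives the backing file name. After a migration, a volume keeps its user-visible id, but its data lives in a file named after the temporary target volume's UUID, recorded in name_id. A minimal sketch of migration-aware name resolution (a hypothetical helper, not the actual GlusterFS driver code):

```python
# Hypothetical sketch of migration-aware volume file naming; not the
# actual Cinder GlusterFS driver implementation.

def volume_file_name(volume):
    """Return the backing file name for a Cinder volume dict.

    After a migration the volume keeps its original 'id', but its data
    lives in a file named after '_name_id' (the temporary target
    volume's UUID), so '_name_id' must take precedence when set.
    """
    name_id = volume.get('_name_id')
    return 'volume-%s' % (name_id or volume['id'])

# Before migration: _name_id is unset, the file is named after the id.
v1 = {'id': '9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0', '_name_id': None}
# After migration: _name_id points at the target volume's UUID.
v2 = {'id': '9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0',
      '_name_id': '795ab04c-bd7c-49ba-b030-a3a413a49b71'}
```

A driver that builds paths from volume['id'] alone will look for the wrong file after migration completes, which is consistent with the symptoms described in this bug.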
I tried the latest devstack to reproduce the issue, and my observations are below:
1) The original issue reported (glusterfs_mount_point_base required) is fixed in devstack, since it carries the commit for LP 1238085.
2) cinder migrate still doesn't work: it throws a lot of errors, which I captured and am attaching to this BZ. My setup details follow.
2.1) I set up cinder multi-backend:
[stack@devstack-vm cinder]$ cinder-manage host list
host zone
devstack-vm.localdomain nova
devstack-vm.localdomain@gluster_vol1 nova
devstack-vm.localdomain@gluster_vol2 nova
2.2) I set up a new volume type:
[stack@devstack-vm cinder]$ cinder type-list
+--------------------------------------+-----------+
| ID | Name |
+--------------------------------------+-----------+
| a2d5d2ba-5f3d-46e9-a388-76644aff8bbf | glusterfs |
+--------------------------------------+-----------+
2.3) /etc/cinder/cinder.conf has
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
enabled_backends=gluster_vol1,gluster_vol2
[gluster_vol1]
volume_backend_name=glusterfs
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config=/etc/cinder/gluster_vol1_share
[gluster_vol2]
volume_backend_name=glusterfs
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config=/etc/cinder/gluster_vol2_share
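The multi-backend routing above works because the scheduler compares the volume type's volume_backend_name extra spec with each backend's reported capabilities. A simplified, hypothetical sketch of that matching (not the actual Cinder capabilities filter code):

```python
# Simplified sketch of volume_backend_name matching between a volume
# type's extra specs and backend capabilities; hypothetical, not the
# actual Cinder scheduler filter.

def backend_matches(extra_specs, backend_capabilities):
    """Return True if a backend satisfies a volume type's extra specs.

    Only exact string matching on volume_backend_name is modeled here,
    which is what the multi-backend config above relies on.
    """
    wanted = extra_specs.get('volume_backend_name')
    if wanted is None:
        return True  # the type imposes no backend constraint
    return backend_capabilities.get('volume_backend_name') == wanted

# Both backends report the same volume_backend_name, so a volume of
# type 'glusterfs' can be scheduled on (or migrated between) either.
backends = {
    'devstack-vm.localdomain@gluster_vol1': {'volume_backend_name': 'glusterfs'},
    'devstack-vm.localdomain@gluster_vol2': {'volume_backend_name': 'glusterfs'},
}
specs = {'volume_backend_name': 'glusterfs'}
eligible = [h for h, caps in backends.items() if backend_matches(specs, caps)]
```

Because both backends advertise the same volume_backend_name, either is an eligible scheduling target, and migration between them must be driven explicitly by host, as done below.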
2.4) Created a new volume myvol1 and then migrated it using the standard cinder commands:
[stack@devstack-vm cinder]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 4357653f-9b84-42d1-96cb-66c5b31c313f | available | myvol1 | 1 | glusterfs | false | |
| bcd183c1-19f8-4fd1-819b-1e6d52114826 | available | myvol1 | 1 | glusterfs | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[stack@devstack-vm cinder]$ cinder show bcd183c1-19f8-4fd1-819b-1e6d52114826
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-03-10T06:41:54.000000 |
| display_description | None |
| display_name | myvol1 |
| encrypted | False |
| id | bcd183c1-19f8-4fd1-819b-1e6d52114826 |
| metadata | {} |
| os-vol-host-attr:host | devstack-vm.localdomain@gluster_vol1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1ca85ebcc7a94cc39a210dcd42c35cee |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| volume_type | glusterfs |
+--------------------------------+--------------------------------------+
[stack@devstack-vm cinder]$ cinder show 4357653f-9b84-42d1-96cb-66c5b31c313f
+--------------------------------+---------------------------------------------+
| Property | Value |
+--------------------------------+---------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-03-10T06:41:54.000000 |
| display_description | None |
| display_name | myvol1 |
| encrypted | False |
| id | 4357653f-9b84-42d1-96cb-66c5b31c313f |
| metadata | {} |
| os-vol-host-attr:host | devstack-vm.localdomain@gluster_vol2 |
| os-vol-mig-status-attr:migstat | target:bcd183c1-19f8-4fd1-819b-1e6d52114826 |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1ca85ebcc7a94cc39a210dcd42c35cee |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| volume_type | glusterfs |
+--------------------------------+---------------------------------------------+
[stack@devstack-vm cinder]$
As we can see above, the migration is stuck at target:bcd183c1-19f8-4fd1-819b-1e6d52114826, and the c-vol devstack screen has many errors, which are captured in the attached log.
Looking into the gluster backend...
[stack@devstack-vm cinder]$ ls -lh /opt/stack/glusterfs/brick?/
/opt/stack/glusterfs/brick1/:
total 0
-rw-rw-rw-. 2 root root 1.0G Mar 10 06:41 volume-bcd183c1-19f8-4fd1-819b-1e6d52114826
/opt/stack/glusterfs/brick2/:
total 0
Only the original volume was created; the new volume has not been created!
Please see the log attached above (Comment #9).
I am working towards understanding the errors reported in the logs.
thanks,
deepak
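When verifying a migration like this, it helps to poll the volume's migration status until it leaves the in-flight "target:..." state rather than checking once. A hedged sketch, where get_volume is a hypothetical stand-in for whatever client call fetches the volume (in the stuck case above it would report 'target:<uuid>' indefinitely):

```python
import time

def wait_for_migration(get_volume, volume_id, timeout=300, interval=5):
    """Poll a volume's migration status until it settles or times out.

    get_volume is a hypothetical callable returning a dict with a
    'migstat' key, mirroring os-vol-mig-status-attr:migstat from
    'cinder show'. None or 'success' means the migration is no longer
    in flight; 'target:<uuid>' means the copy is still running.
    """
    migstat = None
    deadline = time.time() + timeout
    while time.time() < deadline:
        migstat = get_volume(volume_id).get('migstat')
        if migstat in (None, 'success'):
            return migstat
        time.sleep(interval)
    raise TimeoutError('migration of %s still at %r' % (volume_id, migstat))
```

With such a check, the devstack run above would have timed out instead of leaving the volume silently stuck.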
[root@cougar09 ~(keystone_admin)]# cinder show 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0
+--------------------------------+---------------------------------------------------+
| Property | Value |
+--------------------------------+---------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-28T11:41:26.000000 |
| display_description | None |
| display_name | None |
| encrypted | False |
| id | 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0 |
| metadata | {} |
| os-vol-host-attr:host | cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | a893dc6bb8994e008886905c7c8607f5 |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| volume_type | None |
+--------------------------------+---------------------------------------------------+
cinder migrate 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0 cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER2
[root@cougar09 ~(keystone_admin)]# tail -f /var/log/cinder/volume.log
2014-10-28 13:43:16.336 1547 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:178
2014-10-28 13:43:16.337 1547 INFO cinder.volume.manager [-] Updating volume status
2014-10-28 13:43:16.337 1547 DEBUG cinder.volume.drivers.nfs [-] shares loaded: {u'10.35.162.44:/gv0': None} _load_shares_config /usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py:327
2014-10-28 13:43:16.338 1547 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): mkdir -p /var/lib/cinder/cinder-volume2/5564d14a8a4de54436024e5b60e811a6 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:16.347 1547 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:16.348 1547 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.35.162.44:/gv0 /var/lib/cinder/cinder-volume2/5564d14a8a4de54436024e5b60e811a6 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:16.439 1547 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:16.447 1547 DEBUG cinder.volume.drivers.glusterfs [-] Available shares: [u'10.35.162.44:/gv0'] _ensure_shares_mounted /usr/lib/python2.7/site-packages/cinder/volume/drivers/glusterfs.py:1069
2014-10-28 13:43:16.448 1547 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/cinder-volume2/5564d14a8a4de54436024e5b60e811a6 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:16.523 1547 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
^C
[root@cougar09 ~(keystone_admin)]# tail -f /var/log/cinder/*
==> /var/log/cinder/api.log <==
2014-10-28 13:43:08.424 1650 DEBUG keystoneclient.middleware.auth_token [-] Returning cached token _cache_get /usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py:1129
2014-10-28 13:43:08.424 1650 DEBUG keystoneclient.middleware.auth_token [-] Storing token in cache _cache_put /usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py:1239
2014-10-28 13:43:08.425 1650 DEBUG keystoneclient.middleware.auth_token [-] Received request from user: e2e9b6c22d6146f7858f1f161ed0a7d1 with project_id : a893dc6bb8994e008886905c7c8607f5 and roles: admin _build_user_headers /usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py:1026
2014-10-28 13:43:08.426 1650 DEBUG routes.middleware [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Matched POST /a893dc6bb8994e008886905c7c8607f5/volumes/9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0/action __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:100
2014-10-28 13:43:08.427 1650 DEBUG routes.middleware [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Route path: '/{project_id}/volumes/:(id)/action', defaults: {'action': u'action', 'controller': <cinder.api.openstack.wsgi.Resource object at 0x41bbb10>} __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:102
2014-10-28 13:43:08.427 1650 DEBUG routes.middleware [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Match dict: {'action': u'action', 'controller': <cinder.api.openstack.wsgi.Resource object at 0x41bbb10>, 'project_id': u'a893dc6bb8994e008886905c7c8607f5', 'id': u'9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0'} __call__ /usr/lib/python2.7/site-packages/routes/middleware.py:103
2014-10-28 13:43:08.428 1650 INFO cinder.api.openstack.wsgi [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] POST http://10.35.160.139:8776/v1/a893dc6bb8994e008886905c7c8607f5/volumes/9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0/action
2014-10-28 13:43:08.429 1650 DEBUG cinder.api.openstack.wsgi [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Action body: {"os-migrate_volume": {"force_host_copy": false, "host": "cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER2"}} get_method /usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py:1008
2014-10-28 13:43:08.535 1650 INFO cinder.api.openstack.wsgi [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] http://10.35.160.139:8776/v1/a893dc6bb8994e008886905c7c8607f5/volumes/9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0/action returned with HTTP 202
2014-10-28 13:43:08.536 1650 INFO eventlet.wsgi.server [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] 10.35.160.139 - - [28/Oct/2014 13:43:08] "POST /v1/a893dc6bb8994e008886905c7c8607f5/volumes/9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0/action HTTP/1.1" 202 187 0.114180
==> /var/log/cinder/cinder-manage.log <==
2014-10-28 11:06:41.226 3226 DEBUG cinder.openstack.common.lockutils [req-c9187f9c-639e-4a69-81fd-7255233c93c5 - - - - -] Got semaphore "dbapi_backend" for method "__get_backend"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:191
==> /var/log/cinder/scheduler.log <==
2014-10-28 13:42:14.115 1514 DEBUG cinder.scheduler.host_manager [req-b781a710-4111-4aef-a2b4-7f5b04e1bd09 - - - - -] Received volume service update from cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER1. update_service_capabilities /usr/lib/python2.7/site-packages/cinder/scheduler/host_manager.py:273
2014-10-28 13:42:16.336 1514 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
2014-10-28 13:42:16.336 1514 DEBUG cinder.scheduler.host_manager [req-547df793-bc78-4334-8626-0c48a4b91f3e - - - - -] Received volume service update from cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER2. update_service_capabilities /usr/lib/python2.7/site-packages/cinder/scheduler/host_manager.py:273
2014-10-28 13:43:08.537 1514 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'e2e9b6c22d6146f7858f1f161ed0a7d1', 'tenant': u'a893dc6bb8994e008886905c7c8607f5', 'user_identity': u'e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -'}
2014-10-28 13:43:08.541 1514 WARNING cinder.scheduler.host_manager [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] volume service is down or disabled. (host: cougar09.scl.lab.tlv.redhat.com)
2014-10-28 13:43:08.542 1514 DEBUG cinder.scheduler.filter_scheduler [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Filtered [host 'cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER1': free_capacity_gb: 9.1943359375, host 'cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER2': free_capacity_gb: 7.19482421875] _get_weighted_candidates /usr/lib/python2.7/site-packages/cinder/scheduler/filter_scheduler.py:259
2014-10-28 13:43:14.115 1514 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
2014-10-28 13:43:14.116 1514 DEBUG cinder.scheduler.host_manager [req-09c1b0b8-c311-4d8f-acdd-33586d7c8094 - - - - -] Received volume service update from cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER1. update_service_capabilities /usr/lib/python2.7/site-packages/cinder/scheduler/host_manager.py:273
2014-10-28 13:43:16.337 1514 WARNING cinder.context [-] Arguments dropped when creating context: {'user': None, 'tenant': None, 'user_identity': u'- - - - -'}
2014-10-28 13:43:16.338 1514 DEBUG cinder.scheduler.host_manager [req-fc28003e-3a54-4585-a877-2473b0bee386 - - - - -] Received volume service update from cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER2. update_service_capabilities /usr/lib/python2.7/site-packages/cinder/scheduler/host_manager.py:273
==> /var/log/cinder/volume.log <==
2014-10-28 13:43:16.336 1547 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/site-packages/cinder/openstack/common/periodic_task.py:178
2014-10-28 13:43:16.337 1547 INFO cinder.volume.manager [-] Updating volume status
2014-10-28 13:43:16.337 1547 DEBUG cinder.volume.drivers.nfs [-] shares loaded: {u'10.35.162.44:/gv0': None} _load_shares_config /usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py:327
2014-10-28 13:43:16.338 1547 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): mkdir -p /var/lib/cinder/cinder-volume2/5564d14a8a4de54436024e5b60e811a6 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:16.347 1547 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:16.348 1547 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.35.162.44:/gv0 /var/lib/cinder/cinder-volume2/5564d14a8a4de54436024e5b60e811a6 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:16.439 1547 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:16.447 1547 DEBUG cinder.volume.drivers.glusterfs [-] Available shares: [u'10.35.162.44:/gv0'] _ensure_shares_mounted /usr/lib/python2.7/site-packages/cinder/volume/drivers/glusterfs.py:1069
2014-10-28 13:43:16.448 1547 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/cinder-volume2/5564d14a8a4de54436024e5b60e811a6 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:16.523 1547 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:53.470 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:53.477 1547 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'e2e9b6c22d6146f7858f1f161ed0a7d1', 'tenant': u'a893dc6bb8994e008886905c7c8607f5', 'user_identity': u'e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -'}
2014-10-28 13:43:53.514 1546 DEBUG cinder.volume.driver [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] volume 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0: removing export _detach_volume /usr/lib/python2.7/site-packages/cinder/volume/driver.py:456
2014-10-28 13:43:53.515 1546 DEBUG cinder.volume.manager [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] migrate_volume_completion: completing migration for volume 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0 (temporary volume 795ab04c-bd7c-49ba-b030-a3a413a49b71 migrate_volume_completion /usr/lib/python2.7/site-packages/cinder/volume/manager.py:996
2014-10-28 13:43:53.654 1546 DEBUG cinder.openstack.common.lockutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Got semaphore "9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0-delete_volume" for method "lvo_inner2"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:191
2014-10-28 13:43:53.655 1546 DEBUG cinder.openstack.common.lockutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Attempting to grab file lock "9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0-delete_volume" for method "lvo_inner2"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:202
2014-10-28 13:43:53.656 1546 DEBUG cinder.openstack.common.lockutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Got file lock "9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0-delete_volume" at /var/lib/cinder/tmp/cinder-9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0-delete_volume for method "lvo_inner2"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:232
2014-10-28 13:43:53.689 1546 INFO cinder.volume.manager [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] volume 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0: deleting
2014-10-28 13:43:53.693 1546 DEBUG cinder.volume.manager [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] volume 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0: removing export delete_volume /usr/lib/python2.7/site-packages/cinder/volume/manager.py:399
2014-10-28 13:43:53.694 1546 DEBUG cinder.volume.manager [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] volume 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0: deleting delete_volume /usr/lib/python2.7/site-packages/cinder/volume/manager.py:401
2014-10-28 13:43:53.695 1546 DEBUG cinder.openstack.common.lockutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Got semaphore "glusterfs" for method "delete_volume"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:191
2014-10-28 13:43:53.695 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Running cmd (subprocess): mkdir -p /var/lib/cinder/cinder-volume1/4db5b631c5aaace76bb92aaa7823f005 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:53.706 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:53.708 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.35.163.21:/gv0 /var/lib/cinder/cinder-volume1/4db5b631c5aaace76bb92aaa7823f005 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:53.820 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:53.829 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf rm -f /var/lib/cinder/cinder-volume1/4db5b631c5aaace76bb92aaa7823f005/volume-9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:147
2014-10-28 13:43:53.903 1546 DEBUG cinder.openstack.common.processutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Result was 0 execute /usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py:171
2014-10-28 13:43:53.906 1546 DEBUG cinder.openstack.common.lockutils [req-52288fa6-0430-4211-8958-9adb74a74afc e2e9b6c22d6146f7858f1f161ed0a7d1 a893dc6bb8994e008886905c7c8607f5 - - -] Released file lock "9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0-delete_volume" at /var/lib/cinder/tmp/cinder-9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0-delete_volume for method "lvo_inner2"... inner /usr/lib/python2.7/site-packages/cinder/openstack/common/lockutils.py:239
^C
[root@cougar09 ~(keystone_admin)]# less /var/log/cinder/volume.log
[root@cougar09 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 2b58360b-694c-4a76-84e3-291f41a2d095 | available | None | 1 | None | false | |
| 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0 | available | None | 1 | None | false | |
| abe75aa4-0ec5-439d-af21-f5cc301332a0 | available | Test1 | 1 | None | false | |
| c85b3218-d212-4b67-b44a-e1ffe82b528d | error | koko3 | 2 | None | false | |
| e8007352-1768-4e94-95b6-e99758709d62 | error | koko2 | 2 | None | false | |
| fd5e2120-c383-406e-bf2b-3c7f5e3dad86 | error | vol23 | 2 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@cougar09 ~(keystone_admin)]# cinder show 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0
+--------------------------------+---------------------------------------------------+
| Property | Value |
+--------------------------------+---------------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-28T11:41:26.000000 |
| display_description | None |
| display_name | None |
| encrypted | False |
| id | 9ffaa3cb-66ad-4d35-9f85-c32bdea81dd0 |
| metadata | {} |
| os-vol-host-attr:host | cougar09.scl.lab.tlv.redhat.com@GLUSTERFS_DRIVER2 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | 795ab04c-bd7c-49ba-b030-a3a413a49b71 |
| os-vol-tenant-attr:tenant_id | a893dc6bb8994e008886905c7c8607f5 |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| volume_type | None |
+--------------------------------+---------------------------------------------------+
[root@cougar09 ~(keystone_admin)]# ^C
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2014-1788.html