| Summary: | cinder: when failing to extend volume its status changes to 'error_extending' | ||||||
|---|---|---|---|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Dafna Ron <dron> | ||||
| Component: | openstack-cinder | Assignee: | Flavio Percoco <fpercoco> | ||||
| Status: | CLOSED WONTFIX | QA Contact: | Dafna Ron <dron> | ||||
| Severity: | medium | Docs Contact: | |||||
| Priority: | unspecified | ||||||
| Version: | 4.0 | CC: | abaron, eharney, fpercoco, scohen, yeylon | ||||
| Target Milestone: | --- | Flags: | fpercoco: needinfo? (scohen) | ||||
| Target Release: | 5.0 (RHEL 7) | ||||||
| Hardware: | x86_64 | ||||||
| OS: | Linux | ||||||
| Whiteboard: | storage | ||||||
| Fixed In Version: | | Doc Type: | Bug Fix | |||||
| Doc Text: | | Story Points: | --- | |||||
| Clone Of: | | Environment: | ||||||
| Last Closed: | 2014-01-21 12:01:25 UTC | Type: | Bug | ||||
| Regression: | --- | Mount Type: | --- | ||||
| Documentation: | --- | CRM: | |||||
| Verified Versions: | | Category: | --- | |||||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |||||
| Cloudforms Team: | --- | Target Upstream Version: | |||||
| Attachments: | logs (attachment 828679) | ||||||
I am using glusterfs.

This is currently expected behavior. Yes, it is less than ideal usability-wise, but this is how Cinder reports errors for such things. You can reset the status using "cinder reset-state".

What's the expected behavior? Having the volume moved to 'error_extending' instead of failing before changing its status? Is this something that was discussed upstream? I agree this will cause some frustration in Cinder users.

Closing in favour of the following blueprint and bug report:

https://blueprints.launchpad.net/cinder/+spec/task-reporting
https://bugzilla.redhat.com/show_bug.cgi?id=1054268
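The "cinder reset-state" workaround mentioned above works because it overwrites the volume's status field directly instead of re-running the operation that failed. A minimal sketch of that idea, with illustrative names (this is not the real Cinder internals):

```python
# Illustrative model of why an admin "reset-state" can recover a volume
# stuck in 'error_extending': the status is overwritten directly, and
# the failed driver operation is never re-attempted.

VALID_STATUSES = {"available", "in-use", "error", "error_extending"}

def reset_state(volume, new_status):
    """Force a volume's status, mimicking the admin-only reset-state call."""
    if new_status not in VALID_STATUSES:
        raise ValueError("unknown status: %s" % new_status)
    volume["status"] = new_status  # direct overwrite, no driver involved
    return volume

# The volume from this report, stuck after the failed extend:
vol = {"id": "dcc6f500-ac93-40c5-ab8e-fac5854daa31",
       "status": "error_extending"}
reset_state(vol, "available")
print(vol["status"])  # available
```

Note that reset-state only repairs the bookkeeping; it does not retry or undo the extend itself.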
Created attachment 828679 [details]
logs

Description of problem:
Extending a volume with snapshots is supposed to be blocked, but instead of failing the action before touching the volume, we actually change the volume status to 'error_extending', which makes the volume unusable.

Version-Release number of selected component (if applicable):
openstack-cinder-2013.2-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a volume and a snapshot:
   cinder create <size> --display-name <name>
   cinder snapshot-create <vol> --display-name <snap_name>
2. Extend the volume:
   cinder extend <vol> <new_size>

Actual results:
We fail to extend the volume, but the volume status is stuck in 'error_extending', which makes it unusable.

Expected results:
We should fail the extend before it begins and should not touch the volume at all.

Additional info:
2013-11-25 15:21:24.264 14458 ERROR cinder.volume.manager [req-7344ba7b-ec70-4708-bf24-a063e08d8bef 24b77982be8049ee9cd5ad7bed913565 7eb59aa89e8944d098554ff6f5a4cf88] volume dcc6f500-ac93-40c5-ab8e-fac5854daa31: Error trying to extend volume
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager Traceback (most recent call last):
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 875, in extend_volume
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager     self.driver.extend_volume(volume, new_size)
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 917, in extend_volume
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager     raise exception.InvalidVolume(msg)
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager InvalidVolume: Extend volume is only supported for this driver when no snapshots exist.
2013-11-25 15:21:24.264 14458 TRACE cinder.volume.manager
(END)

[root@cougar06 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------------+--------------+------+-------------+----------+-------------+
|                  ID                  |      Status     | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------------+--------------+------+-------------+----------+-------------+
| 128681f1-8af2-44fc-bcd6-e3075687f67a | available       | vol1         | 12   | None        | true     |             |
| 188d5e9f-7bb3-4e4a-95e9-6ba964f5a52a | available       | new8         | 10   | None        | false    |             |
| 292e6a93-922f-4370-806a-33fbf1ef48c7 | available       | vol5         | 9    | None        | false    |             |
| 49088941-73f8-48a4-a906-dc7bf024f0ea | available       | new3         | 10   | None        | true     |             |
| 4ee66206-1ce4-442c-8dfb-d7f2d50d32f1 | available       | upload1      | 10   | None        | false    |             |
| 684bdfe8-1f73-405b-ad71-d8c2b67af8de | available       | upload       | 10   | None        | false    |             |
| 79f6dbb3-4897-427e-9ad6-7acc21e1d12b | available       | baba1        | 10   | None        | true     |             |
| 7e3df44e-14e1-4a92-b612-b0dd7731a4e2 | available       | new4         | 10   | None        | true     |             |
| 82045366-538e-41d5-8a7f-632e0c8e3550 | available       | new2         | 10   | None        | true     |             |
| 928dc8d9-9658-4df0-93a1-20a1bd245f4f | available       | new7         | 10   | None        | false    |             |
| a142d020-1fec-4b39-967c-c171696920a5 | available       | vol7         | 10   | None        | true     |             |
| b9e99855-dd4a-4268-a342-90312ff2adf8 | available       | new5         | 10   | None        | true     |             |
| bd2c6980-1f9b-4271-9675-0fcae037744f | available       | new1         | 10   | None        | true     |             |
| cc5405cb-5024-4215-ab2b-be80aa7f1ccf | available       | baba2        | 10   | None        | true     |             |
| dcc6f500-ac93-40c5-ab8e-fac5854daa31 | error_extending | ext          | 7    | None        | false    |             |
| df17e52b-c153-4d51-b4a0-9e53b8b02745 | available       | baba         | 10   | None        | true     |             |
| e2fc5cc7-dca4-4470-a995-7ad92f85d283 | available       | baba3        | 10   | None        | true     |             |
+--------------------------------------+-----------------+--------------+------+-------------+----------+-------------+
[root@cougar06 ~(keystone_admin)]# nova list
[root@cougar06 ~(keystone_admin)]# nova volume-attach c1c68d6b-3308-4c43-b98d-64980d684868 dcc6f500-ac93-40c5-ab8e-fac5854daa31 /dev/vdc
ERROR: Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-7785fbac-278f-40bc-ac87-a469706f9f1f)
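The behavior the report asks for is to validate the request before the volume's state is touched, so a driver-side limitation surfaces as an API error rather than a volume stuck in 'error_extending'. A minimal sketch of that validate-first flow; function and field names here are illustrative, not the actual Cinder code path:

```python
# Sketch of failing an extend up front when snapshots exist, instead of
# letting the driver raise mid-operation and leaving the volume in
# 'error_extending'. All names are illustrative.

class InvalidVolume(Exception):
    pass

def extend_volume_validate_first(volume, new_size, snapshots):
    """Reject unsupported extends before mutating any volume state."""
    if new_size <= volume["size"]:
        raise InvalidVolume("New size must be larger than current size.")
    if snapshots:
        # The same condition the glusterfs driver enforces, checked early:
        raise InvalidVolume("Extend volume is only supported for this "
                            "driver when no snapshots exist.")
    # Only now is it safe to touch the volume.
    volume["size"] = new_size

vol = {"id": "dcc6f500-ac93-40c5-ab8e-fac5854daa31",
       "size": 7, "status": "available"}
try:
    extend_volume_validate_first(vol, 9, snapshots=["snap-of-ext"])
except InvalidVolume:
    pass
print(vol["status"])  # still 'available': the volume was never touched
```

With this ordering a rejected extend leaves the volume usable, which is what the linked task-reporting blueprint and bug 1054268 track upstream.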