Description of problem:
If a failure occurs during an action that runs on multiple volumes, we may be left with multiple volumes whose state needs to be reset. Currently reset-state can only be run on a single object; it would be helpful to run it on multiple objects at once (primarily because a volume's state currently remains in error after a failure).

Version-Release number of selected component (if applicable):
python-cinderclient-1.0.6-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Restart cinder-volume during an extend of a volume (do this several times).
2. Try to run: cinder reset-state <vol> <vol> <vol>

Actual results:
reset-state can only be run for a single volume at a time.

Expected results:
Since volume states currently remain in a failed state after an action fails, we should make the user's life easier by allowing the state of multiple objects to be reset in a single command.

Additional info:
[root@cougar06 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 1121404d-372f-4c9e-9f00-8fa8e253c064 | in-use    | small        | 2    | None        | false    | 8401c157-032b-47d5-bcf9-e2fc413f333c |
| 2f2abeea-44b7-43f5-9536-655077875bf5 | in-use    | dafna        | 6    | None        | false    | 4ab92098-6587-4afa-9c41-c7adf93d76e0 |
| 990d2111-240d-4184-bb76-e6a82b2fda81 | available | user         | 3    | None        | false    |                                      |
| a6b9603d-1047-4965-86a0-4f15d34876b5 | extending | test         | 4    | None        | false    |                                      |
| ea526067-a6bd-4e58-9db2-d0e58d166022 | extending | dafna1       | 3    | None        | false    |                                      |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
[root@cougar06 ~(keystone_admin)]#
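The requested behavior could be achieved on the client side by letting the positional argument collect one or more IDs. A minimal sketch, assuming an argparse-style parser like the one the cinder shell uses (the parser and function names below are illustrative, not cinderclient's actual code):

```python
import argparse

# Illustrative sketch of a multi-ID reset-state subcommand.
parser = argparse.ArgumentParser(prog="cinder")
sub = parser.add_subparsers(dest="subcommand")

reset = sub.add_parser("reset-state")
# nargs='+' collects one or more positional volume IDs instead of exactly one.
reset.add_argument("volume", metavar="<volume>", nargs="+")
reset.add_argument("--state", default="available")

def do_reset_state(args):
    # Reset each volume independently so one failure does not abort the rest.
    failures = []
    for volume_id in args.volume:
        try:
            # Placeholder for the real API call (e.g. a volumes.reset_state()).
            print("resetting %s to %s" % (volume_id, args.state))
        except Exception:
            failures.append(volume_id)
    return failures

args = parser.parse_args(["reset-state", "a6b9603d", "ea526067"])
print(args.volume)  # ['a6b9603d', 'ea526067']
```

With this parsing, `cinder reset-state <vol> <vol> <vol>` would no longer trip the "unrecognized arguments" error shown below.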
[root@cougar06 ~(keystone_admin)]# cinder reset-state a6b9603d-1047-4965-86a0-4f15d34876b5 ea526067-a6bd-4e58-9db2-d0e58d166022
usage: cinder [--version] [--debug] [--os-username <auth-user-name>]
              [--os-password <auth-password>] [--os-tenant-name <auth-tenant-name>]
              [--os-tenant-id <auth-tenant-id>] [--os-auth-url <auth-url>]
              [--os-region-name <region-name>] [--service-type <service-type>]
              [--service-name <service-name>]
              [--volume-service-name <volume-service-name>]
              [--endpoint-type <endpoint-type>]
              [--os-volume-api-version <volume-api-ver>]
              [--os-cacert <ca-certificate>] [--retries <retries>]
              <subcommand> ...
error: unrecognized arguments: ea526067-a6bd-4e58-9db2-d0e58d166022
Try 'cinder help ' for more information.
[root@cougar06 ~(keystone_admin)]# cinder reset-state a6b9603d-1047-4965-86a0-4f15d34876b5
[root@cougar06 ~(keystone_admin)]# cinder reset-state ea526067-a6bd-4e58-9db2-d0e58d166022
[root@cougar06 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 1121404d-372f-4c9e-9f00-8fa8e253c064 | in-use    | small        | 2    | None        | false    | 8401c157-032b-47d5-bcf9-e2fc413f333c |
| 2f2abeea-44b7-43f5-9536-655077875bf5 | in-use    | dafna        | 6    | None        | false    | 4ab92098-6587-4afa-9c41-c7adf93d76e0 |
| 990d2111-240d-4184-bb76-e6a82b2fda81 | available | user         | 3    | None        | false    |                                      |
| a6b9603d-1047-4965-86a0-4f15d34876b5 | available | test         | 4    | None        | false    |                                      |
| ea526067-a6bd-4e58-9db2-d0e58d166022 | available | dafna1       | 3    | None        | false    |                                      |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
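Until a multi-ID reset-state is available, the same effect can be had with a shell loop over the IDs from the listing above. A sketch, shown in dry-run form with echo (drop the echo to actually run it against a deployment with admin credentials sourced):

```shell
# Workaround sketch: call reset-state once per volume ID.
# "echo" makes this a dry run; remove it to execute for real.
for vol in a6b9603d-1047-4965-86a0-4f15d34876b5 \
           ea526067-a6bd-4e58-9db2-d0e58d166022; do
    echo cinder reset-state "$vol"
done
```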
Verified on:
python-cinderclient-1.0.9-1.el7ost.noarch
openstack-cinder-2014.1.1-1.el7ost.noarch
python-cinder-2014.1.1-1.el7ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0932.html