Bug 2022121
Summary: | [16.2] NFS snapshot deletion issue | | |
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Sofia Enriquez <senrique> |
Component: | openstack-cinder | Assignee: | Sofia Enriquez <senrique> |
Status: | CLOSED ERRATA | QA Contact: | Tzach Shefi <tshefi> |
Severity: | medium | Docs Contact: | Andy Stillman <astillma> |
Priority: | medium | | |
Version: | 15.0 (Stein) | CC: | eharney, ltoscano, rheslop, senrique |
Target Milestone: | z2 | Keywords: | Triaged |
Target Release: | 16.2 (Train on RHEL 8.4) | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Fixed In Version: | openstack-cinder-15.6.1-2.20211218044843.c093eda.el8ost | Doc Type: | Bug Fix |
Last Closed: | 2022-03-23 22:12:24 UTC | | |
Doc Text: |
Before this update, the NFS driver blocked attempts to delete Block Storage service (cinder) snapshots in the error state, which prevented users from removing broken snapshot DB entries. With this update, the restriction is removed so that you can clean up failed snapshots.
|
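The Doc Text above boils down to a small decision rule. The sketch below is hypothetical and is not the actual cinder driver code; it only illustrates the behaviour change: with snapshot support disabled on an NFS backend, a delete request for a snapshot that is already in the error state is now allowed through so the broken DB entry can be removed, while other requests remain blocked.

```python
# Hypothetical illustration of the fix described in the Doc Text; this is NOT
# the real cinder source. Names and structure are invented for clarity.

def can_delete_snapshot(snapshot_status: str, snapshot_support: bool) -> bool:
    """Decide whether the backend should accept a snapshot delete request."""
    if snapshot_support:
        # Snapshot support enabled: deletes are handled normally.
        return True
    # Snapshot support disabled:
    #   before the fix, every delete was rejected, stranding 'error' rows;
    #   after the fix, a snapshot already in 'error' may still be deleted.
    return snapshot_status == "error"


if __name__ == "__main__":
    print(can_delete_snapshot("error", snapshot_support=False))     # True (after the fix)
    print(can_delete_snapshot("creating", snapshot_support=False))  # False
```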
Description
Sofia Enriquez
2021-11-10 19:27:21 UTC
Verified on: openstack-cinder-15.6.1-2.20220112174913.c093eda.el8ost.noarch

On a deployment using the generic NFS backend for Cinder, nfs_snapshot_support was left unset (neither true nor false), so it defaults to false, meaning snapshots are not supported.

Created an empty volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --name volA
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-02-27T10:27:50.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | ef5e7530-1f31-4588-b979-74db8b05d26c |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | volA                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | d37d020a690e47cd906193fa75041173     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 3b7b7c709eb14689b1c61dd1dda838fe     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ef5e7530-1f31-4588-b979-74db8b05d26c | available | volA | 1    | tripleo     | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Now let's create a volume snapshot; the operation should fail because we did not enable snapshot support:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-create ef5e7530-1f31-4588-b979-74db8b05d26c
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created_at  | 2022-02-27T10:28:24.725970           |
| description | None                                 |
| id          | f7fc8956-a583-4a57-b917-91c7e3eb703f |
| metadata    | {}                                   |
| name        | None                                 |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | ef5e7530-1f31-4588-b979-74db8b05d26c |
+-------------+--------------------------------------+

As expected, the snapshot reaches the error state:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+--------+------+------+
| ID                                   | Volume ID                            | Status | Name | Size |
+--------------------------------------+--------------------------------------+--------+------+------+
| f7fc8956-a583-4a57-b917-91c7e3eb703f | ef5e7530-1f31-4588-b979-74db8b05d26c | error  | -    | 1    |
+--------------------------------------+--------------------------------------+--------+------+------+

Now let's try to delete this error-state snapshot:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-delete f7fc8956-a583-4a57-b917-91c7e3eb703f
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+----+-----------+--------+------+------+
| ID | Volume ID | Status | Name | Size |
+----+-----------+--------+------+------+
+----+-----------+--------+------+------+

Great: as opposed to before the fix, I did manage to delete the snapshot.
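For context, the backend used above (generic NFS with nfs_snapshot_support left at its default) corresponds to a cinder.conf backend section roughly like the following. This is an illustrative sketch only; the section name, backend name, and share file path are assumptions, not values taken from this deployment.

```ini
# Illustrative cinder.conf fragment (section name, backend name and paths are
# assumed, not taken from this deployment). With nfs_snapshot_support unset,
# it defaults to false, so snapshot creation fails as shown above.
[DEFAULT]
enabled_backends = tripleo_nfs

[tripleo_nfs]
volume_backend_name = tripleo
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
# nfs_snapshot_support = true   ; left unset here, i.e. snapshots unsupported
```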
Let's try again, this time deleting with --force. Create a new failed snapshot:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-create ef5e7530-1f31-4588-b979-74db8b05d26c --name snap2
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created_at  | 2022-02-27T10:54:50.428966           |
| description | None                                 |
| id          | 0d3f7d8f-9905-4cbc-bcdb-d7d820a29e9e |
| metadata    | {}                                   |
| name        | snap2                                |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | ef5e7530-1f31-4588-b979-74db8b05d26c |
+-------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+--------+-------+------+
| ID                                   | Volume ID                            | Status | Name  | Size |
+--------------------------------------+--------------------------------------+--------+-------+------+
| 0d3f7d8f-9905-4cbc-bcdb-d7d820a29e9e | ef5e7530-1f31-4588-b979-74db8b05d26c | error  | snap2 | 1    |
+--------------------------------------+--------------------------------------+--------+-------+------+

Now delete this failed snapshot:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-delete --force 0d3f7d8f-9905-4cbc-bcdb-d7d820a29e9e
(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+----+-----------+--------+------+------+
| ID | Volume ID | Status | Name | Size |
+----+-----------+--------+------+------+
+----+-----------+--------+------+------+

Again the error-state snapshot was successfully deleted, this time with the optional --force flag. Good to verify: the failed snapshot was indeed deleted on both attempts.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.2), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1001
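For completeness, the cleanup verified above can also be scripted. The sketch below is not part of this bug's verification; it assumes python-cinderclient and keystoneauth1 are installed and that the usual OS_* environment variables are set, and it force-deletes every snapshot stuck in the error state, mirroring the cinder snapshot-delete --force step.

```python
# Sketch only: programmatic equivalent of the cleanup shown above.
# Assumes python-cinderclient and keystoneauth1 are available and the usual
# OS_* environment variables (auth URL, credentials, project) are exported.
import os

from keystoneauth1 import loading, session
from cinderclient import client

# Build a Keystone password-auth session from the environment.
loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url=os.environ["OS_AUTH_URL"],
    username=os.environ["OS_USERNAME"],
    password=os.environ["OS_PASSWORD"],
    project_name=os.environ["OS_PROJECT_NAME"],
    user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
    project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
)
cinder = client.Client("3", session=session.Session(auth=auth))

# Find snapshots stuck in the 'error' state and force-delete them,
# as done with "cinder snapshot-delete --force" in the verification.
for snap in cinder.volume_snapshots.list(search_opts={"status": "error"}):
    print(f"deleting error snapshot {snap.id}")
    cinder.volume_snapshots.delete(snap, force=True)
```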