Bug 1889894
Summary: | cinder volume failing when setting rbd_max_clone_depth=1 | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | alink
Component: | openstack-cinder | Assignee: | Eric Harney <eharney>
Status: | CLOSED ERRATA | QA Contact: | Tzach Shefi <tshefi>
Severity: | high | Docs Contact: | Chuck Copello <ccopello>
Priority: | high | |
Version: | 13.0 (Queens) | CC: | ahyder, dhill, fwissing, gfidente, gkadam, jmelvin, jvisser, ldenny, owalsh, tkajinam
Target Milestone: | z14 | Keywords: | Triaged, ZStream
Target Release: | 13.0 (Queens) | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | openstack-cinder-12.0.10-21.el7ost | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-12-16 13:57:57 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1892995 | |
Bug Blocks: | | |
Description
alink
2020-10-20 20:41:10 UTC
I've committed the following [1] upstream ... let's see how it goes.

[1] https://review.opendev.org/#/c/759328

Verified on: openstack-cinder-12.0.10-21.el7ost.noarch

On a preexisting system where I had previously created several cinder volumes, each one a clone of the prior volume:

```
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 01f71541-797e-48bf-8121-739c63e15cbd | available | volG_clone6 | 2    | tripleo     | false    |             |
| 12f63e6b-fd35-4679-af2c-7cd3c7fb7dee | available | volA        | 2    | tripleo     | false    |             |
| 34bff18c-ed4d-41b2-8f63-162c5486e6ef | available | volB_clone1 | 2    | tripleo     | false    |             |
| 839194d8-5d50-415d-8466-cf85cf189d22 | available | volE_clone4 | 2    | tripleo     | false    |             |
| ca84ca86-bbcd-4f1d-935d-b44f364769b8 | available | volD_clone3 | 2    | tripleo     | false    |             |
| d304aa90-0409-4508-bf81-306a9f686f90 | available | volC_clone2 | 2    | tripleo     | false    |             |
| fbdd6c64-c121-45ab-b63e-00eed3996061 | available | volF_clone5 | 2    | tripleo     | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
```

The default value was unchanged when the original volumes were created:

```
[root@controller-0 ~]# grep -irn rbd_max /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf
2990:#rbd_max_clone_depth = 5
```

Now let's reduce rbd_max_clone_depth to 1, restart c-vol, and see what happens. We shouldn't hit any tracebacks; nothing so far for a few minutes after the service restarted, good.
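For reference, `rbd_max_clone_depth` is a per-backend option in `cinder.conf`. A minimal fragment sketching the change tested here (the backend section name `tripleo_ceph` is inferred from the `os-vol-host-attr:host` values in this deployment; the driver path is the standard Cinder RBD driver):

```ini
[tripleo_ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Maximum nesting of volume clones before the driver flattens the
# chain; the packaged default is 5, reduced to 1 for this test.
rbd_max_clone_depth = 1
```

After changing the value, the cinder-volume (c-vol) service must be restarted for it to take effect.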
Now let's create some new volumes:

```
(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --name test
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-19T13:14:45.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 3e132b9e-b4e8-4b3e-9549-a074cab27053 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test                                 |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe7c9e7e23104b839fef51ea7e00d1c8     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2020-11-19T13:14:46.000000           |
| user_id                        | 3170f966a536448abc8b20d69fa3c98d     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
```

And now a clone of this volume:

```
(overcloud) [stack@undercloud-0 ~]$ cinder create 3 --name clone_test --source-volid 3e132b9e-b4e8-4b3e-9549-a074cab27053
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-19T13:16:03.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | c245e78b-076d-4433-8f7e-74eb74453727 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | clone_test                           |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe7c9e7e23104b839fef51ea7e00d1c8     |
| replication_status             | None                                 |
| size                           | 3                                    |
| snapshot_id                    | None                                 |
| source_volid                   | 3e132b9e-b4e8-4b3e-9549-a074cab27053 |
| status                         | creating                             |
| updated_at                     | 2020-11-19T13:16:03.000000           |
| user_id                        | 3170f966a536448abc8b20d69fa3c98d     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
```

Now let's clone one of the original cloned volumes, which were created when max depth was at its default value of 5:

```
(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --name test3 --source-volid 839194d8-5d50-415d-8466-cf85cf189d22
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-19T13:21:33.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 62475098-70d1-459c-9bf2-491b94ba8d89 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test3                                |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe7c9e7e23104b839fef51ea7e00d1c8     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | 839194d8-5d50-415d-8466-cf85cf189d22 |
| status                         | creating                             |
| updated_at                     | 2020-11-19T13:21:33.000000           |
| user_id                        | 3170f966a536448abc8b20d69fa3c98d     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
```

Looks fine: no errors or tracebacks reported in the c-vol log, and all volumes are available:

```
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 01f71541-797e-48bf-8121-739c63e15cbd | available | volG_clone6 | 2    | tripleo     | false    |             |
| 12f63e6b-fd35-4679-af2c-7cd3c7fb7dee | available | volA        | 2    | tripleo     | false    |             |
| 34bff18c-ed4d-41b2-8f63-162c5486e6ef | available | volB_clone1 | 2    | tripleo     | false    |             |
| 3e132b9e-b4e8-4b3e-9549-a074cab27053 | available | test        | 2    | tripleo     | false    |             |
| 62475098-70d1-459c-9bf2-491b94ba8d89 | available | test3       | 2    | tripleo     | false    |             |
| 839194d8-5d50-415d-8466-cf85cf189d22 | available | volE_clone4 | 2    | tripleo     | false    |             |
| c245e78b-076d-4433-8f7e-74eb74453727 | available | clone_test  | 3    | tripleo     | false    |             |
| ca84ca86-bbcd-4f1d-935d-b44f364769b8 | available | volD_clone3 | 2    | tripleo     | false    |             |
| d304aa90-0409-4508-bf81-306a9f686f90 | available | volC_clone2 | 2    | tripleo     | false    |             |
| fbdd6c64-c121-45ab-b63e-00eed3996061 | available | volF_clone5 | 2    | tripleo     | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
```

Good to verify.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (openstack-cinder bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5579
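The behaviour verified above can be illustrated with a toy model (this is not the cinder RBD driver implementation, only a sketch of the idea): each clone records its parent, and any clone whose chain would exceed the configured depth is flattened, i.e. detached from its parent, so chains never grow past the limit and cloning cannot fail on depth.

```python
# Toy model of depth-limited cloning -- illustrative only, not the
# actual cinder RBD driver code (which tracks parent images via
# librbd and flattens them on the Ceph side).

class Volume:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # the volume this one was cloned from


def clone_depth(volume):
    """Count the clone links between this volume and its base volume."""
    depth = 0
    while volume.parent is not None:
        depth += 1
        volume = volume.parent
    return depth


def clone(source, name, max_clone_depth=1):
    """Clone `source`; flatten the result if the chain is too deep."""
    new = Volume(name, parent=source)
    if clone_depth(new) > max_clone_depth:
        # Flatten: in a real backend the data would be copied so the
        # image no longer depends on its parent; here we just sever
        # the parent link.
        new.parent = None
    return new


# Mirrors the verification above: volA, then clones of clones,
# with rbd_max_clone_depth=1.
vol_a = Volume("volA")
vol_b = clone(vol_a, "volB_clone1")  # depth 1: kept as a clone
vol_c = clone(vol_b, "volC_clone2")  # would be depth 2: flattened
```

With a maximum depth of 1, cloning an already-cloned volume still succeeds but yields a flattened, independent volume, which matches the error-free behaviour confirmed above.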