Bug 1503352
| Summary: | Cinder backup on in-use volume from Ceph backend failure | ||
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | James Biao <jbiao> |
| Component: | python-os-brick | Assignee: | Gorka Eguileor <geguileo> |
| Status: | CLOSED ERRATA | QA Contact: | Avi Avraham <aavraham> |
| Severity: | medium | Docs Contact: | Don Domingo <ddomingo> |
| Priority: | medium | ||
| Version: | 10.0 (Newton) | CC: | apevec, dvd, jschluet, lhh, lkuchlan, lyzhou2, mmethot, scohen, srevivo, tshefi |
| Target Milestone: | beta | Keywords: | Triaged |
| Target Release: | 13.0 (Queens) | Flags: | lkuchlan: automate_bug+ |
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | python-os-brick-2.3.0-0.20180211233135.7dd2076.el7ost | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2018-06-27 13:37:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1375207, 1710946, 1790752 | ||
| Bug Blocks: | |||
|
Description
James Biao
2017-10-17 23:34:15 UTC
On OSP 11 I was able to reproduce it as follows. In general, the "incremental" backup is not really an increment but a full copy of the volume.

1. This is the volume to be backed up:

```
[stack@instack ~]$ openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 1d12eb29-08bd-4457-98fe-0debf8dbcf59 | backupvol    | available | 10   |             |
+--------------------------------------+--------------+-----------+------+-------------+
```

2. Attach it to my instance:

```
[stack@instack ~]$ openstack server add volume rhel7-volume-backup backupvol
[stack@instack ~]$ openstack volume list
+--------------------------------------+--------------+--------+------+---------------------------------------------+
| ID                                   | Display Name | Status | Size | Attached to                                 |
+--------------------------------------+--------------+--------+------+---------------------------------------------+
| 1d12eb29-08bd-4457-98fe-0debf8dbcf59 | backupvol    | in-use | 10   | Attached to rhel7-volume-backup on /dev/vdd |
+--------------------------------------+--------------+--------+------+---------------------------------------------+
```

3. Log in to the instance, mkfs and mount the volume, and copy a file to the mount directory.

4. Create a backup:

```
[stack@instack ~]$ cinder backup-create backupvol --force
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | 9eed827c-4100-4b20-b520-8bab2d769521 |
| name      | None                                 |
| volume_id | 1d12eb29-08bd-4457-98fe-0debf8dbcf59 |
+-----------+--------------------------------------+
```
5. Check on the Ceph side:

```
[root@overcloud-controller-0 ~]# rbd -p backups ls
volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.9eed827c-4100-4b20-b520-8bab2d769521
[root@overcloud-controller-0 ~]# rbd -p backups info volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.9eed827c-4100-4b20-b520-8bab2d769521
rbd image 'volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.9eed827c-4100-4b20-b520-8bab2d769521':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.26e4977099a5b
        format: 2
        features: layering, striping
        flags:
        stripe unit: 4096 kB
        stripe count: 1
```

6. Log back in to the instance and add another file to the mount directory.

7. Create an incremental backup:

```
[stack@instack ~]$ cinder backup-create backupvol --force --incremental
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | 2b72b678-4f5b-4c0c-a0e5-3dcb3c487210 |
| name      | None                                 |
| volume_id | 1d12eb29-08bd-4457-98fe-0debf8dbcf59 |
+-----------+--------------------------------------+
```

8. Check on the Ceph side; the "incremental" backup is 10 GB in size:

```
[root@overcloud-controller-0 ~]# rbd -p backups ls
volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.2b72b678-4f5b-4c0c-a0e5-3dcb3c487210
volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.9eed827c-4100-4b20-b520-8bab2d769521
[root@overcloud-controller-0 ~]# rbd -p backups info volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.2b72b678-4f5b-4c0c-a0e5-3dcb3c487210
rbd image 'volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.2b72b678-4f5b-4c0c-a0e5-3dcb3c487210':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.26e82583ef15d
        format: 2
        features: layering, striping
        flags:
        stripe unit: 4096 kB
        stripe count: 1
```
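Step 8 shows the "incremental" backup occupying the full 10 GB. This is consistent with how the Ceph backup driver chooses its strategy: a differential backup (via `rbd export-diff`) is only possible when the source volume is presented as a native RBD image and there is a base snapshot to diff against; an in-use volume attached through os-brick can appear as a local block device instead, so the driver falls back to a full copy. A simplified, hypothetical sketch of that decision (not the actual Cinder code):

```python
def choose_backup_strategy(source_is_rbd, base_snapshots):
    """Decide between a differential and a full backup.

    Simplified model of the choice the Ceph backup driver makes:
    a differential (incremental) backup is only possible when the
    source is a native RBD image *and* a base backup snapshot exists
    to diff against; otherwise the whole volume is copied.
    (Illustrative only; function name and shape are assumptions.)
    """
    if source_is_rbd and base_snapshots:
        return "incremental"  # would use: rbd export-diff --from-snap <base>
    return "full"             # full image copy, even when --incremental was requested


# The failure mode in this report: the in-use volume is attached as a
# local block device (not an RBD image), so every backup degrades to "full".
print(choose_backup_strategy(False, ["backup.snap.1508280000"]))  # → full
print(choose_backup_strategy(True, []))                           # → full
print(choose_backup_strategy(True, ["backup.snap.1508280000"]))   # → incremental
```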
9. Neither of the backup RBDs has a snapshot:

```
[root@overcloud-controller-0 ~]# rbd -p backups snap ls volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.9eed827c-4100-4b20-b520-8bab2d769521
[root@overcloud-controller-0 ~]# rbd -p backups snap ls volume-1d12eb29-08bd-4457-98fe-0debf8dbcf59.backup.2b72b678-4f5b-4c0c-a0e5-3dcb3c487210
```

Tested using: python2-os-brick-2.3.1-1.el7ost.noarch

Automation result: https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/DFG-storage-qe-13_director-rhel-virthost-3cont_2comp_1ceph-ipv4-vxlan-qe-storage-tests/5/testReport/tempest_storage_plugin.tests.scenario.test_volume_backup/TestVolumeBackup/Second_tempest_run___test_volume_backup_increment_restore_compute_id_2ce5e55c_4085_43c1_98c6_582525334ad7_image_volume_/

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086

@"jbiao"<jbiao> Hi, I am hitting the same issue as well. How can it be resolved? Thanks!
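The empty `snap ls` output is the telltale sign: after a true differential backup, the driver leaves a snapshot on the backup image so the next run has a base to diff from, and its absence means every backup was a full copy. A small illustrative helper (hypothetical, not part of Cinder or os-brick) that makes that check from the text output of `rbd -p backups snap ls <image>`:

```python
def backups_are_differential(snap_ls_output):
    """Report whether a backup image holds any snapshots, given the
    text output of `rbd -p backups snap ls <image>`.

    An empty listing (as seen in this bug) means no base snapshot was
    left behind, i.e. every backup of the volume was a full copy.
    (Illustrative helper; the parsing is an assumption about the
    plain-text `snap ls` format, which prints a SNAPID header row.)
    """
    lines = [line for line in snap_ls_output.strip().splitlines()
             if line.strip() and not line.lstrip().startswith("SNAPID")]
    return len(lines) > 0


# Reproducer output in this bug: both `snap ls` commands printed nothing.
print(backups_are_differential(""))  # → False
print(backups_are_differential(
    "SNAPID NAME                                 SIZE\n"
    "     4 backup.9eed827c.snap.1508280855      10240 MB\n"))  # → True
```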