Bug 1368680 - Snapshot data is not preserved after importing a backup on another OpenStack setup.
Summary: Snapshot data is not preserved after importing a backup on another OpenStack setup.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 9.0 (Mitaka)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Eric Harney
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-08-20 15:36 UTC by Pratik Pravin Bandarkar
Modified: 2017-10-26 12:57 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-25 13:33:56 UTC
Target Upstream Version:


Attachments

Description Pratik Pravin Bandarkar 2016-08-20 15:36:10 UTC
Description of problem:

Snapshot data is not preserved after importing the backup on another OpenStack setup.

* Reproducer steps:

- create volume 

[root@dhcp200-208 ~(keystone_admin)]# cinder create 1 --name volume_pbandark  --volume-type lvm
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2016-08-20T22:50:23.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | ba87ca45-6711-4929-911a-c19fabf0e262 |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |           volume_pbandark            |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   84df9e408f454a74ac746c9e3af0abd6   |
|       replication_status       |               disabled               |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   aa2bc39017904b7ea83b479af68ca1c7   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+

- attach it to any instance and write some data. 
- take snapshot of instance (a sketch of this step appears after the backup-create output below)
- again write some data.
- detach the volume
- take backup with snapshot:
[root@dhcp200-208 ~(keystone_admin)]# cinder backup-create ba87ca45-6711-4929-911a-c19fabf0e262 --snapshot-id f4659f2e-6cfb-4814-8733-58fe995d189e
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 7fc9d944-5fae-4777-ab05-256a7e36c169 |
|    name   |                 None                 |
| volume_id | ba87ca45-6711-4929-911a-c19fabf0e262 |
+-----------+--------------------------------------+
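
The "take snapshot of instance" step above does not show a command; as noted in comment 7, it should be a Cinder snapshot of the volume. A sketch of that step (the snapshot name is illustrative, --force is needed because the volume is still attached at that point, and the resulting ID is the f4659f2e-... snapshot used in the backup-create call above):

#cinder snapshot-create ba87ca45-6711-4929-911a-c19fabf0e262 --force True --name volume_pbandark_snap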

[root@dhcp200-208 ~(keystone_admin)]# cinder backup-show 7fc9d944-5fae-4777-ab05-256a7e36c169
+-----------------------+--------------------------------------------+
|        Property       |                   Value                    |
+-----------------------+--------------------------------------------+
|   availability_zone   |                    nova                    |
|       container       | 7f/c9/7fc9d944-5fae-4777-ab05-256a7e36c169 |
|       created_at      |         2016-08-20T22:55:29.000000         |
|     data_timestamp    |         2016-08-20T22:52:37.000000         |
|      description      |                    None                    |
|      fail_reason      |                    None                    |
| has_dependent_backups |                   False                    |
|           id          |    7fc9d944-5fae-4777-ab05-256a7e36c169    |
|     is_incremental    |                   False                    |
|          name         |                    None                    |
|      object_count     |                     65                     |
|          size         |                     1                      |
|      snapshot_id      |    f4659f2e-6cfb-4814-8733-58fe995d189e    | <==
|         status        |                 available                  |
|       updated_at      |         2016-08-20T22:56:20.000000         |
|       volume_id       |    ba87ca45-6711-4929-911a-c19fabf0e262    | <==
+-----------------------+--------------------------------------------+

- export the backup:
[root@dhcp200-208 ~(keystone_admin)]# cinder backup-export 7fc9d944-5fae-4777-ab05-256a7e36c169

+++

* On the other OpenStack setup:
- Import backup:
[root@dell-fc430-1 ~(keystone_admin)]# cinder backup-import cinder.backup.drivers.nfs $(tr -d '\n' < metadata.txt)
+----------+--------------------------------------+
| Property |                Value                 |
+----------+--------------------------------------+
|    id    | 7fc9d944-5fae-4777-ab05-256a7e36c169 |
|   name   |                 None                 |
+----------+--------------------------------------+

where metadata.txt contains the backup_url returned by cinder backup-export on the source setup:
-----
{"status": "available", "temp_snapshot_id": null, "display_name": null, "availability_zone": "nova", "deleted": false, "volume_id": "ba87ca45-6711-4929-911a-c19fabf0e262", "restore_volume_id": null, "updated_at": "2016-08-20T22:56:20Z", "host": "dhcp200-208.gsslab.pnq.redhat.com", "snapshot_id": "f4659f2e-6cfb-4814-8733-58fe995d189e", "user_id": "aa2bc39017904b7ea83b479af68ca1c7", "service_metadata": "backup", "id": "7fc9d944-5fae-4777-ab05-256a7e36c169", "size": 1, "object_count": 65, "deleted_at": null, "container": "7f/c9/7fc9d944-5fae-4777-ab05-256a7e36c169", "service": "cinder.backup.drivers.nfs", "driver_info": {}, "created_at": "2016-08-20T22:55:29Z", "display_description": null, "data_timestamp": "2016-08-20T22:52:37Z", "parent_id": null, "num_dependent_backups": 0, "fail_reason": null, "project_id": "84df9e408f454a74ac746c9e3af0abd6", "temp_volume_id": null}
-----
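
A quick sanity check of the exported record before importing it elsewhere (a sketch; it assumes metadata.txt holds the decoded JSON shown above - the backup_url string emitted by cinder backup-export may be base64-encoded, in which case decode it first, e.g. with "base64 -d metadata.txt"):

#python -m json.tool < metadata.txt | egrep '"(id|volume_id|snapshot_id|data_timestamp|service)"'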

- create volume:
[root@dell-fc430-1 ~(keystone_admin)]# cinder create --name pbandark_volume_imported_with_snap 2 

- restore backup:
[root@dell-fc430-1 ~(keystone_admin)]# cinder backup-restore --volume 4c25e63f-893e-4c9e-bda6-a4c27cbb8e25 7fc9d944-5fae-4777-ab05-256a7e36c169
+-------------+--------------------------------------+
|   Property  |                Value                 |
+-------------+--------------------------------------+
|  backup_id  | 7fc9d944-5fae-4777-ab05-256a7e36c169 |
|  volume_id  | 4c25e63f-893e-4c9e-bda6-a4c27cbb8e25 |
| volume_name |  pbandark_volume_imported_with_snap  |
+-------------+--------------------------------------+

- attach the volume to an instance; cinder list on the target setup now shows it in-use:
+--------------------------------------+-----------+------------------------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |                Name                | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------------------------------------+------+-------------+----------+--------------------------------------+
| 4c25e63f-893e-4c9e-bda6-a4c27cbb8e25 |   in-use  | pbandark_volume_imported_with_snap |  2   |      -      |  false   | 91fb0e60-7df6-4867-aaa5-6ceb3289c152 |
+--------------------------------------+-----------+------------------------------------+------+-------------+----------+--------------------------------------+
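
The attach command itself is not shown in the report; with a Mitaka-era client it would be roughly the following (the instance ID is the one from the listing above, the device name is a guess):

#nova volume-attach 91fb0e60-7df6-4867-aaa5-6ceb3289c152 4c25e63f-893e-4c9e-bda6-a4c27cbb8e25 /dev/vdb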


- The data written on the source OpenStack setup after the snapshot was taken is not available in the restored volume.
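
One concrete way to verify this observation from inside the instance (device and file names below are illustrative; see comment 7 for why the result is expected for a backup created from a snapshot):

(inside the instance, before taking the Cinder snapshot)
#mkfs.ext4 /dev/vdb && mount /dev/vdb /mnt
#echo before-snapshot > /mnt/marker1; sync
(take the Cinder snapshot of the volume, then write again)
#echo after-snapshot > /mnt/marker2; sync
(after restoring the backup on the other setup, attach and mount the volume)
#ls /mnt      <== only marker1 is present; marker2 was written after the snapshot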

Version-Release number of selected component (if applicable):
# rpm -qa|grep -i cinder
python-cinder-8.0.0-5.el7ost.noarch
openstack-cinder-8.0.0-5.el7ost.noarch
python-cinderclient-1.6.0-1.el7ost.noarch


How reproducible:
100%

Steps to Reproduce:
See the reproducer steps in the description above.

Actual results:
Snapshot data is not preserved after importing the backup on another OpenStack setup.

Expected results:
The data written after the snapshot should also be present in the restored volume.

Additional info:

Comment 5 Red Hat Bugzilla Rules Engine 2017-10-04 13:51:30 UTC
This bugzilla has been removed from the release and needs to be reviewed for Triaging and release planning for an appropriate Target Milestone.

Comment 7 Tzach Shefi 2017-10-26 12:57:53 UTC
I've also tested this on OSP 12 with the same result.
But as Eric mentioned above, this isn't a bug: the backup is taken from the snapshot's point in time, so data written after the snapshot is not expected to be in it.
Just adding a few missing steps for future reference.
----------------

"- take snapshot of instance"
Typo: I assume he meant a Cinder snapshot of the volume, not a snapshot of the instance.

The Cinder backup drivers on both OpenStack setups must use the same shared storage (NFS in his case and mine). This wasn't mentioned in the bug, but without it only the backup metadata, not the volume data, would be exported/imported.

In cinder.conf on both OpenStack setups, set these two options:
backup_driver = cinder.backup.drivers.nfs
backup_share = HOST:EXPORT_PATH
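
For example (the hostname and export path are placeholders; what matters is that both clouds point at the very same NFS export):

backup_driver = cinder.backup.drivers.nfs
backup_share = nfs-server.example.com:/export/cinder_backups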

Start the backup service (needed on both setups):
#systemctl enable --now openstack-cinder-backup.service
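
To confirm the backup service is actually up before exporting/importing (a sanity check, not part of the original steps):

#systemctl status openstack-cinder-backup.service
#cinder service-list      <== cinder-backup should show State "up"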


To export the backup I used:
#cinder backup-export f43efd20-44fa-4040-9fb2-ee107e46b4d1 | sed -n '/backup_url/,$ s/|.*|  *\(.*\) |/\1/p' > metadata.txt

Copy metadata.txt over to the target OpenStack.

Import the backup via:
#cinder backup-import cinder.backup.drivers.nfs $(tr -d '\n' < metadata.txt)
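
After the import, restoring follows the same pattern as in the description above (the volume name and size here are illustrative; the target volume just needs to be at least as large as the backup):

#cinder backup-list
#cinder create --name restored_volume 1
#cinder backup-restore --volume <new-volume-id> 7fc9d944-5fae-4777-ab05-256a7e36c169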

