Bug 1649845
| Summary: | cinder backup fails while backend is in a different availability zone | |||
|---|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Avi Avraham <aavraham> | |
| Component: | openstack-cinder | Assignee: | Cinder Bugs List <cinder-bugs> | |
| Status: | CLOSED WORKSFORME | QA Contact: | Avi Avraham <aavraham> | |
| Severity: | unspecified | Docs Contact: | Kim Nylander <knylande> | |
| Priority: | unspecified | |||
| Version: | 14.0 (Rocky) | CC: | aavraham, abishop, tenobreg, tshefi | |
| Target Milestone: | --- | |||
| Target Release: | --- | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | Doc Type: | If docs needed, set a value | ||
| Doc Text: | Story Points: | --- | ||
| Clone Of: | ||||
| : | 1650290 (view as bug list) | Environment: | ||
| Last Closed: | 2018-11-22 12:30:08 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | Category: | --- | ||
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | ||||
| Bug Blocks: | 1554013, 1650290 | |||
Description
Avi Avraham
2018-11-14 15:56:25 UTC
First let's make sure the backup service is up.
- What does "openstack volume service list" report?
- What backup backend are you using?

+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                        | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup    | controller-0                | nova | enabled | up    | 2018-11-15T15:25:34.000000 | -               |
| cinder-backup    | controller-1                | nova | enabled | down  | 2018-11-14T15:08:56.000000 | -               |
| cinder-scheduler | controller-0                | nova | enabled | up    | 2018-11-15T15:25:32.000000 | -               |
| cinder-scheduler | controller-1                | nova | enabled | up    | 2018-11-15T15:25:30.000000 | -               |
| cinder-scheduler | controller-2                | nova | enabled | up    | 2018-11-15T15:25:34.000000 | -               |
| cinder-volume    | controller-0@kaminario      | beta | enabled | up    | 2018-11-15T15:25:31.000000 | -               |
| cinder-volume    | controller-0@tripleo_iscsi  | nova | enabled | down  | 2018-11-14T06:43:04.000000 | -               |
| cinder-volume    | controller-0@tripleo_netapp | nova | enabled | down  | 2018-11-13T14:29:11.000000 | -               |
| cinder-volume    | hostgroup@tripleo_ceph      | nova | enabled | down  | 2018-11-14T05:34:05.000000 | -               |
| cinder-volume    | hostgroup@tripleo_netapp    | alfa | enabled | up    | 2018-11-15T15:25:34.000000 | -               |
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+

Ceph is our backup backend for this setup, and we have netapp, ceph, and kaminario as volume backends. I have another setup facing the same error with netapp NFS and LVM as backends and swift as the backup backend.

I verified the feature works, but there are a couple of tricky bits that may
not be documented. The main thing is the feature only works with cinder
API version 3.51 (or later), meaning it requires a microversion.
Unfortunately, due to other factors related to the openstack CLI, TripleO sets
OS_VOLUME_API_VERSION=3 (no microversion), and that's why the cross-AZ
backup didn't work.
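In practice that means overriding what the stock rc file exports before trying the cross-AZ backup; for example (the "3" shown is what a TripleO overcloudrc typically sets, quoted here as an illustration rather than from this setup):
$ echo $OS_VOLUME_API_VERSION
3
$ export OS_VOLUME_API_VERSION=3.51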
The other thing to know is you have to specify the backup AZ. Compare this
help text:
$ cinder help backup-create
usage: cinder backup-create [--container <container>] [--name <name>]
[--description <description>] [--incremental]
[--force] [--snapshot-id <snapshot-id>]
<volume>
Creates a volume backup.
Positional arguments:
<volume> Name or ID of volume to backup.
Optional arguments:
--container <container>
Backup container name. Default=None.
--name <name> Backup name. Default=None.
--description <description>
Backup description. Default=None.
--incremental Incremental backup. Default=False.
--force Allows or disallows backup of a volume when the volume
is attached to an instance. If set to True, backs up
the volume whether its status is "available" or "in-
use". The backup of an "in-use" volume means your data
is crash consistent. Default=False.
--snapshot-id <snapshot-id>
ID of snapshot to backup. Default=None.
Now, with the microversion:
$ OS_VOLUME_API_VERSION=3.51 cinder help backup-create
usage: cinder backup-create [--container <container>] [--name <name>]
[--description <description>] [--incremental]
[--force] [--snapshot-id <snapshot-id>]
[--metadata [<key=value> [<key=value> ...]]]
[--availability-zone AVAILABILITY_ZONE]
<volume>
Creates a volume backup.
Positional arguments:
<volume> Name or ID of volume to backup.
Optional arguments:
--container <container>
Backup container name. Default=None.
--name <name> Backup name. Default=None.
--description <description>
Backup description. Default=None.
--incremental Incremental backup. Default=False.
--force Allows or disallows backup of a volume when the volume
is attached to an instance. If set to True, backs up
the volume whether its status is "available" or "in-
use". The backup of an "in-use" volume means your data
is crash consistent. Default=False.
--snapshot-id <snapshot-id>
ID of snapshot to backup. Default=None.
--metadata [<key=value> [<key=value> ...]]
Metadata key and value pairs. Default=None. (Supported
by API version 3.43 and later)
--availability-zone AVAILABILITY_ZONE
AZ where the backup should be stored, by default it
will be the same as the source. (Supported by API
version 3.51 and later)
Note the "--availability-zone" parameter. That needs to specify the AZ where
the backup service is running. So, given a volume in AZ "x" and a backup
service running in AZ "y", this command should work:
$ OS_VOLUME_API_VERSION=3.51 cinder backup-create vol --availability-zone y
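To double-check where the backup landed (not captured in this report, but assuming the backup-show detail output includes an availability_zone field, as volume show does):
$ OS_VOLUME_API_VERSION=3.51 cinder backup-show <backup-id>
The availability_zone in the detail output should be "y", not the source volume's "x".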
The final thing to note is that the 'openstack' client still does not support
microversions:
$ OS_VOLUME_API_VERSION=3.51 openstack volume backup create ...
volume version 3.51 is not in supported versions: 1, 2, 3
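So until the openstack client grows microversion support, stick with the cinder client for cross-AZ backups; its --os-volume-api-version flag is an alternative to exporting the variable, assuming the deployed python-cinderclient supports it:
$ cinder --os-volume-api-version 3.51 backup-create vol --availability-zone y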
Alan's steps allow us to verify the RFE. The required steps need to be added to the documentation.

*** Bug 1715469 has been marked as a duplicate of this bug. ***