Bug 1650290 - [Docs][Cinder] Document cinder backup to availability zone
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: documentation
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: ---
Assignee: RHOS Documentation Team
QA Contact: RHOS Documentation Team
URL:
Whiteboard:
Depends On: 1649845
Blocks: 1554013
 
Reported: 2018-11-15 18:17 UTC by Kim Nylander
Modified: 2022-08-11 11:17 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1649845
Environment:
Last Closed: 2021-07-06 11:25:55 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker OSP-5890 (last updated 2022-08-11 11:17:09 UTC)

Description Kim Nylander 2018-11-15 18:17:22 UTC
Original engineering BZ: 1554013

+++ This bug was initially created as a clone of Bug #1649845 +++

Description of problem:
While trying to verify RFE 1554013, we faced the following error:
ClientException: Service cinder-backup could not be found. (HTTP 503) (Request-ID: req-cf554ab9-e9bd-452a-8790-61d873afd08f)


Version-Release number of selected component (if applicable):
openstack-cinder-13.0.1-0.20181013185427.31ff628.el7ost.noarch

How reproducible:


Steps to Reproduce:
1. Set up cinder backup.
2. Configure a backend in a different AZ than the backup backend (see the cinder.conf sketch after these steps).
3. Create a volume from an image.
4. Run the following command:
openstack --debug volume backup create --name vol-cirros-backup vol-cirros
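
For step 2, a minimal cinder.conf sketch of what "a different AZ" means (the backend section and AZ names match the service list later in this bug; on a TripleO deployment these values are normally set through director parameters rather than edited by hand):

[DEFAULT]
# AZ for services without a per-backend override, including cinder-backup
storage_availability_zone = nova

[tripleo_netapp]
# Put this volume backend in its own AZ, distinct from the backup service's
backend_availability_zone = alfa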

Actual results:
The backup failed with the following error:
ClientException: Service cinder-backup could not be found. (HTTP 503)

Expected results:
The volume is backed up successfully.


--- Additional comment from Alan Bishop on 2018-11-14 11:50:33 EST ---

First let's make sure the backup service is up.
- What does "openstack volume service list" report?
- What backup backend are you using?

--- Additional comment from Avi Avraham on 2018-11-15 10:26:26 EST ---

+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                        | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup    | controller-0                | nova | enabled | up    | 2018-11-15T15:25:34.000000 | -               |
| cinder-backup    | controller-1                | nova | enabled | down  | 2018-11-14T15:08:56.000000 | -               |
| cinder-scheduler | controller-0                | nova | enabled | up    | 2018-11-15T15:25:32.000000 | -               |
| cinder-scheduler | controller-1                | nova | enabled | up    | 2018-11-15T15:25:30.000000 | -               |
| cinder-scheduler | controller-2                | nova | enabled | up    | 2018-11-15T15:25:34.000000 | -               |
| cinder-volume    | controller-0@kaminario      | beta | enabled | up    | 2018-11-15T15:25:31.000000 | -               |
| cinder-volume    | controller-0@tripleo_iscsi  | nova | enabled | down  | 2018-11-14T06:43:04.000000 | -               |
| cinder-volume    | controller-0@tripleo_netapp | nova | enabled | down  | 2018-11-13T14:29:11.000000 | -               |
| cinder-volume    | hostgroup@tripleo_ceph      | nova | enabled | down  | 2018-11-14T05:34:05.000000 | -               |
| cinder-volume    | hostgroup@tripleo_netapp    | alfa | enabled | up    | 2018-11-15T15:25:34.000000 | -               |
+------------------+-----------------------------+------+---------+-------+----------------------------+-----------------+
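
Note the Zone column: the only cinder-backup service that is up runs in AZ "nova", while the volume backends are spread across "nova", "alfa", and "beta". Backing up a volume that lives on the "alfa" or "beta" backends is the cross-AZ case this RFE covers.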

--- Additional comment from Avi Avraham on 2018-11-15 10:32:15 EST ---

Ceph is our backup backend for this setup,
and we have NetApp, Ceph, and Kaminario as volume backends.
I have another setup hitting the same error, with NetApp NFS and LVM as volume backends and Swift as the backup backend.

--- Additional comment from Alan Bishop on 2018-11-15 12:38:22 EST ---

I verified the feature works, but there are a couple of tricky bits that may
not be documented. The main thing is that the feature only works with cinder
API version 3.51 (or later), meaning it requires a microversion.
Unfortunately, due to other factors related to the openstack CLI, TripleO sets
OS_VOLUME_API_VERSION=3 (no microversion), and that's why the cross-AZ
backup didn't work.
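
Side note: the variable does not have to be changed deployment-wide; a one-shot override on the command line works, and the cinder client accepts the same value via a flag (a sketch, assuming python-cinderclient's --os-volume-api-version option):

$ echo $OS_VOLUME_API_VERSION
3
$ cinder --os-volume-api-version 3.51 help backup-create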

The other thing to know is you have to specify the backup AZ. Compare this
help text:

$ cinder help backup-create
usage: cinder backup-create [--container <container>] [--name <name>]
                            [--description <description>] [--incremental]
                            [--force] [--snapshot-id <snapshot-id>]
                            <volume>

Creates a volume backup.

Positional arguments:
  <volume>              Name or ID of volume to backup.

Optional arguments:
  --container <container>
                        Backup container name. Default=None.
  --name <name>         Backup name. Default=None.
  --description <description>
                        Backup description. Default=None.
  --incremental         Incremental backup. Default=False.
  --force               Allows or disallows backup of a volume when the volume
                        is attached to an instance. If set to True, backs up
                        the volume whether its status is "available" or "in-
                        use". The backup of an "in-use" volume means your data
                        is crash consistent. Default=False.
  --snapshot-id <snapshot-id>
                        ID of snapshot to backup. Default=None.


Now, with the microversion:

$ OS_VOLUME_API_VERSION=3.51 cinder help backup-create
usage: cinder backup-create [--container <container>] [--name <name>]
                            [--description <description>] [--incremental]
                            [--force] [--snapshot-id <snapshot-id>]
                            [--metadata [<key=value> [<key=value> ...]]]
                            [--availability-zone AVAILABILITY_ZONE]
                            <volume>

Creates a volume backup.

Positional arguments:
  <volume>              Name or ID of volume to backup.

Optional arguments:
  --container <container>
                        Backup container name. Default=None.
  --name <name>         Backup name. Default=None.
  --description <description>
                        Backup description. Default=None.
  --incremental         Incremental backup. Default=False.
  --force               Allows or disallows backup of a volume when the volume
                        is attached to an instance. If set to True, backs up
                        the volume whether its status is "available" or "in-
                        use". The backup of an "in-use" volume means your data
                        is crash consistent. Default=False.
  --snapshot-id <snapshot-id>
                        ID of snapshot to backup. Default=None.
  --metadata [<key=value> [<key=value> ...]]
                        Metadata key and value pairs. Default=None. (Supported
                        by API version 3.43 and later)
  --availability-zone AVAILABILITY_ZONE
                        AZ where the backup should be stored, by default it
                        will be the same as the source. (Supported by API
                        version 3.51 and later)

Note the "--availability-zone" parameter. That needs to specify the AZ where
the backup service is running. So, given a volume in AZ "x" and a backup
service running in AZ "y", this command should work:

$ OS_VOLUME_API_VERSION=3.51 cinder backup-create vol --availability-zone y
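
Or equivalently, using the client flag instead of the environment variable (same assumption as above about python-cinderclient's --os-volume-api-version option):

$ cinder --os-volume-api-version 3.51 backup-create vol --availability-zone y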

The final thing to note is that the 'openstack' client still does not support
microversions:

$ OS_VOLUME_API_VERSION=3.51 openstack volume backup create ...
volume version 3.51 is not in supported versions: 1, 2, 3
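
Until the openstack client supports volume microversions, the cinder client invocations above (with OS_VOLUME_API_VERSION=3.51 or --os-volume-api-version 3.51) are the way to drive this feature from the command line.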

