Bug 1740560 - Cinder retype from non-type to new type on NFS deletes the Volume
Summary: Cinder retype from non-type to new type on NFS deletes the Volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: All
OS: Linux
Priority: high
Severity: urgent
Target Milestone: z9
Target Release: 13.0 (Queens)
Assignee: Alan Bishop
QA Contact: Tzach Shefi
Docs Contact: Chuck Copello
URL:
Whiteboard:
Duplicates: 1784020
Depends On: 1749491 1749493
Blocks: 1752560
 
Reported: 2019-08-13 09:16 UTC by Johannes Beisiegel
Modified: 2023-12-15 16:42 UTC
CC List: 3 users

Fixed In Version: openstack-cinder-12.0.7-6.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, the Cinder NFS volume migration process renamed the volume file on the destination NFS server so that it matched the name on the source NFS server, and then deleted the source volume. However, when you retype an NFS volume using migration, the source and destination back ends might be on the same NFS server, and in that case the rename and subsequent deletion removed the NFS volume file. With this update, the volume file is renamed only when the source and destination files reside on different NFS servers, so retyping an NFS volume using migration functions correctly.
Clone Of:
Clones: 1749491 1752560
Environment:
Last Closed: 2019-11-07 13:59:50 UTC
Target Upstream Version:
Embargoed:


Attachments
Cinder Volume Log (controller) (2.46 MB, text/plain)
2019-08-21 08:39 UTC, Johannes Beisiegel


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1798468 0 None None None 2019-08-23 17:57:53 UTC
OpenStack gerrit 678278 0 None MERGED Fix NFS volume retype with migrate 2020-06-26 19:36:42 UTC
Red Hat Issue Tracker OSP-30792 0 None None None 2023-12-15 16:42:43 UTC
Red Hat Product Errata RHBA-2019:3800 0 None None None 2019-11-07 13:59:56 UTC

Description Johannes Beisiegel 2019-08-13 09:16:46 UTC
Description of problem:
Migrating a volume with no volume type (type "None") to a volume type on the same backend (NFS) ends up deleting the volume.

Version-Release number of selected component (if applicable):
RHOSP 13.0.7
rhosp13/openstack-cinder-volume 13.0-79

How reproducible:
Always

Steps to Reproduce:
1.
openstack volume create --size 5 test-vol01
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | nova                                                             |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2019-08-13T08:58:15.000000                                       |
| description         | None                                                             |
| encrypted           | False                                                            |
| id                  | ef237fb0-ea21-4881-8042-efcfbb308e9c                             |
| multiattach         | False                                                            |
| name                | test-vol01                                                       |
| properties          |                                                                  |
| replication_status  | None                                                             |
| size                | 5                                                                |
| snapshot_id         | None                                                             |
| source_volid        | None                                                             |
| status              | creating                                                         |
| type                | None                                                             |
| updated_at          | None                                                             |
| user_id             | 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a |
+---------------------+------------------------------------------------------------------+

2.
openstack volume type list
+--------------------------------------+--------+-----------+
| ID                                   | Name   | Is Public |
+--------------------------------------+--------+-----------+
| 8645458e-062e-4321-9103-cd656ca0cee6 | Legacy | True      |
+--------------------------------------+--------+-----------+
openstack volume type show Legacy
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| access_project_ids | None                                 |
| description        | Default Storage                      |
| id                 | 8645458e-062e-4321-9103-cd656ca0cee6 |
| is_public          | True                                 |
| name               | Legacy                               |
| properties         | volume_backend_name='tripleo_nfs'    |
| qos_specs_id       | None                                 |
+--------------------+--------------------------------------+

3.
openstack volume set --type Legacy test-vol01 
cinder-volume.log: ERROR oslo_messaging.rpc.server VolumeMigrationFailed: Volume migration failed: Retype requires migration but is not allowed.

4.
openstack volume set  --retype-policy on-demand --type Legacy test-vol01
openstack volume show test-vol01
+------------------------------+------------------------------------------------------------------+
| Field                        | Value                                                            |
+------------------------------+------------------------------------------------------------------+
| attachments                  | []                                                               |
| availability_zone            | nova                                                             |
| bootable                     | false                                                            |
| consistencygroup_id          | None                                                             |
| created_at                   | 2019-08-13T08:58:15.000000                                       |
| description                  | None                                                             |
| encrypted                    | False                                                            |
| id                           | ef237fb0-ea21-4881-8042-efcfbb308e9c                             |
| multiattach                  | False                                                            |
| name                         | test-vol01                                                       |
| os-vol-tenant-attr:tenant_id | dbe2fb6b113b418da018420b7bc88240                                 |
| properties                   |                                                                  |
| replication_status           | None                                                             |
| size                         | 5                                                                |
| snapshot_id                  | None                                                             |
| source_volid                 | None                                                             |
| status                       | available                                                        |
| type                         | Legacy                                                           |
| updated_at                   | 2019-08-13T09:02:25.000000                                       |
| user_id                      | 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a |
+------------------------------+------------------------------------------------------------------+
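
Note on steps 3 and 4: the error in step 3 means the retype requires a volume migration, which the default retype policy ("never") does not allow; --retype-policy on-demand in step 4 lets Cinder migrate the volume as part of the retype. The equivalent command with the cinder client (same volume and type names) should be roughly:

cinder retype --migration-policy on-demand test-vol01 Legacy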

Actual results:
The volume is copied to a new ID, but Cinder ends up deleting both the copy and the original on the backend.
The volume then still shows up in Cinder, but any usage fails (attaching, snapshots, etc.).

openstack volume snapshot create --volume test-vol01 test-snap01
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| created_at  | 2019-08-13T09:05:16.337204           |
| description | None                                 |
| id          | a63a55bd-396b-4597-ad37-18bf4e00a220 |
| name        | test-snap01                          |
| properties  |                                      |
| size        | 5                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | ef237fb0-ea21-4881-8042-efcfbb308e9c |
+-------------+--------------------------------------+

openstack volume snapshot show test-snap01
+--------------------------------------------+--------------------------------------+
| Field                                      | Value                                |
+--------------------------------------------+--------------------------------------+
| created_at                                 | 2019-08-13T09:05:16.000000           |
| description                                | None                                 |
| id                                         | a63a55bd-396b-4597-ad37-18bf4e00a220 |
| name                                       | test-snap01                          |
| os-extended-snapshot-attributes:progress   | 0%                                   |
| os-extended-snapshot-attributes:project_id | dbe2fb6b113b418da018420b7bc88240     |
| properties                                 |                                      |
| size                                       | 5                                    |
| status                                     | error                                |
| updated_at                                 | 2019-08-13T09:05:16.000000           |
| volume_id                                  | ef237fb0-ea21-4881-8042-efcfbb308e9c |
+--------------------------------------------+--------------------------------------+

2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server [req-8065c636-b6fe-4d08-80a4-42413a8fa6ee 9190b5e4dd5f0e1f577df88b0ca669d6e2ba87a2d41bbf6628a5eaf792968e7a dbe2fb6b113b418da018420b7bc88240 - f0cab1f633da4ec99b5d2822c5abced5 f0cab1f633da4ec99b5d2822c5abced5] Exception during message handling: ProcessExecutionError: Unexpected error while running command.
Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C qemu-img info --force-share /var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c
Exit code: 1
Stdout: u''
Stderr: u"qemu-img: Could not open '/var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c': Could not open '/var/lib/cinder/mnt/8d3227ac12c6118
a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c': No such file or directory\n"
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/objects/cleanable.py", line 207, in wrapper
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     result = f(*args, **kwargs)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1096, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     snapshot.save()
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 1088, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     model_update = self.driver.create_snapshot(snapshot)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/coordination.py", line 151, in _synchronized
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return f(*a, **k)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 566, in create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return self._create_snapshot(snapshot)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 1412, in _create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     new_snap_path)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 1246, in _do_create_snapshot
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     snapshot.volume.name)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/nfs.py", line 542, in _qemu_img_info
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     run_as_root=True)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/remotefs.py", line 764, in _qemu_img_info_base
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     run_as_root=run_as_root)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/image/image_utils.py", line 111, in qemu_img_info
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     prlimit=QEMU_IMG_LIMITS)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 126, in execute
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     return processutils.execute(*cmd, **kwargs)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server     cmd=sanitized_cmd)
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Command: /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C qemu-img info --force-share /var/
lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-ef237fb0-ea21-4881-8042-efcfbb308e9c
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Exit code: 1
2019-08-13 11:05:16.890 60 ERROR oslo_messaging.rpc.server Stdout: u''

Expected results:
Cinder retype should either just change the volume type of the original volume or keep the copied volume.

Additional info:

cinder.conf backend:

[tripleo_nfs]
backend_host=hostgroup
volume_backend_name=tripleo_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=
nfs_snapshot_support=True
nas_secure_file_operations=False
nas_secure_file_permissions=False
nfs_sparsed_volumes = True
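
For reference, the nfs_shares_config file named above lists one NFS export per line in host:/path form, optionally followed by mount options for that share; the entry below is only an illustration, the actual export used in this environment is not shown in the report:

/etc/cinder/shares-nfs.conf (illustrative entry only):
nfs.example.com:/export/cinder

The NFS driver mounts each share under a directory named after a hash of the share string, which is where paths such as /var/lib/cinder/mnt/8d3227ac12c6118a2bd2f20902437228/volume-<id> in the traceback above come from.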

Comment 1 Alan Bishop 2019-08-20 20:28:07 UTC
Sorry for the delay, we're taking a look at this now.

Comment 2 Alan Bishop 2019-08-20 22:04:12 UTC
I am not able to reproduce the problem (I'm running a newer version of Cinder, but the remotefs driver hasn't changed significantly).

Please reproduce the problem with debug logging enabled, and attach the cinder logs (especially the cinder-volume log). I need to see the logs during the time the volume is migrated to the Legacy backend.

Comment 3 Johannes Beisiegel 2019-08-21 08:39:35 UTC
Created attachment 1606433 [details]
Cinder Volume Log (controller)

Comment 4 Johannes Beisiegel 2019-08-21 08:45:52 UTC
I was able to replicate the issue on our staging environment with debug enabled.
The volume ID is a6277876-c126-4fa8-a3b9-0d99a83c07b5 and the temporary volume ID used for the retype is 7f64ffe7-348f-4dd9-93f0-6be71910bc90.

On the NFS Host I can observe the following:

# ls -lh
total 2
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-r--r-- 1 root  42436   5.0G Aug 21 10:20 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:20 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-rw-rw- 1 root  42436     0B Aug 21 10:20 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 4
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:21 volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90
-rw-rw-rw- 1 root  42436   5.0G Aug 21 10:17 volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5
# ls -lh
total 0
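
For context, the listings above match the failure mode described in the Doc Text: the data is copied into the temporary volume file, the destination file is renamed to the source name (which, on the same share, replaces the original file), and deleting the source volume afterwards removes the only remaining copy. A rough Python illustration of that failing sequence (the mount point and the sequence of calls are simplified assumptions, not the actual driver code):

# Rough illustration of the failing sequence on a single NFS share;
# not Cinder code. The file names are the ones from the listings above.
import os
import shutil

share = "/path/to/nfs/share"  # hypothetical mount point
src = os.path.join(share, "volume-a6277876-c126-4fa8-a3b9-0d99a83c07b5")
dst = os.path.join(share, "volume-7f64ffe7-348f-4dd9-93f0-6be71910bc90")

shutil.copyfile(src, dst)  # data copy (qemu-img convert in the real flow)
os.rename(dst, src)        # rename the destination to the source name;
                           # on the same share this replaces the original
os.remove(src)             # deleting the source volume now removes the
                           # only remaining file, leaving the share empty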

Comment 5 Alan Bishop 2019-08-23 04:30:46 UTC
I am now able to reproduce the problem, and am investigating its cause.

Comment 6 Alan Bishop 2019-08-29 15:03:09 UTC
The fix has merged on master, and upstream backports are underway.
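
For reference, the corrected behavior described in the Doc Text (rename the destination file only when source and destination are on different NFS servers) can be sketched roughly as follows; this is a simplified illustration with hypothetical function and variable names, not the merged change itself:

# Simplified sketch of the corrected behavior; not the actual Cinder patch.
import os
import shutil

def finish_nfs_migration(src_path, dst_path):
    """Give the migrated file the original name, unless both files
    already live on the same NFS share."""
    same_share = (os.path.dirname(os.path.realpath(src_path)) ==
                  os.path.dirname(os.path.realpath(dst_path)))
    if same_share:
        # Same NFS server/share: skip the rename, so that deleting the
        # source volume afterwards cannot remove the migrated data.
        return
    # Different NFS servers: adopt the original file name on the
    # destination share.
    shutil.move(dst_path, os.path.join(os.path.dirname(dst_path),
                                       os.path.basename(src_path)))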

Comment 15 Tzach Shefi 2019-10-22 08:42:58 UTC
Verified on:
openstack-cinder-12.0.8-2.el7ost.noarch


Create a volume without a type (type "None"):

(overcloud) [stack@undercloud-0 ~]$ openstack volume create --size 1 test-vol01
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-10-22T06:57:50.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | a8be099e-3230-40d9-b7f1-608e387c1da4 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test-vol01                           |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |  ----> note type is "None" not set. 
| updated_at          | None                                 |
| user_id             | 74b333bafc2a45ab9d3e50710753f123     |
+---------------------+--------------------------------------+

Below we see the volume has type "None" yet was created on the NFS backend, as required for this bug's scenario:

(overcloud) [stack@undercloud-0 ~]$ openstack volume show a8be099e-3230-40d9-b7f1-608e387c1da4
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-10-22T06:57:50.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | a8be099e-3230-40d9-b7f1-608e387c1da4 |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test-vol01                           |
| os-vol-host-attr:host          | controller-0@nfs#nfs                 |   -> backend is NFS
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 739fb53130c14ca5a6ce887515341000     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | None                                 |   --> Type is "None"
| updated_at                     | 2019-10-22T06:57:53.000000           |
| user_id                        | 74b333bafc2a45ab9d3e50710753f123     |
+--------------------------------+--------------------------------------+

Now let's create a Cinder volume type for NFS:

(overcloud) [stack@undercloud-0 ~]$ openstack volume type create nfs 
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | None                                 |
| id          | 7db47c7f-f42d-4c1c-ab9d-e34229117207 |
| is_public   | True                                 |
| name        | nfs                                  |
+-------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ openstack volume type set nfs --property  volume_backend_name=nfs 

(overcloud) [stack@undercloud-0 ~]$ openstack volume type list 
+--------------------------------------+------+-----------+
| ID                                   | Name | Is Public |
+--------------------------------------+------+-----------+
| 7db47c7f-f42d-4c1c-ab9d-e34229117207 | nfs  | True      |
+--------------------------------------+------+-----------+
(overcloud) [stack@undercloud-0 ~]$ openstack volume type show nfs
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| access_project_ids | None                                 |
| description        | None                                 |
| id                 | 7db47c7f-f42d-4c1c-ab9d-e34229117207 |
| is_public          | True                                 |
| name               | nfs                                  |
| properties         | volume_backend_name='nfs'            |
| qos_specs_id       | None                                 |
+--------------------+--------------------------------------+


Now that we have a volume with a "None" type on NFS, we can try to retype (migrate) it to the nfs type on the same NFS backend.
Before doing that, here is the volume on the NFS server:

[root@cougar11 ins_cinder]# ll | grep a8be
-rw-rw----. 1 root root 1073741824 Oct 22 09:57 volume-a8be099e-3230-40d9-b7f1-608e387c1da4


(overcloud) [stack@undercloud-0 ~]$ openstack volume set  --retype-policy on-demand --type nfs test-vol01
(overcloud) [stack@undercloud-0 ~]$ openstack volume show test-vol01
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-10-22T06:57:50.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | a8be099e-3230-40d9-b7f1-608e387c1da4 |   ---> Same original ID   
| migration_status               | success                              |
| multiattach                    | False                                |
| name                           | test-vol01                           |
| os-vol-host-attr:host          | controller-0@nfs#nfs                 |   ---> Volume remains on NFS backend
| os-vol-mig-status-attr:migstat | success                              |   --> migrated 
| os-vol-mig-status-attr:name_id | 868466f0-0ec2-4eb8-b379-1d30e0baa678 |   --> new name/ID
| os-vol-tenant-attr:tenant_id   | 739fb53130c14ca5a6ce887515341000     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | nfs                                  |   ---> type changed from None to NFS 
| updated_at                     | 2019-10-22T07:10:49.000000           |
| user_id                        | 74b333bafc2a45ab9d3e50710753f123     |
+--------------------------------+--------------------------------------+


On the NFS server side we now have only the new volume:
[root@cougar11 ins_cinder]# ll | grep 8684
-rw-rw----. 1 qemu qemu 1073741824 Oct 22 10:10 volume-868466f0-0ec2-4eb8-b379-1d30e0baa678

[root@cougar11 ins_cinder]# ll | grep a8be
[root@cougar11 ins_cinder]#      
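
The listings above also show how Cinder maps the volume to its file after a migration: the API-visible volume keeps its original ID (a8be099e-...), while os-vol-mig-status-attr:name_id records the name actually used on the back end, and the file name follows Cinder's default volume_name_template of 'volume-%s' applied to name_id when it is set. A minimal illustration (hypothetical helper function):

# Illustration of how the on-disk file name is derived; hypothetical helper.
def volume_file_name(volume_id, name_id=None, template="volume-%s"):
    # name_id is used when set (e.g. after a migration), otherwise the
    # volume's own ID.
    return template % (name_id or volume_id)

print(volume_file_name("a8be099e-3230-40d9-b7f1-608e387c1da4",
                       name_id="868466f0-0ec2-4eb8-b379-1d30e0baa678"))
# -> volume-868466f0-0ec2-4eb8-b379-1d30e0baa678 (matches the ls output above)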


Successfully attach volume to an instance:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach 08538769-3218-40d4-bab0-fc9169fe1eff a8be099e-3230-40d9-b7f1-608e387c1da4 auto
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connection.py:344: SubjectAltNameWarning: Certificate for 10.0.0.101 has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow
  SubjectAltNameWarning
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | a8be099e-3230-40d9-b7f1-608e387c1da4 |
| serverId | 08538769-3218-40d4-bab0-fc9169fe1eff |
| volumeId | a8be099e-3230-40d9-b7f1-608e387c1da4 |
+----------+--------------------------------------+


Looks good to verify.

Comment 17 errata-xmlrpc 2019-11-07 13:59:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3800

Comment 18 Eric Harney 2019-12-20 16:00:36 UTC
*** Bug 1784020 has been marked as a duplicate of this bug. ***

