This bug was initially created as a copy of Bug #1977845

Description
===========
When we use the NFS backend in cinder and attach a cinder volume to an instance, the instance accesses a file in the NFS share named volume-<volume id>.

When the volume is attached to an instance and we take a snapshot with "openstack volume snapshot create <volume> --force", the following 3 files are created in the NFS share:

(1) volume-<volume id>
    base image, frozen when the snapshot is taken
(2) volume-<volume id>.<snapshot id>
    diff image the instance should write into after the snapshot is taken
(3) volume-<volume id>.info
    JSON file tracking the active snapshot file

As described above, after taking the snapshot the instance should write into (2) volume-<volume id>.<snapshot id>. This works right after the snapshot is taken, but if we stop and start the instance, the instance starts writing into (1) volume-<volume id>, which it should not modify.

Steps to reproduce
==================
1. Create a volume in the cinder NFS backend
2. Create a BFV (boot-from-volume) instance with the volume
3. Take a snapshot of the volume
4. Stop and start the instance

Expected result
===============
The instance keeps writing into volume-<volume id>.<snapshot id>

Actual result
=============
The instance writes into volume-<volume id>

Environment
===========
I reproduced the issue on the Queens release with
nova: libvirt driver
cinder: NFS backend, with nfs_snapshot_support=True

As far as I can see in the file path handling implementation, the way disk file paths are resolved for the NFS backend has not changed, so the problem should also be reproducible on master.
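The bookkeeping described in (1)-(3) can be sketched in a few lines of Python. This is only an illustration of the convention, not cinder's actual code: it assumes the .info JSON records the current write target under an "active" key (as the remotefs-based drivers do); treat the exact field names as an assumption. The IDs are the ones used later in this report.

```python
import json

# Illustrative IDs (the ones appearing in the verification below).
VOLUME_ID = "0affa559-fbfb-4879-8c83-074a22c34896"
SNAP_ID = "163b4c20-00a5-4ca5-99f7-ecae293a9a80"

# Hypothetical contents of volume-<volume id>.info after one snapshot:
# the "active" entry names the file the instance should write to.
info_json = json.dumps({
    "active": f"volume-{VOLUME_ID}.{SNAP_ID}",
    SNAP_ID: f"volume-{VOLUME_ID}.{SNAP_ID}",
})

def active_file(info_text, volume_id):
    """Pick the write target: the active snapshot delta if one is
    recorded, otherwise the base volume file."""
    info = json.loads(info_text) if info_text else {}
    return info.get("active", f"volume-{volume_id}")

# With a snapshot recorded, writes must land in the delta file; the bug
# was that after a stop/start the base file was used instead.
print(active_file(info_json, VOLUME_ID))
print(active_file("", VOLUME_ID))
```

The bug, in these terms: on a cold start the disk path was resolved from the base name instead of the recorded active file.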
Verified on: openstack-cinder-15.4.0-1.20210713144326.el8ost.noarch

Installed a deployment using RHEL NFS as Cinder's backend.

Backend config section:
[tripleo_nfs]
backend_host=hostgroup
volume_backend_name=tripleo_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=context=system_u:object_r:container_file_t:s0
nfs_snapshot_support=True
nas_secure_file_operations=False
nas_secure_file_permissions=False

1. Create a bootable volume in the cinder NFS backend

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --name BootVol --image 4910307d-e428-4145-9dc4-bfa5a3c23c20
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2021-11-16T14:46:38.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 0affa559-fbfb-4879-8c83-074a22c34896 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | BootVol                              |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | d0fadc14814245b0a6a00240a4a85ee9     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 92ef968f291046a0b6879d0a0e1a12cd     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+
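The one setting in the backend section that this scenario depends on is nfs_snapshot_support=True. A self-contained sanity check of the section can be sketched with Python's standard configparser (the section text is copied from above; parsing a cinder.conf fragment this way is an illustration, not how cinder loads its config):

```python
import configparser

# The [tripleo_nfs] backend section from above, inlined for illustration.
CINDER_CONF = """
[tripleo_nfs]
backend_host=hostgroup
volume_backend_name=tripleo_nfs
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares-nfs.conf
nfs_mount_options=context=system_u:object_r:container_file_t:s0
nfs_snapshot_support=True
nas_secure_file_operations=False
nas_secure_file_permissions=False
"""

parser = configparser.ConfigParser()
parser.read_string(CINDER_CONF)
backend = parser["tripleo_nfs"]

# getboolean() accepts True/true/1/yes, so the value above passes.
assert backend.getboolean("nfs_snapshot_support")
print(backend["volume_driver"])
```

Without nfs_snapshot_support=True the NfsDriver refuses snapshot operations, so the repro steps below would fail at step 3 rather than exercising this bug.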
2. Create a BFV instance with the volume

(overcloud) [stack@undercloud-0 ~]$ nova boot --flavor tiny --block-device source=volume,id=0affa559-fbfb-4879-8c83-074a22c34896,dest=volume,size=2,shutdown=preserve,bootindex=0 myInstanceFromVolume --nic net-id=efd5f6df-f74d-4542-83c0-f72a17e7affb
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                           |
| OS-EXT-AZ:availability_zone          |                                                                                  |
| OS-EXT-SRV-ATTR:host                 | -                                                                                |
| OS-EXT-SRV-ATTR:hostname             | myinstancefromvolume                                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                |
| OS-EXT-SRV-ATTR:instance_name        |                                                                                  |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-09lnw27z                                                                       |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                                                |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                |
| OS-EXT-STS:power_state               | 0                                                                                |
| OS-EXT-STS:task_state                | scheduling                                                                       |
| OS-EXT-STS:vm_state                  | building                                                                         |
| OS-SRV-USG:launched_at               | -                                                                                |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| adminPass                            | 5FqZ9VwpFxBC                                                                     |
| config_drive                         |                                                                                  |
| created                              | 2021-11-16T14:49:01Z                                                             |
| description                          | -                                                                                |
| flavor:disk                          | 1                                                                                |
| flavor:ephemeral                     | 0                                                                                |
| flavor:extra_specs                   | {}                                                                               |
| flavor:original_name                 | tiny                                                                             |
| flavor:ram                           | 512                                                                              |
| flavor:swap                          | 0                                                                                |
| flavor:vcpus                         | 1                                                                                |
| hostId                               |                                                                                  |
| host_status                          |                                                                                  |
| id                                   | 4faf1da3-c636-4c1f-92c2-c932b06e3d06                                             |
| image                                | Attempt to boot from volume - no image supplied                                  |
| key_name                             | -                                                                                |
| locked                               | False                                                                            |
| locked_reason                        | -                                                                                |
| metadata                             | {}                                                                               |
| name                                 | myInstanceFromVolume                                                             |
| os-extended-volumes:volumes_attached | [{"id": "0affa559-fbfb-4879-8c83-074a22c34896", "delete_on_termination": false}] |
| progress                             | 0                                                                                |
| security_groups                      | default                                                                          |
| server_groups                        | []                                                                               |
| status                               | BUILD                                                                            |
| tags                                 | []                                                                               |
| tenant_id                            | d0fadc14814245b0a6a00240a4a85ee9                                                 |
| trusted_image_certificates           | -                                                                                |
| updated                              | 2021-11-16T14:49:02Z                                                             |
| user_id                              | 92ef968f291046a0b6879d0a0e1a12cd                                                 |
+--------------------------------------+----------------------------------------------------------------------------------+

Instance is up:

(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+----------------------+--------+------------+-------------+-----------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks              |
+--------------------------------------+----------------------+--------+------------+-------------+-----------------------+
| 4faf1da3-c636-4c1f-92c2-c932b06e3d06 | myInstanceFromVolume | ACTIVE | -          | Running     | internal=192.168.0.26 |
+--------------------------------------+----------------------+--------+------------+-------------+-----------------------+

3. Take a snapshot of the volume:

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-create BootVol --force --name SnapOfBVF
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created_at  | 2021-11-16T14:56:36.741669           |
| description | None                                 |
| id          | 163b4c20-00a5-4ca5-99f7-ecae293a9a80 |
| metadata    | {}                                   |
| name        | SnapOfBVF                            |
| size        | 2                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | 0affa559-fbfb-4879-8c83-074a22c34896 |
+-------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+-----------+------+
| ID                                   | Volume ID                            | Status    | Name      | Size |
+--------------------------------------+--------------------------------------+-----------+-----------+------+
| 163b4c20-00a5-4ca5-99f7-ecae293a9a80 | 0affa559-fbfb-4879-8c83-074a22c34896 | available | SnapOfBVF | 2    |
+--------------------------------------+--------------------------------------+-----------+-----------+------+
4. Stop and start the instance

(overcloud) [stack@undercloud-0 ~]$ nova stop myInstanceFromVolume
Request to stop server myInstanceFromVolume has been accepted.

(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+----------------------+---------+------------+-------------+-----------------------+
| ID                                   | Name                 | Status  | Task State | Power State | Networks              |
+--------------------------------------+----------------------+---------+------------+-------------+-----------------------+
| 4faf1da3-c636-4c1f-92c2-c932b06e3d06 | myInstanceFromVolume | SHUTOFF | -          | Shutdown    | internal=192.168.0.26 |
+--------------------------------------+----------------------+---------+------------+-------------+-----------------------+

Powered off. Now let's start it up:

(overcloud) [stack@undercloud-0 ~]$ nova start myInstanceFromVolume
Request to start server myInstanceFromVolume has been accepted.

(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+----------------------+--------+------------+-------------+-----------------------+
| ID                                   | Name                 | Status | Task State | Power State | Networks              |
+--------------------------------------+----------------------+--------+------------+-------------+-----------------------+
| 4faf1da3-c636-4c1f-92c2-c932b06e3d06 | myInstanceFromVolume | ACTIVE | -          | Running     | internal=192.168.0.26 |
+--------------------------------------+----------------------+--------+------------+-------------+-----------------------+

If we check virsh:

()[root@compute-0 /]# virsh dumpxml instance-00000005 | grep volume
      <source file='/var/lib/nova/mnt/9503cec640ed4e0a054b70af162f4d2b/volume-0affa559-fbfb-4879-8c83-074a22c34896.163b4c20-00a5-4ca5-99f7-ecae293a9a80' index='1'/>

As expected, the disk now points to the volume's snapshot file:
volume-0affa559-fbfb-4879-8c83-074a22c34896.163b4c20-00a5-4ca5-99f7-ecae293a9a80

The NFS share shows both files:
[root@titan32 cinder]# ll    <- the NFS share
total 106804
-rw-rw-r--. 1 1001   1001            0 Aug  9 14:49 cindernfs
-rw-rw-rw-. 1 nobody nobody 2147483648 Nov 16 14:54 volume-0affa559-fbfb-4879-8c83-074a22c34896
-rw-rw-rw-. 1 nobody nobody     851968 Nov 16 14:58 volume-0affa559-fbfb-4879-8c83-074a22c34896.163b4c20-00a5-4ca5-99f7-ecae293a9a80
-rw-rw-rw-. 1 nobody nobody        222 Nov 16 14:56 volume-0affa559-fbfb-4879-8c83-074a22c34896.info

Rather than pointing to the original volume file, as it (wrongly) did before, the instance now uses the snapshot file. Good to verify!
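The virsh check above can be automated with a short Python sketch: extract the disk <source> path from the domain XML and confirm its file name has the volume-<volume id>.<snapshot id> shape. The domain XML here is a trimmed stand-in for the real `virsh dumpxml instance-00000005` output, using the path from this report; only the disk element is reproduced.

```python
import re
import xml.etree.ElementTree as ET

VOLUME_ID = "0affa559-fbfb-4879-8c83-074a22c34896"
SNAP_ID = "163b4c20-00a5-4ca5-99f7-ecae293a9a80"

# Trimmed stand-in for `virsh dumpxml instance-00000005`; only the
# disk element matters for this check.
DOMAIN_XML = f"""
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/nova/mnt/9503cec640ed4e0a054b70af162f4d2b/volume-{VOLUME_ID}.{SNAP_ID}' index='1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

UUID = r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"

root = ET.fromstring(DOMAIN_XML)
sources = [s.get("file") for s in root.iter("source")]

# The fix is verified when every disk source is a snapshot delta file
# (volume-<volume id>.<snapshot id>), not the frozen base image.
assert all(
    re.fullmatch(rf"volume-{UUID}\.{UUID}", path.rsplit("/", 1)[-1])
    for path in sources
)
print(sources[0].rsplit("/", 1)[-1])
```

A base-image file name (plain volume-<volume id>, no snapshot suffix) would fail the regex, which is exactly the pre-fix failure mode after a stop/start.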
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.7 (Train) bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3762