Red Hat Bugzilla – Attachment 1979481 Details for Bug 2226366
[RBD] Retyping of in-use boot volumes renders instances unusable (possible data corruption)
reproduction notes
paste.openstack.org-bNpzkjbeX (text/plain), 11.49 KB, created by Eric Harney on 2023-07-25 19:57:21 UTC
None of the volumes are encrypted.

cinder.conf excerpt (after step 3 updated ceph2):

[ceph1]
image_volume_cache_enabled = True
rbd_max_clone_depth = 5
rbd_flatten_volume_from_snapshot = False
rbd_secret_uuid = 7fe03205-375d-45eb-a203-56c461c6888c
rbd_user = cinder
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph1

[ceph2]
image_volume_cache_enabled = True
rbd_max_clone_depth = 5
rbd_flatten_volume_from_snapshot = False
rbd_secret_uuid = 7fe03205-375d-45eb-a203-56c461c6888c
rbd_user = cinder
rbd_pool = othervolumes
rbd_ceph_conf = /etc/ceph/ceph.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph2

[nova]
region_name = RegionOne
memcached_servers = localhost:11211
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = a
username = nova
auth_url = http://127.0.0.1/identity
interface = public
auth_type = password

1. devstack local.conf relevant settings:

   [[local|localrc]]
   enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
   CEPH_RELEASE=quincy
   VOLUME_BACKING_FILE_SIZE=8GB
   CINDER_ENABLED_BACKENDS=ceph:ceph1,ceph:ceph2

   This will create 2 backends: ceph1 and ceph2.

2. Create a new ceph pool for the second ceph backend; this will create one called "othervolumes":
   1. sudo ceph osd pool create othervolumes

3. Update /etc/cinder/cinder.conf to make the ceph2 backend use the new pool:
   1. [ceph2]
      image_volume_cache_enabled = True
      rbd_max_clone_depth = 5
      rbd_flatten_volume_from_snapshot = False
      rbd_secret_uuid = 7fe03205-375d-45eb-a203-56c461c6888c
      rbd_user = cinder
      rbd_pool = othervolumes
      rbd_ceph_conf = /etc/ceph/ceph.conf
      volume_driver = cinder.volume.drivers.rbd.RBDDriver
      volume_backend_name = ceph2

4. Add the new pool to the ceph auth capabilities:
   1. sudo ceph auth caps client.cinder mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=othervolumes"
   2. sudo ceph auth caps client.glance mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=othervolumes"

5. Note that you will need to create a new image, because the default image created by devstack will not have permissions to work with the new pool:
   1. See https://bugs.launchpad.net/cinder/+bug/1823445 comment #6
   2. openstack --os-cloud devstack image save cirros-0.6.2-x86_64-disk --file ~/cirros-0.6.2-x86_64-disk-copy
   3. openstack --os-cloud devstack image create --disk-format qcow2 --file ~/cirros-0.6.2-x86_64-disk-copy cirros-0.6.2-x86_64-disk-copy

6. Create a volume of type ceph2 from the new image that "othervolumes" has permissions for:
   1. openstack --os-cloud devstack volume create --size 1 --type ceph2 --image cirros-0.6.2-x86_64-disk-copy --bootable ceph2vol

7. Create a server from that volume to end up with a server residing in pool "othervolumes":
   1. openstack --os-cloud devstack server create --volume ceph2vol --flavor m1.tiny --network private --wait testretype

8. Verify the server is booting:
   1. openstack --os-cloud devstack console log show --lines 5 testretype

Reproducing https://bugs.launchpad.net/cinder/+bug/2019190 ([RBD] Retyping of in-use boot volumes renders instances unusable (possible data corruption))

Steps

1. From the above setup, the instance volume resides in the pool "othervolumes".

2. Server xml for the disk is:
   1. <disk type='network' device='disk'>
        <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
        <auth username='cinder'>
          <secret type='ceph' uuid='7fe03205-375d-45eb-a203-56c461c6888c'/>
        </auth>
        <source protocol='rbd' name='othervolumes/volume-b4e1881a-18bd-4407-8c3f-976230177ecc' index='1'>
          <host name='127.0.0.1' port='6789'/>
        </source>
        <target dev='vda' bus='virtio'/>
        <serial>b4e1881a-18bd-4407-8c3f-976230177ecc</serial>
        <alias name='virtio-disk0'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
      </disk>

3. Retype to "ceph1" to cause a move to pool "volumes" in the other ceph backend:
   1. openstack --os-cloud devstack volume set --type ceph1 --retype-policy "on-demand" ceph2vol

4. Hard reboot the server:
   1. openstack --os-cloud devstack server reboot --hard testretype

5. The server is not able to boot:
   1. sudo virsh list
       Id   Name   State
      --------------------

   2. openstack --os-cloud devstack console log show --lines 5 testretype
      ResourceNotFound: 404: Client Error for url: http://127.0.0.1/compute/v2.1/servers/b5224a90-9813-457c-863f-7afff96cb173/action, Instance b5224a90-9813-457c-863f-7afff96cb173 could not be found.

6. Check the attachment connection_info:
   1. openstack --os-cloud devstack --os-volume-api-version 3.27 volume attachment list

      +--------------------------------------+--------------------------------------+--------------------------------------+----------+
      | ID                                   | Volume ID                            | Server ID                            | Status   |
      +--------------------------------------+--------------------------------------+--------------------------------------+----------+
      | f4451c66-ab56-4dc5-9824-43aae5048c48 | b4e1881a-18bd-4407-8c3f-976230177ecc | b5224a90-9813-457c-863f-7afff96cb173 | attached |
      +--------------------------------------+--------------------------------------+--------------------------------------+----------+

   2.
      openstack --os-cloud devstack --os-volume-api-version 3.27 volume attachment show f4451c66-ab56-4dc5-9824-43aae5048c48 --max-width 100

      +-------------+------------------------------------------------------------------------------------+
      | Field       | Value                                                                              |
      +-------------+------------------------------------------------------------------------------------+
      | ID          | f4451c66-ab56-4dc5-9824-43aae5048c48                                               |
      | Volume ID   | b4e1881a-18bd-4407-8c3f-976230177ecc                                               |
      | Instance ID | b5224a90-9813-457c-863f-7afff96cb173                                               |
      | Status      | attached                                                                           |
      | Attach Mode | rw                                                                                 |
      | Attached At | 2023-07-21T21:18:08.000000                                                         |
      | Detached At |                                                                                    |
      | Properties  | access_mode='rw', attachment_id='f4451c66-ab56-4dc5-9824-43aae5048c48',            |
      |             | auth_enabled='True', auth_username='cinder', cacheable='False',                    |
      |             | cluster_name='ceph', discard='True', driver_volume_type='rbd', encrypted='False',  |
      |             | hosts='['127.0.0.1']',                                                             |
      |             | name='othervolumes/volume-b4e1881a-18bd-4407-8c3f-976230177ecc', ports='['6789']', |
      |             | qos_specs=, secret_type='ceph',                                                    |
      |             | secret_uuid='7fe03205-375d-45eb-a203-56c461c6888c',                                |
      |             | volume_id='b4e1881a-18bd-4407-8c3f-976230177ecc'                                   |
      +-------------+------------------------------------------------------------------------------------+

The volume "name" has not been updated to where the volume actually is after the retype.
It should be:

* name='volumes/volume-b4e1881a-18bd-4407-8c3f-976230177ecc'

Notes

* openstack --os-cloud devstack volume show ceph2vol --max-width 100

  +------------------------------+-------------------------------------------------------------------+
  | Field                        | Value                                                             |
  +------------------------------+-------------------------------------------------------------------+
  | attachments                  | [{'id': 'b4e1881a-18bd-4407-8c3f-976230177ecc', 'attachment_id':  |
  |                              | 'f4451c66-ab56-4dc5-9824-43aae5048c48', 'volume_id':              |
  |                              | 'b4e1881a-18bd-4407-8c3f-976230177ecc', 'server_id':              |
  |                              | 'b5224a90-9813-457c-863f-7afff96cb173', 'host_name': None,        |
  |                              | 'device': '/dev/vda', 'attached_at':                              |
  |                              | '2023-07-21T21:18:08.000000'}]                                    |
  | availability_zone            | nova                                                              |
  | bootable                     | true                                                              |
  | consistencygroup_id          | None                                                              |
  | created_at                   | 2023-07-21T21:15:53.000000                                        |
  | description                  | None                                                              |
  | encrypted                    | False                                                             |
  | id                           | b4e1881a-18bd-4407-8c3f-976230177ecc                              |
  | multiattach                  | False                                                             |
  | name                         | ceph2vol                                                          |
  | os-vol-tenant-attr:tenant_id | 003768df7d534d3eabab6aae452f5b07                                  |
  | properties                   |                                                                   |
  | replication_status           | None                                                              |
  | size                         | 1                                                                 |
  | snapshot_id                  | None                                                              |
  | source_volid                 | None                                                              |
  | status                       | in-use                                                            |
  | type                         | ceph1                                                             |
  | updated_at                   | 2023-07-21T21:25:55.000000                                        |
  | user_id                      | c8ba17fbbc4549858fcaba9a8499914c                                  |
  | volume_image_metadata        | {'signature_verified': 'False', 'owner_specified.openstack.md5':  |
  |                              | '', 'owner_specified.openstack.object':                           |
  |                              | 'images/cirros-0.6.2-x86_64-disk-copy',                           |
  |                              | 'owner_specified.openstack.sha256': '', 'image_id':               |
  |                              | 'e790919a-522c-4522-a009-5baee9a31744', 'image_name':             |
  |                              | 'cirros-0.6.2-x86_64-disk-copy', 'checksum':                      |
  |                              | 'c8fc807773e5354afe61636071771906', 'container_format': 'bare',   |
  |                              | 'disk_format': 'qcow2', 'min_disk': '0', 'min_ram': '0', 'size':  |
  |                              | '21430272'}                                                       |
  +------------------------------+-------------------------------------------------------------------+
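The mismatch above can be stated compactly: the guest's disk XML and the attachment's connection_info both still carry the source pool in the RBD name, even though the retype moved the image to the destination backend's pool. A minimal Python sketch of the check, and of the rewrite the name should have received (the helper names here are hypothetical illustrations, not Cinder code):

```python
# Hypothetical illustration, not Cinder code: extract the RBD source name
# from the libvirt disk XML and compute the corrected connection_info
# 'name' that the retype should have produced.
import xml.etree.ElementTree as ET

# Trimmed copy of the disk element captured in step 2 above.
DISK_XML = """\
<disk type='network' device='disk'>
  <source protocol='rbd' name='othervolumes/volume-b4e1881a-18bd-4407-8c3f-976230177ecc' index='1'>
    <host name='127.0.0.1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
"""

def rbd_source_name(disk_xml: str) -> str:
    """Return the '<pool>/<image>' RBD source name from a libvirt disk element."""
    return ET.fromstring(disk_xml).find("source").get("name")

def retyped_name(name: str, dest_pool: str) -> str:
    """Rewrite '<pool>/<image>' to the destination pool, keeping the image name."""
    _, image = name.split("/", 1)
    return f"{dest_pool}/{image}"

stale = rbd_source_name(DISK_XML)
print(stale)                           # othervolumes/volume-b4e1881a-18bd-4407-8c3f-976230177ecc
print(retyped_name(stale, "volumes"))  # volumes/volume-b4e1881a-18bd-4407-8c3f-976230177ecc
```

Had the attachment's connection_info been refreshed this way after the on-demand retype, the domain XML regenerated on hard reboot would have pointed at the "volumes" pool where the image actually lives.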