Description of problem:
The customer performed a fast forward upgrade (FFU), after which the secret id (Ceph FSID) is not updated for existing cinder volumes.

Version-Release number of selected component (if applicable):
RHOSP13 + Ceph (external)

Actual results:
The secret id is not updated in the cinder and nova databases.

Expected results:
The secret id should be updated in the cinder and nova databases.
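The mismatch can be confirmed by comparing the FSID reported by the external Ceph cluster with the secret_uuid that nova still references for a boot-from-volume instance. This is not part of the original report; it is a minimal sketch, assuming the ceph CLI and an admin keyring are available and that the stale uuid has been copied from the instance fault or the nova DB:

~~~
# Sketch: compare the live Ceph cluster FSID with the secret_uuid
# that nova/libvirt still reference for a boot-from-volume VM.
import subprocess

# "ceph fsid" prints the uuid of the cluster currently in use
# (requires a reachable cluster and a usable keyring).
current_fsid = subprocess.check_output(["ceph", "fsid"]).decode().strip()

# secret_uuid copied from the instance fault / nova DB (value from this report).
referenced_uuid = "20c4eedc-860a-4da5-b381-4261c97af87f"

if referenced_uuid != current_fsid:
    print("Stale secret_uuid: nova references %s but the cluster FSID is %s"
          % (referenced_uuid, current_fsid))
~~~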
Hello Team,

After performing the FFU, the secret ID (Ceph FSID) is not updated in the cinder and nova DBs for already created volumes. As a result, production VMs that boot from volume are affected, whereas VMs that boot from image work fine. The affected VMs are in NOSTATE with the error below:

~~~
| fault | {u'message': u"Secret not found: no secret with matching uuid '20c4eedc-860a-4da5-b381-4261c97af87f'", u'code': 500, u'details': u'
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 202, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3282, in reboot_instance
    self._set_instance_obj_error_state(context, instance)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3257, in reboot_instance
    bad_volumes_callback=bad_volumes_callback)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2702, in reboot
    block_device_info)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2818, in _hard_reboot
    vifs_already_plugged=True)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5645, in _create_domain_and_network
    destroy_disks_on_failure)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5614, in _create_domain_and_network
    post_xml_callback=post_xml_callback)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5549, in _create_domain
    guest.launch(pause=pause)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 144, in launch
    self._encoded_xml, errors='ignore')
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 139, in launch
    return self._domain.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
    six.reraise(c, e, tb)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1110, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
', u'created': u'2020-05-23T06:56:21Z'} |
~~~

old FSID = 20c4eedc-860a-4da5-b381-4261c97af87f
new FSID = 2032fc65-0794-461b-8153-7a89ea6094b0

The old FSID is still present in the connection_info column of the block_device_mapping table in the nova DB:

~~~
connection_info: {....
"data": {"secret_type": "ceph", "name": "cinder-volumes-ssd/volume-1542b7de-8424-4b43-b3a6-495bd96aa5cf", "encrypted": false, "cluster_name": "ceph", "secret_uuid": "20c4eedc-860a-4da5-b381-4261c97af87f", ...}
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 13.0 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:2385