Bug 1545324
| Summary: | I/O latency of cinder volume after live migration increases | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Martin Schuppert <mschuppe> |
| Component: | openstack-nova | Assignee: | Lee Yarwood <lyarwood> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | awaugama |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 8.0 (Liberty) | CC: | adhingra, awaugama, berrange, dasmith, dgilbert, eglynn, geguileo, jdillama, jjoyce, kchamart, khan.sana, lyarwood, mbooth, mschuppe, sbauza, scohen, sferdjao, sgordon, sputhenp, srevivo, stefanha, vromanso |
| Target Milestone: | zstream | Keywords: | TestOnly, Triaged, ZStream |
| Target Release: | 8.0 (Liberty) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | openstack-nova-12.0.6-26.el7ost | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1463897 | | |
| : | 1545330 (view as bug list) | Environment: | |
| Last Closed: | 2018-09-27 10:37:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1463897, 1482921 | | |
| Bug Blocks: | 1545330 | | |
Description
Martin Schuppert, 2018-02-14 16:17:46 UTC
OSP8 is also affected by this:
# rpm -q openstack-nova-compute
openstack-nova-compute-12.0.6-21.el7ost.noarch
* before migration:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username='cinder'>
<secret type='ceph' uuid='475b69d9-9ea3-4356-ac22-762b17a875e3'/>
</auth>
<source protocol='rbd' name='osp8-vms/9715a493-60be-4d76-9d4c-34b37dad7366_disk'>
<host name='192.168.122.5' port='6789'/>
<host name='192.168.122.6' port='6789'/>
<host name='192.168.122.7' port='6789'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback'/>
<auth username='cinder'>
<secret type='ceph' uuid='475b69d9-9ea3-4356-ac22-762b17a875e3'/>
</auth>
<source protocol='rbd' name='osp8-volumes/volume-ce556e6c-dab1-40c2-b186-762d1f8afd4e'>
<host name='192.168.122.5' port='6789'/>
<host name='192.168.122.6' port='6789'/>
<host name='192.168.122.7' port='6789'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>ce556e6c-dab1-40c2-b186-762d1f8afd4e</serial>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
* after migration (note that the cinder volume disk vdb now has cache='none' instead of cache='writeback'; see the comparison sketch after the XML):
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username='cinder'>
<secret type='ceph' uuid='475b69d9-9ea3-4356-ac22-762b17a875e3'/>
</auth>
<source protocol='rbd' name='osp8-vms/9715a493-60be-4d76-9d4c-34b37dad7366_disk'>
<host name='192.168.122.5' port='6789'/>
<host name='192.168.122.6' port='6789'/>
<host name='192.168.122.7' port='6789'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='cinder'>
<secret type='ceph' uuid='475b69d9-9ea3-4356-ac22-762b17a875e3'/>
</auth>
<source protocol='rbd' name='osp8-volumes/volume-ce556e6c-dab1-40c2-b186-762d1f8afd4e'>
<host name='192.168.122.5' port='6789'/>
<host name='192.168.122.6' port='6789'/>
<host name='192.168.122.7' port='6789'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>ce556e6c-dab1-40c2-b186-762d1f8afd4e</serial>
<alias name='virtio-disk1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
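The relevant difference between the two dumps is the volume disk vdb: its driver cache attribute changes from 'writeback' to 'none' during the live migration, which turns off write caching for that disk and is consistent with the reported latency increase. A small helper like the sketch below (not part of the original report; the file paths are placeholders) can be pointed at `virsh dumpxml` output taken before and after the migration to make the change easy to spot:

```python
import sys
import xml.etree.ElementTree as ET

def disk_cache_modes(dumpxml_path):
    """Return {target_dev: cache_mode} for every <disk> in a domain XML dump."""
    root = ET.parse(dumpxml_path).getroot()
    modes = {}
    for disk in root.findall('./devices/disk'):
        target = disk.find('target')
        driver = disk.find('driver')
        modes[target.get('dev')] = driver.get('cache')
    return modes

if __name__ == '__main__':
    # usage: python cache_modes.py before.xml after.xml
    for path in sys.argv[1:]:
        for dev, cache in sorted(disk_cache_modes(path).items()):
            print('%s: %s cache=%s' % (path, dev, cache))
```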
Works with the following change to nova's libvirt driver.py:
# diff -u driver.py.org driver.py
--- driver.py.org 2018-02-14 11:00:23.986251918 -0500
+++ driver.py 2018-02-14 11:12:07.310126939 -0500
@@ -1074,8 +1074,10 @@
driver.disconnect_volume(connection_info, disk_dev)
def _get_volume_config(self, connection_info, disk_info):
- driver = self._get_volume_driver(connection_info)
- return driver.get_config(connection_info, disk_info)
+ vol_driver = self._get_volume_driver(connection_info)
+ conf = vol_driver.get_config(connection_info, disk_info)
+ self._set_cache_mode(conf)
+ return conf
def _get_volume_encryptor(self, connection_info, encryption):
encryptor = encryptors.get_volume_encryptor(connection_info,
@@ -1119,7 +1121,6 @@
instance, CONF.libvirt.virt_type, image_meta, bdm)
self._connect_volume(connection_info, disk_info)
conf = self._get_volume_config(connection_info, disk_info)
- self._set_cache_mode(conf)
try:
state = guest.get_power_state(self._host)
@@ -3489,9 +3490,6 @@
vol['connection_info'] = connection_info
vol.save()
- for d in devices:
- self._set_cache_mode(d)
-
if image_meta.properties.get('hw_scsi_model'):
hw_scsi_model = image_meta.properties.hw_scsi_model
scsi_controller = vconfig.LibvirtConfigGuestController()
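The patch above moves the `_set_cache_mode()` call into `_get_volume_config()` itself and drops the per-caller calls, so every code path that builds a volume's libvirt config, including the one that regenerates the disk XML during live migration, applies the configured cache mode. A minimal, self-contained sketch of that idea (simplified stand-in classes, not nova's actual code):

```python
class DiskConfig(object):
    """Stand-in for LibvirtConfigGuestDisk; only the cache setting matters here."""
    def __init__(self):
        self.driver_cache = None

class VolumeDriverStub(object):
    """Stand-in for a nova volume driver that returns a bare disk config."""
    def get_config(self, connection_info, disk_info):
        return DiskConfig()

class DriverSketch(object):
    def __init__(self, disk_cachemode='writeback'):
        # stands in for the cache mode nova derives from its libvirt cache settings
        self.disk_cachemode = disk_cachemode

    def _get_volume_driver(self, connection_info):
        return VolumeDriverStub()

    def _set_cache_mode(self, conf):
        conf.driver_cache = self.disk_cachemode

    def _get_volume_config(self, connection_info, disk_info):
        # The fix: apply the cache mode here, once, instead of relying on each
        # caller (attach_volume, the guest-config builder, ...) to remember it.
        vol_driver = self._get_volume_driver(connection_info)
        conf = vol_driver.get_config(connection_info, disk_info)
        self._set_cache_mode(conf)
        return conf

if __name__ == '__main__':
    conf = DriverSketch()._get_volume_config({}, {})
    print(conf.driver_cache)  # 'writeback', no matter which code path asked
```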
According to our records, this should be resolved by openstack-nova-12.0.6-28.el7ost. This build is available now.
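For verification on a compute node running the fixed package, the same cache check can be run against the live domain through the libvirt Python bindings. A sketch with placeholder connection URI and instance name, assuming libvirt-python is installed:

```python
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')          # local hypervisor connection
dom = conn.lookupByName('instance-00000001')   # hypothetical libvirt instance name
root = ET.fromstring(dom.XMLDesc(0))           # live domain XML
for disk in root.findall('./devices/disk'):
    dev = disk.find('target').get('dev')
    cache = disk.find('driver').get('cache')
    print('%s: cache=%s' % (dev, cache))       # the rbd volume should remain 'writeback' after migration
conn.close()
```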