The problem is caused by the I/O operations the Glance client performs when uploading the volume to an image in Glance: they appear to block greenthread switching. The issue occurs after we have attached the volume to the Cinder node and confirmed that we can do I/O on it with dd, right when the code opens the device and passes the file descriptor to the Glance client.
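For illustration, the general remedy for this class of problem is to offload the blocking reads to a native thread so the cooperative scheduler can keep switching greenthreads. This is a minimal, hypothetical sketch using the stdlib ThreadPoolExecutor as a stand-in for eventlet's thread pool; the function names and chunk size are made up for the example and are not Cinder's actual code.

```python
# Sketch: run a blocking read in a worker thread so the calling
# (green)thread is free to yield while the read completes.
from concurrent.futures import ThreadPoolExecutor
import io


def blocking_read(fobj, chunk_size=65536):
    """Read the whole file-like object in chunks; blocks its thread."""
    data = bytearray()
    while True:
        chunk = fobj.read(chunk_size)
        if not chunk:
            break
        data.extend(chunk)
    return bytes(data)


def read_via_worker(fobj):
    # Submit the blocking read to a native thread; waiting on the
    # future does not pin the scheduler the way a raw read() would.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(blocking_read, fobj)
        return future.result()


payload = read_via_worker(io.BytesIO(b"volume-bytes"))
```

In an eventlet-based service the same shape is achieved by proxying the file object through eventlet's tpool, so that each read happens in a real OS thread instead of blocking the hub.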
This is a duplicate of the original bug; not needed.
Reopening this bug to track a fix for bug #1661356 in OSP14.
Verified on: openstack-cinder-13.0.3-0.20190118014305.44c5314.el7ost

1. Uploaded a RHEL (~500 MB) image to Glance:
   # glance image-create --disk-format qcow2 --container-format bare --file rhel7.3_root_qum5net.qcow2 --name rhel

2. Created a volume from this image:
   # cinder create 10 --image rhel

3. Uploaded the volume to a new image:
   # openstack image create --volume 2eada5c4-a970-4551-b01f-1cedef03f38c imageFromVol

While doing this I ran:
   # watch -d -n 2 openstack volume service list

+------------------+-------------------------+------+---------+-------+----------------------------+
| Binary           | Host                    | Zone | Status  | State | Updated At                 |
+------------------+-------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller-2            | nova | enabled | up    | 2019-02-28T15:38:25.000000 |
| cinder-scheduler | controller-1            | nova | enabled | up    | 2019-02-28T15:38:28.000000 |
| cinder-scheduler | controller-0            | nova | enabled | up    | 2019-02-28T15:38:29.000000 |
| cinder-volume    | hostgroup@tripleo_iscsi | nova | enabled | up    | 2019-02-28T15:38:30.000000 |
+------------------+-------------------------+------+---------+-------+----------------------------+

The cinder-volume service remained "up" the whole time; I repeated the test a few times and it never went down. The image is available:

   # glance image-list

+--------------------------------------+--------------+
| ID                                   | Name         |
+--------------------------------------+--------------+
| 8e33e4de-c7ef-49d7-b61b-e6ce150c2a1d | cirros       |
| 5e4d538d-57b5-4a82-aab9-4588208d9d3b | imageFromVol |
| 0188e8aa-e05f-48d0-b1f9-5d81a79a03be | rhel         |
+--------------------------------------+--------------+

Looks good to verify.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0586