Bug 1646940 - cinder-volume service will become down when creating image from volume
Summary: cinder-volume service will become down when creating image from volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 13.0 (Queens)
Assignee: Sofia Enriquez
QA Contact: Tzach Shefi
Docs Contact: Kim Nylander
URL:
Whiteboard:
Depends On: 1661356
Blocks:
Reported: 2018-11-06 11:01 UTC by Meiyan Zheng
Modified: 2023-09-07 19:34 UTC
CC List: 7 users

Fixed In Version: openstack-cinder-12.0.4-4.el7ost
Doc Type: Bug Fix
Doc Text:
Cause: Previously, when performing image operations in Cinder, file I/O operations blocked greenthreads and prevented switching to another greenthread on I/O, causing errors on the RabbitMQ and database connections.
Consequence: The cinder-volume service appeared `down` while creating an image from a volume.
Fix: With this update, image operations that can prevent greenthread switching are executed in native threads. (A minimal sketch of this pattern follows the Links list below.)
Result: The cinder-volume service no longer appears `down` to the scheduler.
Clone Of:
Cloned As: 1661356
Environment:
Last Closed: 2019-03-14 13:47:53 UTC
Target Upstream Version:
Embargoed:


Links
- Launchpad bug 1801958 (last updated 2018-11-06 16:30:09 UTC)
- OpenStack gerrit change 615934, MERGED: "Ensure image utils don't block greenthreads" (last updated 2021-01-29 08:54:35 UTC)
- Red Hat Issue Tracker OSP-28204 (last updated 2023-09-07 19:34:10 UTC)
- Red Hat Product Errata RHBA-2019:0560 (last updated 2019-03-14 13:47:55 UTC)
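
The gerrit change above ("Ensure image utils don't block greenthreads") moves blocking image I/O off the eventlet greenthread. The following is a minimal sketch of that pattern, not the actual Cinder patch; the function names here are hypothetical, and it assumes an eventlet-based service such as cinder-volume:

import eventlet
eventlet.monkey_patch()

from eventlet import tpool

def _copy_image_blocking(src_path, dest_path, block_size=64 * 1024):
    # A plain blocking file copy. Called directly from a greenthread,
    # it would monopolize the eventlet hub, so the heartbeat
    # greenthreads (RabbitMQ, DB) stop running and the scheduler
    # marks the service down.
    with open(src_path, 'rb') as src, open(dest_path, 'wb') as dest:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            dest.write(chunk)

def copy_image(src_path, dest_path):
    # eventlet.tpool.execute() runs the blocking callable in a native
    # OS thread and suspends only the calling greenthread, so the
    # service heartbeat keeps flowing while the copy proceeds.
    return tpool.execute(_copy_image_blocking, src_path, dest_path)

With this structure the periodic state-reporting greenthread continues to run during long image operations, which is why the service list in Comment 15 stays "up" throughout the upload.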

Comment 3 Sofia Enriquez 2018-12-13 21:57:58 UTC
External backport: OpenStack Gerrit 624497 (to Rocky)

Comment 15 Tzach Shefi 2019-02-28 15:42:28 UTC
Verified on:
openstack-cinder-12.0.4-8.el7ost.noarch

Uploaded a RHEL (~500 MB) image to Glance.
Created a volume from this image:
#cinder create 10 --image rhel


Uploaded the volume to a new image:
openstack image create --volume f04101e9-2c07-46f3-84bb-9a33dc1f809a imageFromVol 

While doing this I ran: watch -d -n 2 openstack volume service list
+------------------+-------------------------+------+---------+-------+----------------------------+
| Binary           | Host                    | Zone | Status  | State | Updated At                 |
+------------------+-------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller-2            | nova | enabled | up    | 2019-02-28T15:38:25.000000 |
| cinder-scheduler | controller-1            | nova | enabled | up    | 2019-02-28T15:38:28.000000 |
| cinder-scheduler | controller-0            | nova | enabled | up    | 2019-02-28T15:38:29.000000 |
| cinder-volume    | hostgroup@tripleo_iscsi | nova | enabled | up    | 2019-02-28T15:38:30.000000 |
+------------------+-------------------------+------+---------+-------+----------------------------+

The cinder-volume service remained up the whole time; I tried a few times and the status stayed "up", never going down.

Image is available:
#glance image-list
+--------------------------------------+--------------+
| ID                                   | Name         |
+--------------------------------------+--------------+
| 9e83f549-914f-4322-87f4-2322b98df0d9 | cirros       |
| 7f5d8e2f-9607-4f9e-8de2-d1eb1290c138 | imageFromVol |
| 53188403-ebf5-44ec-9e04-74d66080d342 | rhel         |
+--------------------------------------+--------------+

Looks good to verify.

Comment 17 errata-xmlrpc 2019-03-14 13:47:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0560

