Bug 1511842 - cinder-volume stays down since ocata
Summary: cinder-volume stays down since ocata
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 11.0 (Ocata)
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: z4
Target Release: 11.0 (Ocata)
Assignee: Eric Harney
QA Contact: Avi Avraham
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-10 09:30 UTC by Nilesh
Modified: 2021-03-11 16:15 UTC
CC: 12 users

Fixed In Version: openstack-cinder-10.0.6-2.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-13 16:29:16 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  OpenStack gerrit 501325 (last updated 2017-11-15 14:39:47 UTC)
  Red Hat Product Errata RHBA-2018:0306, SHIPPED_LIVE: openstack-cinder bug fix advisory (last updated 2018-02-14 00:16:06 UTC)

Description Nilesh 2017-11-10 09:30:17 UTC
While upgrading to Ocata, we hit this bug:

  https://bugs.launchpad.net/cinder/+bug/1707936

The customer's Ceph cluster is large: it has close to 400 volumes, some of which are very large.

This is fixed upstream in:

  https://review.openstack.org/#/c/501325/4

The customer would like the openstack-cinder package updated to include this patch.
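
According to the Fixed In Version field above, the fix lands in openstack-cinder-10.0.6-2.el7ost, so a quick way to check whether a given host already carries it is to compare against the installed package version:

  rpm -q openstack-cinder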

Comment 4 Tzach Shefi 2017-11-20 13:13:46 UTC
Again, I'm unsure about the needed verification steps.

1. Deploy with Ceph (external in my case)

2. Create a few volumes. Do we know how many were needed to trip this: 10, 50, 100, 400?
Does it matter whether they are many 1G volumes or fewer, larger ones?

3. Then see how long the restart takes (a rough sketch of these steps is below):
systemctl restart openstack-cinder-volume
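
A rough sketch of those steps, assuming an admin credentials file is sourced and the commands run on the controller hosting cinder-volume (volume count and sizes are arbitrary here):

  # create a batch of test volumes (count and size chosen arbitrarily)
  for i in $(seq 1 50); do openstack volume create --size 10 "test-vol-$i"; done

  # time the restart, then confirm the service comes back up
  time systemctl restart openstack-cinder-volume
  openstack volume service list --service cinder-volume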

Comment 6 Tzach Shefi 2017-11-22 17:09:47 UTC
Verified on:
openstack-cinder-10.0.6-4.el7ost.noarch

Gorka suggested that lowering the cinder.conf options below would speed things up a bit (snippet follows):
periodic_interval: 60 -> 30
periodic_fuzzy_delay: 60 -> 5
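
For reference, a minimal cinder.conf snippet with those values (both options normally live in the [DEFAULT] section):

  [DEFAULT]
  # run periodic tasks every 30s instead of 60s, and cap the random startup delay at 5s instead of 60s
  periodic_interval = 30
  periodic_fuzzy_delay = 5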

Restarted the service to apply the settings.

I created 12+ volumes, totaling over 1 TB of provisioned capacity, filled with data from /dev/urandom or large ISO/qcow2 images.
I cloned them and changed their data; nothing I did caused a change in the service state.
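
Illustrative commands for that kind of data set (the image, instance, and volume names here are made up, and the sizes are only examples):

  # volume created from a large image, plus a clone of it
  openstack volume create --size 100 --image rhel-guest-image vol-from-image
  openstack volume create --size 100 --source vol-from-image vol-clone

  # attach a volume to a test instance, then fill it with random data from inside the guest
  openstack server add volume test-instance vol-from-image
  dd if=/dev/urandom of=/dev/vdb bs=1M count=10240 oflag=direct   # device path varies per guest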

openstack-cinder-volume has remained up since the system was installed, an uptime of 24 hours.
All the volumes I added were created within the last 4 hours.
There was no glitch in the service status while monitoring it with watch -d -n 10.

Comment 28 errata-xmlrpc 2018-02-13 16:29:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0306

