Bug 1511842

Summary: cinder-volume stays down since ocata
Product: Red Hat OpenStack
Reporter: Nilesh <nchandek>
Component: openstack-cinder
Assignee: Eric Harney <eharney>
Status: CLOSED ERRATA
QA Contact: Avi Avraham <aavraham>
Severity: urgent
Priority: urgent
Version: 11.0 (Ocata)
CC: cschwede, ealcaniz, eharney, geguileo, mbracho, mburns, nchandek, pablo.iranzo, pgrist, pmorey, srevivo, tshefi
Target Milestone: z4
Keywords: Triaged, ZStream
Target Release: 11.0 (Ocata)
Hardware: All
OS: All
Fixed In Version: openstack-cinder-10.0.6-2.el7ost
Last Closed: 2018-02-13 16:29:16 UTC
Type: Bug

Description Nilesh 2017-11-10 09:30:17 UTC
While upgrading to Ocata, we hit this bug:

  https://bugs.launchpad.net/cinder/+bug/1707936

The customer's Ceph cluster is large: it has close to 400 volumes, some of which are very large.

This is fixed upstream in:

  https://review.openstack.org/#/c/501325/4

The customer would like the openstack-cinder package updated to include this patch.

Comment 4 Tzach Shefi 2017-11-20 13:13:46 UTC
I'm still unsure about the verification steps needed.

1. Deploy with Ceph (external in my case).

2. Create a few volumes. Do we know how many were needed to trip this: 10, 50, 100, 400?
Does it matter whether they are many 1G volumes, or fewer but larger ones?

3. Then see how long a restart takes:
systemctl restart openstack-cinder-volume
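The steps above can be sketched as a script. This is only a sketch: the volume count, size, and names are arbitrary assumptions, since we don't know the threshold that trips the bug.

```shell
#!/bin/sh
# Volume count and size are guesses; the trigger threshold is unknown.
VOL_COUNT=50
VOL_SIZE_GB=1

# Create a batch of test volumes.
for i in $(seq 1 "$VOL_COUNT"); do
    openstack volume create --size "$VOL_SIZE_GB" "stress-vol-$i"
done

# Time the restart to see how long the service takes to come back.
time systemctl restart openstack-cinder-volume

# Confirm the service is active afterwards.
systemctl is-active openstack-cinder-volume
```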

Comment 6 Tzach Shefi 2017-11-22 17:09:47 UTC
Verified on:
openstack-cinder-10.0.6-4.el7ost.noarch

Gorka suggested that lowering the cinder.conf options below would speed things up a bit:
periodic_interval: 60 -> 30
periodic_fuzzy_delay: 60 -> 5

Restarted the service to apply the new settings.
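In cinder.conf, those settings would look like this (a sketch; both options sit in the [DEFAULT] section):

```ini
[DEFAULT]
# Interval between periodic task runs, in seconds (default 60).
periodic_interval = 30
# Upper bound on the random initial delay for periodic tasks,
# in seconds (default 60).
periodic_fuzzy_delay = 5
```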

I created 12+ volumes, totaling over 1 TB of provisioned capacity, filled with data from /dev/random or with large ISO/qcow2 images.
I also cloned volumes and changed their data; nothing I did changed the service state.

openstack-cinder-volume has remained up since the system was installed, an uptime of 24 hours.
All of the volumes were created within the last 4 hours.
No glitch in the service status while monitoring with watch -d -n 10.
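Filling a volume with random data, as described above, can be sketched like this. The target path is a placeholder: on a real system it would be the attached volume's block device (e.g. /dev/vdb), and the count would be far larger.

```shell
# Write random data to the target; a scratch file stands in for the
# attached volume device here. iflag=fullblock guards against short
# reads from /dev/urandom.
TARGET=/tmp/volfill.img
dd if=/dev/urandom of="$TARGET" bs=1M count=4 iflag=fullblock status=none

# Show the resulting size in bytes.
stat -c %s "$TARGET"
```

The service status itself was polled with `watch -d -n 10 <status command>`, which re-runs the command every 10 seconds and highlights differences between runs.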

Comment 28 errata-xmlrpc 2018-02-13 16:29:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0306