Bug 1511842
Summary: | cinder-volume stays down since ocata | |
---|---|---|---
Product: | Red Hat OpenStack | Reporter: | Nilesh <nchandek>
Component: | openstack-cinder | Assignee: | Eric Harney <eharney>
Status: | CLOSED ERRATA | QA Contact: | Avi Avraham <aavraham>
Severity: | urgent | Docs Contact: |
Priority: | urgent | |
Version: | 11.0 (Ocata) | CC: | cschwede, ealcaniz, eharney, geguileo, mbracho, mburns, nchandek, pablo.iranzo, pgrist, pmorey, srevivo, tshefi
Target Milestone: | z4 | Keywords: | Triaged, ZStream
Target Release: | 11.0 (Ocata) | |
Hardware: | All | |
OS: | All | |
Whiteboard: | | |
Fixed In Version: | openstack-cinder-10.0.6-2.el7ost | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2018-02-13 16:29:16 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Nilesh 2017-11-10 09:30:17 UTC
Again unsure about the needed verification steps:

1. Deploy with Ceph (external in my case).
2. Create a few volumes. Do we know how many were needed to trip this: 10, 50, 100, 400? Does it matter if the volumes are 1G, or if there are fewer volumes but larger sizes?
3. Then see how long a restart takes: `systemctl restart openstack-cinder-volume` (a scripted version of these steps follows at the end of this comment).

Verified on: openstack-cinder-10.0.6-4.el7ost.noarch

Gorka suggested that decreasing the cinder.conf options below would speed things up a bit (a sketch of applying them also follows below):

- periodic_interval: 60 -> 30
- periodic_fuzzy_delay: 60 -> 5

Restarted the service to pick up the new settings. I created 12+ volumes totaling ~1T+ in provisioned capacity, filled from /dev/urandom or with large ISO/qcow2 images, then cloned them and changed data. Nothing I did caused a change in service state: openstack-cinder-volume has remained up since the system was installed, an uptime of 24 hours, and all of the volumes were created within the last 4 hours. No glitch in service status while watching with `watch -d -n 10`.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0306
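For reference, a minimal sketch of the verification flow from the steps above: create a batch of volumes, restart the service, and watch its status for glitches. The volume count and names are illustrative (the report itself asks how many volumes are needed to trip the bug), and the cinder CLI calls assume admin credentials have already been sourced:

```bash
# Create a batch of 1G test volumes (count and names are arbitrary).
for i in $(seq 1 12); do
    cinder create --name test-vol-$i 1
done

# Restart the volume service and time how long it takes.
time sudo systemctl restart openstack-cinder-volume

# Watch the service state every 10 seconds, highlighting changes,
# to confirm openstack-cinder-volume stays up.
watch -d -n 10 'systemctl status openstack-cinder-volume | head -n 5'

# Alternatively, check the service heartbeat as cinder sees it:
# cinder service-list
```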
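And a sketch of applying the two settings Gorka suggested. This assumes both options live in the [DEFAULT] section of /etc/cinder/cinder.conf and that crudini is installed; on a director-managed node the file path or preferred configuration method may differ:

```bash
# Lower the periodic task interval and fuzzy delay
# (assumption: both options are in [DEFAULT]).
sudo crudini --set /etc/cinder/cinder.conf DEFAULT periodic_interval 30
sudo crudini --set /etc/cinder/cinder.conf DEFAULT periodic_fuzzy_delay 5

# Restart the volume service so the new settings take effect.
sudo systemctl restart openstack-cinder-volume
```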