Bug 1511537
| Summary: | nova-compute logs "No calling threads waiting for msg_id" on nova-conductor RPC | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Victor Stinner <vstinner> |
| Component: | python-oslo-messaging | Assignee: | Victor Stinner <vstinner> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Udi Shkalim <ushkalim> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 8.0 (Liberty) | CC: | apevec, chjones, fpercoco, geguileo, jeckersb, lhh, mbayer, sbandyop, srevivo, vstinner |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | 13.0 (Queens) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-06-05 13:50:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Victor Stinner
2017-11-09 14:17:09 UTC
Hello All, thank you for the detailed explanation. @Victor: so if I understand correctly, we need to suggest that the customer just set max_overflow = 50 in cinder.conf; please correct me if I am wrong.

> @Victor: so if I understand correctly, we need to suggest that the customer just set max_overflow = 50 in cinder.conf; please correct me if I am wrong.
Yes, but also try to reduce the number of cinder-api processes: the osapi_volume_workers option in cinder.conf.
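As a sketch only (the exact values would need to be tuned for the customer environment; the section placement follows the usual oslo.db and Cinder conventions, not anything confirmed in this report), the suggested tuning would look like this in cinder.conf:

```ini
[DEFAULT]
# Reduce the number of cinder-api worker processes (it was 48 in this case;
# 8 here is an illustrative value, not a recommendation from this report).
osapi_volume_workers = 8

[database]
# Allow up to 50 connections beyond the base pool size per worker,
# as suggested in the comment above.
max_overflow = 50
```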
Michael Bayer: "The ps listing shows that there are a ton of cinder-api processes running - osapi_volume_workers in cinder.conf is....48!"
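To see why 48 workers matters: each cinder-api worker process holds its own SQLAlchemy connection pool, so the worst-case simultaneous database connection count is roughly workers × (pool_size + max_overflow). A rough sketch of that arithmetic (pool_size 5 and max_overflow 10 are the usual SQLAlchemy QueuePool defaults; the actual values in the customer environment are not known from this report):

```python
def max_db_connections(workers: int, pool_size: int = 5, max_overflow: int = 10) -> int:
    """Worst-case simultaneous DB connections across all API worker processes.

    Each worker can open up to pool_size pooled connections plus
    max_overflow temporary overflow connections.
    """
    return workers * (pool_size + max_overflow)

# 48 workers with default pool settings:
print(max_db_connections(48))                   # 720
# Fewer workers, with a larger overflow headroom per worker:
print(max_db_connections(8, max_overflow=50))   # 440
```

The point of the trade-off: raising max_overflow without also cutting the worker count can still push the database past its configured connection limit, which is why both settings were suggested together.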
Hello Victor, thank you for your response. I have shared the feedback with the customer and asked them to monitor whether the issue is still reproducible under heavy load. I will share the customer's feedback when they come back. This issue seems to be a configuration issue more than a bug.

What is the status of this issue? Can it be closed? Or do we have new data to elaborate on the bug aspect?

I am closing the issue since it has not received any concrete data since last year, and my NEEDINFO has not received a reply in two months.

(I just removed an old NEEDINFO; the issue is already closed.)