Bug 1707020
| Summary: | Scaling out with an additional compute node fails during ceph-ansible run | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Marius Cornea <mcornea> |
| Component: | documentation | Assignee: | Laura Marsh <lmarsh> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | RHOS Documentation Team <rhos-docs> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 15.0 (Stein) | CC: | dbecker, dcadzow, dsavinea, fpantano, gabrioux, gcharot, gfidente, johfulto, jvisser, lmarsh, mburns, morazi, ssmolyak, tenobreg |
| Target Milestone: | z1 | Keywords: | Triaged, ZStream |
| Target Release: | 15.0 (Stein) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-21 19:43:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Marius Cornea
2019-05-06 15:59:50 UTC
We already document how a user can override the relevant parameters: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html-single/fast_forward_upgrades/index#increasing-the-restart-delay-for-large-ceph-clusters

I suspect we'll be able to reproduce much less frequently (perhaps not at all?) with the following overrides:

```yaml
parameter_defaults:
  CephAnsibleExtraConfig:
    health_mon_check_retries: 10
    health_mon_check_delay: 20
```

Perhaps we just need the doc bug in the scale-up documentation suggesting the above. As a follow-up, we can request higher defaults for ceph-ansible.

Hi Marius, we've seen that the variable names used above are wrong, so I think the last attempts are not valid.
Can you try running the jobs again using the following:

```yaml
CephAnsibleExtraConfig:
  handler_health_mon_check_retries: 10
  handler_health_mon_check_delay: 20
```

Thanks.
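For context, overrides like the ones above are normally supplied to director in a custom Heat environment file. A minimal sketch, assuming the corrected `handler_`-prefixed variable names from the comment above (the file name `ceph-handler-tuning.yaml` is illustrative, not taken from this bug):

```yaml
# ceph-handler-tuning.yaml (hypothetical file name)
# Raise the ceph-ansible handler's monitor health-check retry count and
# per-retry delay so slowly restarting mons do not fail the run during
# scale-out of large clusters.
parameter_defaults:
  CephAnsibleExtraConfig:
    handler_health_mon_check_retries: 10
    handler_health_mon_check_delay: 20
```

The file would then be passed with `-e ceph-handler-tuning.yaml` on the same `openstack overcloud deploy` command used for the scale-out, after the existing environment files so the override takes precedence.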
(In reply to fpantano from comment #19)
> Hi Marius, we've seen the variable names used are wrong, so I think the last
> attempts are not valid.
> Can you try to run again the jobs using the following:
>
> CephAnsibleExtraConfig:
>   handler_health_mon_check_retries: 10
>   handler_health_mon_check_delay: 20
>
> Thanks.

I've had multiple runs with the new parameters in place and I wasn't able to reproduce the issue reported initially, so I think we're good.

OSP 15: put the content in the Director Installation & Usage Guide, in the "Scaling overcloud nodes" section.

We raised the timeouts in ceph-ansible itself [1], so this should be even less likely to be hit; lowering severity.

1. https://bugzilla.redhat.com/show_bug.cgi?id=1718981