Bug 1549293
Summary: | [CEE/SD] inconsistent PG - `omap_digest_mismatch_oi` discovered after 2.4a -> 2.5 update, plus several "bad complete for" messages | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vikhyat Umrao <vumrao> |
Component: | RADOS | Assignee: | David Zafman <dzafman> |
Status: | CLOSED ERRATA | QA Contact: | Parikshith <pbyregow> |
Severity: | urgent | Docs Contact: | Aron Gunn <agunn> |
Priority: | urgent | |
Version: | 3.0 | CC: | agunn, ceph-eng-bugs, ceph-qe-bugs, dzafman, flucifre, hklein, hnallurv, kchai, kdreyer, mhackett, shmohan, tpetr, tserlin, vumrao |
Target Milestone: | z1 | |
Target Release: | 3.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | RHEL: ceph-12.2.1-45.el7cp Ubuntu: ceph_12.2.1-47redhat1xenial | Doc Type: | Bug Fix
Doc Text: |
.Slow OSD startup after upgrading to Red Hat Ceph Storage 3.0
Ceph storage clusters with large `omap` databases experience slow OSD startup, because the OSDs scan and repair the `omap` data during the upgrade from Red Hat Ceph Storage 2.x to 3.0. As a result, the rolling update can take longer than the default timeout of 5 minutes.
To work around this issue, set the `handler_health_osd_check_delay` option to 180 in the `group_vars/all.yml` file before running the Ansible `rolling_update.yml` playbook (a configuration sketch follows the table below).
|
Story Points: | --- | |
Clone Of: | 1548481 | Environment: |
Last Closed: | 2018-03-08 15:54:03 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1494421, 1544643 | |
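For reference, the workaround described in the Doc Text above amounts to one line of ceph-ansible configuration. The following is a minimal sketch: the option name and the value 180 come from the Doc Text, while the explanatory comment and surrounding file layout follow the usual ceph-ansible conventions and are assumptions here.

```yaml
# group_vars/all.yml (excerpt) -- sketch of the documented workaround.
# Delay, in seconds, between the health checks the rolling update runs
# while waiting for a restarted OSD to come back up. Raising it from
# the default gives OSDs with large omap databases time to finish the
# startup scan/repair triggered by the 2.x -> 3.0 upgrade.
handler_health_osd_check_delay: 180
```

With this set, run the `rolling_update.yml` playbook as usual; in a typical Red Hat Ceph Storage 3.0 installation it is found under `/usr/share/ceph-ansible/infrastructure-playbooks/` (a conventional path, not stated in this bug).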
Comment 26
errata-xmlrpc
2018-03-08 15:54:03 UTC