Bug 1549293 - [CEE/SD] inconsistent PG - omap_digest_mismatch_oi discovered after 2.4a -> 2.5 update, plus several "bad complete for"
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: z1
Target Release: 3.0
Assignee: David Zafman
QA Contact: Parikshith
Docs Contact: Aron Gunn
Depends On:
Blocks: 1494421 1544643
Reported: 2018-02-26 21:55 UTC by Vikhyat Umrao
Modified: 2018-06-08 07:29 UTC (History)
14 users

Fixed In Version: RHEL: ceph-12.2.1-45.el7cp Ubuntu: ceph_12.2.1-47redhat1xenial
Doc Type: Bug Fix
Doc Text:
.Slow OSD startup after upgrading to Red Hat Ceph Storage 3.0
Ceph Storage Clusters that have large `omap` databases experience slow OSD startup because of the scanning and repairing performed during the upgrade from Red Hat Ceph Storage 2.x to 3.0. As a result, the rolling update may take longer than the specified 5-minute timeout. Before running the Ansible `rolling_update.yml` playbook, set the `handler_health_osd_check_delay` option to 180 in the `group_vars/all.yml` file.
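The workaround described in the doc text can be sketched as the following `group_vars/all.yml` fragment; the option name comes from the doc text, while the comment and placement are illustrative, not a definitive ceph-ansible configuration:

```yaml
# group_vars/all.yml (ceph-ansible)
# Raise the post-restart OSD health-check delay so OSDs with large omap
# databases have time to finish the scan/repair that runs on first start
# after the 2.x -> 3.0 upgrade, before rolling_update.yml times out.
handler_health_osd_check_delay: 180
```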
Clone Of: 1548481
Last Closed: 2018-03-08 15:54:03 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3390451 None None None 2018-03-23 03:39:52 UTC
Red Hat Product Errata RHBA-2018:0474 normal SHIPPED_LIVE Red Hat Ceph Storage 3.0 bug fix update 2018-03-08 20:51:53 UTC

Comment 26 errata-xmlrpc 2018-03-08 15:54:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

