Bug 1597425

Summary: RHCS 3 - Ceph Luminous - PGs stuck activating
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vikhyat Umrao <vumrao>
Component: RADOS
Assignee: Josh Durgin <jdurgin>
Status: CLOSED ERRATA
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: high
Docs Contact: Bara Ancincova <bancinco>
Priority: low
Version: 3.0
CC: ceph-eng-bugs, chlong, cmedeiro, dzafman, hnallurv, jdurgin, kchai, pdhange
Target Milestone: z1
Keywords: CodeChange
Target Release: 3.1   
Hardware: x86_64   
OS: Linux   
Fixed In Version: RHEL: ceph-12.2.5-46.el7cp; Ubuntu: ceph_12.2.5-31redhat1
Doc Type: Bug Fix
Doc Text:
The default limit on PGs per OSD has been increased
In some situations, such as widely varying disk sizes, the default limit on placement groups (PGs) per OSD could prevent PGs from becoming active. These limits have been increased by default to make this scenario less likely.
Clones: 1633426
Last Closed: 2018-11-09 00:59:17 UTC
Type: Bug
Bug Blocks: 1584264, 1633426    
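
The Doc Text above refers to the PG-per-OSD limits that gate PG activation. As a minimal sketch, the relevant Luminous options can be inspected, and raised as an interim workaround on builds without the fix; the option names below are real Luminous options, but the value 300 is a placeholder, since the exact defaults shipped in ceph-12.2.5-46 are not quoted in this bug:

    # Inspect the current limits via a monitor's admin socket:
    ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd
    ceph daemon mon.$(hostname -s) config get osd_max_pg_per_osd_hard_ratio

    # Interim workaround before the fixed build: raise the limit at runtime
    # (placeholder value, not the shipped default):
    ceph tell mon.* injectargs '--mon_max_pg_per_osd 300'

    # Persist the override in /etc/ceph/ceph.conf under [global]:
    #   mon_max_pg_per_osd = 300

Raising the limit is a judgment call: the cap exists to keep per-OSD PG counts, and therefore memory use and peering load, bounded.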

Description Vikhyat Umrao 2018-07-02 22:27:46 UTC
Description of problem:
Changing the replication size of one of the pools to 3 causes that pool's PGs to become stuck in the activating state.

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 3.0.z3
ceph-osd-12.2.4-10.el7cp.x86_64

How reproducible:
Always reproducible in the customer environment.
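
A minimal sketch of the reported reproduction, assuming a hypothetical pool named "testpool" (the affected pool is not named in this comment):

    # Raise the pool's replication size to 3 (pool name is hypothetical):
    ceph osd pool set testpool size 3

    # The pool's PGs then remain stuck in activating, observable via:
    ceph health detail
    ceph pg dump_stuck inactive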

Comment 23 errata-xmlrpc 2018-11-09 00:59:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3530