Bug 1255900 - Ceph pool size (number of replicas) set too high
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: y2
Target Release: 7.0 (Kilo)
Assigned To: Giulio Fidente
Keywords: Triaged
Reported: 2015-08-21 16:01 EDT by Ryan Brown
Modified: 2016-04-18 02:55 EDT
CC: 6 users

Doc Type: Bug Fix
Last Closed: 2015-10-09 06:35:19 EDT
Type: Bug

Attachments: None
Description Ryan Brown 2015-08-21 16:01:30 EDT
Description of problem:
On installation, the Ceph pool replica count ("size") was set to one more than the number of Ceph hosts available. This left the cluster in a degraded state, since placement groups could never be fully replicated, until the replica count was reduced.
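The failure mode reduces to a simple invariant, sketched below. With Ceph's default CRUSH rule (failure domain = host), a replicated pool can only become healthy when its replica count is at most the number of OSD hosts; the function name here is illustrative, not part of rhosp-director.

```python
def pool_size_is_satisfiable(pool_size: int, num_osd_hosts: int) -> bool:
    """Return True if every placement group can reach full replication.

    Assumes the default CRUSH rule, which places each replica on a
    distinct host; a pool size larger than the host count can then
    never be satisfied and the cluster stays degraded.
    """
    return 1 <= pool_size <= num_osd_hosts

# The deployment in this report had 3 Ceph hosts, but the pool size
# was set to one more than that, so the cluster stayed in HEALTH_WARN
# until the replica count was lowered.
assert pool_size_is_satisfiable(3, 3)       # size == hosts: satisfiable
assert not pool_size_is_satisfiable(4, 3)   # size > hosts: degraded
```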

Version-Release number of selected component (if applicable):
GA with 0day

How reproducible:
Unknown; the issue was encountered at a customer site, not in a lab environment. I'll try to reproduce it as soon as possible.

The same environment had been deployed successfully multiple times previously, so at the very least it's an intermittent problem.

Additional info:
The deployment was a 3-controller, 3-compute, 3-ceph deployment with network isolation.
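For a deployment like this one (3 Ceph hosts), the workaround was to lower each pool's replica count to match the host count. A sketch of the commands, assuming standard Ceph CLI; the pool names in the loop are the defaults typically created by the director's Ceph templates and should be checked against the actual cluster:

```shell
# Confirm the cluster is degraded and see which PGs are affected.
ceph health detail

# Inspect each pool's replica count ("size").
# "ceph osd dump | grep 'replicated size'" shows the same information.
ceph osd pool ls detail

# Lower the replica count to match the number of Ceph hosts (3 here),
# and keep min_size below it so I/O continues during a single-host outage.
for pool in rbd images volumes vms; do
    ceph osd pool set "$pool" size 3
    ceph osd pool set "$pool" min_size 2
done
```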
Comment 3 chris alfonso 2015-08-26 12:28:06 EDT
Possibly related bug https://bugzilla.redhat.com/show_bug.cgi?id=1253801
Comment 4 Ryan Brown 2015-08-28 10:31:36 EDT
I haven't been able to reproduce this in my lab yet. I don't want to close this bug, but it may be (annoyingly) intermittent even in similar configurations.
