Bug 1224978 - Can't create cinder volume after removing one of ceph osd nodes
Summary: Can't create cinder volume after removing one of ceph osd nodes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ga
: Director
Assignee: Giulio Fidente
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-26 10:47 UTC by Jan Provaznik
Modified: 2023-02-22 23:02 UTC (History)
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-05 13:51:41 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHEA-2015:1549
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Enterprise Linux OpenStack Platform director Release
Last Updated: 2015-08-05 17:49:10 UTC

Description Jan Provaznik 2015-05-26 10:47:08 UTC
Description of problem:
After deploying the Overcloud with 3 ceph nodes (export CEPHSTORAGESCALE=3) and then removing one of the ceph nodes, it is no longer possible to create cinder volumes, even though the ceph pool min size is set to 1.
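
In case it helps with re-testing, a minimal sketch of how to check the replication settings on the cinder pool (assuming the pool is named "volumes", the usual name for the cinder RBD pool in these deployments; adjust if different):

  $ ceph osd pool get volumes size        # replicated size of the pool
  $ ceph osd pool get volumes min_size    # minimum replicas required for I/O
  $ ceph osd pool set volumes min_size 1  # only if min_size needs lowering

With min_size at 1, writes should still be accepted with a single OSD node down.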

Version-Release number of selected component (if applicable):
ceph-0.80.7-0.4.el7.x86_64
openstack-cinder-2015.1.0-2.el7.noarch

Steps to Reproduce:
1. export CEPHSTORAGESCALE=3;instack-deploy-overcloud --tuskar
2. nova stop "one-of-ceph-nodes"
3. wait for 5 minutes, then try "cinder create 1"

Actual results:
cinder volume status is "error"

Expected results:
volume is created


Additional info:
It seems the issue is in ceph itself. After removing a node, ceph correctly reports that one node is down, but after 5 minutes "ceph df" reports 0 for "MAX AVAIL".

Setting high priority because this leaves the cinder service non-functional in rdo-director when a single node is removed.

It seems this might be caused by this bug: http://tracker.ceph.com/issues/10257
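
As a rough sketch, the symptom above can be observed with the following commands (node and OSD names are placeholders; no captured output is included here):

  $ nova stop <one-of-ceph-nodes>   # take one ceph OSD node down
  $ ceph osd tree                   # the node's OSDs should be reported "down"
  $ ceph health                     # cluster degraded, but min_size 1 should still allow writes
  $ ceph df                         # symptom: "MAX AVAIL" drops to 0 for the pools
  $ cinder create 1                 # the new volume then ends up in "error" state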

Comment 4 Giulio Fidente 2015-05-26 10:59:07 UTC
As per the initial report, this seems to be caused by http://tracker.ceph.com/issues/10257, which is not fixed in ceph-0.80.7-0.4.el7.x86_64.

Comment 5 Giulio Fidente 2015-05-26 14:05:30 UTC
This seems to affect ceph-0.80.8-4.el7cp.x86_64 as well.

Comment 6 chris alfonso 2015-05-26 18:20:11 UTC
Once an updated ceph package is available, please re-test.

Comment 7 Ken Dreyer (Red Hat) 2015-05-29 20:18:55 UTC
Just checking here... I see you're using EPEL packages in the initial bug report. We're not backporting any patches to EPEL (I guess we *could* do it.) Do you need a fix in EPEL too?

Comment 8 Giulio Fidente 2015-06-01 07:32:58 UTC
hi Ken,

Correct, the initial report was mistakenly filed against Red Hat Storage despite using an RPM taken from EPEL. In comment #5 I confirmed the same bug affects the RHS build: ceph-0.80.8-4.el7cp.x86_64

Comment 9 Giulio Fidente 2015-06-02 03:21:21 UTC
Ken,

I might have misunderstood your comment #7; from bug #1225081 I understand this is fixed in RHS 1.3.x

Yet RDO will use the version in EPEL. Should we file another bug, against RDO, to track a fix for EPEL?

Comment 10 Ken Dreyer (Red Hat) 2015-06-02 15:11:43 UTC
The fix that you need (https://github.com/ceph/ceph/pull/3826) is in Ceph itself, right? I'm fine with just cherry-picking that fix to the EPEL 7 Ceph package.

Comment 11 Giulio Fidente 2015-06-03 08:58:17 UTC
I think cherry-picking to EPEL would be great. We can track the cherry-pick with a BZ as well if you want; from what I understand, EPEL issues should be filed using [1].

1. https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL

Comment 14 Jan Provaznik 2015-06-03 10:58:27 UTC
I can confirm that this issue is solved in ceph-0.94.1-11.el7cp.x86_64. With this version, "ceph df" returns reasonable values after removing an OSD node and the "cinder create" command works.
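
For reference, a sketch of the verification flow (package version and pool behaviour as described above; exact output not reproduced here):

  $ rpm -q ceph          # ceph-0.94.1-11.el7cp.x86_64 or newer
  $ ceph df              # "MAX AVAIL" should be non-zero even with one OSD node down
  $ cinder create 1
  $ cinder list          # the volume should reach "available" instead of "error"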

Comment 16 Giulio Fidente 2015-07-29 07:44:50 UTC
This BZ does not need changes in OSP Director; it is instead due to a bug in Ceph.

ceph-0.94.1-11.el7cp.x86_64 (1.3) includes the needed fixes
ceph-0.80.7-0.4.el7.x86_64 (1.2) does not and will exhibit this problem

BZ 1225081 tracks the backport of the fix from Ceph 1.3 to 1.2.

Comment 17 Yogev Rabl 2015-08-02 07:49:44 UTC
Verified in:
ceph-osd-0.94.1-13.el7cp.x86_64
ceph-0.94.1-13.el7cp.x86_64
ceph-common-0.94.1-13.el7cp.x86_64
ceph-mon-0.94.1-13.el7cp.x86_64

Comment 19 errata-xmlrpc 2015-08-05 13:51:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2015:1549

