Bug 1344360 - Ceph df returning pool's max available field as 0
Summary: Ceph df returning pool's max available field as 0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Calamari
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 2.0
Assignee: Christina Meno
QA Contact: anmol babu
URL:
Whiteboard:
Depends On:
Blocks: 1343229
 
Reported: 2016-06-09 13:25 UTC by anmol babu
Modified: 2016-08-23 19:41 UTC
CC List: 9 users

Fixed In Version: calamari-server-1.4.1-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:41:17 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2016:1755 (normal, SHIPPED_LIVE): Red Hat Ceph Storage 2.0 bug fix and enhancement update, last updated 2016-08-23 23:23:52 UTC

Description anmol babu 2016-06-09 13:25:38 UTC
Description of problem:
The ceph df command reports the pool's MAX AVAIL field as 0.

Version-Release number of selected component (if applicable):
10.2.1-7.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a Ceph cluster.
2. Create a Ceph pool.
3. Run ceph df and check the pool's MAX AVAIL column.

Actual results:
The ceph df command reports the pool's MAX AVAIL field as 0.

Expected results:
The ceph df command should report a non-zero MAX AVAIL value for the pool.

Additional info:
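For reference, a minimal way to observe the value in question from any monitor node (standard Ceph CLI commands; an illustrative sketch, not output captured from this cluster):

# ceph df
# ceph df detail

The MAX AVAIL column in the POOLS section is derived from the usable capacity of the OSDs that the pool's CRUSH rule maps to, so it reads 0 when no weighted OSDs are reachable, as diagnosed later in comment 9.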

Comment 2 anmol babu 2016-06-09 13:27:15 UTC
[root@dhcp47-22 ~]# rpm -qa|grep ceph
python-cephfs-10.2.1-7.el7cp.x86_64
ceph-selinux-10.2.1-7.el7cp.x86_64
libcephfs1-10.2.1-7.el7cp.x86_64
ceph-common-10.2.1-7.el7cp.x86_64
ceph-base-10.2.1-7.el7cp.x86_64
ceph-osd-10.2.1-7.el7cp.x86_64

Comment 3 anmol babu 2016-06-09 13:40:56 UTC
(In reply to anmol babu from comment #2)
> [root@dhcp47-22 ~]# rpm -qa|grep ceph
> python-cephfs-10.2.1-7.el7cp.x86_64
> ceph-selinux-10.2.1-7.el7cp.x86_64
> libcephfs1-10.2.1-7.el7cp.x86_64
> ceph-common-10.2.1-7.el7cp.x86_64
> ceph-base-10.2.1-7.el7cp.x86_64
> ceph-osd-10.2.1-7.el7cp.x86_64

Sorry, the issue is with the Ceph version below:
[root@jenkins-usm4-mon3 ~]# rpm -qa|grep ceph
ceph-common-10.2.1-13.el7cp.x86_64
libcephfs1-10.2.1-13.el7cp.x86_64
python-cephfs-10.2.1-13.el7cp.x86_64
ceph-selinux-10.2.1-13.el7cp.x86_64
ceph-mon-10.2.1-13.el7cp.x86_64
ceph-base-10.2.1-13.el7cp.x86_64

Comment 4 Samuel Just 2016-06-09 16:20:46 UTC
Yeah, we're going to need a lot more information. What command did you run? What was the output? How is the cluster configured? How many OSDs? How many pools? How are the CRUSH rules configured? Can I just log into the nodes with the cluster?

Comment 7 Samuel Just 2016-06-09 17:48:16 UTC
Looks like size is 1 for both pools, probably related.
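
For reference, a pool's replication size can be checked and changed with the standard commands below (the pool name rbd is only a placeholder):

# ceph osd pool get rbd size
# ceph osd pool set rbd size 3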

Comment 9 Sage Weil 2016-06-09 18:26:13 UTC
The problem is that the CRUSH weights for all of these OSDs are 0. For some reason the osd-prestart didn't set the weights on OSD startup? Or maybe rhs-c precreated the items in the CRUSH map and the prestart's weight initialization was ignored? (I'm guessing the latter... the prestart *only* sets the weight if the OSD didn't already exist in the CRUSH map.)
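
As a point of reference (a manual workaround sketch, not the Calamari fix that was eventually shipped), zero CRUSH weights can be confirmed and corrected from the CLI; the OSD id and weight below are placeholders:

# ceph osd tree
# ceph osd crush reweight osd.0 1.0

ceph osd tree shows the WEIGHT column as 0 for the affected OSDs, and ceph osd crush reweight sets the CRUSH weight of a single OSD (conventionally its capacity in TiB).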

Comment 10 Samuel Just 2016-06-09 18:36:07 UTC
Sounds like a config error then?

Comment 11 Samuel Just 2016-06-09 18:54:31 UTC
Gregory: Sage suggests just setting the CRUSH weights for the OSDs as they are added, since the tool already has the information. Would that work?

Comment 12 Christina Meno 2016-06-09 19:10:43 UTC
It should work. What is "tool" in this context? I'm investigating why Calamari didn't set the weights in the first place.

Comment 13 Christina Meno 2016-06-10 04:22:58 UTC
I'm working on a fix on the machines listed here. Please don't destroy them if possible.

Comment 14 Christina Meno 2016-06-14 17:27:51 UTC
v1.4.1

Comment 16 Christina Meno 2016-06-14 23:00:09 UTC
OK, I'm done with the machines listed in the bug.

The problem was that Calamari was refusing to set the CRUSH node weights that Storage Console was asking for when building the crushmap.

I've corrected that and verified the fix.
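
For context, the effect is roughly equivalent to supplying a non-zero weight at the time an item is added to the CRUSH map, e.g. with the standard CLI (the OSD id, weight, and host bucket below are placeholders; this is the CLI equivalent, not the code path Calamari uses internally):

# ceph osd crush add osd.0 1.0 host=<hostname>
# ceph osd crush create-or-move osd.0 1.0 host=<hostname> root=default

ceph osd crush add adds or updates the item's weight and location, while create-or-move only places the item and leaves an existing weight untouched, which matches the prestart behaviour described in comment 9.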

Comment 18 errata-xmlrpc 2016-08-23 19:41:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html

