Description of problem:
Ceph df command returning the pool's max available field as 0

Version-Release number of selected component (if applicable):
10.2.1-7.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a ceph cluster
2. Create a ceph pool
3. Run 'ceph df'

Actual results:
Ceph df returns the pool's MAX AVAIL field as 0

Expected results:
Ceph df should report a non-zero MAX AVAIL for the pool

Additional info:
[root@dhcp47-22 ~]# rpm -qa|grep ceph
python-cephfs-10.2.1-7.el7cp.x86_64
ceph-selinux-10.2.1-7.el7cp.x86_64
libcephfs1-10.2.1-7.el7cp.x86_64
ceph-common-10.2.1-7.el7cp.x86_64
ceph-base-10.2.1-7.el7cp.x86_64
ceph-osd-10.2.1-7.el7cp.x86_64
(In reply to anmol babu from comment #2)
> [root@dhcp47-22 ~]# rpm -qa|grep ceph
> python-cephfs-10.2.1-7.el7cp.x86_64
> ceph-selinux-10.2.1-7.el7cp.x86_64
> libcephfs1-10.2.1-7.el7cp.x86_64
> ceph-common-10.2.1-7.el7cp.x86_64
> ceph-base-10.2.1-7.el7cp.x86_64
> ceph-osd-10.2.1-7.el7cp.x86_64

Sorry, the issue is in the following ceph version:

[root@jenkins-usm4-mon3 ~]# rpm -qa|grep ceph
ceph-common-10.2.1-13.el7cp.x86_64
libcephfs1-10.2.1-13.el7cp.x86_64
python-cephfs-10.2.1-13.el7cp.x86_64
ceph-selinux-10.2.1-13.el7cp.x86_64
ceph-mon-10.2.1-13.el7cp.x86_64
ceph-base-10.2.1-13.el7cp.x86_64
Yeah, we're going to need a lot more information. What command did you run? What was the output? How is the cluster configured? How many osds? How many pools? How are the crush rules configured? Can I just log into the nodes with the cluster?
Looks like size is 1 for both pools, probably related.
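For reference, a pool's replication size can be checked with the standard ceph CLI (the pool name "rbd" below is just an example):

```shell
# Show the replication size of a single pool (pool name is an example)
ceph osd pool get rbd size

# Or list the size/min_size settings for all pools at once
ceph osd dump | grep 'replicated size'
```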
The problem is that the crush weights for all of these osds are 0. For some reason the osd-prestart didn't set the weights on osd startup? Or maybe rhs-c precreated the items in the crush map and the prestart's weight initialization was ignored? (I'm guessing the latter... the prestart *only* sets the weight if the osd didn't already exist in the crush map.)
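A minimal way to confirm this and work around it, assuming a standard ceph CLI (the osd id and weight below are example values):

```shell
# Show the crush hierarchy; a WEIGHT of 0 on every osd reproduces the symptom
ceph osd tree

# Manually set a crush weight for one osd (id and weight are examples;
# by convention the weight is the device capacity in TiB)
ceph osd crush reweight osd.0 1.0

# Once the weights are non-zero, MAX AVAIL in `ceph df` should no longer be 0
ceph df
```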
Sounds like a config error then?
Gregory: Sage suggests just setting the crush weights for the osds as they are added, since the tool already has the information. Would that work?
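Setting the weight at add time could look like the following sketch (the osd id, weight, and bucket location are example values, not taken from this cluster):

```shell
# Add (or move) an osd into the crush map with an explicit non-zero weight,
# rather than relying on the prestart script to initialize it later
ceph osd crush add osd.0 1.0 host=node1 root=default
```

Alternatively, the initial weight the prestart path applies can be controlled via the `osd crush initial weight` option in the [osd] section of ceph.conf.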
It should work. What is "tool" in this context? I'm investigating why calamari didn't set the weights in the first place.
I'm working on a fix on the machines listed here. Please don't destroy them if possible.
v1.4.1
Ok, I'm done with the machines listed in the bug. The problem was that calamari was refusing to set the crush node weights that storage console was asking for when building the crushmap. I've corrected that and verified the fix.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1755.html