Bug 1344360 - Ceph df returning pool's max available field as 0
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Calamari
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: 2.0
Assigned To: Gregory Meno
QA Contact: anmol babu
Depends On:
Blocks: 1343229
Reported: 2016-06-09 09:25 EDT by anmol babu
Modified: 2016-08-23 15:41 EDT (History)
CC: 9 users

See Also:
Fixed In Version: calamari-server-1.4.1-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-23 15:41:17 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments:


External Trackers:
  Tracker ID: Red Hat Product Errata RHBA-2016:1755
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Red Hat Ceph Storage 2.0 bug fix and enhancement update
  Last Updated: 2016-08-23 19:23:52 EDT

Description anmol babu 2016-06-09 09:25:38 EDT
Description of problem:
The ceph df command returns the pool's max available field as 0.

Version-Release number of selected component (if applicable):
10.2.1-7.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a Ceph cluster
2. Create a Ceph pool
3. Run ceph df and check the pool's max available field

Actual results:
The ceph df command returns the pool's max available field as 0.

Expected results:
The ceph df command should report the pool's actual max available capacity instead of 0.

Additional info:
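For reference, the symptom shows up in the POOLS section of ceph df. The snippet below only illustrates the shape of the output (the host prompt, pool name, and sizes are made up, not captured from this cluster); the relevant part is the MAX AVAIL column reading 0:

[root@mon ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    5586G    5585G        1632M          0.03
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0             0           0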
Comment 2 anmol babu 2016-06-09 09:27:15 EDT
[root@dhcp47-22 ~]# rpm -qa|grep ceph
python-cephfs-10.2.1-7.el7cp.x86_64
ceph-selinux-10.2.1-7.el7cp.x86_64
libcephfs1-10.2.1-7.el7cp.x86_64
ceph-common-10.2.1-7.el7cp.x86_64
ceph-base-10.2.1-7.el7cp.x86_64
ceph-osd-10.2.1-7.el7cp.x86_64
Comment 3 anmol babu 2016-06-09 09:40:56 EDT
(In reply to anmol babu from comment #2)
> [root@dhcp47-22 ~]# rpm -qa|grep ceph
> python-cephfs-10.2.1-7.el7cp.x86_64
> ceph-selinux-10.2.1-7.el7cp.x86_64
> libcephfs1-10.2.1-7.el7cp.x86_64
> ceph-common-10.2.1-7.el7cp.x86_64
> ceph-base-10.2.1-7.el7cp.x86_64
> ceph-osd-10.2.1-7.el7cp.x86_64

Sorry, the issue is with the ceph version below:
[root@jenkins-usm4-mon3 ~]# rpm -qa|grep ceph
ceph-common-10.2.1-13.el7cp.x86_64
libcephfs1-10.2.1-13.el7cp.x86_64
python-cephfs-10.2.1-13.el7cp.x86_64
ceph-selinux-10.2.1-13.el7cp.x86_64
ceph-mon-10.2.1-13.el7cp.x86_64
ceph-base-10.2.1-13.el7cp.x86_64
Comment 4 Samuel Just 2016-06-09 12:20:46 EDT
Yeah, we're going to need a lot more information. What command did you run? What was the output? How is the cluster configured? How many osds? How many pools? How are the crush rules configured? Can I just log into the nodes with the cluster?
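For anyone hitting the same symptom, the information asked for above can be pulled with the standard CLI; a rough sketch (commands only, no output, nothing here is taken from the cluster in this bug):

# Overall cluster health and OSD count
ceph -s
ceph osd stat

# Pools and their settings (size, crush ruleset)
ceph osd dump | grep pool

# CRUSH hierarchy and rules
ceph osd tree
ceph osd crush rule dump

# Per-pool capacity, including the MAX AVAIL column in question
ceph df detail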
Comment 7 Samuel Just 2016-06-09 13:48:16 EDT
Looks like size is 1 for both pools, probably related.
Comment 9 Sage Weil 2016-06-09 14:26:13 EDT
The problem is that the crush weights for all of these osds are 0.  For some reason the osd-prestart didn't set the weights on osd startup?  Or maybe rhs-c precreated the items in the crush map and the prestart's weight initialization was ignored?  (I'm guessing the latter... the prestart *only* sets the weight if the osd didn't already exist in the crush map.)
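A quick way to check this diagnosis is the WEIGHT column of ceph osd tree, and as a manual workaround the weight can be set per OSD with ceph osd crush reweight. The OSD id and the 0.90 weight below are placeholders (the weight is conventionally the device capacity in TiB):

# Zero CRUSH weights show up in the WEIGHT column
ceph osd tree

# Example only: give osd.0 a non-zero weight matching its capacity
ceph osd crush reweight osd.0 0.90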
Comment 10 Samuel Just 2016-06-09 14:36:07 EDT
Sounds like a config error then?
Comment 11 Samuel Just 2016-06-09 14:54:31 EDT
Gregory: sage suggests just setting the crush weights for the osds as they are added since the tool already has the information.  Would that work?
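For illustration, supplying the weight when the OSD is inserted into the CRUSH map looks roughly like this with the plain CLI (the id, weight, and host bucket are placeholders):

# Add the OSD under its host bucket with an explicit weight
ceph osd crush add osd.0 0.90 host=node1

# create-or-move keeps the weight of an item that already exists, which matches
# the behaviour described in comment 9 for pre-created zero-weight entries
ceph osd crush create-or-move osd.0 0.90 host=node1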
Comment 12 Gregory Meno 2016-06-09 15:10:43 EDT
It should work. What is "tool" in this context? I'm investigating why calamari didn't set the weights in the first place.
Comment 13 Gregory Meno 2016-06-10 00:22:58 EDT
I'm working on a fix on the machines listed here. Please don't destroy them if possible.
Comment 14 Gregory Meno 2016-06-14 13:27:51 EDT
v1.4.1
Comment 16 Gregory Meno 2016-06-14 19:00:09 EDT
Ok I'm done with the machines listed in the bug.

The problem was that calamari was refusing to set the crush node weights that storage console was asking for when building the crushmap.

I've corrected that and verified.
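For anyone re-checking a cluster once the fixed build (calamari-server-1.4.1-1.el7cp per this bug) is in place, verification boils down to the same two commands, now expected to show non-zero values:

# CRUSH weights should be non-zero for every OSD
ceph osd tree

# and MAX AVAIL in the POOLS section should no longer be 0
ceph df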
Comment 18 errata-xmlrpc 2016-08-23 15:41:17 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html
