Bug 1306842 change history

Each entry lists who made the change, when it was made, and the fields that changed (shown as old → new where both values were recorded).

Mike Hackett, 2016-02-11 21:08:54 UTC
  Link ID: Ceph Project Bug Tracker 14710

Mike Hackett, 2016-02-11 21:09:24 UTC
  Priority: unspecified → high

Mike Hackett, 2016-02-11 21:09:53 UTC
  Group: redhat

Mike Hackett, 2016-02-11 21:10:21 UTC
  Assignee: sjust → kchai

Ian Colle, 2016-02-18 23:31:52 UTC
  Status: NEW → ASSIGNED
  CC: icolle

Kefu Chai, 2016-02-22 15:25:14 UTC
  Status: ASSIGNED → MODIFIED
  Link ID: Ceph Project Bug Tracker 14710 → Ceph Project Bug Tracker 13930

Federico Lucifredi, 2016-05-11 15:56:37 UTC
  CC: flucifre

Vikhyat Umrao, 2016-05-23 06:17:52 UTC
  CC: vumrao

Vikhyat Umrao, 2016-06-21 14:31:49 UTC
  Blocks: 1348597

Vikhyat Umrao, 2016-06-29 17:05:14 UTC
  Status: MODIFIED → POST

Kyle Squizzato, 2016-06-30 16:44:35 UTC
  Group: redhat
  CC: ksquizza

Kyle Squizzato, 2016-06-30 16:45:03 UTC
  Link ID: Red Hat Knowledge Base (Solution) 2159331

Vikhyat Umrao, 2016-07-13 10:57:16 UTC
  Target Release: 1.3.3 → 1.3.2

Ken Dreyer (Red Hat), 2016-07-25 17:58:36 UTC
  CC: kdreyer
  Depends On: 1335269

Federico Lucifredi, 2016-07-25 17:59:12 UTC
  Target Release: 1.3.2 → 1.3.3

Harish NV Rao, 2016-08-17 11:46:14 UTC
  CC: tserlin
  CC: hnallurv

Ken Dreyer (Red Hat), 2016-08-20 02:15:33 UTC
  Status: POST → MODIFIED
  Fixed In Version: RHEL: ceph-0.94.7-5.el7cp; Ubuntu: ceph_0.94.7-3redhat1trusty

errata-xmlrpc, 2016-09-01 23:24:54 UTC
  Status: MODIFIED → ON_QA

Harish NV Rao, 2016-09-02 08:40:00 UTC
  QA Contact: ceph-qe-bugs → rperiyas

Ramakrishnan Periyasamy, 2016-09-09 10:31:29 UTC
  Status: ON_QA → VERIFIED

Bara Ancincova, 2016-09-19 15:37:12 UTC
  Blocks: 1372735

Bara Ancincova, 2016-09-21 10:51:06 UTC
  Docs Contact: bancinco
Doc Text ."ceph df" now shows proper value of "MAX AVAIL" as expected

When adding a new OSD node to the cluster by using the `ceph-deploy` utility with the `osd_crush_initial_weight` option set to `0`, the value of the `MAX AVAIL` field in the output of the `ceph df` command was `0` for each pool instead of the proper numerical value. As a consequence, other applications using Ceph, such as OpenStack Cinder, assumed that there is no space available to provision new volumes. This bug has been fixed, and `ceph df` now shows proper value of `MAX AVAIL` as expected.
Flags needinfo?(kchai)
Kefu Chai 2016-09-23 01:40:34 UTC Flags needinfo?(kchai)
Bara Ancincova 2016-09-26 12:18:53 UTC Doc Text ."ceph df" now shows proper value of "MAX AVAIL" as expected

When adding a new OSD node to the cluster by using the `ceph-deploy` utility with the `osd_crush_initial_weight` option set to `0`, the value of the `MAX AVAIL` field in the output of the `ceph df` command was `0` for each pool instead of the proper numerical value. As a consequence, other applications using Ceph, such as OpenStack Cinder, assumed that there is no space available to provision new volumes. This bug has been fixed, and `ceph df` now shows proper value of `MAX AVAIL` as expected.
."ceph df" now shows proper value of "MAX AVAIL"

When adding a new OSD node to the cluster by using the `ceph-deploy` utility with the `osd_crush_initial_weight` option set to `0`, the value of the `MAX AVAIL` field in the output of the `ceph df` command was `0` for each pool instead of the proper numerical value. As a consequence, other applications using Ceph, such as OpenStack Cinder, assumed that there is no space available to provision new volumes. This bug has been fixed, and `ceph df` now shows proper value of `MAX AVAIL` as expected.

errata-xmlrpc, 2016-09-29 12:56:37 UTC
  Status: VERIFIED → CLOSED
  Resolution: --- → ERRATA
  Last Closed: 2016-09-29 08:56:37 UTC

Drew Harris, 2017-07-30 15:07:48 UTC
  Sub Component: RADOS
  Component: Ceph → RADOS

Red Hat One Jira (issues.redhat.com), 2022-07-09 08:13:41 UTC
  Link ID: Red Hat Issue Tracker RHCEPH-4712
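
The Doc Text set on 2016-09-21 describes the trigger (`osd_crush_initial_weight` set to `0` when deploying with `ceph-deploy`) and the symptom (`ceph df` reporting `MAX AVAIL` as `0` for every pool). The shell sketch below only restates that scenario as commands; the OSD name `osd.12` and the weight `1.0` are hypothetical examples, and the last command is an illustration of adjusting the weight, not the fix delivered in the errata.

```
# Illustration of the scenario in the Doc Text; osd.12 and the weight 1.0 are
# hypothetical, and no output shown or implied here is taken from the bug.

# ceph.conf excerpt named in the Doc Text: new OSDs deployed with ceph-deploy
# enter the CRUSH map with an initial weight of 0.
#
#   [osd]
#   osd_crush_initial_weight = 0

# The freshly added OSD appears with CRUSH weight 0 in the tree.
ceph osd tree

# Symptom on affected builds (per the bug, resolved in ceph-0.94.7-5.el7cp):
# the MAX AVAIL column is 0 for every pool, which consumers such as
# OpenStack Cinder read as "no space available to provision new volumes".
ceph df

# Assigning a real CRUSH weight in the usual way (the value is site-specific):
ceph osd crush reweight osd.12 1.0
```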
