Bug 1330643 - ceph df - %USED per pool is wrong
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RADOS
Version: 1.3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 1.3.3
Assigned To: David Zafman
QA Contact: Ramakrishnan Periyasamy
Docs Contact: Bara Ancincova
Depends On:
Blocks: 1372735
Reported: 2016-04-26 12:15 EDT by Alexandre Marangone
Modified: 2017-07-30 11:15 EDT (History)
CC List: 11 users

See Also:
Fixed In Version: RHEL: ceph-0.94.7-5.el7cp Ubuntu: ceph_0.94.7-3redhat1trusty
Doc Type: Bug Fix
Doc Text:
.%USED now shows correct value
Previously, the `%USED` column in the output of the `ceph df` command erroneously showed the size of a pool divided by the raw space available on the OSD nodes. With this update, the column correctly shows the space used by all replicas divided by the raw space available on the OSD nodes.
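The corrected calculation described above can be sketched as follows. This is a minimal illustration of the formula, not Ceph source code; the function and parameter names are hypothetical.

```python
def percent_used(pool_used, replicas, raw_total):
    """Fixed behavior: space used by all replicas divided by raw space."""
    return pool_used * replicas / raw_total * 100

def percent_used_buggy(pool_used, replicas, raw_total):
    """Pre-fix behavior: the replica count was not factored in."""
    return pool_used / raw_total * 100

# With 3x replication, the corrected value is 3x the buggy one.
# Numbers match the "before/after" output in comment 8 (1000M used,
# 299G raw = 306176M).
print(round(percent_used(1000, 3, 306176), 2))        # 0.98
print(round(percent_used_buggy(1000, 3, 306176), 2))  # 0.33
```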
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-29 08:57:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Ceph Project Bug Tracker 15641 None None None 2016-09-08 10:45 EDT
Red Hat Product Errata RHSA-2016:1972 normal SHIPPED_LIVE Moderate: Red Hat Ceph Storage 1.3.3 security, bug fix, and enhancement update 2016-09-29 12:51:21 EDT

Description Alexandre Marangone 2016-04-26 12:15:58 EDT
Description of problem:
In "ceph df", the %USED column of the POOLS section does not account for the pool's replication size, so the reported value is effectively divided by the replica count. It shouldn't be.

Sample output on 1.3.z 

root@stor1:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44664G     44518G         145G          0.33
POOLS:
    NAME             ID     USED       %USED     MAX AVAIL     OBJECTS
[...]
    bench            11     49092M      0.11        14786G       12274

%USED for the "bench" pool (replica size 3) should be 0.33%, not 0.11%
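The numbers above can be checked directly (a sketch, converting at 1 G = 1024 M; the exact corrected value is about 0.32%, and the 0.33% in the comment comes from tripling the displayed 0.11%):

```python
raw_total_m = 44664 * 1024  # GLOBAL SIZE, 44664G expressed in MiB
used_m = 49092              # "bench" pool USED
size = 3                    # replication factor

buggy = used_m / raw_total_m * 100           # what 1.3.z reports
correct = used_m * size / raw_total_m * 100  # replicas accounted for
print(round(buggy, 2))    # 0.11
print(round(correct, 2))  # 0.32
```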
Comment 3 David Zafman 2016-04-27 20:40:38 EDT
Fixed by 71c4e525f27b8efd2aa4f3b5e95f4a13f123d41a in master and jewel branches.

Backport pull request created https://github.com/ceph/ceph/pull/8794
Comment 4 David Zafman 2016-05-12 19:01:16 EDT
This doesn't actually apply to hammer, so pull request 8794 is being closed. The upstream code already has the fix from v10.1.0 forward.
Comment 5 Ken Dreyer (Red Hat) 2016-05-12 22:52:31 EDT
Alexandre, would you please provide us the exact RPM version (rpm -qv ceph) where you're seeing this %USED value?
Comment 6 Alexandre Marangone 2016-05-13 12:10:12 EDT
Original cluster isn't available anymore. Reproduced on VMs (RHCS 1.3.2, rep size 2) 

[root@localhost ceph-deploy]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    100300M     91811M        8489M          8.46
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0      3728M      3.72        45877M         932

[root@localhost ceph-deploy]# rpm -qv ceph
ceph-0.94.5-12.el7cp.x86_64
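The reproduction above is consistent with the bug: with rep size 2, the reported 3.72% equals used/raw with the replica factor dropped. A quick check (a sketch, not Ceph code):

```python
used_m, raw_m, size = 3728, 100300, 2  # USED, GLOBAL SIZE, replica count

print(round(used_m / raw_m * 100, 2))         # 3.72 -- the buggy value shown
print(round(used_m * size / raw_m * 100, 2))  # 7.43 -- the expected value
```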
Comment 7 Ken Dreyer (Red Hat) 2016-05-13 12:29:00 EDT
David, does your PR apply in the case Alexandre posted in comment #6?
Comment 8 David Zafman 2016-05-13 20:56:45 EDT
I figured out how to fix this easily with the code from the later release. I created pull request https://github.com/ceph/ceph/pull/9125

The test pool is size 3.

Before
[~/ceph-hammer/src] (hammer)
$ ./ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    299G     74885M         226G         75.61
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0          0         0        24883M           0
    test     1      1000M      0.33        24883M          10

After
[~/ceph-hammer/src] (wip-15635)
dzafman$ ./ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    299G     76451M         225G         75.10
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0          0         0        24068M           0
    test     1      1000M      0.98        24068M          10
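The before/after numbers for the size-3 "test" pool are consistent with the fix (converting at 1 G = 1024 M; a sketch of the arithmetic, not Ceph code):

```python
raw_m = 299 * 1024  # GLOBAL SIZE, 299G in MiB
used_m, size = 1000, 3

print(round(used_m / raw_m * 100, 2))         # 0.33 -- before the fix
print(round(used_m * size / raw_m * 100, 2))  # 0.98 -- after the fix
```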
Comment 13 Ramakrishnan Periyasamy 2016-09-08 08:48:44 EDT
Assigning this bug back; the fix is not meeting expectations.

Still observing that %USED is divided by the pool's replication size.

[root@magna104 ubuntu]# ceph osd pool get rbd size
size: 3
[root@magna104 ubuntu]# ceph df 
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    10186G     10150G       37464M          0.36 
POOLS:
    NAME      ID     USED      %USED     MAX AVAIL     OBJECTS 
    rbd       0      4873M      0.14         3378G     1247650 
    pool1     1          0         0         3378G           0 
[root@magna104 ubuntu]# ceph -v
ceph version 0.94.9-1.el7cp (72b3e852266cea8a99b982f7aa3dde8ca6b48bd3)
Comment 14 Samuel Just 2016-09-08 10:54:21 EDT
Looks right to me.  4873MB/10186000.0MB*3*100 = 0.14%
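Sam's arithmetic checks out: the replica factor of 3 is included in the 0.14% value, which is why the output in comment 13 is in fact correct (approximating 10186G as 10186000M, as in the comment):

```python
value = 4873 / 10186000.0 * 3 * 100  # used * replicas / raw, as a percentage
print(round(value, 2))  # 0.14
```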
Comment 15 Ramakrishnan Periyasamy 2016-09-09 01:37:18 EDT
Bug Verified.

[root@magna104 ubuntu]# ceph osd pool get rbd size
size: 3
[root@magna104 ubuntu]# ceph df 
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    10186G     10150G       37464M          0.36 
POOLS:
    NAME      ID     USED      %USED     MAX AVAIL     OBJECTS 
    rbd       0      4873M      0.14         3378G     1247650 
    pool1     1          0         0         3378G           0 
[root@magna104 ubuntu]# ceph -v
ceph version 0.94.9-1.el7cp (72b3e852266cea8a99b982f7aa3dde8ca6b48bd3)
Comment 18 errata-xmlrpc 2016-09-29 08:57:52 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1972.html
