Description of problem: In "ceph df" the %USED column of the POOLS section is divided by the size of the pool. It should not be.

Sample output on 1.3.z:

root@stor1:~# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44664G     44518G     145G         0.33
POOLS:
    NAME      ID     USED       %USED     MAX AVAIL     OBJECTS
    [...]
    bench     11     49092M     0.11      14786G        12274

%USED for the "bench" pool (size 3) should be 0.33%, not 0.11%.
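The difference between the two behaviors can be sketched as follows (a minimal illustration using the sample numbers above; the function names are hypothetical, not Ceph code):

```python
def pct_used_buggy(pool_used_mb, raw_total_mb):
    # Buggy behavior: pool bytes are compared directly against the raw
    # cluster capacity, so a replicated pool appears 1/size as full as it is.
    return pool_used_mb / raw_total_mb * 100

def pct_used_fixed(pool_used_mb, raw_total_mb, rep_size):
    # Expected behavior: account for the replication factor, since each
    # logical byte in a size-N pool consumes N raw bytes.
    return pool_used_mb / raw_total_mb * rep_size * 100

# Figures from the sample output: bench pool (size 3), 49092M used,
# 44664G raw cluster total.
raw_total_mb = 44664 * 1024
print(round(pct_used_buggy(49092, raw_total_mb), 2))     # 0.11, as shown
print(round(pct_used_fixed(49092, raw_total_mb, 3), 2))  # ~0.32 from these
# rounded display figures; the report quotes 0.33 (exactly 3x the buggy value).
```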
Fixed by 71c4e525f27b8efd2aa4f3b5e95f4a13f123d41a in the master and jewel branches. Backport pull request created: https://github.com/ceph/ceph/pull/8794
Actually, this doesn't apply to hammer, so pull request 8794 is being closed. The upstream code already has the fix from v10.1.0 forward.
Alexandre, would you please provide us the exact RPM version (rpm -qv ceph) where you're seeing this %USED value?
Original cluster isn't available anymore. Reproduced on VMs (RHCS 1.3.2, rep size 2):

[root@localhost ceph-deploy]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    100300M     91811M     8489M        8.46
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0      3728M     3.72      45877M        932
[root@localhost ceph-deploy]# rpm -qv ceph
ceph-0.94.5-12.el7cp.x86_64
David, does your PR apply in the case Alexandre posted in comment #6?
I figured out how to fix this easily with the code from the later release. I created pull request https://github.com/ceph/ceph/pull/9125. The test pool has size 3.

Before:

[~/ceph-hammer/src] (hammer) $ ./ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    299G     74885M     226G         75.61
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0      0         0         24883M        0
    test     1      1000M     0.33      24883M        10

After:

[~/ceph-hammer/src] (wip-15635) dzafman$ ./ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    299G     76451M     225G         75.10
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0      0         0         24068M        0
    test     1      1000M     0.98      24068M        10
Assigning this bug back; it does not meet expectations. %USED still appears to be divided by the pool size.

[root@magna104 ubuntu]# ceph osd pool get rbd size
size: 3
[root@magna104 ubuntu]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    10186G     10150G     37464M       0.36
POOLS:
    NAME      ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd       0      4873M     0.14      3378G         1247650
    pool1     1      0         0         3378G         0
[root@magna104 ubuntu]# ceph -v
ceph version 0.94.9-1.el7cp (72b3e852266cea8a99b982f7aa3dde8ca6b48bd3)
Looks right to me: 4873 MB / 10186000.0 MB * 3 * 100 = 0.14%
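Rechecking that arithmetic (using 10186G as a round 10186000 MB, as in the comment above):

```python
# 4873 MB used, ~10186000 MB raw total, pool size 3: the replication
# factor is applied, so this is the fixed formula, not the buggy one.
pct = 4873 / 10186000.0 * 3 * 100
print(round(pct, 2))  # 0.14, matching the reported %USED
```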
Bug Verified.

[root@magna104 ubuntu]# ceph osd pool get rbd size
size: 3
[root@magna104 ubuntu]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    10186G     10150G     37464M       0.36
POOLS:
    NAME      ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd       0      4873M     0.14      3378G         1247650
    pool1     1      0         0         3378G         0
[root@magna104 ubuntu]# ceph -v
ceph version 0.94.9-1.el7cp (72b3e852266cea8a99b982f7aa3dde8ca6b48bd3)
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1972.html