Bug 1515337 - [rbd] rbd du on empty pool does not return proper output
Summary: [rbd] rbd du on empty pool does not return proper output
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 2.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: z2
Target Release: 3.0
Assignee: Jason Dillaman
QA Contact: Manohar Murthy
Docs Contact: Erin Donnelly
URL:
Whiteboard:
Depends On:
Blocks: 1515341
Reported: 2017-11-20 15:51 UTC by Tomas Petr
Modified: 2021-06-10 13:38 UTC
CC List: 6 users

Fixed In Version: RHEL: ceph-12.2.4-4.el7cp Ubuntu: ceph_12.2.1-4redhat1xenial
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1515341 (view as bug list)
Environment:
Last Closed: 2018-04-26 17:38:39 UTC
Embargoed:


Attachments


Links:
- Ceph Project Bug Tracker 22200 (last updated 2017-11-20 15:59:21 UTC)
- Red Hat Product Errata RHBA-2018:1259 (last updated 2018-04-26 17:40:22 UTC)

Description Tomas Petr 2017-11-20 15:51:21 UTC
Description of problem:
We have encountered a bug: since RHCS 2.3 and newer,
"rbd du --cluster {cluster_name} -p {pool_name} --format=json" on an empty pool returns a "specified image" error fragment instead of JSON output.

# ceph version # and newer
ceph version 10.2.7-27.el7cp (e0d2d4f2fac9d95a26486121257255260bbec8d5)

# rbd du --cluster ceph -p rbd --format=json
specified image 

# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    449G      449G         615M          0.13 
POOLS:
    NAME                      ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                       0         0         0          149G           0 
    iscsi_vmware              10      115         0          149G           4 

In RHCS 2.3 and newer, the command still works fine for a pool that contains an image:
# rbd du --cluster ceph -p iscsi_vmware --format=json
{"images":[{"name":"02iscsi_vmware","provisioned_size":1073741824,"used_size":0}],"total_provisioned_size":1073741824,"total_used_size":0}

-------------
The command worked fine in RHCS 2.2:
# ceph version # and older
ceph version 10.2.5-37.el7cp (033f137cde8573cfc5a4662b4ed6a63b8a8d1464)
# rbd du --cluster ceph -p rbd --format=json
{"images":[],"total_provisioned_size":0,"total_used_size":0}

# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    15326M     15227M      102056k          0.65 
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd      0         0         0         5075M           0 


Version-Release number of selected component (if applicable):
10.2.7-27.el7cp and newer

How reproducible:
Always

Steps to Reproduce:
1. create ceph cluster on version 10.2.7-27.el7cp and newer
2. execute "rbd du --cluster {cluster_name} -p {pool_name} --format=json" on empty pool

Actual results:
"specified image " is printed and the command exits with an error instead of producing JSON.

Expected results:
{"images":[],"total_provisioned_size":0,"total_used_size":0}

Additional info:

Comment 3 Tomas Petr 2017-11-20 15:55:21 UTC
It looks like this was caused by a commit between these two versions:

https://github.com/ceph/ceph/blob/e407049a6a9cb588f27ab270948c404159aa2205/src/tools/rbd/action/DiskUsage.cc#L211  in 10.2.6 upstream version (culprit?):

  if (!found) {
    std::cerr << "specified image " << imgname << " is not found." << std::endl;
    return -ENOENT;
  }

commit:
https://github.com/ceph/ceph/commit/ce4c801cfc114f718ca51c32b657fec638ca9aaf#diff-c57e0e173b64b0dd61751c61dcb97a04

It may be caused by another commit; I have _NOT_ tested 10.2.7-27 without this commit.
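For illustration, the snippet above fails because the "not found" check fires even when no image was specified on the command line. A minimal standalone sketch of the kind of guard that would avoid this (the function finalize_du and its parameters are hypothetical, not actual Ceph code): only treat an empty result as an error when the user actually asked for a specific image.

```cpp
#include <cerrno>
#include <iostream>
#include <string>

// Hypothetical simplification of the result handling in
// src/tools/rbd/action/DiskUsage.cc: "found" is whether any image
// matched, and "imgname" is the image filter from the command line
// (empty when the user queried a whole pool).
int finalize_du(bool found, const std::string &imgname) {
  // Only report "specified image ... is not found" when an image name
  // was given; an empty pool with no filter is a valid, empty result.
  if (!found && !imgname.empty()) {
    std::cerr << "specified image " << imgname << " is not found." << std::endl;
    return -ENOENT;
  }
  return 0;
}
```

With this guard, `rbd du -p rbd` on an empty pool would fall through to the normal (empty) JSON formatting path instead of erroring out.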

The issue still exists in Luminous:
[root@mons-0 ~]# ceph version
ceph version 12.2.1-39.el7cp (22e26be5a4920c95c43f647b31349484f663e4b9) luminous (stable)
[root@mons-0 ~]# rbd du --cluster ceph -p rbd --format=json
specified image [root@mons-0 ~]#

Comment 14 errata-xmlrpc 2018-04-26 17:38:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1259

