Bug 1326058 - [RBD] after RBD flatten the used size of clone is 0
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 2.0
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 2.0
Assignee: Jason Dillaman
QA Contact: ceph-qe-bugs
 
Reported: 2016-04-11 16:49 UTC by Tejas
Modified: 2022-02-21 18:17 UTC
CC List: 6 users

Fixed In Version: RHEL: ceph-10.2.1-1.el7cp Ubuntu: ceph_10.2.1-2redhat1xenial
Doc Type: Bug Fix
Last Closed: 2016-08-23 19:35:40 UTC


Attachments
rbd-du.log (4.27 MB, text/plain), attached 2016-04-11 16:49 UTC by Tejas


Links
Ceph Project Bug Tracker 14540 (last updated 2016-04-11 17:24:24 UTC)
Red Hat Product Errata RHBA-2016:1755 (SHIPPED_LIVE): Red Hat Ceph Storage 2.0 bug fix and enhancement update (2016-08-23 23:23:52 UTC)

Description Tejas 2016-04-11 16:49:36 UTC
Created attachment 1146050 [details]
rbd-du.log

Description of problem:
After a clone is partially filled and then flattened, the used size of the clone is reported as 0.

Version-Release number of selected component (if applicable):
Ceph 10.1.1

How reproducible:
Always

Steps to Reproduce:
1. Create an RBD image of 10G and fill it nearly full using bench-write.
2. Create a snap, protect it, and then create a clone from the snap.
3. Create a snap of the clone, then run rbd bench-write on the clone without filling it.
4. Flatten the clone; the used size of the clone is now shown as 0.

Actual results:
The used size of the clone is reported as 0.

Expected results:
The used size should not be 0; it should reflect the data written to the clone.

Additional info:

[root@magna080 ~]# rbd ls -l Tejas
[root@magna080 ~]# 
[root@magna080 ~]# rbd create Tejas/img --size 10G --image-feature layering,deep-flatten
[root@magna080 ~]# rbd ls -l Tejas
NAME   SIZE PARENT FMT PROT LOCK 
img  10240M          2           
[root@magna080 ~]# 
[root@magna080 ~]# rbd bench-write Tejas/img --io-pattern rand
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1      8765   8689.91  35593890.45
    2     11652   5824.70  23857967.36
    3     15096   5036.07  20627753.15
    4     18327   4574.94  18738960.11
    5     19188   3777.94  15474460.57
    6     19535   2157.39  8836651.93
    7     21896   2044.49  8374233.88
^C
[root@magna080 ~]# 
[root@magna080 ~]# rbd du Tejas/img
warning: fast-diff map is not enabled for img. operation may be slow.
NAME PROVISIONED   USED 
img       10240M 10232M 
[root@magna080 ~]# 
[root@magna080 ~]# 
[root@magna080 ~]# rbd snap create Tejas/img@s1
[root@magna080 ~]# 
[root@magna080 ~]# rbd du Tejas/img@s1
warning: fast-diff map is not enabled for img. operation may be slow.
NAME   PROVISIONED   USED 
img@s1      10240M 10232M 
[root@magna080 ~]# 
[root@magna080 ~]# rbd snap protect Tejas/img@s1
[root@magna080 ~]# 
[root@magna080 ~]# rbd clone Tejas/img@s1 Tejas/c1
[root@magna080 ~]# 
[root@magna080 ~]# rbd du Tejas/c1
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME PROVISIONED USED 
c1        10240M    0 
[root@magna080 ~]# 
[root@magna080 ~]# 
[root@magna080 ~]# rbd snap create Tejas/c1@s2
[root@magna080 ~]# 
[root@magna080 ~]# rbd du Tejas/c1@s2
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME  PROVISIONED USED 
c1@s2      10240M    0 
[root@magna080 ~]# 
[root@magna080 ~]# rbd bench-write Tejas/c1
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1      9068   8408.13  34439695.29
    2     13243   6322.27  25896023.06
    3     16353   5368.74  21990362.15
^C
[root@magna080 ~]# 
[root@magna080 ~]# rbd du Tejas/c1
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME PROVISIONED USED 
c1        10240M 148M 
[root@magna080 ~]# 
[root@magna080 ~]# 
[root@magna080 ~]# rbd snap create Tejas/c1@s3
[root@magna080 ~]# 
[root@magna080 ~]# 
[root@magna080 ~]# rbd ls -l Tejas
NAME     SIZE PARENT       FMT PROT LOCK 
c1     10240M Tejas/img@s1   2           
c1@s2  10240M Tejas/img@s1   2           
c1@s3  10240M Tejas/img@s1   2           
img    10240M                2           
img@s1 10240M                2 yes       
[root@magna080 ~]# 


[root@magna080 ~]# 
[root@magna080 ~]# rbd flatten Tejas/c1
Image flatten: 100% complete...done.
[root@magna080 ~]# 
[root@magna080 ~]# 
[root@magna080 ~]# rbd ls -l Tejas
NAME     SIZE PARENT       FMT PROT LOCK 
c1     10240M                2           
c1@s2  10240M Tejas/img@s1   2           
c1@s3  10240M Tejas/img@s1   2           
img    10240M                2           
img@s1 10240M                2 yes       
[root@magna080 ~]# 
[root@magna080 ~]# 
[root@magna080 ~]# rbd du Tejas/c1
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME PROVISIONED USED 
c1        10240M    0 



Output of:
rbd du Tejas/c1 --debug-ms 1 --debug-rbd 20 --log-file rbd-du.log

is attached.
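
For reference, the transcript above condenses to the following sketch (the pool "Tejas" is assumed to already exist; as an aside, creating the image with the object-map and fast-diff features, which require exclusive-lock, would also silence the "fast-diff map is not enabled" warnings seen above):

rbd create Tejas/img --size 10G --image-feature layering,deep-flatten
rbd bench-write Tejas/img --io-pattern rand   # fill the parent nearly full (interrupted with ^C above)
rbd snap create Tejas/img@s1
rbd snap protect Tejas/img@s1
rbd clone Tejas/img@s1 Tejas/c1
rbd snap create Tejas/c1@s2
rbd bench-write Tejas/c1                      # partially fill the clone (~148M above)
rbd snap create Tejas/c1@s3
rbd flatten Tejas/c1
rbd du Tejas/c1                               # USED is reported as 0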

Comment 2 Jason Dillaman 2016-04-11 17:26:48 UTC
This is actually expected -- since you took a snapshot "s3" after writing, technically all of the usage is owned by "s3", not the HEAD revision of the image. I.e., if you did an export-diff from s3 to HEAD, you would get an empty diff file.

A couple of months ago I opened a feature request ticket to add an option to the 'du' command to generate full usage information (including snapshots).
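
The export-diff observation in comment 2 can be checked directly; a minimal sketch (the output file name is arbitrary):

rbd export-diff --from-snap s3 Tejas/c1 /tmp/c1-head.diff   # diff from s3 to HEAD
ls -lh /tmp/c1-head.diff                                    # essentially empty: nothing was written after s3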

Comment 3 Jason Dillaman 2016-04-11 17:34:29 UTC
I would like to close this ticket as NOTABUG, assuming "rbd du Tejas/c1@s2" and "rbd du Tejas/c1@s3" report usage ("@s2" because of the copy-up performed by the flatten). Alternatively, if you delete "@s2" and "@s3", all the usage should be associated with the HEAD revision.
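
A sketch of the checks and cleanup suggested above:

rbd du Tejas/c1@s2   # parent data copied up by the flatten
rbd du Tejas/c1@s3   # the ~148M written before that snap was taken
rbd snap rm Tejas/c1@s2
rbd snap rm Tejas/c1@s3
rbd du Tejas/c1      # with the snaps gone, usage is attributed to HEAD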

Comment 4 Tejas 2016-04-12 12:26:07 UTC
Yes, after deleting the snapshots, du works as expected.
Can we expect this to be fixed in the Ceph 2.0 timeline?

Comment 5 Jason Dillaman 2016-04-12 13:24:53 UTC
The associated ticket isn't a bug -- it's a feature request.  I am not sure if it will land for 2.0.

Comment 6 Ken Dreyer (Red Hat) 2016-04-22 16:33:00 UTC
Since this doesn't have pm_ack, I assume it's not been prioritized yet.

We can re-open if Product Management wants it.

Comment 7 Tejas 2016-04-25 06:16:22 UTC
I feel that this should be part of Ceph 2.0.
Customers are shown incorrect information about the used size of the clone.

Comment 10 Jason Dillaman 2016-04-28 20:05:41 UTC
Upstream PR: https://github.com/ceph/ceph/pull/8819

Comment 11 Ken Dreyer (Red Hat) 2016-05-10 02:48:17 UTC
The PR Jason mentioned in Comment 10 above is present in master, and still needs to be backported to jewel.

Comment 12 Ken Dreyer (Red Hat) 2016-05-10 13:22:51 UTC
This is undergoing review upstream (https://github.com/ceph/ceph/pull/8870) and will be in v10.2.1.

Comment 13 Ken Dreyer (Red Hat) 2016-05-16 15:32:14 UTC
The above PR was merged to jewel and is present in v10.2.1.

Comment 15 Ken Dreyer (Red Hat) 2016-05-16 23:00:26 UTC
It's already in v10.2.1 -> clearing FutureFeature flag because we don't need explicit ack from Neil/Federico at this point.

Comment 17 Hemanth Kumar 2016-05-30 11:02:26 UTC
[root@magna020 ~]# rbd du pool2/bucket2 --cluster master
NAME            PROVISIONED USED 
bucket2@snaptwo      10240M 236M 
bucket2              10240M 236M 
<TOTAL>              10240M 472M 

[root@magna020 ~]# rbd flatten pool2/bucket2 --cluster master
Image flatten: 100% complete...done.
[root@magna020 ~]# rbd ls -l -p  pool2 --cluster master
NAME                                                                                           SIZE PARENT FMT PROT LOCK 
bucket1                                                                                      10240M          2           
bucket1@snapone                                                                              10240M          2 yes       
bucket2                                                                                      10240M          2           
bucket2@snaptwo                                                                              10240M          2           
bucket2@snapthree                                                                            10240M          2           

[root@magna020 ~]# rbd du pool2/bucket2 --cluster master
NAME              PROVISIONED   USED 
bucket2@snaptwo        10240M 10240M 
bucket2@snapthree      10240M   236M 
bucket2                10240M 10240M 
<TOTAL>                10240M 20716M 


The cloned image's used size is no longer 0 after flattening.
Moving it to the verified state.
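
For reference, in the fixed behavior above, running du against the bare image spec reports each snapshot plus HEAD and a <TOTAL> row, while a snapshot-qualified spec still reports a single revision, e.g.:

rbd du pool2/bucket2 --cluster master             # snaps + HEAD + <TOTAL>, as shown above
rbd du pool2/bucket2@snapthree --cluster master   # one snapshot's usage only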

Comment 19 errata-xmlrpc 2016-08-23 19:35:40 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html

