Bug 1326058
| Summary: | [RBD] after RBD flatten the used size of clone is 0 | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tejas <tchandra> |
| Component: | RBD | Assignee: | Jason Dillaman <jdillama> |
| Status: | CLOSED ERRATA | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.0 | CC: | ceph-eng-bugs, hnallurv, hyelloji, jdillama, kdreyer, tchandra |
| Target Milestone: | rc | | |
| Target Release: | 2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-10.2.1-1.el7cp; Ubuntu: ceph_10.2.1-2redhat1xenial | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-08-23 19:35:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | rbd-du.log | | |
This is actually expected -- since you took snapshot "s3" after writing, technically all of the usage is owned by "s3", not the HEAD revision of the image; i.e. if you did an export-diff from s3 to HEAD, you would get an empty diff file. A couple of months ago I opened a feature request ticket to add an option to the 'du' command to generate the full usage information (including snapshots). I would like to close this ticket as NOTABUG, assuming "rbd du Tejas/c1@s2" (due to the flatten copy-up) and "rbd du Tejas/c1@s3" report usage. Alternatively, if you delete "@s2" and "@s3", all the usage should be associated with the HEAD revision.

Yes, after deleting the snapshots, du works as expected. Can we expect this to be fixed in the Ceph 2.0 timeline?

The associated ticket isn't a bug -- it's a feature request. I am not sure whether it will land for 2.0. Since this doesn't have pm_ack, I assume it hasn't been prioritized yet. We can re-open it if Product Management wants it.

I feel that this should be part of Ceph 2.0. Customers are shown incorrect information about the size of the clone.

Upstream PR: https://github.com/ceph/ceph/pull/8819

The PR Jason mentioned in Comment 10 above is present in master and still needs to be backported to jewel.

This is undergoing review upstream (https://github.com/ceph/ceph/pull/8870) and will be in v10.2.1.

The above PR was merged to jewel and is present in v10.2.1.

It's already in v10.2.1 -> clearing the FutureFeature flag, because we don't need an explicit ack from Neil/Federico at this point.

```
[root@magna020 ~]# rbd du pool2/bucket2 --cluster master
NAME             PROVISIONED  USED
bucket2@snaptwo  10240M       236M
bucket2          10240M       236M
<TOTAL>          10240M       472M
[root@magna020 ~]# rbd flatten pool2/bucket2 --cluster master
Image flatten: 100% complete...done.
```
```
[root@magna020 ~]# rbd ls -l -p pool2 --cluster master
NAME               SIZE    PARENT  FMT  PROT  LOCK
bucket1            10240M          2
bucket1@snapone    10240M          2    yes
bucket2            10240M          2
bucket2@snaptwo    10240M          2
bucket2@snapthree  10240M          2
[root@magna020 ~]# rbd du pool2/bucket2 --cluster master
NAME               PROVISIONED  USED
bucket2@snaptwo    10240M       10240M
bucket2@snapthree  10240M       236M
bucket2            10240M       10240M
<TOTAL>            10240M       20716M
```

Cloned image size is not 0 after flattening. Moving it to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html
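As a sanity check on the verification output above: with the fix, the `<TOTAL>` row of `rbd du` reports the sum of the per-revision USED column. A trivial sketch (the figures are taken from the transcript; this is illustrative arithmetic, not Ceph code):

```python
# USED column (in MB) from the post-flatten 'rbd du pool2/bucket2' output.
used = {
    "bucket2@snaptwo": 10240,   # full copy-up beneath the oldest snapshot
    "bucket2@snapthree": 236,   # writes between snaptwo and snapthree
    "bucket2": 10240,           # HEAD revision
}

# <TOTAL> is the sum of each revision's owned usage.
total = sum(used.values())
print(total)  # 20716, matching the <TOTAL> row above
```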
Created attachment 1146050 [details]: rbd-du.log

Description of problem:
After a clone is partially filled and then flattened, the used size of the clone is reported as 0.

Version-Release number of selected component (if applicable):
Ceph 10.1.1

How reproducible:
Always

Steps to Reproduce:
1. Create an RBD image of 10G and fill it nearly full using bench-write.
2. Create a snapshot, protect it, and then create a clone from the snapshot.
3. Create a snapshot of the clone, then run rbd bench-write on the clone without filling it.
4. Flatten the clone; the used size of the clone is shown as 0.

Actual results:
The used size of the clone is 0.

Expected results:
The size should not be 0.

Additional info:

```
[root@magna080 ~]# rbd ls -l Tejas
[root@magna080 ~]# rbd create Tejas/img --size 10G --image-feature layering,deep-flatten
[root@magna080 ~]# rbd ls -l Tejas
NAME  SIZE    PARENT  FMT  PROT  LOCK
img   10240M          2
[root@magna080 ~]# rbd bench-write Tejas/img --io-pattern rand
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1      8765   8689.91   35593890.45
    2     11652   5824.70   23857967.36
    3     15096   5036.07   20627753.15
    4     18327   4574.94   18738960.11
    5     19188   3777.94   15474460.57
    6     19535   2157.39    8836651.93
    7     21896   2044.49    8374233.88
^C
[root@magna080 ~]# rbd du Tejas/img
warning: fast-diff map is not enabled for img. operation may be slow.
NAME  PROVISIONED  USED
img   10240M       10232M
[root@magna080 ~]# rbd snap create Tejas/img@s1
[root@magna080 ~]# rbd du Tejas/img@s1
warning: fast-diff map is not enabled for img. operation may be slow.
NAME    PROVISIONED  USED
img@s1  10240M       10232M
[root@magna080 ~]# rbd snap protect Tejas/img@s1
[root@magna080 ~]# rbd clone Tejas/img@s1 Tejas/c1
[root@magna080 ~]# rbd du Tejas/c1
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME  PROVISIONED  USED
c1    10240M       0
[root@magna080 ~]# rbd snap create Tejas/c1@s2
[root@magna080 ~]# rbd du Tejas/c1@s2
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME   PROVISIONED  USED
c1@s2  10240M       0
[root@magna080 ~]# rbd bench-write Tejas/c1
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1      9068   8408.13   34439695.29
    2     13243   6322.27   25896023.06
    3     16353   5368.74   21990362.15
^C
[root@magna080 ~]# rbd du Tejas/c1
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME  PROVISIONED  USED
c1    10240M       148M
[root@magna080 ~]# rbd snap create Tejas/c1@s3
[root@magna080 ~]# rbd ls -l Tejas
NAME    SIZE    PARENT        FMT  PROT  LOCK
c1      10240M  Tejas/img@s1  2
c1@s2   10240M  Tejas/img@s1  2
c1@s3   10240M  Tejas/img@s1  2
img     10240M                2
img@s1  10240M                2    yes
[root@magna080 ~]# rbd flatten Tejas/c1
Image flatten: 100% complete...done.
[root@magna080 ~]# rbd ls -l Tejas
NAME    SIZE    PARENT        FMT  PROT  LOCK
c1      10240M                2
c1@s2   10240M  Tejas/img@s1  2
c1@s3   10240M  Tejas/img@s1  2
img     10240M                2
img@s1  10240M                2    yes
[root@magna080 ~]# rbd du Tejas/c1
warning: fast-diff map is not enabled for c1. operation may be slow.
NAME  PROVISIONED  USED
c1    10240M       0
```

Output of `rbd du Tejas/c1 --debug-ms 1 --debug-rbd 20 --log-file rbd-du.log` is attached.
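The zero-usage result in the transcript follows from diff-based accounting, as explained in the comments: each revision of an image owns only the extents that changed since the previous snapshot, so a snapshot taken right after the writes captures them and HEAD's diff against it is empty. A minimal Python sketch of that accounting (a hypothetical model, not Ceph code; extents are simplified to 1 MB units, and the `owned_usage` helper is invented for illustration):

```python
def owned_usage(revisions):
    """revisions: (name, cumulative_extent_set) pairs, oldest first,
    HEAD last.  Each revision owns only the extents not already
    present in the previous revision -- mirroring what a per-snapshot
    'rbd du' reports."""
    owned, prev = {}, set()
    for name, extents in revisions:
        owned[name] = len(extents - prev)
        prev = extents
    return owned

parent_data = set(range(10232))           # ~10232M written into the parent img
clone_writes = set(range(20000, 20148))   # 148M written into the clone c1

# Before flatten: s2 was taken on the empty clone, the 148M of writes
# sit under s3, and HEAD has no changes since s3 -- so HEAD shows 0.
before = owned_usage([("c1@s2", set()),
                      ("c1@s3", clone_writes),
                      ("c1", clone_writes)])
# -> {"c1@s2": 0, "c1@s3": 148, "c1": 0}

# After flatten: copy-up places the parent's data beneath the oldest
# snapshot, so s2 picks up the parent usage while HEAD still diffs
# empty against s3 -- matching Jason's explanation in the comments.
after = owned_usage([("c1@s2", parent_data),
                     ("c1@s3", parent_data | clone_writes),
                     ("c1", parent_data | clone_writes)])
# -> {"c1@s2": 10232, "c1@s3": 148, "c1": 0}
```

Deleting @s2 and @s3 collapses the chain to a single revision, which is why all usage then reattaches to HEAD, as confirmed in the comments.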