Bug 1303873
| Summary: | Volume wipe with trim algorithm does not reclaim space on rbd cow clone | | |
|---|---|---|---|
| Product: | [Community] Virtualization Tools | Reporter: | Yang Yang <yanyang> |
| Component: | libvirt | Assignee: | Libvirt Maintainers <libvirt-maint> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | CC: | dyuan, hhan, jferlan, mzhan, rbalakri, wido |
| Version: | unspecified | | |
| Target Milestone: | --- | Target Release: | --- |
| Hardware: | x86_64 | OS: | Linux |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-02-15 01:32:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | libvirtd.log (attachment 1120368) | | |
Description
Yang Yang
2016-02-02 10:02:26 UTC
Created attachment 1120368 [details]
libvirtd.log
The RADOS objects will not be removed, but they will be trimmed to zero bytes inside Ceph. Can you run:
$ ceph df
Now trim the volume and wait 30 seconds:
$ ceph df
The pool should now use less space.
Wido
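A minimal sketch of that check, using the pool and volume names that appear later in this report (Ceph pool yy, volume vol1.clone in the libvirt pool rbd):
$ ceph df
$ virsh vol-wipe vol1.clone rbd --algorithm trim
$ sleep 30
$ ceph df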
I ran the steps as you said, and the pool indeed uses less space after the wipe. What confuses me is that the RADOS objects are removed on a regular rbd image, but they are not removed on a copy-on-write (cow) clone rbd image. Why do they behave differently?
1. Tested on a cow clone rbd image: the pool uses less space as expected, but the RADOS objects are not removed.
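The report does not show how vol1.clone was created; a hypothetical sequence that produces a layered (cow) clone with the parent shown below (yy/vol1@sn1) would be:
# rbd snap create yy/vol1@sn1
# rbd snap protect yy/vol1@sn1
# rbd clone yy/vol1@sn1 yy/vol1.clone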
# rbd info yy/vol1.clone
rbd image 'vol1.clone':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.fac412bfc3f4
format: 2
features: layering
flags:
parent: yy/vol1@sn1
overlap: 1024 MB
[root@fedora_yy ~]# rados -p yy ls | grep rbd_data.fac412bfc3f4 | wc -l
128
# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
142G 119G 23206M 15.92
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 40647M 0
yy 1 924M 0.63 40647M 240
# virsh vol-wipe vol1.clone rbd --algorithm trim
Vol vol1.clone wiped
[root@fedora_yy ~]# rados -p yy ls | grep rbd_data.fac412bfc3f4 | wc -l
128
[root@fedora_yy ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
142G 121G 21525M 14.77
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 41223M 0
yy 1 414M 0.28 41223M 240
2. Tested on a regular (non-clone) rbd image: the pool uses less space as expected, and the RADOS objects are removed.
[root@fedora_yy ~]# virsh vol-create-as rbd vol2 1G
Vol vol2 created
[root@fedora_yy ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
142G 122G 20779M 14.26
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 41511M 0
yy 1 414M 0.28 41511M 241
Write 400 MB of data into the image.
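The write command itself is not shown; a hypothetical way to push roughly 400 MB into the image, assuming the kernel rbd client is available on the host:
# rbd map yy/vol2                (maps the image to a block device, e.g. /dev/rbd0)
# dd if=/dev/urandom of=/dev/rbd0 bs=4M count=100 oflag=direct
# rbd unmap /dev/rbd0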
# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
142G 119G 23057M 15.82
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 40748M 0
yy 1 828M 0.57 40748M 348
[root@fedora_yy ~]# rbd info yy/vol2
rbd image 'vol2':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.b06a35aaa966
format: 2
features: layering, striping
flags:
stripe unit: 4096 kB
stripe count: 1
[root@fedora_yy ~]# rados -p yy ls | grep rbd_data.b06a35aaa966 | wc -l
106
[root@fedora_yy ~]# virsh vol-wipe vol2 rbd --algorithm trim
Vol vol2 wiped
# rados -p yy ls | grep rbd_data.b06a35aaa966 | wc -l
0
[root@fedora_yy ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
142G 121G 20839M 14.30
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 41449M 0
yy 1 414M 0.28 41449M 242
That is an internal thing inside Ceph. The objects do not occupy any space anymore, and thus the result is as we want it to be. I think we can close this as it is not a bug. Trimming results in all data being trimmed from the RBD volume.
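The point that the clone's remaining objects are trimmed to zero bytes rather than removed can be verified directly; a hypothetical check with rados stat, picking one of the objects still listed for the clone (the reported size should be 0 after the wipe):
# obj=$(rados -p yy ls | grep rbd_data.fac412bfc3f4 | head -1)
# rados -p yy stat "$obj"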