Bug 1304319 - Manually created cow clone of rbd device is removed by virStorageBackendRBDBuildVolFrom if it has the same name as existing one
Status: NEW
Product: Virtualization Tools
Classification: Community
Component: libvirt
Hardware: x86_64 Linux
Assigned To: Libvirt Maintainers
Reported: 2016-02-03 05:05 EST by yangyang
Modified: 2016-02-09 07:49 EST
CC List: 6 users

Doc Type: Bug Fix
Type: Bug
Description yangyang 2016-02-03 05:05:56 EST
Description of problem:
If I manually create a copy-on-write (cow) clone of an rbd image using 'rbd clone xxx', and later use 'virsh vol-clone xxx' to create an rbd volume with the same name, the clone attempt fails but the manually created image is removed.


Version-Release number of selected component (if applicable):
libvirt-1.3.2-1.fc24_v1.3.1_79_g63e15ad.x86_64

How reproducible:
100%

Steps to Reproduce:
1. create an rbd pool
# cat rbd.xml 
<pool type='rbd'>
  <name>rbd</name>
  <source>
    <host name='10.73.68.116'/>
    <host name='10.73.68.117'/>
    <host name='10.73.68.118'/>
    <name>yy</name>
  </source>
</pool>

# virsh vol-list rbd
 Name                 Path                                    
------------------------------------------------------------------------------
 vol1                 yy/vol1                                 
 vol1.clone1          yy/vol1.clone1                          
 vol1.clone2          yy/vol1.clone2                          
 vol1.clone3          yy/vol1.clone3                          
 vol1.clone4          yy/vol1.clone4

2. create a cow clone using 'rbd clone xxx'
# rbd clone yy/vol1@sn1 yy/vol1.clone5
[root@fedora_yy ~]# rbd ls yy
vol1
vol1.clone1
vol1.clone2
vol1.clone3
vol1.clone4
vol1.clone5
                        
3. create vol1.clone5 using vol-clone
[root@fedora_yy ~]# virsh vol-clone vol1 vol1.clone5 rbd
error: Failed to clone vol from vol1
error: failed to clone RBD volume vol1 to vol1.clone5: File exists

4. check if vol1.clone5 exists
[root@fedora_yy ~]# rbd ls yy
vol1
vol1.clone1
vol1.clone2
vol1.clone3
vol1.clone4

Actual results:
Manually created cow clone is removed

Expected results:
Manually created cow clone should not be removed


Additional info:
Comment 1 Wido den Hollander 2016-02-09 07:00:38 EST
I think this happens because libvirt sees the cloning operation fail and tries to clean up after it.

If you refreshed the pool after creating the RBD volume manually, this wouldn't happen.

This is more a libvirt logic problem than an RBD storage driver specific issue.

Good catch, though; I don't know how to fix this right now.
Comment 2 John Ferlan 2016-02-09 07:49:25 EST
oh... ouch... Long story short: I fixed a similar issue in the buildVol path, but I seem to have forgotten the buildVolFrom path, mainly because there was no rbd buildVolFrom code at the time. The two existing buildVolFrom backend APIs, in FS and DISK, used code that ended up going down the buildVol path, so it slipped my mind.

Anyway, see commit id '4cd7d220c9b' - there are a lot of supporting patches and quite a bit of history behind it, but I believe the fix is a similar change in storageVolCreateXMLFrom: when "buildret < 0", call storageVolRemoveFromPool on newvol rather than calling DeleteInternal.

Also helps that you can "test" my theory with a real rbd instance!
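The distinction comment 2 draws can be illustrated with a toy model. All names here (Pool, clone_vol, delete_internal, remove_from_pool) are hypothetical stand-ins for the pattern described above, not libvirt's actual APIs: when the build step fails because the image already exists on the backend, deleting the volume destroys the hand-made image, while merely dropping libvirt's in-memory volume object leaves the backend untouched.

```python
class Pool:
    def __init__(self, images):
        # Images that actually exist in the Ceph cluster.
        self.backend = set(images)
        # libvirt's in-memory list of volume objects for this pool.
        self.volumes = set(images)

    def clone_vol(self, src, dest, cleanup):
        # libvirt registers the new volume object before building it.
        self.volumes.add(dest)
        if dest in self.backend:
            # Build fails ("File exists"); run the failure cleanup path.
            cleanup(self, dest)
            return False
        self.backend.add(dest)
        return True

def delete_internal(pool, name):
    # Old cleanup: delete the volume, which also destroys the backend image.
    pool.volumes.discard(name)
    pool.backend.discard(name)

def remove_from_pool(pool, name):
    # Suggested cleanup: drop only libvirt's volume object; storage untouched.
    pool.volumes.discard(name)

# A clone created by hand with 'rbd clone' exists on the backend but not in
# libvirt's volume list, because the pool was not refreshed.
pool = Pool({"vol1"})
pool.backend.add("vol1.clone5")
pool.clone_vol("vol1", "vol1.clone5", delete_internal)
# The clone attempt fails, and the hand-made image is gone from the backend.
```

With remove_from_pool as the cleanup, the same failed clone leaves "vol1.clone5" on the backend, matching the expected result in the report.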
