Bug 1232170

Summary: Manually created LVM volume is deleted by virsh vol-create-as if it has the same name
Product: Red Hat Enterprise Linux 6
Reporter: nijin ashok <nashok>
Component: libvirt
Assignee: John Ferlan <jferlan>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact: Jiri Herrmann <jherrman>
Priority: unspecified
Version: 6.6
CC: canepa.n, dyuan, rbalakri, rkratky, xuzhang, yanyang, yisun
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: libvirt-0.10.2-55.el6
Doc Type: Release Note
Doc Text:
Failed logical volume creation no longer deletes existing volumes

Previously, when attempting to create a logical volume in a logical-volume pool that already contained a logical volume with the specified name, libvirt in some cases deleted the existing logical volume. This update adds more checks to determine the cause of failure when creating logical volumes, which prevents libvirt from incorrectly removing existing logical volumes in the described circumstances.
Story Points: ---
Clone Of:
Clones: 1233003 (view as bug list)
Environment:
Last Closed: 2016-05-10 19:24:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1172231, 1233003, 1275757    

Description nijin ashok 2015-06-16 08:41:15 UTC
Description of problem:

A manually created LVM logical volume is deleted by virsh vol-create-as if the volume being created has the same name. If I create a logical volume /dev/libvirt_lvm/lv1 manually using lvcreate and later add a volume with the same name using the command

virsh vol-create-as guest_images_lvm lv1 100M

then the manually created logical volume is silently deleted.
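
A condensed reproduction, assuming the pool guest_images_lvm is backed by the volume group libvirt_lvm (the full steps are listed below):

# lvcreate -n lv1 -L 100M libvirt_lvm              <==== create the LV by hand
# virsh vol-create-as guest_images_lvm lv1 100M    <==== fails: "lv1" already exists
# lvs libvirt_lvm | grep lv1                       <==== the manually created LV has been removed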

Version-Release number of selected component (if applicable):

libvirt-0.10.2-46.el6_6.6.x86_64
libvirt-client-0.10.2-46.el6_6.6.x86_64

How reproducible:

100 %

Steps to Reproduce:

1. Create a pool

# virsh pool-list
Name                 State      Autostart 
-----------------------------------------
guest_images_lvm     active     no    
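
For reference, the report does not show how this pool was defined; a minimal sketch of an equivalent logical pool over the existing libvirt_lvm volume group could look like:

# virsh pool-define-as guest_images_lvm logical --source-name libvirt_lvm --target /dev/libvirt_lvm
# virsh pool-start guest_images_lvm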

2. Create a logical volume using lvcreate in the above pool's volume group

# lvcreate -n lv1 -L 100M libvirt_lvm
# lvs|grep lv1
  lv1     libvirt_lvm -wi-a----- 100.00m 

3. Create a volume with the same name, lv1

# virsh vol-create-as guest_images_lvm lv1 100M
error: Failed to create vol lv1
error: internal error Child process (/sbin/lvcreate --name lv1 -L 102400K libvirt_lvm) unexpected exit status 5:   Logical volume "lv1" already exists in volume group "libvirt_lvm"

The volume creation fails; however, the manually created logical volume has been silently deleted:

# lvs|grep lv1
#

Running the same command again now succeeds, because the conflicting volume has already been removed:

[root@dhcp209-166 ~]# virsh vol-create-as guest_images_lvm lv1 100M
Vol lv1 created

Actual results:
The manually created logical volume is deleted.

Expected results:
The manually created logical volume should not be deleted.

Additional info:

Comment 4 yisun 2015-11-09 08:48:45 UTC
Verified on libvirt-0.10.2-55.el6.x86_64

Scenario 1: Create a logical pool and build a volume in it with a name conflicting with an existing LV.

1. # vgcreate libvirt_lvm /dev/sda6
2. # lvcreate -n lv1 -L 100M libvirt_lvm
3. # virsh pool-dumpxml lpool
<pool type='logical'>
  <name>lpool</name>
  <uuid>d74d01ca-d6dc-5b38-c066-46f27372c036</uuid>
  <capacity unit='bytes'>435104514048</capacity>
  <allocation unit='bytes'>104857600</allocation>
  <available unit='bytes'>434999656448</available>
  <source>
    <device path='/dev/sda6'/>
    <name>libvirt_lvm</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/libvirt_lvm</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
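
(The definition of lpool itself is not shown; as a sketch derived from the dumpxml output above, an equivalent pool could have been created with:)

# virsh pool-define-as lpool logical --source-dev /dev/sda6 --source-name libvirt_lvm --target /dev/libvirt_lvm
# virsh pool-start lpool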

4. # virsh vol-list lpool
Name                 Path                                    
-----------------------------------------
lv1                  /dev/libvirt_lvm/lv1  

5. # virsh vol-create-as lpool lv1 100M
error: Failed to create vol lv1
error: Storage volume not found: storage vol already exists

6. # virsh pool-refresh lpool
Pool lpool refreshed

7. # virsh vol-list lpool
Name                 Path                                    
-----------------------------------------
lv1                  /dev/libvirt_lvm/lv1     

8. # lvs|grep lv1
  lv1  libvirt_lvm -wi-a----- 100.00m    
<==== as shown above, the existing LV is not deleted.

Comment 5 yisun 2015-11-11 07:57:48 UTC
Did more tests in the risk areas and VERIFIED:
Scenario 2: use vol-create
--------
1. # virsh vol-list lpool
Name                 Path                                    
-----------------------------------------
lv1                  /dev/libvirt_lvm/lv1  

2. # cat vol.lvm 
<volume>
  <name>lv1</name>
  <key>rHEluR-r74L-11XN-XRAQ-32cY-4MWS-UThxOk</key>
  <source>
    <device path='/dev/sda5'>
      <extent start='0' end='104857600'/>
    </device>
  </source>
  <capacity unit='bytes'>204857600</capacity>
  <allocation unit='bytes'>204857600</allocation>
  <target>
    <path>/dev/libvirt_lvm/lv1</path>
    <permissions>
      <mode>0660</mode>
      <owner>0</owner>
      <group>6</group>
      <label>system_u:object_r:fixed_disk_device_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1447225709.373332395</atime>
      <mtime>1447225709.373332395</mtime>
      <ctime>1447225709.373332395</ctime>
    </timestamps>
  </target>
</volume>

3. # virsh vol-create lpool vol.lvm 
error: Failed to create vol from vol.lvm
error: Storage volume not found: storage vol already exists

4. # lvs | grep lv1
  lv1  libvirt_lvm -wi-a----- 100.00m    

Scenario 3: use vol-create-from
1. # cat vol.lvm 
<volume>
  <name>lv1</name>
  <source>
    <device path='/dev/sda6'>
      <extent start='0' end='104857600'/>
    </device>
  </source>
  <capacity unit='bytes'>204857600</capacity>
  <allocation unit='bytes'>204857600</allocation>
  <target>
    <path>/dev/libvirt_lvm/lv1</path>
    <permissions>
      <mode>0660</mode>
      <owner>0</owner>
      <group>6</group>
      <label>system_u:object_r:fixed_disk_device_t:s0</label>
    </permissions>
    <timestamps>
      <atime>1447225709.373332395</atime>
      <mtime>1447225709.373332395</mtime>
      <ctime>1447225709.373332395</ctime>
    </timestamps>
  </target>
</volume>



2. # virsh vol-list lpool
Name                 Path                                    
-----------------------------------------
lv1                  /dev/libvirt_lvm/lv1                    
lv2                  /dev/libvirt_lvm/lv2      

3. # lvs | grep lv
  lv1  libvirt_lvm -wi-a----- 100.00m                                                    
  lv2  libvirt_lvm -wi-a----- 100.00m     

4. #  virsh vol-create-from lpool vol.lvm --inputpool lpool lv2
error: Failed to create vol from vol.lvm
error: internal error storage volume name 'lv1' already in use.

5. # virsh vol-list lpool
Name                 Path                                    
-----------------------------------------
lv1                  /dev/libvirt_lvm/lv1                    
lv2                  /dev/libvirt_lvm/lv2  


Scenario 4: use vol-clone

1. # virsh vol-list lpool
Name                 Path                                    
-----------------------------------------
lv1                  /dev/libvirt_lvm/lv1                    
lv2                  /dev/libvirt_lvm/lv2            

2.  # lvs | grep lv
  lv1  libvirt_lvm -wi-a----- 100.00m                                                    
  lv2  libvirt_lvm -wi-a----- 100.00m  

3. # virsh vol-clone lv2 lv1 lpool
error: Failed to clone vol from lv2
error: internal error storage volume name 'lv1' already in use.

4. # lvs | grep lv
  lv1  libvirt_lvm -wi-a----- 100.00m                                                    
  lv2  libvirt_lvm -wi-a----- 100.00m

Comment 7 errata-xmlrpc 2016-05-10 19:24:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0738.html