Bug 806416 - virsh - if you remove lvm slice by hand, you can't remove out of vol-list.
Status: CLOSED WORKSFORME
Product: Virtualization Tools
Classification: Community
Component: libvirt
Hardware: x86_64 All
Severity: medium
Assigned To: Libvirt Maintainers
Reported: 2012-03-23 13:01 EDT by Brian Kruger
Modified: 2012-04-16 12:53 EDT

Doc Type: Bug Fix
Last Closed: 2012-04-16 12:53:49 EDT
Description Brian Kruger 2012-03-23 13:01:41 EDT
Description of problem:
If someone removes an LVM slice that was in the vol-list using lvremove, it cannot be removed from the vol-list unless you destroy the pool and re-create it.

Version-Release number of selected component (if applicable):
libvirt.x86_64                  0.9.4-23.el6_2.6          @updates              
libvirt-client.x86_64           0.9.4-23.el6_2.6          @updates              


How reproducible:
Create a VM (any) using a pool and virt-install, then remove the slice using lvremove (I stopped the guest first).


Steps to Reproduce:
1. Create the LVM volume group (we already have a volume group called 'ohm', for instance).
2. sudo virsh pool-define-as --name ohm --type logical --target /dev/ohm
   sudo virsh pool-start ohm
   sudo virsh pool-autostart ohm
   sudo virsh pool-info ohm

3. sudo /usr/bin/virt-install --name testguest1 --ram 4096 --disk pool=ohm,size=80 --boot network,hd,menu=on --graphics none --network bridge=br0,mac=00:16:3e:10:59:a1 --vcpus 2 --os-variant=rhel6

4. sudo virsh destroy testguest1

5. sudo lvremove /dev/ohm/testguest1.img

6. sudo virsh vol-list ohm

7. sudo virsh vol-delete /dev/ohm/testguest1.img

8. sudo virsh vol-list ohm

  
Actual results:

bkruger@prod-hv1 (Linux_2.6.32) $ sudo lvremove /dev/ohm/testguest1.img 
Do you really want to remove active logical volume testguest1.img? [y/n]: y
  Logical volume "testguest1.img" successfully removed
bkruger@prod-hv1 (Linux_2.6.32) $ sudo virsh vol-delete /dev/ohm/testguest1.img
error: Failed to delete vol /dev/ohm/testguest1.img
error: internal error Child process (/sbin/lvremove -f /dev/ohm/testguest1.img) status unexpected: exit status 5

bkruger@prod-hv1 (Linux_2.6.32) $ sudo virsh vol-list ohm
Name                 Path                                    
-----------------------------------------
testguest1.img       /dev/ohm/testguest1.img  

logs: 16:52:41.553: 13490: error : virCommandWait:2183 : internal error Child process (/sbin/lvremove -f /dev/ohm/testguest1.img) status unexpected: exit status 5



Expected results:
vol-list no longer shows that entry, regardless of whether or not it could execute the lvremove command.

Additional info:

If the volume no longer exists, virsh should remove it from the list gracefully and clean up after itself as much as possible. Failing that, document somewhere how to fix this if it does occur, or provide a force flag that removes the entry regardless of errors. This would help keep the hypervisor clean, or recover when someone doesn't follow directions.


Workaround: 

You can remove the pool and redefine it and the entry will be out of the list, but this requires stopping the pool, which I haven't tested while guests were running. Redefining does seem to bring the previously defined machines back into the pool, as long as they were created with virsh.
Comment 1 Osier Yang 2012-03-29 23:36:36 EDT
(In reply to comment #0)
> Workaround: 
> 
> You can remove the pool and redefine it and it will be out of the list, but
> this would require stopping the pool, which I haven't tested while guests were
> running..  It does seem to bring back in the previously defined machines into
> the pool, as long as they were created with virsh.

You can use pool-refresh actually, no need to undefine/define/start.
libvirt has no way to know whether the lv is removed by an external app, and
any modifying/changing of the sources managed by libvirt with an external app
is not supported.

However, it might be a good idea to trigger the pool info updating (i.e. add/remove
the vol from the pool list, update vol info, e.g. when the size of the pool has
changed) by integrating things like inotify. But it should be an optional choice
in case there is some risk.
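For reference, a minimal sketch of the pool-refresh workaround, using the pool and volume names from this report ('ohm' and testguest1.img) — this assumes a host with libvirtd running and the logical pool already defined:

```shell
# The LV was removed outside of libvirt, which is what left the pool's
# volume list out of sync:
sudo lvremove /dev/ohm/testguest1.img

# Instead of destroying and re-creating the pool, re-scan its source so
# libvirt drops the stale volume entry:
sudo virsh pool-refresh ohm

# The stale volume should no longer appear:
sudo virsh vol-list ohm
```

pool-refresh only re-reads the pool's backing storage; it does not touch running guests, so it avoids the pool stop/start the reporter was worried about.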

Osier
Comment 2 Brian Kruger 2012-04-16 12:53:49 EDT
Ok. This seems to do the trick then. Thanks!