Bug 1398091 - virsh pool-destroy attempts to destroy a volume
Summary: virsh pool-destroy attempts to destroy a volume
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: John Ferlan
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-24 06:15 UTC by Marco Grigull
Modified: 2017-12-04 14:34 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-04 14:34:27 UTC
Target Upstream Version:



Description Marco Grigull 2016-11-24 06:15:55 UTC
Description of problem:
'virsh pool-destroy foo' attempts to destroy a volume instead of only deactivating it

Version-Release number of selected component (if applicable):
libvirt-client-2.0.0-10.el7.ppc64le

How reproducible:
100%

Steps to Reproduce:
1. virsh pool-destroy default (when the pool is defined as type 'logical' and its volume group contains other active logical volumes; see the sketch after these steps)
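For context, a minimal sketch of the sequence, assuming the pool XML shown under "Additional info" below is saved as default-pool.xml (a hypothetical file name); the pool wraps the host's own root VG, so its root/swap LVs remain open:

```
# Hypothetical reproduction sketch; default-pool.xml is assumed to contain
# the <pool type='logical'> XML dumped under "Additional info".
virsh pool-define default-pool.xml   # persist the pool definition
virsh pool-start default             # activate the pool (libvirt activates the VG)
virsh pool-destroy default           # fails: the VG cannot be deactivated while LVs are open
```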

Actual results:
[root@ppc-hv-01 network-scripts]# virsh pool-destroy default
error: Failed to destroy pool default
error: internal error: Child process (/usr/sbin/vgchange -aln rhel_ppc-hv-01) unexpected exit status 5:   Logical volume rhel_ppc-hv-01/swap in use.
  Can't deactivate volume group "rhel_ppc-hv-01" with 2 open logical volume(s)
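The failing child process is visible in the error itself; for illustration (not part of the original report), the command libvirt ran and a quick way to see which LVs keep the VG busy:

```
# What libvirt ran, per the error above (exit status 5 because LVs are open):
/usr/sbin/vgchange -aln rhel_ppc-hv-01
# Check which LVs are open; the sixth attribute character in 'lvs' output is 'o' for an open device:
lvs rhel_ppc-hv-01
lvdisplay rhel_ppc-hv-01 | grep -E 'LV Name|# open'
```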


Expected results:
The pool should be stopped.  As per the output from 'virsh pool-destroy --help' the data should NOT be affected in any way:

```
[root@ppc-hv-01 network-scripts]# virsh pool-destroy --help
  NAME
    pool-destroy - destroy (stop) a pool

  SYNOPSIS
    pool-destroy <pool>

  DESCRIPTION
    Forcefully stop a given pool. Raw data in the pool is untouched

  OPTIONS
    [--pool] <string>  pool name or uuid
```

The current workaround (sketched as shell commands after this list) is to:
a) stop libvirtd service (disruptive)
b) track down the config file containing the configuration
c) manually remove that file
d) restart libvirtd service
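
Roughly, as shell commands (a sketch of steps a-d above; the file name follows from the pool name 'default'):

```
systemctl stop libvirtd                  # a) disruptive: stops libvirt management entirely
rm /etc/libvirt/storage/default.xml      # b)+c) track down and remove the persistent pool definition
systemctl start libvirtd                 # d) restart the service
```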


Additional info:
[root@ppc-hv-01 network-scripts]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/rhel_ppc-hv-01/swap
  LV Name                swap
  VG Name                rhel_ppc-hv-01
  LV UUID                CgYNRl-iI5N-MyX6-Oog5-yS2K-t5i0-61zl7S
  LV Write Access        read/write
  LV Creation host, time ppc-hv-01.build.eng.bos.redhat.com, 2016-11-20 21:14:39 -0500
  LV Status              available
  # open                 2
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/rhel_ppc-hv-01/root
  LV Name                root
  VG Name                rhel_ppc-hv-01
  LV UUID                2xehkr-sYn7-PTrB-41xl-O8xn-V0v5-XhH79h
  LV Write Access        read/write
  LV Creation host, time ppc-hv-01.build.eng.bos.redhat.com, 2016-11-20 21:14:41 -0500
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6



[root@ppc-hv-01 network-scripts]# virsh pool-dumpxml default
<pool type='logical'>
  <name>default</name>
  <uuid>6af096ea-8ffc-45a0-9b53-4dff41353f4e</uuid>
  <capacity unit='bytes'>3425492271104</capacity>
  <allocation unit='bytes'>57982058496</allocation>
  <available unit='bytes'>3367510212608</available>
  <source>
    <device path='/dev/mapper/mpatha3'/>
    <device path='/dev/mapper/mpathb1'/>
    <name>rhel_ppc-hv-01</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/rhel_ppc-hv-01</path>
  </target>
</pool>

Comment 1 Marco Grigull 2016-11-24 06:21:14 UTC
It is possible to remove the file and restart the service:

```
[root@ppc-hv-01 network-scripts]# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 default              inactive   no        

[root@ppc-hv-01 network-scripts]# virsh pool-destroy default
error: Failed to destroy pool default
error: Requested operation is not valid: storage pool 'default' is not active

[root@ppc-hv-01 network-scripts]# rm /etc/libvirt/storage/default.xml
rm: remove regular file ‘/etc/libvirt/storage/default.xml’? y
[root@ppc-hv-01 network-scripts]# service libvirtd try-restart
Redirecting to /bin/systemctl try-restart  libvirtd.service
[root@ppc-hv-01 network-scripts]# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------

[root@ppc-hv-01 network-scripts]# 
```

It's still only a workaround, however.
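
A possibly cleaner alternative (not tried in this report) would be to drop the persistent definition with virsh itself once the pool is inactive, instead of editing files under /etc/libvirt/storage by hand:

```
virsh pool-list --all        # confirm the pool shows as inactive
virsh pool-undefine default  # remove the persistent definition; data in the VG is untouched
```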

Comment 3 John Ferlan 2016-12-22 16:48:00 UTC
Not sure I follow your logic of what the issue is. Let's see if I can show you what I see...

You used pool-destroy and got an error:

[root@ppc-hv-01 network-scripts]# virsh pool-destroy default
error: Failed to destroy pool default
error: internal error: Child process (/usr/sbin/vgchange -aln rhel_ppc-hv-01) unexpected exit status 5:   Logical volume rhel_ppc-hv-01/swap in use.
  Can't deactivate volume group "rhel_ppc-hv-01" with 2 open logical volume(s)

So that says to me that after doing this, the pool should still have been active, but I cannot tell from the output you've shown what order you ran things in. All pool-destroy does is a 'vgchange -aln' of the volume group. It does not delete any LVs.

Be careful not to mix up this pool-destroy with the pool-delete command, which will vgremove the volume group and pvremove each of the devices. FWIW: from the bz subject, that's what I initially assumed was happening - the LV was being deleted on a destroy (but the code doesn't do that).
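
Roughly, the distinction for a logical pool (a sketch of the underlying LVM calls described above, not exact libvirt command lines):

```
# pool-destroy: deactivate the VG only, raw data stays
vgchange -aln <vg-name>
# pool-delete: remove the VG and wipe each source device
vgremove <vg-name>
pvremove <device>...
```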

Here's what I need... After the failed pool-destroy, show me the 'pool-list' output again. I would think it should show the pool is active. If you want, also show the 'vgs', 'lvs', and 'pvs' output after you start the pool and then after your failed stop attempt. I'm looking to determine why you believe the data has been affected and how.

Don't mess around with the /etc/libvirt/storage files. There's also another area you need to be concerned with, as the pool keeps its state in the /var/run/libvirt/storage area.  After you initially start your pool, look at both files via sdiff...
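
For example (paths assume the pool is named 'default'):

```
sdiff /etc/libvirt/storage/default.xml /var/run/libvirt/storage/default.xml
```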

Comment 4 Marco Grigull 2017-07-04 22:41:50 UTC
It's been a while since I've worked on this. I will need to set up a test system and recheck.

Comment 5 John Ferlan 2017-12-04 14:34:27 UTC
Closing this since there's been no response from the submitter. Feel free to re-open with sufficient details, including the libvirt version that can be used to reproduce and the exact configuration and reproduction steps (e.g. logged commands).

