Bug 1417203

Summary: pool-destroy command for file system pool type does not unmount the filesystem
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.3
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: krisstoffe
Assignee: Erik Skultety <eskultet>
QA Contact: yisun
CC: dyuan, eskultet, lmen, rbalakri, xuzhang
Target Milestone: rc
Fixed In Version: libvirt-3.1.0-1.el7
Last Closed: 2017-08-01 17:21:45 UTC
Type: Bug

Description krisstoffe 2017-01-27 14:16:40 UTC
Description of problem:

The pool-destroy command for a file system (fs) pool type does not unmount the file system, so the pool cannot subsequently be restarted or deleted.



Version-Release number of selected component (if applicable):
libvirt-2.0.0-10.el7_3.4.x86_64



Steps to Reproduce:
[root@localhost ~]# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 data                 active     yes       
 vm-image             inactive   yes       


[root@localhost ~]# virsh pool-info vm-image
Name:           vm-image
UUID:           50fd3bf6-9041-438f-93ad-3b455da9cb1f
State:          inactive
Persistent:     yes
Autostart:      yes


[root@localhost ~]# virsh pool-dumpxml vm-image
<pool type='fs'>
  <name>vm-image</name>
  <uuid>50fd3bf6-9041-438f-93ad-3b455da9cb1f</uuid>
  <capacity unit='bytes'>2136997888</capacity>
  <allocation unit='bytes'>33734656</allocation>
  <available unit='bytes'>2103263232</available>
  <source>
    <device path='/dev/data/vm-disk'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/data/vm</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:unlabeled_t:s0</label>
    </permissions>
  </target>
</pool>


[root@localhost ~]# mount | grep data
[root@localhost ~]# 


[root@localhost ~]# virsh pool-start vm-image
Pool vm-image started

[root@localhost ~]# mount | grep data
/dev/mapper/data-vm--disk on /data/vm type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@localhost ~]# 


[root@localhost ~]# virsh pool-destroy vm-image
Pool vm-image destroyed


[root@localhost ~]# virsh pool-start vm-image


Actual results:
error: Failed to start pool vm-image
error: internal error: Child process (/usr/bin/mount -t xfs /dev/data/vm-disk /data/vm) unexpected exit status 32: mount: /dev/mapper/data-vm--disk is already mounted or /data/vm busy
       /dev/mapper/data-vm--disk is already mounted on /data/vm



Expected results:
Pool vm-image started


Additional info:
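A possible manual workaround (my sketch, not from the report; `cleanup_target` is a hypothetical helper, not part of libvirt): check whether the pool target is still mounted and unmount it by hand before retrying pool-start.

```shell
#!/bin/sh
# Sketch of a manual workaround (assumption, not from the report):
# pool-destroy left the target filesystem mounted, so unmount it
# manually before restarting the pool. cleanup_target is a
# hypothetical helper.
cleanup_target() {
    if mountpoint -q "$1"; then
        umount "$1" && echo "unmounted: $1"
    else
        echo "not mounted: $1"
    fi
}

cleanup_target /data/vm    # pool target path from the report
# afterwards: virsh pool-start vm-image
```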

Comment 2 Erik Skultety 2017-02-08 16:57:31 UTC
Patches sent for review:
https://www.redhat.com/archives/libvir-list/2017-February/msg00229.html

Comment 3 Erik Skultety 2017-02-10 16:05:45 UTC
Fixed upstream by:

commit b2774db9c2bf7e53a841726fd209f6717b4ad48f
Author:     Erik Skultety <eskultet>
AuthorDate: Tue Feb 7 10:19:21 2017 +0100
Commit:     Erik Skultety <eskultet>
CommitDate: Fri Feb 10 17:01:12 2017 +0100

    storage: Fix checking whether source filesystem is mounted
    
    Right now, we use simple string comparison both on the source paths
    (mount's output vs pool's source) and the target (mount's mnt_dir vs
    pool's target). The problem is symlinks: mount returns symlinks in
    its output, e.g. /dev/mapper/lvm_symlink. The same goes for the
    pool's source/target, so in order to successfully compare these two,
    replace plain string comparison with virFileComparePaths, which
    resolves all symlinks and canonicalizes the paths prior to comparison.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1417203
    
    Signed-off-by: Erik Skultety <eskultet>

v3.0.0-175-gb2774db
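The effect of the fix can be sketched outside libvirt (illustrative shell only, not the actual virFileComparePaths code): plain string comparison reports a symlink and its target as different paths, while canonicalizing both sides first makes them compare equal.

```shell
#!/bin/sh
# Illustrative sketch (not libvirt code): compare a device path and a
# symlink to it, first textually, then after canonicalization with
# readlink -f, which resolves symlinks much as virFileComparePaths does.
tmp=$(mktemp -d)
touch "$tmp/real-dev"
ln -s "$tmp/real-dev" "$tmp/dev-symlink"

plain_equal()     { [ "$1" = "$2" ]; }
canonical_equal() { [ "$(readlink -f "$1")" = "$(readlink -f "$2")" ]; }

plain_equal "$tmp/real-dev" "$tmp/dev-symlink" || echo "plain compare: differ"
canonical_equal "$tmp/real-dev" "$tmp/dev-symlink" && echo "canonical compare: match"

rm -rf "$tmp"
```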

Comment 5 yisun 2017-03-22 05:50:05 UTC
Verified on:
libvirt-3.1.0-2.el7.x86_64

steps:
1. Use a symlink to /dev/sdd1 for the test, as follows:
## ll /dev/disk/by-path/pci-0000:00:1a.0-usb-0:1.1:1.0-scsi-0:0:0:0-part1
lrwxrwxrwx. 1 root root 10 Mar 22 13:26 /dev/disk/by-path/pci-0000:00:1a.0-usb-0:1.1:1.0-scsi-0:0:0:0-part1 -> ../../sdd1

## parted /dev/sdd1 p
Model: Unknown (unknown)
Disk /dev/sdd1: 1000MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags: 

Number  Start  End     Size    File system  Flags
 1      0.00B  1000MB  1000MB  xfs


2. Define and start an fs pool:
## cat pool.xml 
<pool type='fs'>
  <name>test-pool</name>
  <uuid>50fd3bf6-9041-438f-93ad-3b455da9cb1f</uuid>
  <source>
    <device path='/dev/disk/by-path/pci-0000:00:1a.0-usb-0:1.1:1.0-scsi-0:0:0:0-part1'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/mnt</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:unlabeled_t:s0</label>
    </permissions>
  </target>
</pool>


## virsh pool-define pool.xml
Pool test-pool defined from pool.xml

## virsh pool-start test-pool
Pool test-pool started

3. Check the mount output:
## mount | egrep "sdd|pci"
/dev/sdd1 on /tmp/mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)


4. Destroy the pool:
## virsh pool-destroy test-pool
Pool test-pool destroyed

5. As expected, the mount point has been cleared:
## mount | egrep "sdd|pci"
<==== unmounted, nothing here.

6. Start the pool and check again; everything works:
## virsh pool-start test-pool
Pool test-pool started

## mount | egrep "sdd|pci"
/dev/sdd1 on /tmp/mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
<==== mounted again

Comment 6 errata-xmlrpc 2017-08-01 17:21:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846
