Bug 1417203 - pool-destroy command for file system pool type does not umount the filesystem
Summary: pool-destroy command for file system pool type does not umount the filesystem
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Erik Skultety
QA Contact: yisun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-27 14:16 UTC by krisstoffe
Modified: 2017-08-02 00:01 UTC
CC List: 5 users

Fixed In Version: libvirt-3.1.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-01 17:21:45 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata
ID: RHEA-2017:1846
Private: no
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2017-08-01 18:02:50 UTC

Description krisstoffe 2017-01-27 14:16:40 UTC
Description of problem:

The pool-destroy command for a file system (fs) pool type does not unmount the file system, so the pool cannot be restarted or deleted afterwards.



Version-Release number of selected component (if applicable):
libvirt-2.0.0-10.el7_3.4.x86_64



Steps to Reproduce:
[root@localhost ~]# virsh pool-list --all
 Name                 State      Autostart 
-------------------------------------------
 data                 active     yes       
 vm-image             inactive   yes       


[root@localhost ~]# virsh pool-info vm-image
Name:           vm-image
UUID:           50fd3bf6-9041-438f-93ad-3b455da9cb1f
State:          inactive
Persistent:     yes
Autostart:      yes


[root@localhost ~]# virsh pool-dumpxml vm-image
<pool type='fs'>
  <name>vm-image</name>
  <uuid>50fd3bf6-9041-438f-93ad-3b455da9cb1f</uuid>
  <capacity unit='bytes'>2136997888</capacity>
  <allocation unit='bytes'>33734656</allocation>
  <available unit='bytes'>2103263232</available>
  <source>
    <device path='/dev/data/vm-disk'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/data/vm</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:unlabeled_t:s0</label>
    </permissions>
  </target>
</pool>


[root@localhost ~]# mount | grep data
[root@localhost ~]# 


[root@localhost ~]# virsh pool-start vm-image
Pool vm-image started

[root@localhost ~]# mount | grep data
/dev/mapper/data-vm--disk on /data/vm type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@localhost ~]# 


[root@localhost ~]# virsh pool-destroy vm-image
Pool vm-image destroyed


[root@localhost ~]# virsh pool-start vm-image


Actual results:
error: Failed to start pool vm-image
error: internal error: Child process (/usr/bin/mount -t xfs /dev/data/vm-disk /data/vm) unexpected exit status 32: mount: /dev/mapper/data-vm--disk is already mounted or /data/vm busy
       /dev/mapper/data-vm--disk is already mounted on /data/vm



Expected results:
Pool vm-image started


Additional info:
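A plausible explanation (a sketch inferred from the transcript and the eventual fix, not taken from libvirt's code): the pool source `/dev/data/vm-disk` is an LVM symlink, while mount(8) reports the canonical device-mapper node `/dev/mapper/data-vm--disk`, so a plain string comparison between the two never matches and libvirt concludes the file system is not mounted. The mismatch can be reproduced with an ordinary symlink:

```shell
# Sketch: why a plain string comparison misses a mounted device when the
# pool source is a symlink (as /dev/data/vm-disk -> /dev/mapper/data-vm--disk
# appears to be here). The paths below are temporary stand-ins, not devices.
tmp=$(mktemp -d)
touch "$tmp/real-device"                  # stands in for /dev/mapper/data-vm--disk
ln -s "$tmp/real-device" "$tmp/pool-src"  # stands in for /dev/data/vm-disk

# Naive comparison: the strings differ, so the device looks "not mounted".
if [ "$tmp/pool-src" = "$tmp/real-device" ]; then
    echo "string compare: match"
else
    echo "string compare: no match"
fi

# Canonicalized comparison: both names resolve to the same file.
if [ "$(readlink -f "$tmp/pool-src")" = "$(readlink -f "$tmp/real-device")" ]; then
    echo "canonical compare: match"
else
    echo "canonical compare: no match"
fi

rm -rf "$tmp"
```

Until a fixed build is installed, the stale mount can be removed by hand (`umount /data/vm`) so the pool starts again.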

Comment 2 Erik Skultety 2017-02-08 16:57:31 UTC
Patches sent for review:
https://www.redhat.com/archives/libvir-list/2017-February/msg00229.html

Comment 3 Erik Skultety 2017-02-10 16:05:45 UTC
Fixed upstream by:

commit b2774db9c2bf7e53a841726fd209f6717b4ad48f
Author:     Erik Skultety <eskultet>
AuthorDate: Tue Feb 7 10:19:21 2017 +0100
Commit:     Erik Skultety <eskultet>
CommitDate: Fri Feb 10 17:01:12 2017 +0100

    storage: Fix checking whether source filesystem is mounted
    
    Right now, we use simple string comparison both on the source paths
    (mount's output vs pool's source) and the target (mount's mnt_dir vs
    pool's target). The problem is symlinks: mount does return symlinks
    in its output, e.g. /dev/mapper/lvm_symlink, and the same goes for
    the pool's source/target. In order to compare these two successfully,
    replace the plain string comparison with virFileComparePaths, which
    resolves all symlinks and canonicalizes the paths prior to comparison.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1417203
    
    Signed-off-by: Erik Skultety <eskultet>

v3.0.0-175-gb2774db
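The idea behind the fix can be rendered in shell as a mount-table check that canonicalizes paths before comparing. This is an illustration only; `is_mounted_at` is a hypothetical helper analogous in spirit to the virFileComparePaths-based check, not libvirt's actual code:

```shell
# Hypothetical helper: report whether something is currently mounted on a
# given target directory, comparing canonicalized paths against /proc/mounts.
is_mounted_at() {
    target=$(readlink -f "$1") || return 1
    while read -r _dev mnt _rest; do
        [ "$mnt" = "$target" ] && return 0
    done < /proc/mounts
    return 1
}

# Example: "/" is always mounted; a fresh temporary directory is not.
is_mounted_at /          && echo "/ is mounted"
is_mounted_at "$(mktemp -d)" || echo "tmpdir is not mounted"
```

Because both the argument and (implicitly) the mount table entries are taken in canonical form, a symlinked target such as the pool's `<path>` no longer defeats the comparison.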

Comment 5 yisun 2017-03-22 05:50:05 UTC
Verified on:
libvirt-3.1.0-2.el7.x86_64

steps:
1. Use a symlink to /dev/sdd1 for the test, as follows:
## ll /dev/disk/by-path/pci-0000:00:1a.0-usb-0:1.1:1.0-scsi-0:0:0:0-part1
lrwxrwxrwx. 1 root root 10 Mar 22 13:26 /dev/disk/by-path/pci-0000:00:1a.0-usb-0:1.1:1.0-scsi-0:0:0:0-part1 -> ../../sdd1

## parted /dev/sdd1 p
Model: Unknown (unknown)
Disk /dev/sdd1: 1000MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags: 

Number  Start  End     Size    File system  Flags
 1      0.00B  1000MB  1000MB  xfs


2. define and start a fs pool
## cat pool.xml 
<pool type='fs'>
  <name>test-pool</name>
  <uuid>50fd3bf6-9041-438f-93ad-3b455da9cb1f</uuid>
  <source>
    <device path='/dev/disk/by-path/pci-0000:00:1a.0-usb-0:1.1:1.0-scsi-0:0:0:0-part1'/>
    <format type='xfs'/>
  </source>
  <target>
    <path>/tmp/mnt</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
      <label>system_u:object_r:unlabeled_t:s0</label>
    </permissions>
  </target>
</pool>


## virsh pool-define pool.xml
Pool test-pool defined from pool.xml

## virsh pool-start test-pool
Pool test-pool started

3. check the mount output
## mount | egrep "sdd|pci"
/dev/sdd1 on /tmp/mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)


4. destroy the pool
## virsh pool-destroy test-pool
Pool test-pool destroyed

5. As expected, the mount point is cleared.
## mount | egrep "sdd|pci"
<==== unmounted, nothing here.

6. Start the pool and check again; everything works as expected.
## virsh pool-start test-pool
Pool test-pool started

## mount | egrep "sdd|pci"
/dev/sdd1 on /tmp/mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
<==== mounted again

Comment 6 errata-xmlrpc 2017-08-01 17:21:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846


