Bug 1723247 - Storage pool destroy doesn't umount target netfs directory
Summary: Storage pool destroy doesn't umount target netfs directory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.1
Assignee: Ján Tomko
QA Contact: gaojianan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-24 02:12 UTC by gaojianan
Modified: 2020-11-19 09:45 UTC
CC List: 8 users

Fixed In Version: libvirt-5.5.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-06 07:17:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
/proc/mounts after 'destroy the pool' (2.96 KB, text/plain)
2019-06-24 02:12 UTC, gaojianan


Links
Red Hat Product Errata RHBA-2019:3723 (Last Updated: 2019-11-06 07:18:01 UTC)

Description gaojianan 2019-06-24 02:12:02 UTC
Created attachment 1583815 [details]
/proc/mounts after 'destroy the pool'

Description of problem:
Cmd "virsh destroy" doesn't umount target netfs directory

Version-Release number of selected component (if applicable):
libvirt-5.4.0-1.module+el8.1.0+3304+7eb41d5f.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a netfs pool from XML
nfs.pool:
<pool type='netfs'>
  <name>netfs</name>
  <source>
    <host name='10.72.12.185'/>
    <dir path='/nfs/'/>
    <format type='nfs'/>
    <!-- <protocol ver='4'/> -->
  </source>
  <target>
    <path>/mnt</path>
  </target>
</pool>

# virsh pool-create nfs.pool
Pool netfs created from nfs.pool

2. Check the mount status:
# df -h
devtmpfs               3.8G     0  3.8G    0% /dev
tmpfs                  3.8G   16M  3.8G    1% /dev/shm
tmpfs                  3.8G  9.4M  3.8G    1% /run
tmpfs                  3.8G     0  3.8G    0% /sys/fs/cgroup
/dev/mapper/rhel-root   50G   46G  4.1G   92% /
/dev/sda4             1014M  434M  581M   43% /boot
/dev/mapper/rhel-home   62G  8.7G   53G   15% /home
tmpfs                  765M   16K  764M    1% /run/user/42
tmpfs                  765M  4.6M  760M    1% /run/user/1000
tmpfs                  765M  4.0K  765M    1% /run/user/0
10.72.12.185:/nfs       49G   25G   22G   54% /mnt

3. Destroy the netfs pool, then check the mount status:
# virsh pool-destroy netfs
Pool netfs destroyed

# df -h
...
tmpfs                  765M  4.6M  760M    1% /run/user/1000
tmpfs                  765M  4.0K  765M    1% /run/user/0
10.72.12.185:/nfs       49G   25G   22G   54% /mnt
The mount entry for /mnt still exists,
but the pool has been destroyed.

Actual results:
As shown in step 3: the pool is destroyed, but the NFS mount on /mnt remains.

Expected results:
The pool should be destroyed and the target directory (/mnt) should be unmounted.

Additional info:
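For reference, the same flow can also be driven through the libvirt C API. Below is a minimal sketch with error handling kept short; the qemu:///system URI is an assumption and the embedded XML mirrors nfs.pool above. It links with -lvirt.

/* Minimal sketch of the reproduce flow via the libvirt C API.
 * Assumes qemu:///system and the nfs.pool XML shown above. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *pool_xml =
        "<pool type='netfs'>"
        "  <name>netfs</name>"
        "  <source>"
        "    <host name='10.72.12.185'/>"
        "    <dir path='/nfs/'/>"
        "    <format type='nfs'/>"
        "  </source>"
        "  <target><path>/mnt</path></target>"
        "</pool>";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return EXIT_FAILURE;

    /* Equivalent of 'virsh pool-create nfs.pool'. */
    virStoragePoolPtr pool = virStoragePoolCreateXML(conn, pool_xml, 0);
    if (!pool) {
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    /* Equivalent of 'virsh pool-destroy netfs'.  With the bug present,
     * /mnt is still listed in /proc/mounts after this call returns. */
    if (virStoragePoolDestroy(pool) < 0)
        fprintf(stderr, "pool-destroy failed\n");

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}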

Comment 1 Han Han 2019-06-25 02:11:47 UTC
Can this bug be reproduced with other netfs pools such as glusterfs or cifs (https://libvirt.org/storage.html#StorageBackendNetFS), with other fs pools (https://libvirt.org/storage.html#StorageBackendFS), or with other libvirt versions (RHEL 7.7, RHEL 8.0-av)?

Comment 2 gaojianan 2019-06-25 02:35:48 UTC
(In reply to Han Han from comment #1)
> Can this bug be reproduced with other netfs pools such as glusterfs or cifs
> (https://libvirt.org/storage.html#StorageBackendNetFS), with other fs pools
> (https://libvirt.org/storage.html#StorageBackendFS), or with other libvirt
> versions (RHEL 7.7, RHEL 8.0-av)?

This bug cannot be reproduced with other netfs pool types,
but it can be reproduced with NFS netfs pools on other versions (RHEL 7.7, RHEL 8.0.1-av, RHEL 8.1).

Comment 3 Ján Tomko 2019-06-25 11:44:45 UTC
Upstream patch:
https://www.redhat.com/archives/libvir-list/2019-June/msg01174.html

Comment 4 Ján Tomko 2019-06-25 15:13:08 UTC
Pushed upstream as:
commit 738dc3f4adcde371d8c6c23d63ec688c1f3f1458
Author:     Ján Tomko <jtomko>
CommitDate: 2019-06-25 17:11:56 +0200

    conf: storage: also sanitize source dir
    
    Commit a7fb2258 added sanitization of storage pool target paths,
    however source dir paths were left unsanitized.
    
    A netfs pool with:
    <source>
      <host name='10.20.30.40'/>
      <dir path='/nfs/'/>
    </source>
    will not be correctly detected as mounted by
    virStorageBackendFileSystemIsMounted, because it shows up in the
    mount list without the trailing slash.
    
    Sanitize the source dir as well.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1723247
    
    Signed-off-by: Ján Tomko <jtomko>
    Acked-by: Peter Krempa <pkrempa>

git describe: v5.4.0-353-g738dc3f4ad
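To illustrate the idea behind the fix: /proc/mounts lists the source directory without the trailing slash, so a literal comparison against the unsanitized '/nfs/' from the pool XML never matches and the pool is not detected as mounted. The stand-alone sketch below is hypothetical and only demonstrates the trailing-slash sanitization concept; it is not the libvirt implementation.

/* Hypothetical sketch: strip trailing slashes so "/nfs/" compares equal
 * to the "/nfs" entry reported by the kernel in the mount table.
 * Illustration of the concept only, not libvirt code. */
#include <stdio.h>
#include <string.h>

static void sanitize_path(char *path)
{
    size_t len = strlen(path);

    /* Drop trailing slashes, but keep a lone "/" intact. */
    while (len > 1 && path[len - 1] == '/')
        path[--len] = '\0';
}

int main(void)
{
    char source_dir[] = "/nfs/";            /* from the pool XML */
    const char *mount_entry = "/nfs";       /* as seen in /proc/mounts */

    printf("before: match=%d\n", strcmp(source_dir, mount_entry) == 0);
    sanitize_path(source_dir);
    printf("after:  match=%d\n", strcmp(source_dir, mount_entry) == 0);
    return 0;
}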

Comment 6 gaojianan 2019-08-08 08:43:13 UTC
Verified at libvirt-5.6.0-1.virtcov.el8.x86_64:

1. Create a netfs pool from XML
nfs.pool:
<pool type='netfs'>
  <name>netfs</name>
  <source>
    <host name='10.72.12.150'/>
    <dir path='/nfs/'/>
    <format type='nfs'/>
    <!-- <protocol ver='4'/> -->
  </source>
  <target>
    <path>/mnt/tmp</path>
  </target>
</pool>


# virsh pool-create nfs.pool
Pool netfs created from nfs.pool

2. Check the mount status:
# df -h
Filesystem                              Size  Used Avail Use% Mounted on
devtmpfs                                1.9G     0  1.9G   0% /dev
tmpfs                                   1.9G     0  1.9G   0% /dev/shm
tmpfs                                   1.9G   17M  1.9G   1% /run
tmpfs                                   1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhel_kvm--08--guest22-root   46G  3.7G   43G   8% /
/dev/vda1                              1014M  172M  843M  17% /boot
tmpfs                                   379M     0  379M   0% /run/user/0
10.72.12.150:/nfs                        69G   13G   53G  20% /mnt/tmp


3. Destroy the netfs pool, then check the mount status:
# virsh pool-destroy netfs
Pool netfs destroyed

# df -h
Filesystem                              Size  Used Avail Use% Mounted on
devtmpfs                                1.9G     0  1.9G   0% /dev
tmpfs                                   1.9G     0  1.9G   0% /dev/shm
tmpfs                                   1.9G   17M  1.9G   1% /run
tmpfs                                   1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhel_kvm--08--guest22-root   46G  3.7G   43G   8% /
/dev/vda1                              1014M  172M  843M  17% /boot
tmpfs                                   379M     0  379M   0% /run/user/0

The /mnt/tmp mount entry has disappeared
and the pool has been destroyed.
Works as expected.

Code coverage report:
# diff-cover /tmp/coverage.xml --diff-file /tmp/patch --code-path /builddir/build/BUILD/libvirt-4.5.0/ --html-report /tmp/example.html
-------------
Diff Coverage
Diff: origin/master...HEAD, staged, and unstaged changes
-------------
src/conf/storage_conf.c (100%)
tests/storagepoolxml2xmltest.c (0.0%): Missing lines 76
-------------
Total:   2 lines
Missing: 1 line
Coverage: 50%
-------------
All of the executed code is covered.

Comment 8 errata-xmlrpc 2019-11-06 07:17:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

Comment 9 gaojianan 2019-12-25 03:29:02 UTC
Hello, I recently ran into a related problem connected with this bug.
This bug fixed the case where destroying the pool also unmounts the target.
But if we create the pool first and then unmount the target manually, should the pool still be in the active state?
Currently, unmounting the target has no effect on the pool status.
I know it may be difficult for libvirt to check the mount status, but I would still like to know whether this is a problem (bug) or not.

Comment 10 Ján Tomko 2020-02-13 16:46:04 UTC
(In reply to gaojianan from comment #9)
> But if we create the pool first and then unmount the target manually, should
> the pool still be in the active state?
> Currently, unmounting the target has no effect on the pool status.
> I know it may be difficult for libvirt to check the mount status, but I would
> still like to know whether this is a problem (bug) or not.

Generally, if you alter the storage pool with tools other than libvirt, libvirt
will not know about the changes until you do a refresh; libvirtd does not
monitor such changes.
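
For example, libvirt only re-examines the pool when it is refreshed. Below is a minimal sketch of that check through the C API; the pool name "netfs" and the qemu:///system URI are assumptions taken from the steps above, and the program links with -lvirt.

/* Minimal sketch: refresh the pool after the target was unmounted by
 * hand, then query the state libvirt reports.  Pool name "netfs" and
 * the qemu:///system URI are assumptions from the reproduce steps. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "netfs");
    if (pool) {
        /* libvirtd does not watch the mount table; an explicit refresh
         * (the API counterpart of 'virsh pool-refresh') is the point at
         * which libvirt re-examines the pool. */
        if (virStoragePoolRefresh(pool, 0) < 0)
            fprintf(stderr, "pool refresh failed\n");

        /* 1 = active, 0 = inactive, -1 = error */
        printf("pool active: %d\n", virStoragePoolIsActive(pool));
        virStoragePoolFree(pool);
    }

    virConnectClose(conn);
    return 0;
}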

