Bug 919857 - VDSM does not clean unused mounts
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.2.0
Assigned To: Federico Simoncelli
QA Contact: Gadi Ickowicz
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-03-10 10:11 EDT by Ido Begun
Modified: 2016-02-10 15:26 EST
CC List: 10 users

See Also:
Fixed In Version:
Doc Type: Release Note
Doc Text:
The "Force Remove" data center option should only be used after the storage is no longer needed or has been destroyed. If you have leftover data on the storage, manually remove any files under /rhev/data-center, and unmount any mount points that exist there.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-20 05:43:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
VDSM log (2.34 MB, text/x-log)
2013-03-10 10:11 EDT, Ido Begun
no flags

Description Ido Begun 2013-03-10 10:11:00 EDT
Created attachment 707909 [details]
VDSM log

Description of problem:
VDSM does not clean unused mounts.

Version-Release number of selected component (if applicable):
vdsm-4.10.2-10.0


How reproducible:
100%

Steps to Reproduce:
1. Create NFS storage domain
2. Block communication between host and storage
3. Wait for host to become non-operational
4. Force remove data center
5. Unblock communication between host and storage
6. Restart VDSM
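
For reference, the manual parts of these steps (2, 5 and 6) could be scripted roughly as in the sketch below. This is only an illustration, assuming the communication is blocked with iptables, that vdsmd runs as a SysV service (RHEL 6 era host), and that STORAGE_IP stands in for the real NFS server address.

#!/usr/bin/env python
# Sketch of steps 2, 5 and 6 above (not part of the original report).
# Assumes iptables-based blocking and a SysV-managed vdsmd service.
import subprocess

STORAGE_IP = "192.0.2.10"  # placeholder for the NFS server's address

def block_storage():
    # Step 2: drop outgoing traffic to the storage server.
    subprocess.check_call(
        ["iptables", "-I", "OUTPUT", "-d", STORAGE_IP, "-j", "DROP"])

def unblock_storage():
    # Step 5: remove the blocking rule again.
    subprocess.check_call(
        ["iptables", "-D", "OUTPUT", "-d", STORAGE_IP, "-j", "DROP"])

def restart_vdsm():
    # Step 6: restart VDSM.
    subprocess.check_call(["service", "vdsmd", "restart"])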

Actual results:
Storage is still mounted in /rhev/data-center/mnt

Expected results:
Storage should not be mounted in /rhev/data-center/mnt
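
A quick way to confirm the leftover mount after the restart (a minimal check, not part of the original report) is to scan /proc/mounts:

#!/usr/bin/env python
# List anything still mounted under VDSM's mount root.
MOUNT_ROOT = "/rhev/data-center/mnt"

with open("/proc/mounts") as f:
    leftovers = [line.split()[1] for line in f
                 if line.split()[1].startswith(MOUNT_ROOT)]

if leftovers:
    print("Stale mounts still present:")
    for path in leftovers:
        print("  " + path)
else:
    print("Nothing mounted under " + MOUNT_ROOT)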

Additional info:
Comment 1 Ido Begun 2013-03-10 11:03:16 EDT
The share cannot be unmounted while VDSM is running, because VDSM is using it. Stopping VDSM and unmounting manually does not solve the problem either, as VDSM remounts the share when it starts up.
Comment 2 Ayal Baron 2013-03-13 06:24:50 EDT
Haim, please test with the latest vdsm. Fede introduced a patch that makes sure monitoring doesn't hang on NFS.
Comment 4 Gadi Ickowicz 2013-03-18 05:08:26 EDT
Tested and reproduced with vdsm-4.10.2-11.0.el6ev.x86_64

After restarting vdsm, the NFS share is mounted again.
Comment 5 Ayal Baron 2013-03-19 04:31:27 EDT
Fede, please take a look.
Comment 6 Federico Simoncelli 2013-03-19 05:39:00 EDT
Having looked at this briefly, it seems to me that we cannot fix this behavior.
By clicking "Force remove data center" you accept that you are removing the DC only from the engine database (since you are not able to contact any vdsm host, or the storage domain has been lost/corrupted).

It is not advisable to try to umount *all* the shares on vdsm restart (the pool is not yet known at that point), as there might be VMs running on them (in the regular case; I'm not referring to this specific one).

We could try to detect whether there are files in use on the share (VMs), or even attempt the umount and expect it to fail in the majority of cases (a regular VDSM restart), but that looks risky and might break other flows (e.g. it would make connectStorageServer always required before connectStoragePool).

I think it is too complex to try to handle this corner case, in which the admin specifically chose to force an operation that normally has a clean flow.
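
For illustration only (this is not how VDSM implements anything, just the detection idea mentioned above), such a pass could scan /proc/*/fd for open files under the mount point, roughly like this:

import glob
import os

def mount_in_use(mount_point):
    # Return True if some process holds an open file descriptor under
    # mount_point; a rough, fuser-like check, for illustration only.
    prefix = mount_point.rstrip("/") + "/"
    for fd in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            target = os.readlink(fd)
        except OSError:
            continue  # process exited, fd closed, or permission denied
        if target.startswith(prefix):
            return True
    return False

# e.g. skip the umount when mount_in_use("/rhev/data-center/mnt/example.com:_export") is True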
Comment 7 Ayal Baron 2013-03-20 05:43:54 EDT
Force remove is a destructive operation that by definition requires manual cleanup.

The user can reboot the host.
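
For reference, the manual cleanup described in the Doc Text above (unmount whatever is left under /rhev/data-center, then remove the leftover files) could be scripted roughly as in the sketch below. It is only an illustration, assuming VDSM has been stopped first and the storage really is no longer needed.

#!/usr/bin/env python
# Sketch of the manual cleanup from the Doc Text; not an official tool.
# Run only with VDSM stopped and the storage confirmed as no longer needed.
import os
import shutil
import subprocess

DATA_CENTER_ROOT = "/rhev/data-center"

# Unmount anything still mounted under /rhev/data-center, deepest paths first.
with open("/proc/mounts") as f:
    mounts = [line.split()[1] for line in f
              if line.split()[1].startswith(DATA_CENTER_ROOT)]
for path in sorted(mounts, key=len, reverse=True):
    subprocess.check_call(["umount", path])

# Remove leftover files, symlinks and directories under /rhev/data-center.
for entry in os.listdir(DATA_CENTER_ROOT):
    path = os.path.join(DATA_CENTER_ROOT, entry)
    if os.path.isdir(path) and not os.path.islink(path):
        shutil.rmtree(path)
    else:
        os.remove(path)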
