# Bug 919857 - VDSM does not clean unused mounts
| Field | Value |
|---|---|
| Summary | VDSM does not clean unused mounts |
| Product | Red Hat Enterprise Virtualization Manager |
| Component | vdsm |
| Version | 3.2.0 |
| Status | CLOSED WONTFIX |
| Severity | medium |
| Priority | unspecified |
| Reporter | Ido Begun <ibegun> |
| Assignee | Federico Simoncelli <fsimonce> |
| QA Contact | Gadi Ickowicz <gickowic> |
| CC | abaron, amureini, bazulay, chetan, hateya, iheim, lpeer, nlevinki, oramraz, ykaul |
| Target Milestone | --- |
| Target Release | 3.2.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | storage |
| Doc Type | Release Note |
| Story Points | --- |
| Type | Bug |
| Regression | --- |
| oVirt Team | Storage |
| Last Closed | 2013-03-20 09:43:54 UTC |

**Doc Text:** The "Force Remove" data center option should only be used after the storage is no longer needed or has been destroyed. If you have leftover data on the storage, manually remove any files under /rhev/data-center, and unmount any mount points that exist there.
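The manual cleanup described in the Doc Text can be sketched in a short script. This is an illustrative sketch, not part of VDSM: the names `leftover_mounts` and `cleanup` are invented, and the parsing deliberately ignores the octal escapes (e.g. `\040` for spaces) that /proc/mounts uses in paths.

```python
import subprocess

RHEV_ROOT = "/rhev/data-center"  # mount tree named in the bug report


def leftover_mounts(mounts_file="/proc/mounts", root=RHEV_ROOT):
    """Return mount points under the data-center tree, deepest first.

    Simplified sketch: ignores the octal escapes (such as backslash-040
    for spaces) that /proc/mounts uses in mount-point paths.
    """
    points = []
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[1].startswith(root + "/"):
                points.append(fields[1])
    # Unmount children before their parents.
    return sorted(points, key=len, reverse=True)


def cleanup(root=RHEV_ROOT):
    """Unmount every leftover share under the data-center tree.

    Destructive, as the Doc Text warns: only for storage that is no
    longer needed or already destroyed. Requires root.
    """
    for point in leftover_mounts(root=root):
        subprocess.check_call(["umount", point])
```

Sorting by path length before unmounting ensures nested mounts are released before the mounts that contain them.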
The share cannot be unmounted while VDSM is running, because it is in use by VDSM. Stopping VDSM and manually unmounting does not solve the problem either, as VDSM remounts the share when starting up.

Haim, please test with the latest vdsm. Fede introduced a patch that makes sure that monitoring doesn't hang on NFS.

Tested and reproduced with vdsm-4.10.2-11.0.el6ev.x86_64. After restarting vdsm, the NFS share is mounted again. Fede, please take a look.

Briefly looking at this, it looks to me as though we cannot fix this behavior. By clicking "Force remove data center" you agree to the fact that you are removing the DC only from the engine database (since you are not able to contact any vdsm host, or the storage domain has been lost/corrupted). It is not recommended to try to umount *all* the shares on vdsm restart (as the pool is not yet known), because there might be VMs running on them (in general, not just in this specific case). We could try to detect whether there are files in use on the share (VMs), or even attempt the umount and expect it to fail in the majority of cases (regular VDSM restart), but that looks risky and might break other flows (e.g. all the connectStorageServer calls would then always be required before connectStoragePool). I think it is too complex to handle this corner case, where the admin specifically chose to force an operation that normally has a clean flow.

Force remove is a destructive operation that by definition requires manual cleanup.

User can reboot the host.
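The "detect if there are files in use on the share" idea mentioned in the comments can be sketched by scanning the fd directories under /proc. This is an illustrative snippet, not VDSM code: the helper name `files_in_use` is invented, and reading other processes' fd directories generally requires root.

```python
import os


def files_in_use(mount_point, proc="/proc"):
    """Return paths of open files under mount_point, found via /proc/*/fd.

    Sketch of the in-use check discussed in the comments. Without root,
    only the caller's own processes are visible; fd directories that
    cannot be read are silently skipped.
    """
    prefix = os.path.join(mount_point, "")  # ensure trailing slash
    in_use = []
    for pid in os.listdir(proc):
        if not pid.isdigit():
            continue  # skip /proc entries that are not processes
        fd_dir = os.path.join(proc, pid, "fd")
        try:
            fds = os.listdir(fd_dir)
        except OSError:
            continue  # process exited or permission denied
        for fd in fds:
            try:
                target = os.readlink(os.path.join(fd_dir, fd))
            except OSError:
                continue  # fd closed between listdir and readlink
            if target.startswith(prefix):
                in_use.append(target)
    return in_use
```

An empty result would suggest the umount is safe to attempt; a non-empty one would mean VMs (or other processes) still hold files open, which is exactly the case the comment warns about.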
Created attachment 707909 [details]
VDSM log

Description of problem:
VDSM does not clean unused mounts.

Version-Release number of selected component (if applicable):
vdsm-4.10.2-10.0

How reproducible:
100%

Steps to Reproduce:
1. Create NFS storage domain
2. Block communication between host and storage
3. Wait for host to become non-operational
4. Force remove data center
5. Unblock communication between host and storage
6. Restart VDSM

Actual results:
Storage is still mounted in /rhev/data-center/mnt

Expected results:
Storage should not be mounted in /rhev/data-center/mnt

Additional info:
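The actual-versus-expected check after step 6 can be done mechanically. A minimal sketch, assuming the /rhev/data-center/mnt layout from the bug report; the helper name `stale_mounts` is invented and is not a VDSM API:

```python
import os

MNT_ROOT = "/rhev/data-center/mnt"  # mount root named in the bug report


def stale_mounts(mnt_root=MNT_ROOT):
    """Return entries under the VDSM mount root that are still mount points.

    After restarting VDSM (step 6), the force-removed data center's
    share should not appear here; on the affected builds it does.
    """
    if not os.path.isdir(mnt_root):
        return []
    return [os.path.join(mnt_root, name)
            for name in sorted(os.listdir(mnt_root))
            if os.path.ismount(os.path.join(mnt_root, name))]
```

An empty list corresponds to the expected result; any entry corresponds to the actual (buggy) result reported above.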