Bug 853040 - 3.1 - [vdsm] we are not cleaning /rhev/data-center/mnt/ after failed mount commands
Summary: 3.1 - [vdsm] we are not cleaning /rhev/data-center/mnt/ after failed mount commands
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Yeela Kaplan
QA Contact: Leonid Natapov
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2012-08-30 10:51 UTC by Haim
Modified: 2022-07-09 05:38 UTC
CC List: 9 users

Fixed In Version: vdsm-4.9.6-39.0
Doc Type: Bug Fix
Doc Text:
Previously, when creating an NFS data storage domain through the Red Hat Enterprise Virtualization Manager portal, a mountpoint entry was created regardless of whether a correct path was provided to the mount command. Because the Manager assumed a correct target always existed, this could cause serious data loss. This issue has been corrected: if an incorrect path is provided, the action now fails and the leftover directory is cleaned up automatically by VDSM (a rough sketch follows the remaining fields below).
Clone Of:
Environment:
Last Closed: 2012-12-04 19:08:34 UTC
Target Upstream Version:
Embargoed:
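
In rough outline, the fixed behavior described in the Doc Text amounts to the following minimal Python sketch (connect_nfs is a hypothetical helper for illustration, not the actual vdsm patch):

# Minimal sketch of the fixed behavior (hypothetical, not the actual
# vdsm code): create the mountpoint, attempt the mount, and remove the
# directory again if the mount command fails.
import os
import subprocess

def connect_nfs(remote_path, mountpoint):
    os.makedirs(mountpoint)  # the directory is created up front
    try:
        subprocess.check_call(
            ["mount", "-t", "nfs", remote_path, mountpoint])
    except subprocess.CalledProcessError:
        os.rmdir(mountpoint)  # clean up the leftover directory on failure
        raise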


Attachments
vdsm log (595.88 KB, application/x-gzip), 2012-08-30 10:52 UTC, Haim


Links
Red Hat Product Errata RHSA-2012:1508 (SHIPPED_LIVE): Important: rhev-3.1.0 vdsm security, bug fix, and enhancement update, 2012-12-04 23:48:05 UTC

Description Haim 2012-08-30 10:51:25 UTC
Description of problem:

1) Create a new NFS storage domain (using the web admin portal).
2) Provide a wrong path to the mount command.
3) Check /rhev/data-center/mnt: a new entry is added but not removed, even after the engine sends disconnectStorageServer to clean up the failed mount (see the sketch below for how the entry's name is derived).
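
Note that the entry's name is just the remote path with slashes flattened to underscores, as the mount commands in the log below show; a tiny illustrative helper (hypothetical, inferred from the log):

import os

MOUNT_BASE = "/rhev/data-center/mnt"

def local_mountpoint(remote_path):
    # Inferred from the log: "10.35.97.44:/pipi/popo" becomes
    # "/rhev/data-center/mnt/10.35.97.44:_pipi_popo".
    return os.path.join(MOUNT_BASE, remote_path.replace("/", "_"))

print(local_mountpoint("10.35.97.44:/pipi/popo"))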

Behind the scenes, it looks as follows:

Thread-3897::INFO::2012-08-30 13:37:14,350::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 477, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-4047::INFO::2012-08-30 13:39:25,611::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'retrans': '20', 'port': '', 'connection': '10.35.97.44:/pipi/popo', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-4047::DEBUG::2012-08-30 13:39:25,615::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=20 10.35.97.44:/pipi/popo /rhev/data-center/mnt/10.35.97.44:_pipi_popo' (cwd None)
Thread-4120::INFO::2012-08-30 13:41:25,757::logUtils::37::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'retrans': '20', 'port': '', 'connection': '10.35.97.44:/pipi/popo', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-4120::DEBUG::2012-08-30 13:41:25,758::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n /bin/umount -f -l /rhev/data-center/mnt/10.35.97.44:_pipi_popo' (cwd None)
Thread-4120::ERROR::2012-08-30 13:41:25,805::hsm::2045::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
  File "/usr/share/vdsm/storage/hsm.py", line 2042, in disconnectStorageServer
    return self._mountCon.disconnect()
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 226, in umount
  File "/usr/share/vdsm/storage/mount.py", line 214, in _runcmd
MountError: (1, ';umount: /rhev/data-center/mnt/10.35.97.44:_pipi_popo: not mounted\n')
Thread-4120::INFO::2012-08-30 13:41:27,274::logUtils::39::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Return response: {'statuslist': [{'status': 477, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-4047::ERROR::2012-08-30 13:41:30,806::hsm::1971::Storage.HSM::(connectStorageServer) Could not connect to storageServer
  File "/usr/share/vdsm/storage/hsm.py", line 1968, in connectStorageServer
    return self._mountCon.connect()
    self._mount.mount(self.options, self._vfsType)
  File "/usr/share/vdsm/storage/mount.py", line 198, in mount
  File "/usr/share/vdsm/storage/mount.py", line 214, in _runcmd
MountError: (32, ';mount.nfs: Connection timed out\n')
Thread-4047::INFO::2012-08-30 13:41:30,810::logUtils::39::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 477, 'id': '00000000-0000-0000-0000-000000000000'}]}

The new directory is not cleaned up (even after a vdsm restart).

[root@nott-vds1 ~]# ll /rhev/data-center/mnt/
total 208
drwxr-xr-x.  2 vdsm kvm 4096 Aug 30 13:39 10.35.97.44:_pipi_popo
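
For context, the MountError values in the log above are (exit code, command output) tuples. A rough, simplified sketch of what mount.py's _runcmd appears to do, judging from the tracebacks (hypothetical, not the real implementation):

import subprocess

class MountError(Exception):
    pass

def _runcmd(cmd):
    # Run the mount/umount command; on a non-zero exit code, raise
    # MountError with the exit code and the joined output. Joining with
    # ";" would explain the leading ";" in the log when stdout is empty.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, universal_newlines=True)
    out, err = p.communicate()
    if p.returncode != 0:
        raise MountError(p.returncode, ";".join((out, err)))
    return out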

Comment 1 Haim 2012-08-30 10:52:39 UTC
Created attachment 608151 [details]
vdsm log

Comment 3 Ayal Baron 2012-09-02 06:39:26 UTC
It should be cleaned during the connect command, not the disconnect command (roughly as sketched below).
Also note that this has severe implications: see bug 853011.
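
A hypothetical sketch of that approach (not the merged change itself): before mounting, connect removes any stale, unmounted directory left behind by an earlier failed attempt, instead of relying on disconnect to do it.

import os

def prepare_mountpoint(mountpoint):
    # Remove a stale leftover from an earlier failed mount, then make
    # sure the directory exists for the new attempt.
    if os.path.isdir(mountpoint) and not os.path.ismount(mountpoint):
        os.rmdir(mountpoint)
    if not os.path.exists(mountpoint):
        os.makedirs(mountpoint)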

Comment 4 Yeela Kaplan 2012-10-21 13:12:27 UTC
http://gerrit.ovirt.org/#/c/8695/

Comment 7 Haim 2012-10-24 16:31:47 UTC
Verified on vdsm-4.9.6-39.

Comment 9 errata-xmlrpc 2012-12-04 19:08:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html

