Bug 978285 - Unable to add storage domain because nfs daemons not running
Status: CLOSED DUPLICATE of bug 977940
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Nobody's working on this, feel free to take it
Depends On:
Blocks:
Reported: 2013-06-26 05:24 EDT by Jiri Belka
Modified: 2013-06-26 06:41 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-06-26 06:41:39 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
vdsm.log, ovirt-20130626110254-10.34.63.222-2bdb391c.log, engine.log (483.06 KB, application/x-gzip)
2013-06-26 05:24 EDT, Jiri Belka

Description Jiri Belka 2013-06-26 05:24:27 EDT
Created attachment 765482 [details]
vdsm.log, ovirt-20130626110254-10.34.63.222-2bdb391c.log, engine.log

Description of problem:

When a host is added and the storage domain (SD) type is NFS, host deployment should guarantee that all NFS daemons on the host are enabled and running as well.

I manually stopped the NFS daemons (rpc*, nfs) and then added the host via the Admin Portal. The host came up, but adding a data SD then fails with:

~~~
Error while executing action: Cannot remove Storage. Storage connection id is empty.
~~~

vdsm.log is full of ERROR messages about NFS subdaemons not running. Host deployment should double-check that the NFS subdaemons are correctly set up:

~~~
Thread-191::ERROR::2013-06-26 11:10:07,631::storageServer::209::StorageServer.MountConnection::(connect) Mount failed: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/storageServer.py", line 207, in connect
    self._mount.mount(self.options, self._vfsType)
  File "/usr/share/vdsm/storage/mount.py", line 222, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 238, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
Thread-191::ERROR::2013-06-26 11:10:07,631::hsm::2307::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2304, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 320, in connect
    return self._mountCon.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 215, in connect
    raise e
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
~~~
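The MountError above carries mount.nfs's stderr verbatim, yet the user-facing error is the unrelated "Storage connection id is empty". As a sketch, the stderr text could be pattern-matched to surface a clearer hint; `classify_mount_error` is a hypothetical helper for illustration, not part of vdsm:

```shell
#!/bin/sh
# Hypothetical helper (not part of vdsm): map mount.nfs stderr to a
# clearer hint than the generic engine error. The first pattern matches
# the exact message captured in vdsm.log above.

classify_mount_error() {
    # $1 = stderr captured from mount.nfs
    case "$1" in
        *"rpc.statd is not running"*)
            echo "NFS lock daemon (rpc.statd) is not running on the host" ;;
        *)
            echo "NFS mount failed" ;;
    esac
}
```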



Version-Release number of selected component (if applicable):
is2

How reproducible:
100%

Steps to Reproduce:
1. log in to the host and stop the NFS subdaemons (rpcbind, rpcidmapd, nfs)
2. add host from engine
3. add data SD

Actual results:
unable to add data SD

Expected results:
Successful host deployment should double-check that the NFS subdaemons are enabled and started, so the host is ready for later configuration (adding an SD).
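A minimal sketch of such a pre-flight check, assuming RHEL 6 style init scripts and the service names from the reproduction steps. `nfs_prereqs_ok` and `PROBE` are illustrative names, not part of ovirt-host-deploy; the probe is injectable so the logic can be exercised without root:

```shell
#!/bin/sh
# Sketch only: check that every daemon an NFS mount with remote locking
# needs (per the reproduction steps) is running before reporting success.

# Default probe: ask the init script whether the daemon is running.
check_daemon_running() {
    service "$1" status >/dev/null 2>&1
}

# Injectable probe command, so the check can be tested without root.
PROBE=${PROBE:-check_daemon_running}

# Returns 0 when rpcbind, rpcidmapd and nfs are all running; otherwise
# prints the stopped daemons on stderr and returns 1.
nfs_prereqs_ok() {
    missing=""
    for d in rpcbind rpcidmapd nfs; do
        "$PROBE" "$d" || missing="$missing $d"
    done
    [ -z "$missing" ] || { echo "not running:$missing" >&2; return 1; }
}
```

Deployment could run a check like this before marking the host Up, failing early with the list of stopped daemons instead of a later mount error.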

Additional info:

Manually starting

  rpcbind, rpcidmapd, nfs

allows the engine to add the data SD correctly.
Comment 1 Haim 2013-06-26 06:41:39 EDT

*** This bug has been marked as a duplicate of bug 977940 ***
