Bug 978285

Summary: Unable to add storage domain because nfs daemons not running
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 3.3.0
Status: CLOSED DUPLICATE
Severity: medium
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: ---
Reporter: Jiri Belka <jbelka>
Assignee: Nobody's working on this, feel free to take it <nobody>
CC: acathrow, dyasny, hateya, iheim, lpeer, Rhev-m-bugs, yeylon, ykaul
Doc Type: Bug Fix
Last Closed: 2013-06-26 10:41:39 UTC
Type: Bug
Attachments: vdsm.log, ovirt-20130626110254-10.34.63.222-2bdb391c.log, engine.log

Description Jiri Belka 2013-06-26 09:24:27 UTC
Created attachment 765482 [details]
vdsm.log, ovirt-20130626110254-10.34.63.222-2bdb391c.log, engine.log

Description of problem:

When you add a host and your SD type is NFS, host deployment should also guarantee that all required NFS daemons on the host are enabled and running.

I manually stopped the NFS daemons (rpc*, nfs) and then added the host via the Admin Portal. The host came up, but adding a data SD afterwards fails with

~~~
Error while executing action: Cannot remove Storage. Storage connection id is empty.
~~~

vdsm.log is full of ERROR messages about the NFS subdaemons not running. Host deployment should double-check that the NFS subdaemons are correctly set up.

~~~
Thread-191::ERROR::2013-06-26 11:10:07,631::storageServer::209::StorageServer.MountConnection::(connect) Mount failed: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/storageServer.py", line 207, in connect
    self._mount.mount(self.options, self._vfsType)
  File "/usr/share/vdsm/storage/mount.py", line 222, in mount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 238, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
Thread-191::ERROR::2013-06-26 11:10:07,631::hsm::2307::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2304, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 320, in connect
    return self._mountCon.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 215, in connect
    raise e
MountError: (32, ";mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n")
~~~
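
The underlying mount failure can be reproduced by hand; a minimal sketch, assuming a hypothetical NFSv3 export at nfs-server:/export and rpc.statd stopped on the host:

~~~
# with rpc.statd stopped, an NFSv3 mount that needs remote locking fails
# with exit status 32, matching the MountError above
mkdir -p /mnt/test
mount -t nfs -o nfsvers=3 nfs-server:/export /mnt/test
# mount.nfs: rpc.statd is not running but is required for remote locking.
# mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
~~~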



Version-Release number of selected component (if applicable):
is2

How reproducible:
100%

Steps to Reproduce:
1. log in to the host and stop the NFS subdaemons (rpcbind, rpcidmapd, nfs), e.g. with the commands sketched after this list
2. add the host from the engine
3. add a data SD
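
A minimal sketch of step 1, using RHEL 6 style init scripts and the service names from this report (the exact set may differ per release):

~~~
# stop the NFS subdaemons on the host before adding it to the engine
service nfs stop
service rpcidmapd stop
service rpcbind stop
~~~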

Actual results:
unable to add data SD

Expected results:
successful host deployment should double-check that the NFS subdaemons are enabled and started, so the host is ready for later configuration (adding an SD); see the sketch below
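
A minimal sketch of such a check, using RHEL 6 style init scripts and the service names from this report (an illustration only, not the actual ovirt-host-deploy code):

~~~
# warn (or fail deployment) if the NFS subdaemons are not running or not
# enabled to start on boot
for svc in rpcbind rpcidmapd nfs; do
    service "$svc" status >/dev/null 2>&1 || echo "WARNING: $svc is not running"
    chkconfig --list "$svc" | grep -q '3:on' || echo "WARNING: $svc is not enabled"
done
~~~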

Additional info:

Manually starting

  rpcbind, rpcidmapd, nfs

allows the engine to add the data SD correctly (see the commands below).
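
As a sketch, the manual workaround (RHEL 6 style service commands, names taken from the report):

~~~
service rpcbind start
service rpcidmapd start
service nfs start
~~~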

Comment 1 Haim 2013-06-26 10:41:39 UTC

*** This bug has been marked as a duplicate of bug 977940 ***