Bug 1559811 - Deploy HE failed on the [Add NFS storage domain] task while selecting NFS v4_1
Summary: Deploy HE failed on the [Add NFS storage domain] task while selecting NFS v4_1
Keywords:
Status: CLOSED DUPLICATE of bug 1554922
Alias: None
Product: cockpit-ovirt
Classification: oVirt
Component: Hosted Engine
Version: 0.11.19
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.3
Target Release: ---
Assignee: Ryan Barry
QA Contact: Yihui Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-23 10:37 UTC by Yihui Zhao
Modified: 2018-04-09 09:12 UTC
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-09 09:12:00 UTC
oVirt Team: Node
Embargoed:
rule-engine: ovirt-4.2+
yzhao: testing_ack+


Attachments
vdsm_log (1.90 MB, text/plain), 2018-03-23 13:56 UTC, Yihui Zhao
engine_log (124.04 KB, text/plain), 2018-03-23 13:57 UTC, Yihui Zhao


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1554922 0 unspecified CLOSED Failures creating a storage domain via ansible module/REST API doesn't report a meaningful error message 2021-02-22 00:41:40 UTC

Internal Links: 1554922

Description Yihui Zhao 2018-03-23 10:37:24 UTC
Description of problem: 
Deploying HE fails on the [Add NFS storage domain] task when NFS v4_1 is selected.
"""

[ INFO ] TASK [Add NFS storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
"""

From the log:
2018-03-23 18:07:07,336+0800 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Add NFS storage domain', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_QadH6J/ansible_module_ovirt_storage_domains.py", line 541, in main\\n    sd_id = storage_domains_module.create()[\\\'id\\\']\\n  File "/tmp/ansible_QadH6J/ansible_modlib.zip/ansible/module_utils/ovirt.py", l\nrepr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_QadH6J/ansible_module_ovirt_storage_domains.py", line 541, in main\\n    sd_id = storage_domains_module.create()[\\\'id\\\']\\n  File "/tmp/ansible_QadH6J/ansible_modlib.zip/ansible/module_utils/ovirt.py", l\ndir: [\'__class__\', \'__cmp__\', \'__contains__\', \'__delattr__\', \'__delitem__\', \'__doc__\', \'__eq__\', \'__format__\', \'__ge__\', \'__getattribute__\', \'__getitem__\', \'__gt__\', \'__hash__\', \'__init__\', \'__iter__\', \'__le__\', \'__len__\', \'__lt__\', \'__ne__\', \'__new__\', \'__reduce__\', \'__reduce_ex__\', \'__repr__\', \'__setattr__\', \'__setitem__\', \'__sizeof__\', \'__str__\', \'__subclasshook__\', \'clear\', \'copy\', \'fromkeys\', \'get\', \'has_key\', \'items\', \'iteritems\', \'iterkeys\', \'itervalues\', \'keys\', \'pop\', \'popitem\', \'setdefault\', \'update\', \'values\', \'viewitems\', \'viewkeys\', \'viewvalues\']\npprint: {\'_ansible_no_log\': False,\n \'_ansible_parsed\': True,\n \'changed\': False,\n u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_QadH6J/ansible_module_ovirt_storage_domains.py", line 541, in main\\n    sd_id = storage_domains_module.create()[\\\'id\\\']\\n  File "/tmp/ansible_QadH6J/ansib\n{\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_Q.__doc__: "dict() -> new empty dictionary\\ndict(mapping) -> new dictionary initialized from a mapping object\'s\\n    (key, value) pairs\\ndict(iterable) -> new dictionary initialized as if via:\\n    d = {}\\n    for k, v in iterable:\\n        d[k] = v\\ndict(**kwargs) -> new dictionary initialized with the name=value pairs\\n    in the keyword argument list.  For example:  dict(one=1, two=2)"\n{\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_Q.__hash__: None', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml'}
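
(For context: the failing [Add NFS storage domain] task runs the ovirt_storage_domains Ansible module, which in turn calls the engine REST API. Below is a minimal sketch of the equivalent call through ovirtsdk4; the engine URL, credentials and host name are placeholders, not values taken from the playbook.)

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials; not taken from this deployment.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

# Add an NFS data domain, forcing protocol version 4.1 (the failing setting).
sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='hosted_storage',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),  # placeholder host name
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='10.66.148.11',
            path='/home/yzhao/nfs9',
            nfs_version=types.NfsVersion.V4_1,
        ),
    ),
)
connection.close()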


Version-Release number of selected component (if applicable): 
rhvh-4.2.2.0-0.20180322.0+1
cockpit-ovirt-dashboard-0.11.19-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.14-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.7-1.el7ev.noarch
rhvm-appliance-4.2-20180322.0.el7.noarch


How reproducible: 
100%


Steps to Reproduce: 
1. Deploy HE from Cockpit using the Ansible-based flow and select NFS v4_1 for the storage domain

Actual results:  
1. HE deployment fails when NFS v4_1 is selected

Expected results: 
The Ansible-based deployment completes successfully.

Additional info:
Auto, v3, and v4 do not have this issue.

Comment 1 Simone Tiraboschi 2018-03-23 10:45:53 UTC
Can we mount that share with NFS v4.1?
Can you please attach vdsm and engine.log?
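
(A quick way to answer the first question outside of the deployment flow, as a sketch: attempt the same mount VDSM performs, forcing NFS v4.1. If it fails with "mount.nfs: Protocol not supported", the export does not offer v4.1. The mount point is a throwaway directory.)

import subprocess
import tempfile

# Force NFS protocol version 4.1, roughly mirroring what VDSM passes
# when V4_1 is selected for the storage domain.
mountpoint = tempfile.mkdtemp(prefix='nfs41-check-')
rc = subprocess.call(['mount', '-t', 'nfs', '-o', 'vers=4.1',
                      '10.66.148.11:/home/yzhao/nfs9', mountpoint])
if rc == 0:
    print('NFS v4.1 mount succeeded')
    subprocess.call(['umount', mountpoint])
else:
    print('NFS v4.1 mount failed with exit code %d' % rc)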

Comment 2 Yihui Zhao 2018-03-23 13:56:45 UTC
Created attachment 1412122 [details]
vdsm_log

Comment 3 Yihui Zhao 2018-03-23 13:57:56 UTC
Created attachment 1412123 [details]
engine_log

Comment 4 Simone Tiraboschi 2018-03-23 14:12:15 UTC
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
"""

The error message on the ansible/REST API side is really poor, as per https://bugzilla.redhat.com/1554922


On engine side:
 2018-03-23 21:47:33,685+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-54) [9ccb5f90-cee9-41ee-8714-e7744b307680] START, ConnectStorageServerVDSCommand(HostName = ibm-x3650m5-05.lab.eng.pek2.redhat.com, StorageServerConnectionManagementVDSParameters:{hostId='4636e9be-6af1-4a56-ad72-efd90956dce0', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', connectionList='[StorageServerConnections:{id='null', connection='10.66.148.11:/home/yzhao/nfs9', iqn='null', vfsType='null', mountOptions='null', nfsVersion='V4_1', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 7f8f1a7e
 2018-03-23 21:47:33,742+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-54) [9ccb5f90-cee9-41ee-8714-e7744b307680] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=477}, log id: 7f8f1a7e
 2018-03-23 21:47:33,754+08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-54) [9ccb5f90-cee9-41ee-8714-e7744b307680] EVENT_ID: STORAGE_DOMAIN_ERROR(996), The error message for connection 10.66.148.11:/home/yzhao/nfs9 returned by VDSM was: Problem while trying to mount target
 2018-03-23 21:47:33,754+08 ERROR [org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] (default task-54) [9ccb5f90-cee9-41ee-8714-e7744b307680] The connection with details '10.66.148.11:/home/yzhao/nfs9' failed because of error code '477' and error message is: problem while trying to mount target
 
On vdsm side: 
 2018-03-23 21:47:33,692+0800 INFO  (jsonrpc/7) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'10.66.148.11:/home/yzhao/nfs9', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'4.1', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.124.70,57618, flow_id=9ccb5f90-cee9-41ee-8714-e7744b307680, task_id=4a320158-b15e-41c3-ab6c-24c5b8873463 (api:46)
 2018-03-23 21:47:33,695+0800 INFO  (jsonrpc/7) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/10.66.148.11:_home_yzhao_nfs9' (storageServer:167)
 2018-03-23 21:47:33,695+0800 INFO  (jsonrpc/7) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/10.66.148.11:_home_yzhao_nfs9 mode: None (fileUtils:197)
 2018-03-23 21:47:33,696+0800 INFO  (jsonrpc/7) [storage.Mount] mounting 10.66.148.11:/home/yzhao/nfs9 at /rhev/data-center/mnt/10.66.148.11:_home_yzhao_nfs9 (mount:204)
 2018-03-23 21:47:33,725+0800 ERROR (jsonrpc/7) [storage.HSM] Could not connect to storageServer (hsm:2398)
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2395, in connectStorageServer
     conObj.connect()
   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 430, in connect
     return self._mountCon.connect()
   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 179, in connect
     six.reraise(t, v, tb)
   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 171, in connect
     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
     cgroup=cgroup)
   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
     return callMethod()
   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
     **kwargs)
   File "<string>", line 2, in mount
   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
     raise convert_to_error(kind, result)
 MountError: (32, ';mount.nfs: Protocol not supported\n')
 2018-03-23 21:47:33,727+0800 INFO  (jsonrpc/7) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 477, 'id': u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:192.168.124.70,57618, flow_id=9ccb5f90-cee9-41ee-8714-e7744b307680, task_id=4a320158-b15e-41c3-ab6c-24c5b8873463 (api:52)
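
(The "mount.nfs: Protocol not supported" error above usually means the client requested an NFS minor version the server does not enable. One way to check, assuming the storage server at 10.66.148.11 is a Linux kernel NFS server, is to read its nfsd version table directly; this sketch has to be run on the server itself.)

# /proc/fs/nfsd/versions lists the NFS protocol versions the server enables,
# e.g. "-2 +3 +4 +4.1": a leading '+' means enabled, '-' means disabled.
with open('/proc/fs/nfsd/versions') as f:
    versions = f.read().split()
print('NFS v4.1 enabled on this server:', '+4.1' in versions)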

Comment 5 Simone Tiraboschi 2018-04-09 09:12:00 UTC
Unable to reproduce from cockpit with a storage server that supports NFS v4.1.

'mount.nfs: Protocol not supported' should still be clearly reported to the user if the storage server doesn't support it.
Closing as a duplicate of 1554922.

*** This bug has been marked as a duplicate of bug 1554922 ***

