Description of problem:

ovirt-hosted-engine-setup calls connectStorageServer as soon as the user provides the storage server details. The call fails:

2015-09-02 13:54:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._storageServerConnection:618 connectStorageServer
2015-09-02 13:54:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._storageServerConnection:681 {'status': {'message': 'OK', 'code': 0}, 'statuslist': [{'status': 100, 'id': '40d3d176-c857-440d-808f-c52474b2d67a'}]}
2015-09-02 13:54:59 DEBUG otopi.context context._executeMethod:156 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 146, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 1244, in _late_customization
    self._getExistingDomain()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 501, in _getExistingDomain
    self._storageServerConnection()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 688, in _storageServerConnection
    _('Connection to storage server failed')
RuntimeError: Connection to storage server failed

In the VDSM log we can find:

Thread-17::DEBUG::2015-09-02 13:54:59,512::task::595::Storage.TaskManager.Task::(_updateState) Task=`8c720316-6320-43ae-a7b0-3aa16c682b6e`::moving from state init -> state preparing
Thread-17::INFO::2015-09-02 13:54:59,513::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'protocol_version': 3, 'connection': '192.168.10.2:/nfsshare/storage.1441191248', 'user': 'kvm', 'id': '40d3d176-c857-440d-808f-c52474b2d67a'}], options=None)
Thread-17::DEBUG::2015-09-02 13:54:59,514::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/192.168.10.2:_nfsshare_storage.1441191248 mode: None
Thread-17::DEBUG::2015-09-02 13:54:59,515::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 192.168.10.2:/nfsshare/storage.1441191248 /rhev/data-center/mnt/192.168.10.2:_nfsshare_storage.1441191248 (cwd None)
Thread-17::DEBUG::2015-09-02 13:54:59,543::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-17::ERROR::2015-09-02 13:54:59,559::hsm::2454::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2451, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 419, in connect
    return self._mountCon.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 236, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/fileSD.py", line 76, in validateDirAccess
    (os.R_OK | os.X_OK))
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in validateAccess
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod
    kind, result = conn.recv()
AttributeError: 'module' object has no attribute 'Timeout'
Thread-17::DEBUG::2015-09-02 13:54:59,561::hsm::2478::Storage.HSM::(connectStorageServer) knownSDs: {}
Thread-17::INFO::2015-09-02 13:54:59,561::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': '40d3d176-c857-440d-808f-c52474b2d67a'}]}
Thread-17::DEBUG::2015-09-02 13:54:59,561::task::1191::Storage.TaskManager.Task::(prepare) Task=`8c720316-6320-43ae-a7b0-3aa16c682b6e`::finished: {'statuslist': [{'status': 100, 'id': '40d3d176-c857-440d-808f-c52474b2d67a'}]}
Thread-17::DEBUG::2015-09-02 13:54:59,561::task::595::Storage.TaskManager.Task::(_updateState) Task=`8c720316-6320-43ae-a7b0-3aa16c682b6e`::moving from state preparing -> state finished

while in the supervdsm log:

MainProcess|Thread-17::DEBUG::2015-09-02 13:54:59,555::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), '/rhev/data-center/mnt/192.168.10.2:_nfsshare_storage.1441191248', 5) {}
MainProcess|Thread-17::ERROR::2015-09-02 13:54:59,558::supervdsmServer::113::SuperVdsm.ServerCallback::(wrapper) Error in validateAccess
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 111, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 279, in validateAccess
    kwargs=kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 260, in _runAs
    raise Timeout()
Timeout

Version-Release number of selected component (if applicable):
vdsm-4.17.4-0.el7.noarch (from the ovirt-3.6-snapshot repository)

How reproducible:
100%

Steps to Reproduce:
1. Call connectStorageServer.

Actual results:
AttributeError: 'module' object has no attribute 'Timeout'

Expected results:
connectStorageServer succeeds.

Additional info:
It also fails on the ovirt-hosted-engine-setup CI jobs; all the logs are available here:
http://jenkins-ci.eng.lab.tlv.redhat.com/job/hosted_engine_3.6_el7_staticip_nfs3_install_on_7.1/75/
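The two tracebacks together suggest a classic multiprocessing-manager failure mode: supervdsmServer raises Timeout, the exception instance is pickled and sent back over the manager connection, and the vdsm-side conn.recv() cannot unpickle it because the Timeout class is missing from the module the pickle points at, so the real error is masked by an AttributeError. A minimal, self-contained sketch of that mechanism (module and class names here are hypothetical, not vdsm code):

```python
import pickle
import sys
import types

# Server side: a throw-away module that defines Timeout, standing in
# for the module where supervdsmServer's exception class lives.
mod = types.ModuleType("fake_vdsm_utils")
exec("class Timeout(RuntimeError):\n    pass", mod.__dict__)
sys.modules["fake_vdsm_utils"] = mod

# The raised exception travels over the multiprocessing connection
# as a pickle, which records only the class's module and name.
payload = pickle.dumps(mod.Timeout("validateAccess timed out"))

# Client side: the module exists but no longer defines Timeout,
# so unpickling the exception fails with AttributeError and the
# original Timeout never reaches the caller.
del mod.Timeout
try:
    pickle.loads(payload)
    result = "no error"
except AttributeError:
    result = "AttributeError"
print(result)  # → AttributeError
```

This reproduces the shape of the bug: the client sees an AttributeError about the missing Timeout attribute instead of the Timeout the server actually raised.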
I've reproduced this in my own setup, looking into it...
*** Bug 1260608 has been marked as a duplicate of this bug. ***
I've actually encountered this in my setup (RHEL 7.2):
vdsm-4.17.5-1.el7ev.noarch
libvirt-1.2.17-6.el7.x86_64

Greg, can I ask if there's any workaround, or a new package I can download right now to fix this?
connectStorageServer works as expected using:
vdsm-4.17.10-5.el7ev.noarch
rhevm-3.6.0.2-0.1.el6.noarch

Thread-4952::INFO::2015-11-02 14:57:45,774::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'd2d67b6d-d731-49f6-ac61-c21cd3baf023', u'connection': u'10.35.160.108:/Storage/elad/1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], options=None)
Thread-4952::DEBUG::2015-11-02 14:57:45,776::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/10.35.160.108:_Storage_elad_1 mode: None
Thread-4952::DEBUG::2015-11-02 14:57:45,776::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 10.35.160.108:/Storage/elad/1 /rhev/data-center/mnt/10.35.160.108:_Storage_elad_1 (cwd None)
Thread-4952::DEBUG::2015-11-02 14:57:45,831::hsm::2405::Storage.HSM::(__prefetchDomains) nfs local path: /rhev/data-center/mnt/10.35.160.108:_Storage_elad_1
Thread-4952::DEBUG::2015-11-02 14:57:45,833::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: (u'a50c4d74-f8bb-48c9-93b9-5ef5f6d908df',)
Thread-4952::DEBUG::2015-11-02 14:57:45,833::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {a50c4d74-f8bb-48c9-93b9-5ef5f6d908df: storage.nfsSD.findDomain}
Thread-4952::INFO::2015-11-02 14:57:45,833::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'd2d67b6d-d731-49f6-ac61-c21cd3baf023'}]}
Thread-4952::DEBUG::2015-11-02 14:57:45,833::task::1191::Storage.TaskManager.Task::(prepare) Task=`220338c7-867b-4b01-a621-bf75f47eed88`::finished: {'statuslist': [{'status': 0, 'id': u'd2d67b6d-d731-49f6-ac61-c21cd3baf023'}]}
Thread-4952::DEBUG::2015-11-02 14:57:45,833::task::595::Storage.TaskManager.Task::(_updateState) Task=`220338c7-867b-4b01-a621-bf75f47eed88`::moving from state preparing -> state finished
Thread-4952::DEBUG::2015-11-02 14:57:45,834::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-4952::DEBUG::2015-11-02 14:57:45,834::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4952::DEBUG::2015-11-02 14:57:45,834::task::993::Storage.TaskManager.Task::(_decref) Task=`220338c7-867b-4b01-a621-bf75f47eed88`::ref 0 aborting False
Thread-4952::DEBUG::2015-11-02 14:57:45,834::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'd2d67b6d-d731-49f6-ac61-c21cd3baf023'}]
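For reference, the difference between the failing and passing runs shows up entirely in the statuslist that connectStorageServer returns: status 0 per connection means success, while the failing run returned status 100. A small illustrative helper (not vdsm code; the function name is mine) showing how a caller such as hosted-engine-setup can tell the two responses apart:

```python
def failed_connections(response):
    """Return the ids of storage connections that did not come up.

    In the connectStorageServer response, each statuslist entry has
    a per-connection 'status': 0 on success, non-zero (e.g. 100 in
    the failing run above) on failure.
    """
    return [entry["id"]
            for entry in response.get("statuslist", [])
            if entry["status"] != 0]

# Responses taken from the logs in this report.
failing = {"statuslist": [
    {"status": 100, "id": "40d3d176-c857-440d-808f-c52474b2d67a"}]}
passing = {"statuslist": [
    {"status": 0, "id": "d2d67b6d-d731-49f6-ac61-c21cd3baf023"}]}

print(failed_connections(failing))  # → ['40d3d176-c857-440d-808f-c52474b2d67a']
print(failed_connections(passing))  # → []
```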
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0362.html