Created attachment 1059457 [details]
vdsm.log, engine.log

Description of problem:

Thread-824::DEBUG::2015-08-05 08:00:20,223::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
JsonRpc (StompReactor)::DEBUG::2015-08-05 08:00:20,227::stompreactor::235::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=u'SEND'>
Thread-825::DEBUG::2015-08-05 08:00:20,227::__init__::496::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'7d3e446c-9ee7-41ad-82a2-e6f34934666a', u'connection': u'10.16.29.93:/jbelka-export', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 1}
JsonRpcServer::DEBUG::2015-08-05 08:00:20,228::__init__::533::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-825::DEBUG::2015-08-05 08:00:20,228::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1512bf3-a9cc-4c23-a950-55db34c9b585`::moving from state init -> state preparing
Thread-825::INFO::2015-08-05 08:00:20,228::logUtils::48::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'7d3e446c-9ee7-41ad-82a2-e6f34934666a', u'connection': u'10.16.29.93:/jbelka-export', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], options=None)
Thread-825::DEBUG::2015-08-05 08:00:20,229::mount::234::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/10.16.29.93:_jbelka-export (cwd None)
Thread-825::ERROR::2015-08-05 08:00:20,510::hsm::2543::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2539, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 406, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 248, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 261, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 246, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount: /rhev/data-center/mnt/10.16.29.93:_jbelka-export: mountpoint not found\n')

Version-Release number of selected component (if applicable):
vdsm-4.17.0.8-1.el7ev.noarch / 3.6.0-5 (iiuc)
ovirt-engine-backend-3.6.0-0.0.master.20150804111407.git122a3a0.el6.noarch / 3.6.0-5

How reproducible:
100%

Steps to Reproduce:
1. Have a 3.6 engine; my DC had one x86_64 host and one power8e host.
2. Create a new, fresh export domain.
3. Put the export domain into maintenance, then detach and remove it.
4. Add the same export domain again.

Actual results:
The popup shows the in-progress animation and then "returns" without any error; the action stops and you are back in the filled import export domain dialog.

Expected results:
Re-attaching the export domain should work.

Additional info:
PS: I tried hosts of both CPU models for re-attaching the export domain; same issue.
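For illustration only, here is a minimal Python sketch (not vdsm's actual code or fix) of the failure shape seen in the traceback: running "umount -f -l" on a path that is not mounted exits non-zero, which mount.py wraps into MountError(32, ...). The mountpoint is copied from the log; the tolerant handling is a hypothetical workaround.

# Illustrative sketch, not vdsm code. Shows why a missing mountpoint makes
# "umount -f -l" fail with rc=32, and how a disconnect path could tolerate it.
import os
import subprocess

MOUNTPOINT = "/rhev/data-center/mnt/10.16.29.93:_jbelka-export"  # taken from the log above

def disconnect(mountpoint):
    if not os.path.ismount(mountpoint):
        # Nothing is mounted there; returning instead of calling umount avoids
        # the "mountpoint not found" error seen in the traceback.
        return
    try:
        subprocess.check_output(
            ["/usr/bin/sudo", "-n", "/usr/bin/umount", "-f", "-l", mountpoint],
            stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        # umount's exit code (32 in the log) and its output end up in the error message.
        raise RuntimeError("umount failed (rc=%d): %s" % (e.returncode, e.output))

if __name__ == "__main__":
    disconnect(MOUNTPOINT)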
Oops, missing description: unable to re-attach a freshly created export domain.
This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
Tried to reproduce a few times with these steps:
1. Added an export domain from scratch
2. Moved the domain to maintenance
3. Detached the domain from the pool
4. Removed the domain (without checking the 'format' checkbox)
5. Imported the export domain

Importing the export domain succeeded in every try, so I am removing the blocker flag, as this does not seem to be a blocker.

Aside from that, the logs are trashed with:

"2015-08-05 13:59:41,980 ERROR [org.ovirt.engine.core.dao.jpa.TransactionalInterceptor] (default task-6) [] Failed to run operation in a new transaction: javax.persistence.PersistenceException: org.hibernate.HibernateException: A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance: org.ovirt.engine.core.common.job.Job.steps"

This is probably not related to the problem, but it still makes it hard to go through the logs. Jiri, can you please try to reproduce on a clean environment without those errors?
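For what it's worth, a throwaway filter like the sketch below (purely illustrative; the script name and the stack-trace heuristic are my own assumptions, not part of oVirt) can hide the repeated Hibernate error blocks while reading engine.log:

# filter_engine_log.py -- hypothetical helper, not shipped with oVirt.
# Drops the TransactionalInterceptor/Hibernate error blocks quoted above so the
# rest of engine.log is easier to scan.
import sys

NOISE_MARKER = 'A collection with cascade="all-delete-orphan" was no longer referenced'

def filtered(lines):
    skipping = False
    for line in lines:
        if NOISE_MARKER in line:
            skipping = True  # start of a noisy error block; drop it
            continue
        if skipping:
            # Java stack-trace continuation lines are indented or start with
            # "at "/"Caused by:"; keep skipping until a normal log line appears.
            if line.startswith(("\t", " ", "at ", "Caused by:")):
                continue
            skipping = False
        yield line

if __name__ == "__main__":
    with open(sys.argv[1]) as log:
        sys.stdout.writelines(filtered(log))

Usage would be something like: python filter_engine_log.py engine.log > engine.filtered.log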
May not be relevant, but also note that this is vdsm 4.17.0.8, which is not the same as 4.17.8.
Can you recreate this with the latest package?
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Rechecked and can't reproduce with:

libvirt-1.2.17-12.el7
vdsm-4.17.8-1.el7ev
rhevm-3.6.0-0.18.el6.noarch