Bug 1250540 - Re-attaching fresh export domain fails - hsm::2543::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Summary: Re-attaching fresh export domain fails - hsm::2543::Storage.HSM::(disconnectS...
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: vdsm
Classification: oVirt
Component: General
Version: 4.17.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-3.6.0-rc3
Target Release: 4.17.8
Assignee: Tal Nisan
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks: RHEV3.6PPC 1277183 1277184
 
Reported: 2015-08-05 12:23 UTC by Jiri Belka
Modified: 2016-03-10 15:09 UTC
CC List: 17 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-10-19 11:17:10 UTC
oVirt Team: Storage
Embargoed:
ylavi: ovirt-3.6.0?
rule-engine: blocker?
ylavi: planning_ack+
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
vdsm.log, engine.log (772.45 KB, application/x-bzip)
2015-08-05 12:23 UTC, Jiri Belka

Description Jiri Belka 2015-08-05 12:23:50 UTC
Created attachment 1059457
vdsm.log, engine.log

Description of problem:

Thread-824::DEBUG::2015-08-05 08:00:20,223::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
JsonRpc (StompReactor)::DEBUG::2015-08-05 08:00:20,227::stompreactor::235::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=u'SEND'>
Thread-825::DEBUG::2015-08-05 08:00:20,227::__init__::496::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'7d3e446c-9ee7-41ad-82a2-e6f34934666a', u'connection': u'10.16.29.93:/jbelka-export', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 1}
JsonRpcServer::DEBUG::2015-08-05 08:00:20,228::__init__::533::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-825::DEBUG::2015-08-05 08:00:20,228::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1512bf3-a9cc-4c23-a950-55db34c9b585`::moving from state init -> state preparing
Thread-825::INFO::2015-08-05 08:00:20,228::logUtils::48::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'7d3e446c-9ee7-41ad-82a2-e6f34934666a', u'connection': u'10.16.29.93:/jbelka-export', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], options=None)
Thread-825::DEBUG::2015-08-05 08:00:20,229::mount::234::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/10.16.29.93:_jbelka-export (cwd None)
Thread-825::ERROR::2015-08-05 08:00:20,510::hsm::2543::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2539, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 406, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 248, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 261, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 246, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount: /rhev/data-center/mnt/10.16.29.93:_jbelka-export: mountpoint not found\n')
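
For context, /usr/bin/umount exits with status 32 on failure, including when the mountpoint no longer exists, and vdsm's mount.py wraps that exit status and output into MountError. Below is a minimal sketch (my own illustration, not vdsm's actual code) of a disconnect-side umount that treats "mountpoint not found" as already unmounted, which is effectively a no-op success for a disconnect operation:

import subprocess

class MountError(Exception):
    pass

def umount_idempotent(path):
    # Same flags as the logged command: force (-f) and lazy (-l) unmount.
    proc = subprocess.run(["umount", "-f", "-l", path],
                          capture_output=True, text=True)
    if proc.returncode == 0:
        return
    # util-linux umount exits with 32 when the unmount fails; if stderr says
    # the mountpoint was not found, there is nothing left to unmount, so a
    # disconnect path could treat that as success (an assumption about the
    # desired behavior, not what vdsm did at the time of this bug).
    if proc.returncode == 32 and "not found" in proc.stderr:
        return
    raise MountError(proc.returncode, ";".join((proc.stdout, proc.stderr)))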


Version-Release number of selected component (if applicable):
vdsm-4.17.0.8-1.el7ev.noarch / 3.6.0-5, if I understand correctly
ovirt-engine-backend-3.6.0-0.0.master.20150804111407.git122a3a0.el6.noarch 3.6.0-5

How reproducible:
100%

Steps to Reproduce:
1. Have a 3.6 engine; I had one x86_64 host and one POWER8E host in the DC.
2. Create a new, fresh export domain.
3. Put the export domain into maintenance, detach it, then remove it.
4. Add the same export domain again (a scripted sketch of steps 3-4 follows below).
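
For anyone scripting the reproduction, here is a rough sketch of steps 3-4 driven through the oVirt 3.6 REST API (v3) instead of the web UI. The engine URL, credentials, UUIDs, and host name are placeholders, and the endpoint paths and XML bodies follow the v3 API layout as I recall it; verify them against your engine's /api/rsdl before relying on this:

import requests

BASE = "https://engine.example.com/api"  # placeholder engine URL
DC = "<datacenter-uuid>"                 # placeholder data center id
SD = "<export-domain-uuid>"              # placeholder export domain id
HEADERS = {"Content-Type": "application/xml"}

s = requests.Session()
s.auth = ("admin@internal", "password")  # placeholder credentials
s.verify = False                         # lab setup only

# Step 3: maintenance (deactivate), detach from the DC, then remove without
# formatting. The <format>/<host> removal body is an assumption to verify.
s.post("%s/datacenters/%s/storagedomains/%s/deactivate" % (BASE, DC, SD),
       data="<action/>", headers=HEADERS)
s.delete("%s/datacenters/%s/storagedomains/%s" % (BASE, DC, SD))
s.delete("%s/storagedomains/%s" % (BASE, SD), headers=HEADERS,
         data="<storage_domain><format>false</format>"
              "<host><name>some-host</name></host></storage_domain>")

# Step 4: import the same NFS export domain again; this is where vdsm.log
# shows the disconnectStorageServer error above.
s.post("%s/storagedomains" % BASE, headers=HEADERS, data=(
    "<storage_domain>"
    "<name>jbelka-export</name>"
    "<type>export</type>"
    "<storage><type>nfs</type>"
    "<address>10.16.29.93</address>"
    "<path>/jbelka-export</path></storage>"
    "<host><name>some-host</name></host>"
    "</storage_domain>"))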

Actual results:
The popup shows an in-progress animation, then the action stops without returning any error and you are back in the filled-in Import Export Domain dialog.

Expected results:
Re-attaching the export domain should succeed.

Additional info:
PS: I tried hosts of both CPU models for re-attaching the export domain; same issue.

Comment 1 Jiri Belka 2015-08-05 12:25:08 UTC
Oops, missing description:

Unable to re-attach a previously created, fresh export domain.

Comment 2 Red Hat Bugzilla Rules Engine 2015-09-22 07:43:49 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 3 Tal Nisan 2015-10-08 11:00:30 UTC
Tried to reproduce a few times with these steps:
1. Added an export domain from scratch
2. Moved the domain to maintenance
3. Detached the domain from the pool
4. Removed the domain (without checking the 'Format' checkbox)
5. Imported the export domain

Importing the export domain succeeded in all attempts, so I'm removing the blocker flag as it seems that this is not a blocker.

Aside from that, the logs are flooded with:
"2015-08-05 13:59:41,980 ERROR [org.ovirt.engine.core.dao.jpa.TransactionalInterceptor] (default task-6) [] Failed to run operation in a new transaction: javax.persistence.PersistenceException: org.hibernate.HibernateException: A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance: org.ovirt.engine.core.common.job.Job.steps" 
Probably not related to the problem, but it still makes it hard to go through the logs.

Jiri, can you please try to reproduce on a clean environment without those errors?

Comment 4 Red Hat Bugzilla Rules Engine 2015-10-08 11:00:31 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 5 Michal Skrivanek 2015-10-14 03:02:47 UTC
May not be relevant, but also note that this is vdsm 4.17.0.8, which is not 4.17.8.

Comment 6 Yaniv Lavi 2015-10-14 13:34:43 UTC
Can you recreate this with the latest package?

Comment 7 Red Hat Bugzilla Rules Engine 2015-10-19 10:59:40 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in the MODIFIED status, the target release has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 8 Pavel Stehlik 2015-10-22 08:44:57 UTC
Rechecked and can't reproduce with:
libvirt-1.2.17-12.el7
vdsm-4.17.8-1.el7ev
rhevm-3.6.0-0.18.el6.noarch

