Bug 1250540 - Re-attaching fresh export domain fails - hsm::2543::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Status: CLOSED WORKSFORME
Product: vdsm
Classification: oVirt
Component: General
Version: 4.17.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-3.6.0-rc3
Target Release: 4.17.8
Assigned To: Tal Nisan
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: Regression
Depends On:
Blocks: RHEV3.6PPC 1277183 1277184
 
Reported: 2015-08-05 08:23 EDT by Jiri Belka
Modified: 2016-03-10 10:09 EST (History)
CC: 17 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-19 07:17:10 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ylavi: ovirt‑3.6.0?
rule-engine: blocker?
ylavi: planning_ack+
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
vdsm.log, engine.log (772.45 KB, application/x-bzip)
2015-08-05 08:23 EDT, Jiri Belka

Description Jiri Belka 2015-08-05 08:23:50 EDT
Created attachment 1059457 [details]
vdsm.log, engine.log

Description of problem:

Thread-824::DEBUG::2015-08-05 08:00:20,223::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
JsonRpc (StompReactor)::DEBUG::2015-08-05 08:00:20,227::stompreactor::235::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=u'SEND'>
Thread-825::DEBUG::2015-08-05 08:00:20,227::__init__::496::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'7d3e446c-9ee7-41ad-82a2-e6f34934666a', u'connection': u'10.16.29.93:/jbelka-export', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 1}
JsonRpcServer::DEBUG::2015-08-05 08:00:20,228::__init__::533::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-825::DEBUG::2015-08-05 08:00:20,228::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1512bf3-a9cc-4c23-a950-55db34c9b585`::moving from state init -> state preparing
Thread-825::INFO::2015-08-05 08:00:20,228::logUtils::48::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'7d3e446c-9ee7-41ad-82a2-e6f34934666a', u'connection': u'10.16.29.93:/jbelka-export', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'3', u'password': '********', u'port': u''}], options=None)
Thread-825::DEBUG::2015-08-05 08:00:20,229::mount::234::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/10.16.29.93:_jbelka-export (cwd None)
Thread-825::ERROR::2015-08-05 08:00:20,510::hsm::2543::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2539, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 406, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 248, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 261, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 246, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount: /rhev/data-center/mnt/10.16.29.93:_jbelka-export: mountpoint not found\n')
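The traceback shows disconnectStorageServer failing because umount returned 32 with "mountpoint not found", i.e. the mount was already gone by the time the disconnect ran. A minimal Python sketch of a wrapper that tolerates this state (the `safe_umount` helper and the `MountError` class here are illustrative stand-ins, not vdsm's actual API):

```python
import os


class MountError(Exception):
    """Simplified stand-in for vdsm's storage.mount.MountError."""

    def __init__(self, rc, message):
        super().__init__(rc, message)
        self.rc = rc
        self.message = message


def safe_umount(mountpoint, umount_fn):
    """Unmount, treating an already-missing mountpoint as success.

    `umount_fn` stands in for Mount.umount() in vdsm and is expected to
    raise MountError on failure. Names here are illustrative assumptions.
    """
    if not os.path.ismount(mountpoint):
        # The path is not a mountpoint (or no longer exists): nothing to do.
        return
    try:
        umount_fn(mountpoint)
    except MountError as e:
        # rc 32 with "mountpoint not found" means the mount vanished between
        # the check and the call; treat it as already disconnected.
        if e.rc == 32 and "not found" in e.message:
            return
        raise
```

With this shape, a disconnect racing against an earlier cleanup would log nothing and return success instead of raising the error seen above.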


Version-Release number of selected component (if applicable):
vdsm-4.17.0.8-1.el7ev.noarch / 3.6.0-5 iiuc
ovirt-engine-backend-3.6.0-0.0.master.20150804111407.git122a3a0.el6.noarch 3.6.0-5

How reproducible:
100%

Steps to Reproduce:
1. Have a 3.6 engine; I had one x86_64 host and one POWER8E host in the DC
2. Create a fresh export domain
3. Put the export domain into maintenance, detach it, then remove it
4. Add the same export domain again
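Between steps 3 and 4, the host-side mount state can be checked manually. A small sketch using the NFS path from the log above (assumes a Linux host with /proc/mounts; the exact directory layout under /rhev is taken from the traceback):

```shell
# Check whether the export domain is still known to the kernel, and whether
# its mountpoint directory lingers under /rhev/data-center/mnt.
grep 'jbelka-export' /proc/mounts 2>/dev/null || echo "not mounted"
ls -d /rhev/data-center/mnt/10.16.29.93:_jbelka-export 2>/dev/null \
  || echo "mountpoint directory gone"
```

If the first command prints "not mounted" while vdsm still tries to umount the path, you get exactly the MountError(32) seen in the traceback.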

Actual results:
The popup shows the in-progress animation, then the action stops without any error and you are returned to the still-filled import export domain dialog.

Expected results:
Re-attaching the export domain should succeed.

Additional info:
PS: I tried hosts of both CPU models for re-attaching the export domain; same issue.
Comment 1 Jiri Belka 2015-08-05 08:25:08 EDT
Oops, missing description:

Unable to re-attach previously newly created export domain.
Comment 2 Red Hat Bugzilla Rules Engine 2015-09-22 03:43:49 EDT
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
Comment 3 Tal Nisan 2015-10-08 07:00:30 EDT
Tried to reproduce a few times with these steps:
1. Added an export domain from scratch
2. Moved the domain to maintenance
3. Detached the domain from the pool
4. Removed the domain (without checking the 'Format' checkbox)
5. Imported the export domain

Importing the export domain succeeded in all tries, removing the blocker flag as it seems that this is not a blocker.

Aside from that, the logs are cluttered with:
"2015-08-05 13:59:41,980 ERROR [org.ovirt.engine.core.dao.jpa.TransactionalInterceptor] (default task-6) [] Failed to run operation in a new transaction: javax.persistence.PersistenceException: org.hibernate.HibernateException: A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance: org.ovirt.engine.core.common.job.Job.steps" 
Probably unrelated to the problem, but it makes the logs hard to go through.

Jiri, can you please try to reproduce on a clean environment without those errors?
Comment 4 Red Hat Bugzilla Rules Engine 2015-10-08 07:00:31 EDT
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
Comment 5 Michal Skrivanek 2015-10-13 23:02:47 EDT
May not be relevant, but also note that this is vdsm 4.17.0.8, which is not 4.17.8.
Comment 6 Yaniv Lavi (Dary) 2015-10-14 09:34:43 EDT
Can you recreate this with the latest package?
Comment 7 Red Hat Bugzilla Rules Engine 2015-10-19 06:59:40 EDT
Target release should be set once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Comment 8 Pavel Stehlik 2015-10-22 04:44:57 EDT
Rechecked and can't reproduce.
libvirt-1.2.17-12.el7
vdsm-4.17.8-1.el7ev
rhevm-3.6.0-0.18.el6.noarch
