Bug 1466326 - Cannot import storage domain when the 'Use managed gluster volume' feature is used.
Summary: Cannot import storage domain when the 'Use managed gluster volume' feature is used.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Frontend.WebAdmin
Version: 4.1.2.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.1.8
Target Release: ---
Assignee: Gobinda Das
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-3 1489349
 
Reported: 2017-06-29 12:44 UTC by RamaKasturi
Modified: 2018-02-14 13:18 UTC
CC List: 4 users

Fixed In Version: ovirt-engine-4.1.8.1
Clone Of:
: 1489349 (view as bug list)
Environment:
Last Closed: 2017-12-11 16:30:48 UTC
oVirt Team: Gluster
Embargoed:
sabose: ovirt-4.1?
sabose: planning_ack?
rule-engine: devel_ack+
rule-engine: testing_ack+


Attachments
Attaching screenshot for the error (150.34 KB, image/png)
2017-06-29 12:45 UTC, RamaKasturi
no flags
Attaching screenshot showing the error (153.64 KB, image/png)
2017-06-29 12:45 UTC, RamaKasturi
no flags


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 81785 0 master MERGED webadmin: Implemented code for import storage domain when 'use manage gluster volume' feature is used 2017-11-14 07:10:07 UTC
oVirt gerrit 84055 0 ovirt-engine-4.1 MERGED webadmin: Implemented code for import storage domain when 'use manage gluster volume' feature is used 2017-11-17 14:34:53 UTC

Description RamaKasturi 2017-06-29 12:44:06 UTC
Description of problem:
While importing a storage domain using the 'Use managed gluster volume' feature, the import fails with the errors "Error while executing action DisconnectStorageServerConnection: Error storage server disconnection" and "Error while executing action: Path field cannot be empty".

Traceback in vdsm.log:
==========================

2017-06-28 19:56:45,514+0530 INFO  (jsonrpc/5) [dispatcher] Run and protect: disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'backup-volfile-servers=10.70.36.80:10.70.36.81', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None) (logUtils:51)
2017-06-28 19:56:45,515+0530 ERROR (jsonrpc/5) [storage.TaskManager.Task] (Task='d79757ac-7338-4171-b01b-b9a59222c98a') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2470, in disconnectStorageServer
    conObj = storageServer.ConnectionFactory.createConnection(conInfo)
  File "/usr/share/vdsm/storage/storageServer.py", line 638, in createConnection
    return ctor(**params)
  File "/usr/share/vdsm/storage/storageServer.py", line 245, in __init__
    self._volfileserver, volname = self._remotePath.split(":", 1)
ValueError: need more than 1 value to unpack
2017-06-28 19:56:45,555+0530 INFO  (jsonrpc/5) [storage.TaskManager.Task] (Task='d79757ac-7338-4171-b01b-b9a59222c98a') aborting: Task is aborted: u'need more than 1 value to unpack' - code 100 (task:1175)
2017-06-28 19:56:45,555+0530 ERROR (jsonrpc/5) [storage.Dispatcher] need more than 1 value to unpack (dispatcher:80)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 72, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/share/vdsm/storage/task.py", line 105, in wrapper
    return m(self, *a, **kw)
  File "/usr/share/vdsm/storage/task.py", line 1183, in prepare
    raise self.error
ValueError: need more than 1 value to unpack
2017-06-28 19:56:45,566+0530 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer failed (error 452) in 0.05 seconds (__init__:533)
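
For readers skimming the traceback: the ValueError comes from storageServer.py splitting the gluster mount path on ':' (line 245 above). Because the import dialog sends an empty 'connection' string, visible in the conList dict at the top of the log, the split yields only one element. A minimal standalone sketch of that failure mode (illustrative only, not vdsm code; the dict values are copied from the log above):

# Sketch of the failure mode using the connection dict from the log above.
# Illustrative only, not vdsm code.
con_info = {
    u'connection': u'',  # empty path sent by the import dialog
    u'vfs_type': u'glusterfs',
    u'mnt_options': u'backup-volfile-servers=10.70.36.80:10.70.36.81',
}

remote_path = con_info[u'connection']
try:
    # storageServer.py does the equivalent of this split in __init__:
    volfileserver, volname = remote_path.split(":", 1)
except ValueError as err:
    # On Python 2 the message is "need more than 1 value to unpack",
    # matching the traceback above.
    print("parsing %r failed: %s" % (remote_path, err))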


Version-Release number of selected component (if applicable):
Red Hat Virtualization Manager Version: 4.1.2.2-0.1.el7

How reproducible:
Always

Steps to Reproduce:
1. Create a new storage domain from the UI
2. Detach and remove that storage domain
3. Try importing the same domain using the 'Use managed gluster volume' feature

Actual results:
Importing the storage domain fails with the errors mentioned in the description.

Expected results:
The storage domain should be imported successfully.

Additional info:
Importing the storage domain without using the 'Use managed gluster volume' feature works fine.
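
For comparison, a populated glusterfs path of the form host:/volname splits cleanly, which matches the observation that importing without the checkbox (where the path is supplied explicitly) works. A small illustrative sketch, with placeholder host and volume names (not values from this environment):

# Illustrative only: the host and volume names are hypothetical placeholders.
good_path = u'server1.example.com:/data'          # host:/volname format
volfileserver, volname = good_path.split(":", 1)  # no ValueError here
print("%s %s" % (volfileserver, volname))         # -> server1.example.com /data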

Comment 1 RamaKasturi 2017-06-29 12:45:08 UTC
Created attachment 1292869 [details]
Attaching screenshot for the error

Comment 2 RamaKasturi 2017-06-29 12:45:50 UTC
Created attachment 1292870 [details]
Attaching screenshot showing the error

Comment 3 RamaKasturi 2017-06-29 12:50:56 UTC
Logs are present in the link below:
===================================

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/1466326/

Comment 4 Sandro Bonazzola 2017-10-20 06:12:34 UTC
Sahina, this is not marked as a blocker, please either block 4.1.7 or push to 4.1.8.

Comment 5 RamaKasturi 2017-11-28 09:03:32 UTC
Verified and works fine with Red Hat Virtualization Manager Version: 4.1.8.1-0.1.el7

Followed the steps below to verify the bug.
===============================================
1) Created a gluster volume and added it as a storage domain in the UI using the 'Use managed gluster volume' feature.
2) Moved the storage domain to maintenance, then detached and removed it from the UI.
3) Tried importing the same domain again using the 'Use managed gluster volume' feature, and it worked without any issues.

Comment 6 Pavel Borecki 2018-02-14 13:06:24 UTC
This is back in oVirt 4.2.1 with gluster volumes created during the hosted-engine deployment wizard. Regression?

Comment 7 Pavel Borecki 2018-02-14 13:18:05 UTC
In Storage > Storage Domains, importing the domain fails, but New Domain (using a managed gluster volume) works. Confusing, to say the least...

