Bug 1587961 - Can't extend storage domain (iSCSI)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.3.0
Target Release: 4.3.0
Assignee: Tal Nisan
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-06 11:09 UTC by Yaniv Kaul
Modified: 2019-02-13 07:48 UTC
CC: 3 users

Fixed In Version: ovirt-engine-4.3.0_alpha
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-13 07:48:07 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.3+
rule-engine: blocker+


Attachments


Links:
oVirt gerrit 92496 (branch master, MERGED): "core: Fix extending of a storage domain on a multipath environment", last updated 2021-02-21 18:25:49 UTC

Description Yaniv Kaul 2018-06-06 11:09:51 UTC
Description of problem:
Trying to extend a storage domain (tried in O-S-T) fails with (engine.log):
2018-06-06 07:03:06,526-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-2) [2a808cc4] Command 'ConnectStorageServerVDSCommand(HostName = lago-basic-suite-master-host-1, StorageServerConnectionManagementVDSParameters:{hostId='4d59833e-7b64-46ec-9d98-39104f5e04ec', storagePoolId='b85207b2-73d6-4b02-818d-d93871e80692', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='192.168.200.4', iqn='iqn.2014-07.org.ovirt:storage', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='192.168.201.4', iqn='iqn.2014-07.org.ovirt:storage', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'})' execution failed: Duplicate key 0
2018-06-06 07:03:06,526-04 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-2) [2a808cc4] Exception: java.lang.IllegalStateException: Duplicate key 0
        at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133) [rt.jar:1.8.0_171]
        at java.util.HashMap.merge(HashMap.java:1254) [rt.jar:1.8.0_171]
        at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320) [rt.jar:1.8.0_171]
        at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) [rt.jar:1.8.0_171]
        at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948) [rt.jar:1.8.0_171]
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) [rt.jar:1.8.0_171]
        at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) [rt.jar:1.8.0_171]
        at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) [rt.jar:1.8.0_171]
        at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) [rt.jar:1.8.0_171]
        at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) [rt.jar:1.8.0_171]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.ServerConnectionStatusReturn.convertToStatusList(ServerConnectionStatusReturn.java:44) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand.executeVdsBrokerCommand(ConnectStorageServerVDSCommand.java:49) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand.executeVDSCommand(ConnectStorageServerVDSCommand.java:41) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
        at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]



It seems to have succeeded on the host (vdsm.log):
2018-06-06 07:03:05,332-0400 DEBUG (jsonrpc/3) [jsonrpc.JsonRpcServer] Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'192.168.200.4', u'iqn': u'iqn.2014-07.org.ovirt:storage', u'user': u'username', u'tpgt': u'1', u'password': '********', u'port': u'3260'}, {u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'192.168.201.4', u'iqn': u'iqn.2014-07.org.ovirt:storage', u'user': u'username', u'tpgt': u'1', u'password': '********', u'port': u'3260'}], u'storagepoolID': u'b85207b2-73d6-4b02-818d-d93871e80692', u'domainType': 3} (__init__:328)
2018-06-06 07:03:05,333-0400 INFO  (jsonrpc/3) [vdsm.api] START connectStorageServer(domType=3, spUUID=u'b85207b2-73d6-4b02-818d-d93871e80692', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'192.168.200.4', u'iqn': u'iqn.2014-07.org.ovirt:storage', u'user': u'username', u'tpgt': u'1', u'password': '********', u'port': u'3260'}, {u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'192.168.201.4', u'iqn': u'iqn.2014-07.org.ovirt:storage', u'user': u'username', u'tpgt': u'1', u'password': '********', u'port': u'3260'}], options=None) from=::ffff:192.168.201.4,53740, flow_id=2a808cc4, task_id=902a415b-4a85-488a-9697-d8a12c3ff0f7 (api:47)
2018-06-06 07:03:05,468-0400 DEBUG (jsonrpc/5) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmStats' in bridge with {} (__init__:328)
2018-06-06 07:03:05,468-0400 INFO  (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.201.4,53740 (api:47)
2018-06-06 07:03:05,468-0400 INFO  (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.201.4,53740 (api:53)
2018-06-06 07:03:05,468-0400 DEBUG (jsonrpc/5) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmStats' in bridge with (suppressed) (__init__:355)
2018-06-06 07:03:05,468-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:311)
2018-06-06 07:03:05,749-0400 DEBUG (jsonrpc/3) [root] /usr/bin/taskset --cpu-list 0-2 /sbin/udevadm settle --timeout=5 (cwd None) (commands:66)
2018-06-06 07:03:05,768-0400 DEBUG (jsonrpc/3) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:87)
2018-06-06 07:03:06,184-0400 DEBUG (jsonrpc/3) [root] /usr/bin/taskset --cpu-list 0-2 /sbin/udevadm settle --timeout=5 (cwd None) (commands:66)
2018-06-06 07:03:06,195-0400 DEBUG (jsonrpc/3) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:87)
2018-06-06 07:03:06,418-0400 DEBUG (jsonrpc/3) [root] /usr/bin/taskset --cpu-list 0-2 /sbin/udevadm settle --timeout=5 (cwd None) (commands:66)
2018-06-06 07:03:06,427-0400 DEBUG (jsonrpc/3) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:87)
2018-06-06 07:03:06,515-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}, {'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]} from=::ffff:192.168.201.4,53740, flow_id=2a808cc4, task_id=902a415b-4a85-488a-9697-d8a12c3ff0f7 (api:53)
2018-06-06 07:03:06,516-0400 DEBUG (jsonrpc/3) [jsonrpc.JsonRpcServer] Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}, {'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}] (__init__:355)
2018-06-06 07:03:06,516-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 1.19 seconds (__init__:311)


Version-Release number of selected component (if applicable):
vdsm-4.30.0-405.git7b1e239.el7.x86_64
ovirt-engine-4.3.0-0.0.master.20180604134534.git3e394fa.el7.noarch

How reproducible:
Always, in O-S-T at least.

Steps to Reproduce:
1. Extend the iSCSI storage domain (with a single LUN).

Additional info:
Command sent to host:
2018-06-06 07:03:05,337-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-2) [2a808cc4] START, ConnectStorageServerVDSCommand(HostName = lago-basic-suite-master-host-1, StorageServerConnectionManagementVDSParameters:{hostId='4d59833e-7b64-46ec-9d98-39104f5e04ec', storagePoolId='b85207b2-73d6-4b02-818d-d93871e80692', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='192.168.200.4', iqn='iqn.2014-07.org.ovirt:storage', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='192.168.201.4', iqn='iqn.2014-07.org.ovirt:storage', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 7e0756aa
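The failure can be reproduced in isolation. The host returns two status entries that both carry the all-zero UUID, and the engine's convertToStatusList apparently collects them into a map keyed by id with Collectors.toMap, whose default merger throws on a duplicate key. Note that on JDK 8 (rt.jar:1.8.0_171 in the trace above) the throwing merger formats the duplicate *value* into the message, which is why the log says "Duplicate key 0" (the status code) rather than the UUID. A minimal sketch, not the engine's actual code (ConnStatus, byId and byIdMerged are hypothetical names):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DuplicateKeyDemo {
    /** Hypothetical stand-in for one entry of vdsm's 'statuslist'. */
    public static class ConnStatus {
        final String id;
        final int status;
        public ConnStatus(String id, int status) { this.id = id; this.status = status; }
    }

    /** Keyed by connection id, as the failing code path appears to do:
     *  throws IllegalStateException when two entries share an id. */
    public static Map<String, Integer> byId(List<ConnStatus> list) {
        return list.stream().collect(Collectors.toMap(c -> c.id, c -> c.status));
    }

    /** One possible workaround: supply a merge function so duplicate ids
     *  (two all-zero UUIDs on a multipath setup) no longer abort the call. */
    public static Map<String, Integer> byIdMerged(List<ConnStatus> list) {
        return list.stream().collect(
                Collectors.toMap(c -> c.id, c -> c.status, (a, b) -> a));
    }

    public static void main(String[] args) {
        String zero = "00000000-0000-0000-0000-000000000000";
        // Mirrors the vdsm return: two entries, same id, status 0 each.
        List<ConnStatus> statuses = Arrays.asList(
                new ConnStatus(zero, 0), new ConnStatus(zero, 0));
        try {
            byId(statuses);
        } catch (IllegalStateException e) {
            // Message text varies by JDK version, so it is not asserted here.
            System.out.println("caught: " + e.getMessage());
        }
        System.out.println("merged entries: " + byIdMerged(statuses).size());
    }
}
```

The merge-function variant is only an illustration of why toMap needs a merger here; the actual fix is the gerrit 92496 patch linked above.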

Comment 1 Red Hat Bugzilla Rules Engine 2018-06-07 12:08:41 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 2 Elad 2018-08-15 15:30:54 UTC
iSCSI storage domain extension works as expected.
Used XtremIO as storage backend.

vdsm-4.30.0-527.gitcec1054.el7.x86_64
4.3.0-0.0.master.20180814113734.gitad81cd3.el7

Comment 3 Sandro Bonazzola 2018-11-02 14:31:59 UTC
This bugzilla is included in oVirt 4.2.7 release, published on November 2nd 2018.

Since the problem described in this bug report should be resolved in oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

Comment 4 Sandro Bonazzola 2018-11-02 15:00:27 UTC
Closed by mistake, moving back to qa -> verified

Comment 5 Sandro Bonazzola 2019-02-13 07:48:07 UTC
This bugzilla is included in oVirt 4.3.0 release, published on February 4th 2019.

Since the problem described in this bug report should be resolved in oVirt 4.3.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

