Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1607423

Summary: getDeviceList on VDSM side returns an empty list on iSCSI target exported via Ceph
Product: [oVirt] vdsm
Reporter: Gianfranco Sigrisi <gsigrisi>
Component: General
Assignee: Fred Rolland <frolland>
Status: CLOSED NOTABUG
QA Contact: Raz Tamir <ratamir>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: ---
CC: bugs, gsigrisi, stirabos, tnisan
Target Milestone: ovirt-4.2.8
Flags: rule-engine: ovirt-4.2+
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-10-17 14:16:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
- Hosted-Engine deploy via Cockpit UI
- Hosted-engine deploy via CLI

Description Gianfranco Sigrisi 2018-07-23 13:46:13 UTC
Created attachment 1469960 [details]
Hosted-Engine deploy via Cockpit UI

Description of problem:

I am not able to add a storage domain for the hosted engine on an iSCSI target backed by a Ceph RBD volume.

I tried both hosted-engine --deploy and the Cockpit UI.

I am able to access the RBD disk via the iscsiadm command on the RHV-H host where the hosted engine is going to be deployed.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Run hosted-engine --deploy, or use the Cockpit UI
2. Select iSCSI as the target for the storage domain
3. The list of LUNs cannot be retrieved

Actual results:

The list of LUNs cannot be retrieved.


From the /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180723152114-28ey6f.log

2018-07-23 15:33:47,198+0200 ERROR otopi.plugins.gr_he_ansiblesetup.core.storage_domain storage_domain._select_lun:373 Cannot find any LUN on the selected target
2018-07-23 15:33:47,199+0200 ERROR otopi.plugins.gr_he_ansiblesetup.core.storage_domain storage_domain._closeup:689 Unable to get target list
2018-07-23 15:33:47,199+0200 DEBUG otopi.plugins.otopi.dialog.human human.queryString:159 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2018-07-23 15:33:47,199+0200 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND                 Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
2018-07-23 15:36:02,686+0200 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/storage_domain.py", line 592, in _closeup
    default=ohostedcons.DomainTypes.NFS,
  File "/usr/share/otopi/plugins/otopi/dialog/human.py", line 211, in queryString
    value = self._readline(hidden=hidden)
  File "/usr/lib/python2.7/site-packages/otopi/dialog.py", line 246, in _readline
    value = self.__input.readline()
  File "/usr/lib/python2.7/site-packages/otopi/main.py", line 53, in _signal
    raise RuntimeError("SIG%s" % signum)
RuntimeError: SIG2


Expected results:

The list of LUNs should be available.

Additional info:

Comment 1 Gianfranco Sigrisi 2018-07-23 13:47:00 UTC
Created attachment 1469961 [details]
Hosted-engine deploy via CLI

Comment 4 Gianfranco Sigrisi 2018-07-23 14:22:28 UTC
From CLI with iscsiadm:
# iscsiadm -m discovery -t st -p 192.168.4.3
192.168.4.3:3260,1 iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
# iscsiadm   --mode node  --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.168.4.3  --op=update --name node.session.auth.authmethod --value=CHAP
# iscsiadm   --mode node  --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.168.4.3  --op=update --name node.session.auth.username --value=username
# iscsiadm   --mode node  --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.168.4.3  --op=update --name node.session.auth.password --value=password
# iscsiadm   --mode node  --targetname iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.168.4.3  --login
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw, portal: 192.168.4.3,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw:ceph-igw, portal: 192.168.4.3,3260] successful.


# dmesg
[ 6356.217005] scsi host11: iSCSI Initiator over TCP/IP
[ 6356.223797] scsi 11:0:0:0: Direct-Access     LIO-ORG  TCMU device      0002 PQ: 0 ANSI: 5
[ 6356.236254] scsi 11:0:0:0: alua: supports implicit TPGS
[ 6356.236257] scsi 11:0:0:0: alua: device naa.60014052755fc8e82fa4ee6a71d3aa21 port group 1 rel port 1
[ 6356.236259] scsi 11:0:0:0: alua: Attached
[ 6356.236628] sd 11:0:0:0: [sdi] 209715200 512-byte logical blocks: (107 GB/100 GiB)
[ 6356.236777] sd 11:0:0:0: Attached scsi generic sg8 type 0
[ 6356.239408] sd 11:0:0:0: [sdi] Write Protect is off
[ 6356.239411] sd 11:0:0:0: [sdi] Mode Sense: 2f 00 00 00
[ 6356.239559] sd 11:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 6356.241494] sd 11:0:0:0: alua: transition timeout set to 60 seconds
[ 6356.241497] sd 11:0:0:0: alua: port group 01 state A non-preferred supports TolUsNA
[ 6356.244245] sd 11:0:0:0: [sdi] Attached SCSI disk

# lsblk |grep sdi
sdi                                 8:128  0   100G  0 disk

# parted /dev/sdi print
Error: /dev/sdi: unrecognised disk label
Model: LIO-ORG TCMU device (scsi)
Disk /dev/sdi: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
[root@primitivo ~]# fdisk -l /dev/sdi

Disk /dev/sdi: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

# smartctl -i /dev/sdi
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-3.10.0-862.3.3.el7.x86_64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               LIO-ORG
Product:              TCMU device
Revision:             0002
User Capacity:        107,374,182,400 bytes [107 GB]
Logical block size:   512 bytes
LU is fully provisioned [LBPRZ=1]
Logical Unit id:      0x60014052755fc8e82fa4ee6a71d3aa21
Serial number:        2755fc8e-82fa-4ee6-a71d-3aa21fcfe1de
Device type:          disk
Transport protocol:   iSCSI
Local Time is:        Mon Jul 23 16:19:29 2018 CEST
SMART support is:     Unavailable - device lacks SMART capability.

Comment 5 Simone Tiraboschi 2018-07-23 14:40:34 UTC
getDeviceList on VDSM side returns an empty list

2018-07-23 15:33:44,205+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.48 seconds (__init__:573)
2018-07-23 15:33:45,224+0200 INFO  (jsonrpc/7) [vdsm.api] START connectStorageServer(domType=3, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '00000000-0000-0000-0000-000000000000', 'connection': '192.168.4.3', 'iqn': 'iqn.2003-01.com.redhat.iscsi-gw:ceph-igw', 'user': 'ceph', 'tpgt': '1', 'password': '********', 'port': '3260'}], options=None) from=::ffff:192.168.122.157,50684, flow_id=b702223a-4006-4dd7-ad8c-0ef73ddb4656, task_id=5dcaf8ab-9a0e-4433-98ff-71313360c03b (api:46)
2018-07-23 15:33:45,699+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} from=::ffff:192.168.122.157,50684, flow_id=b702223a-4006-4dd7-ad8c-0ef73ddb4656, task_id=5dcaf8ab-9a0e-4433-98ff-71313360c03b (api:52)
2018-07-23 15:33:45,700+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.47 seconds (__init__:573)
2018-07-23 15:33:45,721+0200 INFO  (jsonrpc/0) [vdsm.api] START getDeviceList(storageType=0, guids=(), checkStatus=True, options={}) from=::ffff:192.168.122.157,50684, flow_id=c1d63335-584c-4c14-afe1-e182c9718d98, task_id=801c8ab2-e737-4d2c-bcfc-edbbb9760a45 (api:46)
2018-07-23 15:33:45,721+0200 WARN  (jsonrpc/0) [storage.HSM] Calling Host.getDeviceList with checkStatus=True without specifying guids is very slow. It is recommended to use checkStatus=False when getting all devices. (hsm:1960)
2018-07-23 15:33:45,942+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH getDeviceList return={'devList': []} from=::ffff:192.168.122.157,50684, flow_id=c1d63335-584c-4c14-afe1-e182c9718d98, task_id=801c8ab2-e737-4d2c-bcfc-edbbb9760a45 (api:52)
2018-07-23 15:33:45,943+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getDeviceList succeeded in 0.23 seconds (__init__:573)
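The empty result can also be spotted quickly when scanning a vdsm.log offline. A minimal, hypothetical helper (the log path and message format are assumed to match the excerpt above):

```shell
# Hypothetical helper: report whether each "FINISH getDeviceList" line in
# a vdsm.log came back with an empty device list. The log line format is
# assumed to match the excerpt above.
scan_getdevicelist() {
    grep "FINISH getDeviceList" "$1" | while read -r line; do
        case $line in
            *"'devList': []"*) echo "EMPTY" ;;
            *)                 echo "OK" ;;
        esac
    done
}
```

Running it against the vdsm.log quoted above would print EMPTY for the getDeviceList call that followed connectStorageServer.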

Comment 6 Sandro Bonazzola 2018-07-23 15:16:13 UTC
(In reply to Simone Tiraboschi from comment #5)
> getDeviceList on VDSM side returns an empty list

Moving the bug to VDSM to be examined by storage team.

Comment 9 Gianfranco Sigrisi 2018-07-24 12:42:41 UTC
Hi, 

In my setup the RHV-H host is also running as a single-node Ceph cluster.

While running the Ansible playbooks for Ceph, the playbooks were failing with: OSD already mounted.

At that point I noticed that multipath was running on the host. I disabled multipath and the ceph-ansible playbooks then ran successfully.

This broke the iSCSI functionality in VDSM.

I have re-enabled multipath and at the moment I see two behaviours:

1. The LUN from the Ceph iSCSI target cannot be selected.
2. The LUN from the QNAP can be selected.

I'm attaching screenshots for this. 

Thanks for your inputs,
Gianfranco
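Disabling multipathd entirely is what broke iSCSI in VDSM, since VDSM discovers block storage through multipath devices. A less invasive approach, sketched below, is to blacklist only the local Ceph OSD disks in /etc/multipath.conf; the wwid value is a placeholder, to be replaced with real values obtained from `multipath -ll` or `/usr/lib/udev/scsi_id`:

```
# /etc/multipath.conf -- sketch: keep multipathd running for the iSCSI
# LUNs but exclude the local Ceph OSD disks. The wwid is a placeholder.
blacklist {
    wwid "<local-osd-disk-wwid>"
}
```

After editing, restart multipathd (e.g. `systemctl restart multipathd`) so the blacklist takes effect.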

Comment 12 Simone Tiraboschi 2018-07-24 12:46:20 UTC
(In reply to Gianfranco Sigrisi from comment #9)
> 1. The LUN from the Ceph iSCSI target cannot be selected.

Its status is "used".

Please clean it before trying again.

> 2. The LUN from the QNAP can be selected.
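"Cleaning" here means removing any existing filesystem or partition-table signatures from the LUN so that oVirt treats it as free. A minimal, destructive sketch (the device name /dev/sdi is taken from the dmesg output above; verify it first):

```shell
# clean_lun DEVICE -- DESTRUCTIVE sketch: strip filesystem/partition
# signatures and zero the start of the device so oVirt sees the LUN as
# free. Verify the device path (lsblk, multipath -ll) before running.
clean_lun() {
    dev=$1
    wipefs --all "$dev" 2>/dev/null || true  # tolerate missing wipefs
    dd if=/dev/zero of="$dev" bs=1M count=10 conv=notrunc 2>/dev/null
}
```

Usage would be `clean_lun /dev/sdi`, or the corresponding /dev/mapper path when multipath is active.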

Comment 13 Gianfranco Sigrisi 2018-07-24 13:05:26 UTC
Hi, 
I re-created the RBD image, as I had previously created a filesystem on it, which was why RHV could not add that LUN.

I redeployed the engine, but the deployment failed at this task:

[ INFO ] ok: [localhost]
[ INFO ] TASK [Add HE disks]


From the engine.log:
2018-07-24 14:58:21,284+02 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-93) [6a116d05-1b77-4bd9-9d86-846186fc3339] Command 'AddDisk' (id: '8a4507c0-39b4-43b9-8e1f-05bb7ec32b76') waiting on child command id: '063b45af-4f7f-454e-8775-a15c9b71cd8e' type:'AddImageFromScratch' to complete
2018-07-24 14:58:23,287+02 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-29) [6a116d05-1b77-4bd9-9d86-846186fc3339] Command 'AddDisk' (id: '8a4507c0-39b4-43b9-8e1f-05bb7ec32b76') waiting on child command id: '063b45af-4f7f-454e-8775-a15c9b71cd8e' type:'AddImageFromScratch' to complete
2018-07-24 14:58:27,065+02 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-24) [] Polling and updating Async Tasks: 4 tasks, 1 tasks to poll now
2018-07-24 14:58:27,070+02 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-24) [] SPMAsyncTask::PollTask: Polling task '047b095a-1654-41f3-8b4b-3eac45739237' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2018-07-24 14:58:27,076+02 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-24) [] BaseAsyncTask::onTaskEndSuccess: Task '047b095a-1654-41f3-8b4b-3eac45739237' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2018-07-24 14:58:27,076+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-24) [] CommandAsyncTask::endActionIfNecessary: All tasks of command '063b45af-4f7f-454e-8775-a15c9b71cd8e' has ended -> executing 'endAction'
2018-07-24 14:58:27,076+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-24) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: '063b45af-4f7f-454e-8775-a15c9b71cd8e'): calling endAction '.
2018-07-24 14:58:27,076+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-63) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'AddImageFromScratch',
2018-07-24 14:58:27,081+02 INFO  [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] Command [id=063b45af-4f7f-454e-8775-a15c9b71cd8e]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2018-07-24 14:58:27,081+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2018-07-24 14:58:27,081+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2018-07-24 14:58:27,081+02 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '047b095a-1654-41f3-8b4b-3eac45739237'
2018-07-24 14:58:27,082+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='d0dc6068-8f3d-11e8-8d04-00163e19b5ca', ignoreFailoverLimit='false', taskId='047b095a-1654-41f3-8b4b-3eac45739237'}), log id: 5e6cfa5c
2018-07-24 14:58:27,084+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] START, HSMClearTaskVDSCommand(HostName = primitivo.mgmt.zg.pinguozzo.com, HSMTaskGuidBaseVDSCommandParameters:{hostId='0260163e-3af9-48bc-9824-45b01afdbe26', taskId='047b095a-1654-41f3-8b4b-3eac45739237'}), log id: 7f3a3290
2018-07-24 14:58:27,089+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] FINISH, HSMClearTaskVDSCommand, log id: 7f3a3290
2018-07-24 14:58:27,089+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] FINISH, SPMClearTaskVDSCommand, log id: 5e6cfa5c
2018-07-24 14:58:27,092+02 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] BaseAsyncTask::removeTaskFromDB: Removed task '047b095a-1654-41f3-8b4b-3eac45739237' from DataBase
2018-07-24 14:58:27,092+02 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-63) [6a116d05-1b77-4bd9-9d86-846186fc3339] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '063b45af-4f7f-454e-8775-a15c9b71cd8e'
2018-07-24 14:58:27,291+02 INFO  [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-9) [6a116d05-1b77-4bd9-9d86-846186fc3339] Command 'AddDisk' id: '8a4507c0-39b4-43b9-8e1f-05bb7ec32b76' child commands '[063b45af-4f7f-454e-8775-a15c9b71cd8e]' executions were completed, status 'SUCCEEDED'
2018-07-24 14:58:28,303+02 INFO  [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [6a116d05-1b77-4bd9-9d86-846186fc3339] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2018-07-24 14:58:28,308+02 INFO  [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [6a116d05-1b77-4bd9-9d86-846186fc3339] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2018-07-24 14:58:28,313+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [6a116d05-1b77-4bd9-9d86-846186fc3339] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='d0dc6068-8f3d-11e8-8d04-00163e19b5ca', ignoreFailoverLimit='false', storageDomainId='4cc2efc3-4ead-4e9f-807c-254fe0b12d9d', imageGroupId='4bd81a37-f20c-42e6-98e5-23d719cd97ea', imageId='6727e842-113a-4d36-beb9-bc7ee7e0b8aa'}), log id: 1cb9ad71
2018-07-24 14:58:28,315+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [6a116d05-1b77-4bd9-9d86-846186fc3339] START, GetVolumeInfoVDSCommand(HostName = primitivo.mgmt.zg.pinguozzo.com, GetVolumeInfoVDSCommandParameters:{hostId='0260163e-3af9-48bc-9824-45b01afdbe26', storagePoolId='d0dc6068-8f3d-11e8-8d04-00163e19b5ca', storageDomainId='4cc2efc3-4ead-4e9f-807c-254fe0b12d9d', imageGroupId='4bd81a37-f20c-42e6-98e5-23d719cd97ea', imageId='6727e842-113a-4d36-beb9-bc7ee7e0b8aa'}), log id: 78ddd744
2018-07-24 14:58:28,330+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [6a116d05-1b77-4bd9-9d86-846186fc3339] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@b0dd5b16, log id: 78ddd744
2018-07-24 14:58:28,330+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [6a116d05-1b77-4bd9-9d86-846186fc3339] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@b0dd5b16, log id: 1cb9ad71
2018-07-24 14:58:28,339+02 WARN  [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [] VM is null - no unlocking
2018-07-24 14:58:28,352+02 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'he_metadata' was successfully added.

Comment 14 Gianfranco Sigrisi 2018-07-24 13:07:34 UTC
From the log /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-2018624145742-lhfoee.log on the RHV-H I get this:

2018-07-24 14:58:30,004+0200 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Add HE disks', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'msg\': u\'All items completed\', \'changed\': True, \'results\': [{\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_KoV9VD/ansible_module_ovirt_disk.py", line 619, in main\\n    fail_condition=lambda d: d.status == otypes.DiskStatus.ILLEGAL if lun is None e\nrepr: {\'msg\': u\'All items completed\', \'changed\': True, \'results\': [{\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n  File "/tmp/ansible_KoV9VD/ansible_module_ovirt_disk.py", line 619, in main\\n    fail_condition=lambda d: d.status == otypes.DiskStatus.ILLEGAL if lun is None e\ndir: [\'__class__\', \'__cmp__\', \'__contains__\', \'__delattr__\', \'__delitem__\', \'__doc__\', \'__eq__\', \'__format__\', \'__ge__\', \'__getattribute__\', \'__getitem__\', \'__gt__\', \'__hash__\', \'__init__\', \'__iter__\', \'__le__\', \'__len__\', \'__lt__\', \'__ne__\', \'__new__\', \'__reduce__\', \'__reduce_ex__\', \'__repr__\', \'__setattr__\', \'__setitem__\', \'__sizeof__\', \'__str__\', \'__subclasshook__\', \'clear\', \'copy\', \'fromkeys\', \'get\', \'has_key\', \'items\', \'iteritems\', \'iterkeys\', \'itervalues\', \'keys\', \'pop\', \'popitem\', \'setdefault\', \'update\', \'values\', \'viewitems\', \'viewkeys\', \'viewvalues\']\npprint: {\'changed\': True,\n \'msg\': u\'All items completed\',\n \'results\': [{\'_ansible_item_label\': {u\'description\': u\'Hosted-Engine disk\',\n                                      u\'format\': u\'raw\',\n                                      u\'name\': u\'he_virtio_disk\',\n                                      u\'size\': u\'1\n{\'msg\': u\'All items completed\', \'changed\': True, \'results\': [{\'_ansible_parsed\': True, u\'exception\':.__doc__: "dict() -> new empty dictionary\\ndict(mapping) -> new 
dictionary initialized from a mapping object\'s\\n    (key, value) pairs\\ndict(iterable) -> new dictionary initialized as if via:\\n    d = {}\\n    for k, v in iterable:\\n        d[k] = v\\ndict(**kwargs) -> new dictionary initialized with the name=value pairs\\n    in the keyword argument list.  For example:  dict(one=1, two=2)"\n{\'msg\': u\'All items completed\', \'changed\': True, \'results\': [{\'_ansible_parsed\': True, u\'exception\':.__hash__: None', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
2018-07-24 14:58:30,004+0200 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f4c93ddc9d0> kwargs ignore_errors:None
2018-07-24 14:58:30,005+0200 INFO ansible stats {'status': 'FAILED', 'ansible_playbook_duration': 46.952557, 'ansible_result': u'type: <type \'dict\'>\nstr: {u\'localhost\': {\'unreachable\': 0, \'skipped\': 0, \'ok\': 20, \'changed\': 2, \'failures\': 1}}\nrepr: {u\'localhost\': {\'unreachable\': 0, \'skipped\': 0, \'ok\': 20, \'changed\': 2, \'failures\': 1}}\ndir: [\'__class__\', \'__cmp__\', \'__contains__\', \'__delattr__\', \'__delitem__\', \'__doc__\', \'__eq__\', \'__format__\', \'__ge__\', \'__getattribute__\', \'__getitem__\', \'__gt__\', \'__hash__\', \'__init__\', \'__iter__\', \'__le__\', \'__len__\', \'__lt__\', \'__ne__\', \'__new__\', \'__reduce__\', \'__reduce_ex__\', \'__repr__\', \'__setattr__\', \'__setitem__\', \'__sizeof__\', \'__str__\', \'__subclasshook__\', \'clear\', \'copy\', \'fromkeys\', \'get\', \'has_key\', \'items\', \'iteritems\', \'iterkeys\', \'itervalues\', \'keys\', \'pop\', \'popitem\', \'setdefault\', \'update\', \'values\', \'viewitems\', \'viewkeys\', \'viewvalues\']\npprint: {u\'localhost\': {\'changed\': 2,\n                \'failures\': 1,\n                \'ok\': 20,\n                \'skipped\': 0,\n                \'unreachable\': 0}}\n{u\'localhost\': {\'unreachable\': 0, \'skipped\': 0, \'ok\': 20, \'changed\': 2, \'failures\': 1}}.__doc__: "dict() -> new empty dictionary\\ndict(mapping) -> new dictionary initialized from a mapping object\'s\\n    (key, value) pairs\\ndict(iterable) -> new dictionary initialized as if via:\\n    d = {}\\n    for k, v in iterable:\\n        d[k] = v\\ndict(**kwargs) -> new dictionary initialized with the name=value pairs\\n    in the keyword argument list.  For example:  dict(one=1, two=2)"\n{u\'localhost\': {\'unreachable\': 0, \'skipped\': 0, \'ok\': 20, \'changed\': 2, \'failures\': 1}}.__hash__: None', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml', 'ansible_type': 'finish'}
2018-07-24 14:58:30,006+0200 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7f4c9536fdd0> kwargs

Comment 15 Fred Rolland 2018-10-03 11:32:54 UTC
Hi,

I understand that getDeviceList is working OK now.
Can you please explain what the current issue is?
Please update the title of the bug, provide steps to reproduce, and attach log files from the Engine and VDSM.

Thanks

Comment 16 Fred Rolland 2018-10-17 12:16:05 UTC
Gianfranco, any update?

Comment 17 Gianfranco Sigrisi 2018-10-17 12:29:25 UTC
Hi Fred, 

This can be closed, as the issue was with an already-used volume exported via the iSCSI gateway in Ceph.

The suggestion from Simone helped solve the issue.

Thanks,
Gianfranco