Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1561156

Summary: [DOC] VMs imported from Xen without uuid in the domxml should have a uuid created
Product: Red Hat Enterprise Virtualization Manager
Reporter: jessi <jesler_serrano>
Component: Documentation
Assignee: Eli Marcus <emarcus>
Status: CLOSED WONTFIX
QA Contact: rhev-docs <rhev-docs>
Severity: low
Docs Contact:
Priority: medium
Version: 4.5.2
CC: ahadas, bugs, emarcus, guillaume.pavese, jesler_serrano, lsurette, mhicks, michal.skrivanek, rjones, smelamud, srevivo, tgolembi
Target Milestone: ---
Keywords: Documentation
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: backlog
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-12-21 13:15:10 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments (flags: none on all):
- Importation Image
- Engine log
- VDSM log
- vdsm log
- Engine log
- Engine_log_20180502
- Import error
- v2v log - imported but non bootable

Description jessi 2018-03-27 18:24:11 UTC
Created attachment 1413866 [details]
Importation Image

Description of problem: Importing a Xen VM into oVirt (KVM) fails with "All chosen VMs couldn't be retrieved by the external system and therefore have been filtered. Please see log for details."


Version-Release number of selected component (if applicable):


How reproducible:
Always; importing a VM from Xen to oVirt fails.

Steps to Reproduce:
1. sudo -u vdsm ssh-keygen
2. sudo -u vdsm ssh-copy-id user@xenhost
3. Log in to the hosted engine, navigate to the VM tab, and click Import; select Xen (RHEL) as the source.
4. Provide the URI, then click Load.
5. Select a VM from the Xen host.

Actual results:
All chosen VMs couldn't be retrieved by the external system and therefore have been filtered. Please see log for details.

Expected results:
The VM import should proceed.

Additional info:

Comment 1 Michal Skrivanek 2018-03-28 04:58:40 UTC
Can you provide logs? engine.log and the vdsm log from the host.

Comment 2 jessi 2018-03-28 12:33:38 UTC
Hi Michal,


here are the logs.

engine log:

2018-03-28 20:26:49,866+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-10) [bef926fb-a1fa-45db-869b-5552330dbd3b] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = oVirt2, GetVmsFromExternalProviderParameters:{runAsync='true', hostId='e3578b3b-658a-42c1-9a93-7ffd66d7b39f', url='xen+ssh://root@centos-xen2', username='', originType='XEN', namesOfVms='[migratevmtest]'}), log id: 483e3c9d
2018-03-28 20:26:52,029+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-10) [bef926fb-a1fa-45db-869b-5552330dbd3b] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [], log id: 483e3c9d


vdsm log:

2018-03-28 20:26:49,318+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:40372 (protocoldetector:72)
2018-03-28 20:26:49,330+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:40372 (protocoldetector:127)
2018-03-28 20:26:49,330+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 20:26:49,331+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 20:26:49,736+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:539)
2018-03-28 20:26:50,163+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:40374 (protocoldetector:72)
2018-03-28 20:26:50,174+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:40374 (protocoldetector:127)
2018-03-28 20:26:50,175+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 20:26:50,176+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 20:26:50,539+0800 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-03-28 20:26:51,754+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:40376 (protocoldetector:72)
2018-03-28 20:26:51,765+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:40376 (protocoldetector:127)
2018-03-28 20:26:51,766+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 20:26:51,767+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 20:26:52,171+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:539)
2018-03-28 20:26:52,177+0800 INFO  (jsonrpc/0) [vdsm.api] START repoStats(options=None) from=::1,40376, task_id=3106c48d-1d15-4ad2-a59d-534ec90b596a (api:46)
2018-03-28 20:26:52,178+0800 INFO  (jsonrpc/0) [vdsm.api] FINISH repoStats return={u'5c9ce83e-47d7-4db8-97e9-7abe66a72200': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000816737', 'lastCheck': '5.4', 'valid': True}, u'd8dadca5-723f-4843-8d5b-028b840f4dae': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000744721', 'lastCheck': '1.7', 'valid': True}, u'5131a358-3eeb-4f29-b4fc-bde92cf5bf5a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000820493', 'lastCheck': '1.5', 'valid': True}, u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00209881', 'lastCheck': '5.7', 'valid': True}} from=::1,40376, task_id=3106c48d-1d15-4ad2-a59d-534ec90b596a (api:52)
2018-03-28 20:26:52,179+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:539)
2018-03-28 20:26:52,440+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-03-28 20:26:52,460+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:40378 (protocoldetector:72)
2018-03-28 20:26:52,471+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:40378 (protocoldetector:127)
2018-03-28 20:26:52,472+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 20:26:52,477+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 20:26:52,812+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:539)
2018-03-28 20:26:52,820+0800 INFO  (jsonrpc/7) [vdsm.api] START repoStats(options=None) from=::1,40378, task_id=afd18f50-f7a4-45c4-9ecb-088322334f7f (api:46)
2018-03-28 20:26:52,820+0800 INFO  (jsonrpc/7) [vdsm.api] FINISH repoStats return={u'5c9ce83e-47d7-4db8-97e9-7abe66a72200': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00081765', 'lastCheck': '0.5', 'valid': True}, u'd8dadca5-723f-4843-8d5b-028b840f4dae': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000744721', 'lastCheck': '2.4', 'valid': True}, u'5131a358-3eeb-4f29-b4fc-bde92cf5bf5a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000820493', 'lastCheck': '2.2', 'valid': True}, u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00209881', 'lastCheck': '6.3', 'valid': True}} from=::1,40378, task_id=afd18f50-f7a4-45c4-9ecb-088322334f7f (api:52)
2018-03-28 20:26:52,912+0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.10 seconds (__init__:539)


Thanks,

Comment 3 Michal Skrivanek 2018-03-28 15:23:56 UTC
We'd need complete logs :) Please attach the relevant timeframe.

I see "xen+ssh://root@centos-xen2" in the engine log. Does SSH from the vdsm user to root@centos-xen2 work without a password?

Comment 4 jessi 2018-03-28 15:50:21 UTC
Hi Michal,
Yes, passwordless SSH works for "xen+ssh://root@centos-xen2". Here is the engine.log:

2018-03-28 23:42:37,737+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-61) [1c29e104-35be-432d-921f-3fe56515402e] START, GetVmsNamesFromExternalProviderVDSCommand(HostName = oVirt2, GetVmsFromExternalProviderParameters:{runAsync='true', hostId='e3578b3b-658a-42c1-9a93-7ffd66d7b39f', url='xen+ssh://root@centos-xen2', username='', originType='XEN', namesOfVms='null'}), log id: 780a07cb
2018-03-28 23:42:39,707+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-61) [1c29e104-35be-432d-921f-3fe56515402e] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [pbx], VM [CLONE-WIFI-GATE], VM [Vicidial], VM [NAGIOS], VM [migratevmtest], VM [SambaServer], VM [ProxyServ], VM [TSTESTPC], VM [DC1], VM [TS1], VM [TS2], VM [DC2], VM [PROXY-NODE2], VM [ALogServer], VM [LogServ], VM [GAD], VM [GoalLineServer], VM [PROXY-LOGS], VM [smbproxy], VM [Domain-0], VM [ProxyServ4ComClark], VM [Ubuntu16], VM [PROXY-NODE1], VM [TSTESTPC2], VM [WIFI-GATE]], log id: 780a07cb

2018-03-28 23:42:46,398+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-16) [3203fb57-8ca2-4e62-8dd5-124a63ce50c1] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = oVirt2, GetVmsFromExternalProviderParameters:{runAsync='true', hostId='e3578b3b-658a-42c1-9a93-7ffd66d7b39f', url='xen+ssh://root@centos-xen2', username='', originType='XEN', namesOfVms='[migratevmtest]'}), log id: 32670b1
2018-03-28 23:42:47,930+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-16) [3203fb57-8ca2-4e62-8dd5-124a63ce50c1] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [], log id: 32670b1

Here is the vdsm log covering the same timestamps as the engine log:

2018-03-28 23:42:37,098+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.13 seconds (__init__:539)
2018-03-28 23:42:38,308+0800 INFO  (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'5a748263-02e0-039f-02ef-000000000251', options=None) from=::ffff:10.10.0.52,49978, flow_id=6331c50c, task_id=9d4d9517-c8bc-4189-80a6-bbe6cb3eee41 (api:46)
2018-03-28 23:42:38,348+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 28L}} from=::ffff:10.10.0.52,49978, flow_id=6331c50c, task_id=9d4d9517-c8bc-4189-80a6-bbe6cb3eee41 (api:52)
2018-03-28 23:42:38,348+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.04 seconds (__init__:539)
2018-03-28 23:42:38,980+0800 INFO  (periodic/0) [vdsm.api] START repoStats(options=None) from=internal, task_id=3fc49044-bae1-413b-a725-73843661863a (api:46)
2018-03-28 23:42:38,980+0800 INFO  (periodic/0) [vdsm.api] FINISH repoStats return={u'5c9ce83e-47d7-4db8-97e9-7abe66a72200': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000838141', 'lastCheck': '6.7', 'valid': True}, u'd8dadca5-723f-4843-8d5b-028b840f4dae': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000656269', 'lastCheck': '7.9', 'valid': True}, u'5131a358-3eeb-4f29-b4fc-bde92cf5bf5a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000648069', 'lastCheck': '2.7', 'valid': True}, u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000861065', 'lastCheck': '2.5', 'valid': True}} from=internal, task_id=3fc49044-bae1-413b-a725-73843661863a (api:52)
2018-03-28 23:42:39,361+0800 INFO  (jsonrpc/5) [vdsm.api] START getStoragePoolInfo(spUUID=u'5a748263-02e0-039f-02ef-000000000251', options=None) from=::ffff:10.10.0.52,50130, flow_id=6331c50c, task_id=b5b05261-3def-4af9-b1ff-293e92f93bbb (api:46)
2018-03-28 23:42:39,407+0800 INFO  (jsonrpc/5) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/NAS.ph.gfl:_volume1_ISO__Files/5131a358-3eeb-4f29-b4fc-bde92cf5bf5a/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 28L, 'domains': u'd8dadca5-723f-4843-8d5b-028b840f4dae:Active,5c9ce83e-47d7-4db8-97e9-7abe66a72200:Active,5131a358-3eeb-4f29-b4fc-bde92cf5bf5a:Active,0e1d9db0-1281-4b0c-9126-3c4bb7855bda:Active', 'master_uuid': u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 107}, 'dominfo': {u'd8dadca5-723f-4843-8d5b-028b840f4dae': {'status': u'Active', 'diskfree': '171450556416', 'isoprefix': '', 'alerts': [], 'disktotal': '171662376960', 'version': 0}, u'5c9ce83e-47d7-4db8-97e9-7abe66a72200': {'status': u'Active', 'diskfree': '157969399808', 'isoprefix': '', 'alerts': [], 'disktotal': '328337457152', 'version': 4}, u'5131a358-3eeb-4f29-b4fc-bde92cf5bf5a': {'status': u'Active', 'diskfree': '2703105064960', 'isoprefix': u'/rhev/data-center/mnt/NAS.ph.gfl:_volume1_ISO__Files/5131a358-3eeb-4f29-b4fc-bde92cf5bf5a/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '8916276871168', 'version': 0}, u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda': {'status': u'Active', 'diskfree': '157969399808', 'isoprefix': '', 'alerts': [], 'disktotal': '328337457152', 'version': 4}}} from=::ffff:10.10.0.52,50130, flow_id=6331c50c, task_id=b5b05261-3def-4af9-b1ff-293e92f93bbb (api:52)
2018-03-28 23:42:39,408+0800 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.05 seconds (__init__:539)
2018-03-28 23:42:40,663+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:55632 (protocoldetector:72)
2018-03-28 23:42:40,674+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:55632 (protocoldetector:127)
2018-03-28 23:42:40,675+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 23:42:40,676+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 23:42:41,043+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:539)
2018-03-28 23:42:41,050+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.getStats succeeded in 0.00 seconds (__init__:539)
2018-03-28 23:42:42,225+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2018-03-28 23:42:43,726+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:55636 (protocoldetector:72)
2018-03-28 23:42:43,738+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:55636 (protocoldetector:127)
2018-03-28 23:42:43,739+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 23:42:43,748+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 23:42:44,085+0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:539)
2018-03-28 23:42:44,093+0800 INFO  (jsonrpc/2) [vdsm.api] START getImagesList(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', options=None) from=::1,55636, task_id=5ff32cfc-11c4-451b-b068-9530bfa8c5e0 (api:46)
2018-03-28 23:42:44,094+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:55638 (protocoldetector:72)
2018-03-28 23:42:44,098+0800 INFO  (jsonrpc/2) [vdsm.api] FINISH getImagesList return={'imageslist': [u'44f71fa0-b1f1-4594-b946-1a920f6d3a21', u'e741236c-6755-4050-8466-06e73d49061b', u'390798dd-b368-4c68-8d9b-5272a901137e', u'319ce0da-c671-4718-b053-2cab50f89ba9']} from=::1,55636, task_id=5ff32cfc-11c4-451b-b068-9530bfa8c5e0 (api:52)
2018-03-28 23:42:44,099+0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getImages succeeded in 0.00 seconds (__init__:539)
2018-03-28 23:42:44,106+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:55638 (protocoldetector:127)
2018-03-28 23:42:44,106+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 23:42:44,108+0800 INFO  (jsonrpc/6) [vdsm.api] START getVolumesList(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'44f71fa0-b1f1-4594-b946-1a920f6d3a21', options=None) from=::1,55636, task_id=9307506b-c118-4946-b104-6db1677ffa7a (api:46)
2018-03-28 23:42:44,113+0800 INFO  (jsonrpc/6) [vdsm.api] FINISH getVolumesList return={'uuidlist': [u'fbf74707-360f-4d98-9570-a6ae2db23d53']} from=::1,55636, task_id=9307506b-c118-4946-b104-6db1677ffa7a (api:52)
2018-03-28 23:42:44,114+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getVolumes succeeded in 0.01 seconds (__init__:539)
2018-03-28 23:42:44,116+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 23:42:44,124+0800 INFO  (jsonrpc/4) [vdsm.api] START getVolumeInfo(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'44f71fa0-b1f1-4594-b946-1a920f6d3a21', volUUID=u'fbf74707-360f-4d98-9570-a6ae2db23d53', options=None) from=::1,55636, task_id=a5aa2d43-b311-46a8-9baf-dfb051a4ffce (api:46)
2018-03-28 23:42:44,132+0800 INFO  (jsonrpc/4) [storage.VolumeManifest] Info request: sdUUID=5c9ce83e-47d7-4db8-97e9-7abe66a72200 imgUUID=44f71fa0-b1f1-4594-b946-1a920f6d3a21 volUUID = fbf74707-360f-4d98-9570-a6ae2db23d53  (volume:238)
2018-03-28 23:42:44,187+0800 INFO  (jsonrpc/4) [storage.VolumeManifest] 5c9ce83e-47d7-4db8-97e9-7abe66a72200/44f71fa0-b1f1-4594-b946-1a920f6d3a21/fbf74707-360f-4d98-9570-a6ae2db23d53 info is {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '44f71fa0-b1f1-4594-b946-1a920f6d3a21', 'ctime': '1517584075', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'fbf74707-360f-4d98-9570-a6ae2db23d53', 'truesize': '1048576', 'type': 'PREALLOCATED', 'lease': {'owners': [], 'version': None}} (volume:272)
2018-03-28 23:42:44,187+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '44f71fa0-b1f1-4594-b946-1a920f6d3a21', 'ctime': '1517584075', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': u'fbf74707-360f-4d98-9570-a6ae2db23d53', 'truesize': '1048576', 'type': 'PREALLOCATED', 'lease': {'owners': [], 'version': None}}} from=::1,55636, task_id=a5aa2d43-b311-46a8-9baf-dfb051a4ffce (api:52)
2018-03-28 23:42:44,188+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Volume.getInfo succeeded in 0.06 seconds (__init__:539)
2018-03-28 23:42:44,240+0800 INFO  (jsonrpc/5) [vdsm.api] START getVolumesList(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'e741236c-6755-4050-8466-06e73d49061b', options=None) from=::1,55636, task_id=62ce4561-da13-45fb-aa3a-5a22536b8d35 (api:46)
2018-03-28 23:42:44,248+0800 INFO  (jsonrpc/5) [vdsm.api] FINISH getVolumesList return={'uuidlist': [u'c806cc32-d9db-4273-a157-17bdc6d69c54']} from=::1,55636, task_id=62ce4561-da13-45fb-aa3a-5a22536b8d35 (api:52)
2018-03-28 23:42:44,249+0800 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getVolumes succeeded in 0.01 seconds (__init__:539)
2018-03-28 23:42:44,255+0800 INFO  (jsonrpc/1) [vdsm.api] START getVolumeInfo(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'e741236c-6755-4050-8466-06e73d49061b', volUUID=u'c806cc32-d9db-4273-a157-17bdc6d69c54', options=None) from=::1,55636, task_id=5ce9bf3d-0b7b-4dab-befb-c4dcb6a3053d (api:46)
2018-03-28 23:42:44,263+0800 INFO  (jsonrpc/1) [storage.VolumeManifest] Info request: sdUUID=5c9ce83e-47d7-4db8-97e9-7abe66a72200 imgUUID=e741236c-6755-4050-8466-06e73d49061b volUUID = c806cc32-d9db-4273-a157-17bdc6d69c54  (volume:238)
2018-03-28 23:42:44,315+0800 INFO  (jsonrpc/1) [storage.VolumeManifest] 5c9ce83e-47d7-4db8-97e9-7abe66a72200/e741236c-6755-4050-8466-06e73d49061b/c806cc32-d9db-4273-a157-17bdc6d69c54 info is {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'Hosted Engine Image', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': 'e741236c-6755-4050-8466-06e73d49061b', 'ctime': '1517584128', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '62277025792', 'children': [], 'pool': '', 'capacity': '62277025792', 'uuid': u'c806cc32-d9db-4273-a157-17bdc6d69c54', 'truesize': '62277025792', 'type': 'SPARSE', 'lease': {'owners': [1], 'version': 45L}} (volume:272)
2018-03-28 23:42:44,315+0800 INFO  (jsonrpc/1) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'Hosted Engine Image', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': 'e741236c-6755-4050-8466-06e73d49061b', 'ctime': '1517584128', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '62277025792', 'children': [], 'pool': '', 'capacity': '62277025792', 'uuid': u'c806cc32-d9db-4273-a157-17bdc6d69c54', 'truesize': '62277025792', 'type': 'SPARSE', 'lease': {'owners': [1], 'version': 45L}}} from=::1,55636, task_id=5ce9bf3d-0b7b-4dab-befb-c4dcb6a3053d (api:52)
2018-03-28 23:42:44,316+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Volume.getInfo succeeded in 0.06 seconds (__init__:539)
2018-03-28 23:42:44,320+0800 INFO  (jsonrpc/3) [vdsm.api] START getVolumesList(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'390798dd-b368-4c68-8d9b-5272a901137e', options=None) from=::1,55636, task_id=982f5d42-e581-4e4b-90ff-590f8e795d87 (api:46)
2018-03-28 23:42:44,328+0800 INFO  (jsonrpc/3) [vdsm.api] FINISH getVolumesList return={'uuidlist': [u'45e8e589-64c3-492a-8052-bf6ef58ba0fa']} from=::1,55636, task_id=982f5d42-e581-4e4b-90ff-590f8e795d87 (api:52)
2018-03-28 23:42:44,329+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getVolumes succeeded in 0.01 seconds (__init__:539)
2018-03-28 23:42:44,336+0800 INFO  (jsonrpc/0) [vdsm.api] START getVolumeInfo(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'390798dd-b368-4c68-8d9b-5272a901137e', volUUID=u'45e8e589-64c3-492a-8052-bf6ef58ba0fa', options=None) from=::1,55636, task_id=75f0919c-2d1c-4a15-98b4-809ded83af5c (api:46)
2018-03-28 23:42:44,343+0800 INFO  (jsonrpc/0) [storage.VolumeManifest] Info request: sdUUID=5c9ce83e-47d7-4db8-97e9-7abe66a72200 imgUUID=390798dd-b368-4c68-8d9b-5272a901137e volUUID = 45e8e589-64c3-492a-8052-bf6ef58ba0fa  (volume:238)
2018-03-28 23:42:44,392+0800 INFO  (jsonrpc/0) [storage.VolumeManifest] 5c9ce83e-47d7-4db8-97e9-7abe66a72200/390798dd-b368-4c68-8d9b-5272a901137e/45e8e589-64c3-492a-8052-bf6ef58ba0fa info is {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'hosted-engine.metadata', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '390798dd-b368-4c68-8d9b-5272a901137e', 'ctime': '1517584079', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1028096', 'children': [], 'pool': '', 'capacity': '1028096', 'uuid': u'45e8e589-64c3-492a-8052-bf6ef58ba0fa', 'truesize': '1028096', 'type': 'PREALLOCATED', 'lease': {'owners': [], 'version': None}} (volume:272)
2018-03-28 23:42:44,393+0800 INFO  (jsonrpc/0) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'hosted-engine.metadata', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '390798dd-b368-4c68-8d9b-5272a901137e', 'ctime': '1517584079', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1028096', 'children': [], 'pool': '', 'capacity': '1028096', 'uuid': u'45e8e589-64c3-492a-8052-bf6ef58ba0fa', 'truesize': '1028096', 'type': 'PREALLOCATED', 'lease': {'owners': [], 'version': None}}} from=::1,55636, task_id=75f0919c-2d1c-4a15-98b4-809ded83af5c (api:52)
2018-03-28 23:42:44,394+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Volume.getInfo succeeded in 0.06 seconds (__init__:539)
2018-03-28 23:42:44,400+0800 INFO  (jsonrpc/7) [vdsm.api] START getVolumesList(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'319ce0da-c671-4718-b053-2cab50f89ba9', options=None) from=::1,55636, task_id=d1353fae-49e0-4274-8d42-19e67d64f136 (api:46)
2018-03-28 23:42:44,408+0800 INFO  (jsonrpc/7) [vdsm.api] FINISH getVolumesList return={'uuidlist': [u'0a5a53ca-b9ff-4282-8e6c-7ee075388572']} from=::1,55636, task_id=d1353fae-49e0-4274-8d42-19e67d64f136 (api:52)
2018-03-28 23:42:44,409+0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StorageDomain.getVolumes succeeded in 0.01 seconds (__init__:539)
2018-03-28 23:42:44,416+0800 INFO  (jsonrpc/2) [vdsm.api] START getVolumeInfo(sdUUID=u'5c9ce83e-47d7-4db8-97e9-7abe66a72200', spUUID=u'00000000-0000-0000-0000-000000000000', imgUUID=u'319ce0da-c671-4718-b053-2cab50f89ba9', volUUID=u'0a5a53ca-b9ff-4282-8e6c-7ee075388572', options=None) from=::1,55636, task_id=f6aaedd4-2b14-4408-a8ac-efc4facae7f7 (api:46)
2018-03-28 23:42:44,423+0800 INFO  (jsonrpc/2) [storage.VolumeManifest] Info request: sdUUID=5c9ce83e-47d7-4db8-97e9-7abe66a72200 imgUUID=319ce0da-c671-4718-b053-2cab50f89ba9 volUUID = 0a5a53ca-b9ff-4282-8e6c-7ee075388572  (volume:238)
2018-03-28 23:42:44,499+0800 INFO  (jsonrpc/2) [storage.VolumeManifest] 5c9ce83e-47d7-4db8-97e9-7abe66a72200/319ce0da-c671-4718-b053-2cab50f89ba9/0a5a53ca-b9ff-4282-8e6c-7ee075388572 info is {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'HostedEngineConfigurationImage', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '319ce0da-c671-4718-b053-2cab50f89ba9', 'ctime': '1517584127', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '20480', 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid': u'0a5a53ca-b9ff-4282-8e6c-7ee075388572', 'truesize': '20480', 'type': 'PREALLOCATED', 'lease': {'owners': [], 'version': None}} (volume:272)
2018-03-28 23:42:44,500+0800 INFO  (jsonrpc/2) [vdsm.api] FINISH getVolumeInfo return={'info': {'status': 'OK', 'domain': '5c9ce83e-47d7-4db8-97e9-7abe66a72200', 'voltype': 'LEAF', 'description': 'HostedEngineConfigurationImage', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'generation': 0, 'image': '319ce0da-c671-4718-b053-2cab50f89ba9', 'ctime': '1517584127', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '20480', 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid': u'0a5a53ca-b9ff-4282-8e6c-7ee075388572', 'truesize': '20480', 'type': 'PREALLOCATED', 'lease': {'owners': [], 'version': None}}} from=::1,55636, task_id=f6aaedd4-2b14-4408-a8ac-efc4facae7f7 (api:52)
2018-03-28 23:42:44,501+0800 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Volume.getInfo succeeded in 0.08 seconds (__init__:539)
2018-03-28 23:42:44,525+0800 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:539)
2018-03-28 23:42:44,532+0800 INFO  (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::1,55638, task_id=94b873b6-77a2-4794-899d-3761b27a6016 (api:46)
2018-03-28 23:42:44,532+0800 INFO  (jsonrpc/4) [vdsm.api] FINISH repoStats return={u'5c9ce83e-47d7-4db8-97e9-7abe66a72200': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000763493', 'lastCheck': '2.2', 'valid': True}, u'd8dadca5-723f-4843-8d5b-028b840f4dae': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000774986', 'lastCheck': '3.4', 'valid': True}, u'5131a358-3eeb-4f29-b4fc-bde92cf5bf5a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000805599', 'lastCheck': '3.9', 'valid': True}, u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000861065', 'lastCheck': '5.0', 'valid': True}} from=::1,55638, task_id=94b873b6-77a2-4794-899d-3761b27a6016 (api:52)
2018-03-28 23:42:44,608+0800 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.08 seconds (__init__:539)
2018-03-28 23:42:45,575+0800 INFO  (jsonrpc/5) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.0.52,49978, flow_id=77ffe051, task_id=31917043-289c-4e26-8f5f-93db6d9c16b3 (api:46)
2018-03-28 23:42:45,575+0800 INFO  (jsonrpc/5) [vdsm.api] FINISH repoStats return={u'5c9ce83e-47d7-4db8-97e9-7abe66a72200': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000763493', 'lastCheck': '3.3', 'valid': True}, u'd8dadca5-723f-4843-8d5b-028b840f4dae': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000774986', 'lastCheck': '4.5', 'valid': True}, u'5131a358-3eeb-4f29-b4fc-bde92cf5bf5a': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000805599', 'lastCheck': '4.9', 'valid': True}, u'0e1d9db0-1281-4b0c-9126-3c4bb7855bda': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000861065', 'lastCheck': '6.1', 'valid': True}} from=::ffff:10.10.0.52,49978, flow_id=77ffe051, task_id=31917043-289c-4e26-8f5f-93db6d9c16b3 (api:52)
2018-03-28 23:42:45,650+0800 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.07 seconds (__init__:539)
2018-03-28 23:42:46,139+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:55640 (protocoldetector:72)
2018-03-28 23:42:46,159+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:55640 (protocoldetector:127)
2018-03-28 23:42:46,160+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 23:42:46,170+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 23:42:46,528+0800 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.00 seconds (__init__:539)
2018-03-28 23:42:46,673+0800 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.14 seconds (__init__:539)
2018-03-28 23:42:46,738+0800 INFO  (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:55642 (protocoldetector:72)
2018-03-28 23:42:46,761+0800 INFO  (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:55642 (protocoldetector:127)
2018-03-28 23:42:46,762+0800 INFO  (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
2018-03-28 23:42:46,772+0800 INFO  (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompreactor:132)
2018-03-28 23:42:47,162+0800 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in 0.01 seconds (__init__:539)
2018-03-28 23:42:48,935+0800 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)

Is this the vdsm log that you are looking for?

thanks,

Comment 5 Tomáš Golembiovský 2018-03-29 10:03:07 UTC
The vdsm.log does not match. Either it is from a different host (it should be 'oVirt2') or the clocks are not synced.

Comment 6 jessi 2018-03-30 02:01:03 UTC
Hi Michal,

I checked the time and date of the engine and the host; they have the same time and date. Can I just attach the engine log file and vdsm log file instead?

Thanks,

Comment 7 jessi 2018-03-30 02:13:23 UTC
Created attachment 1414968 [details]
Engine log

Engine log

Comment 8 jessi 2018-03-30 02:14:00 UTC
Created attachment 1414972 [details]
VDSM log

VDSM log

Comment 9 jessi 2018-04-10 14:26:12 UTC
Hi,

Any update?


thanks!

Comment 10 Shmuel Melamud 2018-04-23 20:49:37 UTC
I see in engine.log:

2018-03-22 03:26:15,236+08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-29) [2a3e97da-b4a1-4677-abf7-d490059c137d] EVENT_ID: IMPORTEXPORT_GET_EXTERNAL_VMS_INFO_FAILED(202), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to retrieve VMs information from external server xen+ssh://root.0.3
2018-03-22 03:26:15,236+08 ERROR [org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default task-29) [2a3e97da-b4a1-4677-abf7-d490059c137d] Query 'GetVmsFromExternalProviderQuery' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVmsNamesFromExternalProviderVDS, error = End of file while reading data: : Input/output error, code = 65 (Failed with error unexpected and code 16)
2018-03-22 03:26:15,236+08 ERROR [org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default task-29) [2a3e97da-b4a1-4677-abf7-d490059c137d] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVmsNamesFromExternalProviderVDS, error = End of file while reading data: : Input/output error, code = 65 (Failed with error unexpected and code 16)
	at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:118) [bll.jar:]
...

It looks like an exception occurred in the VDSM, but vdsm.log starts at 2018-03-30, so I cannot see what is happening.

Please, send vdsm.log for 2018-03-22.

Comment 11 jessi 2018-04-25 17:45:10 UTC
Hi Shmuel Melamud

I can't send the previous logs any more; can I send a new engine and vdsm log? I re-installed my oVirt.

Comment 12 jessi 2018-04-25 18:07:42 UTC
Created attachment 1426781 [details]
vdsm log

Comment 13 jessi 2018-04-25 18:08:29 UTC
Created attachment 1426782 [details]
Engine log

Comment 14 Shmuel Melamud 2018-04-29 11:43:35 UTC
(In reply to jessi from comment #11)
In the recent log I see:

2018-04-23 22:43:00,015+08 ERROR [org.ovirt.engine.core.bll.GetVmsFromExternalProviderQuery] (default task-58) [383a49c3-4e83-41d5-9c21-df287b0d81bd] Exception: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVmsNamesFromExternalProviderVDS, error = Cannot recv data: Host key verification failed.: Connection reset by peer, code = 65 (Failed with error unexpected and code 16)

This means that SSH fails to validate the public key of the host.

Comment 15 jessi 2018-04-30 01:54:08 UTC
Hi Shmuel,

How can I validate the public key of the host? Sorry, I'm just a newbie.


Thanks.

Comment 16 jessi 2018-04-30 02:09:36 UTC
I already did these steps:

1.sudo -u vdsm ssh-keygen
2.sudo -u vdsm ssh-copy-id user@xenhost


I can already list all the VMs in Xen using the engine, but when I click the Next button the error message shows: (All chosen VMs couldn't be retrieved by the external system and therefore have been filtered. Please see log for details.)

Comment 17 Shmuel Melamud 2018-04-30 11:14:19 UTC
(In reply to jessi from comment #15)
> How can I validate the public key of the host? Sorry, I'm just a newbie.

You need to run:

sudo -Hu vdsm ssh user@xenhost

It will report that the public key of the target host is unknown and ask you to confirm that it is OK. You need to use exactly the same hostname or IP address for the xenhost as is used in the Engine.
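The "Host key verification failed" error means the Xen host's key is missing from the known_hosts of the user making the connection (vdsm), not root's. The snippet below demonstrates the known_hosts lookup SSH performs, with a locally generated key pair standing in for the real host key; in real usage you would run `sudo -u vdsm ssh-keygen -F xenhost` to check, and `sudo -Hu vdsm ssh root@xenhost` once to accept the key interactively (these hostnames and the vdsm home path are assumptions, not from the bug report):

```shell
# Demonstrate the known_hosts lookup that SSH performs on connect.
# The key generated here stands in for the Xen host's real host key.
tmpdir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/hostkey"

# A known_hosts entry is "<hostname> <keytype> <base64-key>":
echo "xenhost $(cat "$tmpdir/hostkey.pub")" > "$tmpdir/known_hosts"

# ssh-keygen -F exits 0 when the host is known; this is the check
# that fails (for the vdsm user) with "Host key verification failed":
ssh-keygen -F xenhost -f "$tmpdir/known_hosts"
```

The key point is that the lookup happens against the connecting user's own known_hosts file (for vdsm, typically under /var/lib/vdsm/.ssh/), which is why accepting the key as root is not enough.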

Comment 18 jessi 2018-05-02 01:28:26 UTC
Hi shmuel,

I already issued sudo -Hu vdsm ssh user@xenhost, but it still gives the same error.



Thanks

Comment 19 Shmuel Melamud 2018-05-02 11:32:49 UTC
(In reply to jessi from comment #18)
> I already issued sudo -Hu vdsm ssh user@xenhost, but it still gives the
> same error

And is the error in engine.log also the same?

Comment 20 jessi 2018-05-03 02:03:49 UTC
Hi Shmuel,

Here is what I get in engine.log when I try to Import.

2018-05-03 10:00:50,017+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-5) [7090702b-9390-4fa9-9990-e56461126c22] START, GetVmsNamesFromExternalProviderVDSCommand(HostName = PHD-VCluster1, GetVmsFromExternalProviderParameters:{runAsync='true', hostId='a5069872-0749-4b95-afe9-784cec04c170', url='xen+ssh://root.0.3', username='', originType='XEN', namesOfVms='null'}), log id: 134d0b98
2018-05-03 10:00:51,485+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-5) [7090702b-9390-4fa9-9990-e56461126c22] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [pbx], VM [CLONE-WIFI-GATE], VM [Vicidial], VM [NAGIOS], VM [SambaServer], VM [ProxyServ], VM [TSTESTPC], VM [DC1], VM [TS1], VM [TS2], VM [DC2], VM [PROXY-NODE2], VM [ALogServer], VM [LogServ], VM [GAD], VM [GoalLineServer], VM [Win7Test], VM [PROXY-LOGS], VM [smbproxy], VM [Domain-0], VM [ProxyServ4ComClark], VM [Ubuntu16], VM [PROXY-NODE1], VM [TSTESTPC2], VM [WIFI-GATE]], log id: 134d0b98
2018-05-03 10:01:00,078+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-13) [28d2ee7d-b15a-494e-b7f2-0c95cc45621d] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = PHD-VCluster1, GetVmsFromExternalProviderParameters:{runAsync='true', hostId='a5069872-0749-4b95-afe9-784cec04c170', url='xen+ssh://root.0.3', username='', originType='XEN', namesOfVms='[Win7Test]'}), log id: 75394b4d
2018-05-03 10:01:01,126+08 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-13) [28d2ee7d-b15a-494e-b7f2-0c95cc45621d] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [], log id: 75394b4d

Comment 21 Shmuel Melamud 2018-05-03 09:32:05 UTC
(In reply to jessi from comment #20)
Can you please attach the full log? I don't see any errors here.

Comment 22 jessi 2018-05-04 06:29:30 UTC
Created attachment 1431018 [details]
Engine_log_20180502

Comment 23 Shmuel Melamud 2018-05-13 18:09:07 UTC
(In reply to jessi from comment #22)

I do not see any errors in the log. What do you see when you try to import a VM? What does the error look like?

Comment 24 jessi 2018-05-14 03:44:24 UTC
Created attachment 1435992 [details]
Import error

Hi shmuel,

Here is the error.

Please see the attached file.


Thanks

Comment 25 Guillaume Pavese 2018-08-09 07:56:11 UTC
Hi, I encountered the same bug on oVirt 4.2.5 while trying to import a Xen VM from a CentOS 5.5 host.

I did the SSH setup as explained before, similar to here: https://ovirt.org/develop/release-management/features/virt/XenToOvirt/

I receive the same error in oVirt as the original reporter: "All chosen VMs couldn't be retrieved by the external system and therefore have been filtered. Please see log for details."

There is no ERROR in engine.log, only INFO. Here are some of my many tries:

2018-08-09 09:45:24,460+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-40) [b61ee69f-1560-4399-8ebe-c09c8c50cb1a] START, GetVmsNamesFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='null'}), log id: 1a56c45c
2018-08-09 09:45:38,102+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-39) [89a8301d-d6a6-4ed3-a1f7-7e11a6a77c7a] START, GetVmsNamesFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='null'}), log id: 203c8aa7
2018-08-09 09:45:46,721+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-40) [5eb5a351-80bd-4240-87c9-0da6ac8797a7] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='[vs-int-prd-web-fr-302]'}), log id: 42183133
2018-08-09 09:46:11,340+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-40) [9c13a2af-0134-41b7-8021-044a4d1342a4] START, GetVmsNamesFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='null'}), log id: 235bb5c4
2018-08-09 09:46:17,344+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-40) [851dfa7e-c2ca-4953-a914-e32e59f0b116] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='[vs-int-prd-web-fr-302]'}), log id: 7317efb0
2018-08-09 09:47:00,722+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-41) [77655cbe-b56f-4a1c-b488-b6a688270e78] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='[vs-int-prd-web-fr-302]'}), log id: cb220d2
2018-08-09 09:48:25,251+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-41) [df122dae-aaf0-477d-81fb-ac1556a55cd7] START, GetVmsNamesFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='null'}), log id: 15f8466
2018-08-09 09:48:31,240+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-41) [8f732997-4988-43bf-9335-136423d0840d] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-305?no_verify=1', username='', originType='XEN', namesOfVms='[vs-int-prd-web-fr-302]'}), log id: 27d891ce

Comment 26 Guillaume Pavese 2018-08-14 03:51:36 UTC
Hi,

- I upgraded my Xen server to CentOS 5.11 + updates


- I verified that I can connect to libvirt and list the VM through my chosen proxy host:

[root@ps-inf-int-kvm-fr-306 ~]# sudo -u vdsm virsh -c xen+ssh://root@ps-inf-int-xen-fr-301 list --all
 Id    Name                           State
----------------------------------------------------
 0     Domain-0                       running
 -     vs-dev-prd-tls-fr-301          shut off


- I can dump the XML from the oVirt host:
[root@ps-inf-int-kvm-fr-306 ~]# sudo -u vdsm virsh -c xen+ssh://root@ps-inf-int-xen-fr-301 dumpxml vs-dev-prd-tls-fr-301
<domain type='xen'>
  <name>vs-dev-prd-tls-fr-301</name>
  <uuid>908c584c-b246-d283-5f07-a4b977d98ba7</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>1</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <os>
    <type arch='x86_64' machine='xenpv'>linux</type>
    <cmdline>fastboot</cmdline>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/vm/vs-dev-prd-tls-fr-301-root.img'/>
      <target dev='sda1' bus='xen'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/vm/vs-dev-prd-tls-fr-301-opt.img'/>
      <target dev='sda2' bus='xen'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/vm/vs-dev-prd-tls-fr-301-swap.img'/>
      <target dev='sda3' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:00:00:00:00:00'/>
      <source bridge='xenbr0_60'/>
      <script path='vif-bridge'/>
      <model type='e1000'/>
    </interface>
    <console type='pty'>
      <target port='0'/>
    </console>
  </devices>
</domain>

[root@ps-inf-int-kvm-fr-306 ~]# 



- In the oVirt import page, I can list and see the VM


- Selecting it and clicking "Next" results in: "All chosen VMs couldn't be retrieved by the external system and therefore have been filtered. Please see log for details."


- The only relevant line in engine.log (no ERROR, only INFO) seems to be:

2018-08-14 05:36:10,700+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-399) [28e5d2f1-50eb-4376-84b2-f5f8913e3845] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = ps-inf-int-kvm-fr-306.hostics.fr, GetVmsFromExternalProviderParameters:{hostId='f739d8de-94a2-4314-ac9f-76a497411744', url='xen+ssh://root@ps-inf-int-xen-fr-301', username='', originType='XEN', namesOfVms='[vs-dev-prd-tls-fr-301]'}), log id: 14399a1e


- In the host's vdsm.log, I see:

2018-08-14 05:36:11,119+0200 ERROR (jsonrpc/1) [root] Failed to get volume by path (v2v:1178)
2018-08-14 05:36:11,119+0200 ERROR (jsonrpc/1) [root] Disk has to be in storage pool (v2v:1181)

Comment 27 Guillaume Pavese 2018-08-14 18:08:28 UTC
I managed to import a Xen VM by following these steps (in chronological blocking order):

- Add a uuid to the VM's Xen config file. The uuid can be found by running the VM and then `xm list vmname --long`.

That was not obvious, as there were no errors I could find pointing to this requirement. Without it, the VM is invisible to libvirt when powered off.

- Create a libvirt pool on the path where the disks are. (The error message pointing to this was found in the proxy host's vdsmd.log.)

- Install a non-Xen kernel

- Modify the VM's fstab before import to change the disk names: there are 3 disks on the VM to import, originally mounted as sda1, sda2 and sda3. I had to change fstab to refer to sda, sdb and sdc...

Shouldn't that last step be automatic?
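The libvirt pool mentioned above can be defined as a plain dir-type pool over the directory that holds the .img files. A sketch of such a definition (the pool name "xen-images" and the path "/vm" are placeholder examples, not taken from this report), which would be loaded on the Xen host with `virsh pool-define <file>` and started with `virsh pool-start xen-images`:

```xml
<!-- Example dir-type pool definition; name and path are placeholders -->
<pool type='dir'>
  <name>xen-images</name>
  <target>
    <path>/vm</path>
  </target>
</pool>
```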

Comment 28 Ryan Barry 2018-08-15 21:16:32 UTC
For the last step, it would be best to refer to the mountpoints by uuid, but that's administration (or virt-v2v). For the others, it looks like two different issues:

1) VMs imported without uuids in the domxml should have a new (generated) uuid added when imported

2) Document that libvirt pools are a necessary step for importing (pending confirmation from Richard)
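Referring to the mountpoints by UUID, as suggested above, would look like the fragment below inside the guest's /etc/fstab. The UUIDs and mountpoints shown are hypothetical placeholders; the real values come from running `blkid` against each filesystem before (or after) conversion:

```
# Example /etc/fstab entries -- UUIDs here are placeholders;
# obtain the real ones with: blkid /dev/sda1  (etc.)
UUID=11111111-1111-1111-1111-111111111111  /     ext3  defaults  1 1
UUID=22222222-2222-2222-2222-222222222222  /opt  ext3  defaults  1 2
UUID=33333333-3333-3333-3333-333333333333  swap  swap  defaults  0 0
```

Unlike device names (sda1 vs sda), filesystem UUIDs survive the bus/naming changes a conversion introduces, which is why the manual fstab edit became necessary in the first place.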

Comment 29 Richard W.M. Jones 2018-08-15 21:54:21 UTC
Do we have a virt-v2v debug log for this?

> 2) Document that libvirt pools are a necessary step for importing
> (pending confirmation from Richard)

Assuming the input disks on the Xen side look something like this:

    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/vm/vs-dev-prd-tls-fr-301-root.img'/>
      <target dev='sda1' bus='xen'/>
    </disk>

(I would need the virt-v2v debug log to really confirm this) then
virt-v2v will read the disk files directly.  It actually uses
the qemu ssh block driver to access them.  It does not need
the disks to be in a libvirt pool.

Comment 30 Guillaume Pavese 2018-08-16 06:03:51 UTC
1) I confirm that the Xen config for the VM looks like this (this is a new VM of the same type, as the old one has to remain booted and this one can be kept powered off for testing):

    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/vm2/vs-dev-prd-cns-fr-301-root.img'/>
      <target dev='sda1' bus='xen'/>
    </disk>


And I can only proceed if a pool is started with virsh on the Xen host. Without that I get, in the oVirt host's vdsmd.log:

2018-08-16 04:46:02,385+0200 ERROR (jsonrpc/2) [root] Failed to get volume by path (v2v:1178)
2018-08-16 04:46:02,385+0200 ERROR (jsonrpc/2) [root] Disk has to be in storage pool (v2v:1181)

Where can I find the virt-v2v log?


2) Thank you for the recommendation to change the disks in fstab to their UUIDs; that helped for some VMs which were failing to import.

------------------------------------

3) But now I encounter a new problem :

Following what I thought was a success, the VM was in fact not bootable. In the Remote Viewer console, it is stuck on:
     Booting from Hard Disk...
     Boot failed: not a bootable disk
     No bootable device.

I think the cause is that the original .img does not contain a partition table but directly the ext3 filesystem.

What should the preparation steps be in that case?

Comment 31 Richard W.M. Jones 2018-08-16 08:09:41 UTC
Please attach the virt-v2v log:

http://libguestfs.org/virt-v2v.1.html#debugging-rhv-m-import-failures

It's not possible to say much about this bug without the logs.

Comment 32 Guillaume Pavese 2018-08-17 05:37:45 UTC
Created attachment 1476548 [details]
v2v log - imported but non bootable

Comment 33 Guillaume Pavese 2018-08-17 05:41:32 UTC
This is the xen config file for the vm whose v2v log I attached previously :

[root@ps-inf-int-xen-fr-301 ~]# cat /etc/xen/vs-dev-prd-cns-fr-301
bootloader = "/usr/bin/pygrub"
memory = 1024
vcpu = 1
name = "vs-dev-prd-cns-fr-301"
uuid = "7b855926-9793-7811-279b-298f1562dd7e"
vif = [ "bridge=xenbr0_8,script=vif-bridge,model=e1000" ]
disk = [
        'file:/vm2/vs-dev-prd-cns-fr-301-root.img,sda1,w'
]
root = "/dev/sda1"
extra = "fastboot"
[root@ps-inf-int-xen-fr-301 ~]# 



In case that's relevant, what fdisk reports on the original image file:

[root@ps-inf-int-xen-fr-301 ~]# fdisk -l /vm2/vs-dev-prd-cns-fr-301-root.img
last_lba(): I don't know how to handle files with mode 81a4

Disk /vm2/vs-dev-prd-cns-fr-301-root.img: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /vm2/vs-dev-prd-cns-fr-301-root.img doesn't contain a valid partition table
[root@ps-inf-int-xen-fr-301 ~]#

Comment 35 Richard W.M. Jones 2018-08-17 07:48:13 UTC
This Xen guest used an external kernel to boot.  The disk image was
just an ext3 filesystem.  I believe this guest likely used a legacy
technology called pygrub (https://wiki.xenproject.org/wiki/PyGrub)
to boot, where pygrub, running on the host, would extract the kernel
from the ext3 disk image and use it to boot the guest.

So that's the history.  What can you do to boot this?  Firstly
this isn't going to be possible to boot using the ordinary RHV
UI import process.

I need to talk to my colleagues and gather suggestions and will get back.

Comment 36 Richard W.M. Jones 2018-09-27 07:46:11 UTC
I'm afraid I don't have much of a suggestion here.  These pygrub-using
guests are not something that's supported by virt-v2v, nor something
we have thought about very much at all.  I'm not even sure if we had
pygrub in Red Hat's RHEL 5.

A general suggestion would be:

(1) Convert it to a local disk image:

  virt-v2v ... -o local -os /var/tmp

(2) Rename the '-sda' file to '-sda1' because it's just a partition:

  mv /var/tmp/guest-sda /var/tmp/guest-sda1

(3) Add a partition table using nbdkit (you'll need to use the
very latest version >= 1.7.3):

  nbdkit partitioning /var/tmp/guest-sda1

This will give you a fully partitioned guest visible to NBD clients.

(4) Convert the NBD source to a full disk image:

  qemu-img convert nbd:localhost:10809 /var/tmp/guest-sda
  killall nbdkit

(5) This *still* won't boot because nbdkit doesn't add a bootloader, but
you can use a tool such as virt-rescue to access the disk and fix the
bootloader (eg. installing/running grub).

  virt-rescue -a /var/tmp/guest-sda
  ><rescue> # some grub commands here

Comment 37 Tomáš Golembiovský 2021-08-31 11:40:44 UTC
So if my understanding of comment #27 is correct, then the UUID needs to be added manually before the conversion; it is not something we can do during it. If libvirt cannot see the VM, there is nothing we can do about it. We need to figure out the proper way of adding the UUIDs in Xen (is there a command-line tool? modifying the XML files on disk? something else?) and document that.
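One candidate for such documentation is a small shell snippet that appends a generated uuid to an xm config file that lacks one. This is a sketch, not a confirmed procedure: the real file would live under /etc/xen/ on the Xen host (a temp stand-in is used here so the snippet is self-contained), and `uuidgen` comes from util-linux:

```shell
# Sketch: add a "uuid = ..." line to a Xen xm config that lacks one.
# In practice cfg would be e.g. /etc/xen/vmname; a temp copy is used here.
cfg=$(mktemp)
printf 'name = "vmname"\nmemory = 1024\n' > "$cfg"   # stand-in config

# Only append if no uuid line is already present:
grep -q '^uuid' "$cfg" || echo "uuid = \"$(uuidgen)\"" >> "$cfg"
cat "$cfg"
```

Per comment #27, the VM would then need to be restarted (or redefined) so that libvirt can see it with the new uuid when powered off.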

That means that both points in comment #28 are just documentation issues.

Comment 40 Casper (RHV QE bot) 2022-09-20 14:30:36 UTC
This bug has low overall severity and is not going to be further verified by QE. If you believe special care is required, feel free to properly align relevant severity, flags and keywords to raise PM_Score, or use one of the Bumps ('PrioBumpField', 'PrioBumpGSS', 'PrioBumpPM', 'PrioBumpQA') in Keywords to raise its PM_Score above the verification threshold (1000).