Bug 1467794 - [Scale] When activating the first host in the data center, the engine queries vdsm once per each activated block domain instead of once per the whole process
Summary: [Scale] When activating the first host in the data center, the engine queries vdsm once per each activated block domain instead of once per the whole process
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.0
Target Release: 4.2.0
Assignee: Idan Shaby
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-05 07:49 UTC by Idan Shaby
Modified: 2017-12-22 07:28 UTC
2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-20 11:46:25 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 78977 0 master MERGED backend: improve block domains sync on host activation 2020-05-05 00:07:37 UTC

Description Idan Shaby 2017-07-05 07:49:04 UTC
Description of problem:
When activating the first host in the data center, the engine queries vdsm once for each activated block domain instead of once for the whole process.

Version-Release number of selected component (if applicable):
ab5097fde94fb96318e8b767ecd39e31d07d1700

How reproducible:
100%

Steps to Reproduce:
1. Take all the hosts in the dc down to maintenance.
2. Activate one of the hosts.
3. Watch the log and see one call to SyncLunsInfoForBlockStorageDomainCommand (that calls GetVGInfo) per each block storage domain.

Actual results:
n calls to GetVGInfo for n block domains.

Expected results:
This can be improved to a single call to GetDeviceList for the whole process.
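
For illustration, a minimal Java sketch of the two flows. Vdsm, Lun, getVgInfo, getDeviceList and syncDomain are hypothetical stand-ins for this example, not the actual ovirt-engine API:

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    class BlockDomainSyncSketch {

        // Old flow: one vdsm round trip (GetVGInfo) per block storage domain.
        void syncPerDomain(List<String> blockDomainVgIds, Vdsm vdsm) {
            for (String vgId : blockDomainVgIds) {
                List<Lun> vgLuns = vdsm.getVgInfo(vgId); // n calls for n domains
                syncDomain(vgId, vgLuns);
            }
        }

        // Improved flow: one GetDeviceList call for the whole activation,
        // then group the returned LUNs by volume group locally.
        void syncFromSingleDeviceList(List<String> blockDomainVgIds, Vdsm vdsm) {
            Map<String, List<Lun>> lunsByVg = vdsm.getDeviceList().stream() // 1 call
                    .collect(Collectors.groupingBy(Lun::volumeGroupId));
            for (String vgId : blockDomainVgIds) {
                syncDomain(vgId, lunsByVg.getOrDefault(vgId, List.of()));
            }
        }

        private void syncDomain(String vgId, List<Lun> luns) {
            // Update the engine database with the LUNs' current state.
        }

        // Mirrors the fields visible in the GetDeviceList log output below.
        record Lun(String id, String physicalVolumeId, String volumeGroupId) {}

        interface Vdsm {
            List<Lun> getVgInfo(String vgId);
            List<Lun> getDeviceList();
        }
    }

The trade-off: GetDeviceList returns every visible device rather than a single VG, but one round trip plus local grouping scales far better than n engine-to-vdsm calls when many block domains are activated at once.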

Comment 1 Kevin Alon Goldblatt 2017-08-02 11:45:34 UTC
Tested with the following builds:
-------------------------------------------
ovirt-engine-4.2.0-0.0.master.20170723141021.git463826a.el7.centos.noarch
vdsm-4.20.1-271.gitac81a4d.el7.centos.x86_64


Verified with the following scenario:
-------------------------------------------
Steps to Reproduce:
1. Take all the hosts in the dc down to maintenance.
2. Activate one of the hosts.
3. Watch the log and see one call to SyncLunsInfoForBlockStorageDomainCommand (that calls GetVGInfo) per each block storage domain.


There is only one GetDeviceList query, and one SyncLunsInfoForBlockStorageDomainCommand per block domain. See the log below:

Moving to VERIFIED!
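
The counts can also be checked mechanically. A rough Java sketch that tallies the relevant records in the engine log (the match strings are assumptions derived from the lines quoted below, and each record is assumed to occupy a single line in the actual file):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class CountVdsCalls {
        public static void main(String[] args) throws IOException {
            // Default engine log location on an oVirt engine machine.
            List<String> lines =
                    Files.readAllLines(Path.of("/var/log/ovirt-engine/engine.log"));

            // One START record per GetDeviceList invocation.
            long deviceListCalls = lines.stream()
                    .filter(l -> l.contains("GetDeviceListVDSCommand")
                            && l.contains("START,"))
                    .count();

            // One "Running command" record per per-domain sync command.
            long syncLunsRuns = lines.stream()
                    .filter(l -> l.contains(
                            "Running command: SyncLunsInfoForBlockStorageDomainCommand"))
                    .count();

            // Expected after the fix: one GetDeviceList per activation,
            // one SyncLunsInfoForBlockStorageDomainCommand per block domain.
            System.out.printf("GetDeviceList calls: %d, SyncLuns runs: %d%n",
                    deviceListCalls, syncLunsRuns);
        }
    }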


From engine-log:
-----------------------
2017-08-02 12:04:22,384+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [13d7bb98] START, GetDeviceListVDSCommand(HostNam
e = green-vdsb.qa.lab.tlv.redhat.com, GetDeviceListVDSCommandParameters:{hostId='4b917c14-d73b-4513-8df0-2f3cd9bc259e', storageType='UNKNOWN', checkStatus='false', lunIds='null'}), log id: 3b844db0
2017-08-02 12:04:22,819+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [13d7bb98] FINISH, GetDeviceListVDSCommand, retur
n: [LUNs:{id='3514f0c5a51600676', physicalVolumeId='VdQjO2-fOng-dvNP-ZzDD-xfHx-X95H-r3Dvwt', volumeGroupId='1SwxnY-rqJN-EJ1t-0vN3-m9nk-zyGU-3mIEk2', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='1', ven
dorId='XtremIO', productId='XtremApp', lunConnections='[StorageServerConnections:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOption
s='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-5
14f0c50023f6c01', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', 
iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnect
ions:{id='null', connection='10.35.146.225', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null
', netIfaceName='null'}]', deviceSize='50', pvSize='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdb=true, sdf=true, sdj=true, sdn=true]', pathsCapacity='[sdb=50, sdf=50,
 sdj=50, sdn=50]', lunType='ISCSI', status='Unknown', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}, LUNs:{id='3514f0c5a51
600677', physicalVolumeId='cDYeL3-9AKc-Nudz-Cx0e-Jcke-a2p2-iFynDx', volumeGroupId='ymnxX2-RM9i-c6q7-kYog-fPZM-EGsT-m3YMrS', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='2', vendorId='XtremIO', productI
d='XtremApp', lunConnections='[StorageServerConnections:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOptions='null', nfsVersion='nul
l', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01', vfsType
='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', iqn='iqn.2008-05.com.xtre
mio:xio00153500071-514f0c50023f6c04', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connecti
on='10.35.146.225', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'
, deviceSize='50', pvSize='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdc=true, sdg=true, sdk=true, sdo=true]', pathsCapacity='[sdc=50, sdg=50, sdk=50, sdo=50]', lunTyp
e='ISCSI', status='Unknown', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}, LUNs:{id='3514f0c5a51600675', physicalVolumeId
='WPkpUt-F0WY-uWgp-MoLt-wtza-7ruY-rASa19', volumeGroupId='S8n4Pd-IagN-mByN-lipB-zVip-7OZO-JhR91n', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='3', vendorId='XtremIO', productId='XtremApp', lunConnecti
ons='[StorageServerConnections:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nf
sTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01', vfsType='null', mountOptions='nu
ll', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c
50023f6c04', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.225', iqn='
iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='50', pvSize
='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdd=true, sdh=true, sdl=true, sdp=true]', pathsCapacity='[sdd=50, sdh=50, sdl=50, sdp=50]', lunType='ISCSI', status='Unknow
n', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}, LUNs:{id='3514f0c5a51600672', physicalVolumeId='Vy0x01-qmzE-v1zh-mQ1f-e
IIC-5iKy-DWpdbi', volumeGroupId='S8n4Pd-IagN-mByN-lipB-zVip-7OZO-JhR91n', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='4', vendorId='XtremIO', productId='XtremApp', lunConnections='[StorageServerConnec
tions:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='nul
l', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01', vfsType='null', mountOptions='null', nfsVersion='null', n
fsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04', vfsType='nul
l', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.225', iqn='iqn.2008-05.com.xtremio:x
io00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='50', pvSize='49', peCount='null', pe
AllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdq=true, sde=true, sdi=true, sdm=true]', pathsCapacity='[sdq=50, sde=50, sdi=50, sdm=50]', lunType='ISCSI', status='Unknown', diskId='null', diskAl
ias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}], log id: 3b844db0
2017-08-02 12:04:22,831+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [1c825880] Lock Acquired to objec
t 'EngineLock:{exclusiveLocks='[b71445d9-7f47-4706-958f-16b5c7c068f6=STORAGE]', sharedLocks=''}'
2017-08-02 12:04:22,871+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [1c825880] Running command: SyncL
unsInfoForBlockStorageDomainCommand internal: true. Entities affected :  ID: b71445d9-7f47-4706-958f-16b5c7c068f6 Type: Storage
2017-08-02 12:04:22,916+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [1c825880] START, HSMGetStorageDomainIn
foVDSCommand(HostName = green-vdsb.qa.lab.tlv.redhat.com, HSMGetStorageDomainInfoVDSCommandParameters:{hostId='4b917c14-d73b-4513-8df0-2f3cd9bc259e', storageDomainId='b71445d9-7f47-4706-958f-16b5c7c068f6'}), log
 id: 71fa68ce
2017-08-02 12:04:23,144+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [1c825880] FINISH, HSMGetStorageDomainI
nfoVDSCommand, return: <StorageDomainStatic:{name='block11', id='b71445d9-7f47-4706-958f-16b5c7c068f6'}, 554f5132-0233-44df-8266-1827c26a2731>, log id: 71fa68ce
2017-08-02 12:04:23,147+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [1c825880] Lock freed to object '
EngineLock:{exclusiveLocks='[b71445d9-7f47-4706-958f-16b5c7c068f6=STORAGE]', sharedLocks=''}'
2017-08-02 12:04:23,155+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-38) [4a9b5531] Lock Acquired to objec
:
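
As an aside, the acquire/run/free sequence visible in the log above is a per-domain exclusive lock around each sync command. A rough illustration of that pattern using plain java.util.concurrent (the engine's actual EngineLock mechanism differs):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    class PerDomainLockSketch {
        private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        void runSyncForDomain(String domainId, Runnable syncCommand) {
            ReentrantLock lock =
                    locks.computeIfAbsent(domainId, id -> new ReentrantLock());
            lock.lock();               // "Lock Acquired to object 'EngineLock...'"
            try {
                syncCommand.run();     // "Running command: SyncLunsInfo..."
            } finally {
                lock.unlock();         // "Lock freed to object 'EngineLock...'"
            }
        }
    }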

Comment 2 Sandro Bonazzola 2017-12-20 11:46:25 UTC
This bug is included in the oVirt 4.2.0 release, published on December 20th, 2017.

Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

