Bug 1468974 - [Scale] When the SPM changes the status of previously problematic domains to active, it queries vdsm once per each activated block domain instead of once per the whole process
Status: CLOSED CURRENTRELEASE
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.0
Target Release: 4.2.0
Assigned To: Idan Shaby
QA Contact: Kevin Alon Goldblatt
Depends On:
Blocks:
Reported: 2017-07-10 02:58 EDT by Idan Shaby
Modified: 2017-12-22 02:30 EST (History)
2 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-20 05:50:41 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: ovirt-4.2+




External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 79176 master MERGED backend: improve problematic block domains sync 2017-07-10 04:24 EDT

Description Idan Shaby 2017-07-10 02:58:08 EDT
Description of problem:
When the SPM changes the status of previously problematic domains to active, it queries vdsm once per each activated block domain instead of once per the whole process.

Version-Release number of selected component (if applicable):
f15a6d92a036236f9f0c250d35e1a3957aeed1d8

How reproducible:
100%

Steps to Reproduce:
1. Have more than one block domain in your data center.
2. Block the connection between the SPM and the block domains (using iptables, for example).
3. Wait for those domains to become inactive.
4. Revive the connection between the host and the domains.
5. Read the engine's log.

Actual results:
n calls to GetVGInfo, one per block domain.

Expected results:
This can be improved to a single call to GetDeviceList for the whole process.
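The intended improvement can be sketched as follows. This is a minimal illustration with hypothetical types and names (Lun, groupDevicesByVg are made up for this sketch, not the actual ovirt-engine code): instead of issuing one GetVGInfo round trip per reactivated block domain, the engine would issue a single GetDeviceList call and then group the returned devices by volume group id in memory.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch only: hypothetical types, not the real ovirt-engine API.
class LunSyncSketch {

    // Each device returned by the (single) GetDeviceList call carries
    // its volume group id, so per-domain GetVGInfo calls become redundant.
    record Lun(String id, String vgId) {}

    // One round trip replaces n per-domain GetVGInfo calls:
    // group all devices by volume group id in memory.
    static Map<String, List<Lun>> groupDevicesByVg(List<Lun> allDevices) {
        return allDevices.stream()
                .collect(Collectors.groupingBy(Lun::vgId));
    }

    public static void main(String[] args) {
        List<Lun> devices = List.of(
                new Lun("lun1", "vgA"),
                new Lun("lun2", "vgA"),
                new Lun("lun3", "vgB"));
        Map<String, List<Lun>> byVg = groupDevicesByVg(devices);
        System.out.println(byVg.get("vgA").size()); // 2
        System.out.println(byVg.keySet().size());   // 2
    }
}
```

Each reactivated domain's sync logic can then look up its own volume group in the grouped map, which matches the verified behavior below: one GetDeviceList query total, plus one per-domain sync command that does no extra vdsm device query.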
Comment 1 Kevin Alon Goldblatt 2017-08-02 08:18:45 EDT
Tested with the following code:
-------------------------------------------
ovirt-engine-4.2.0-0.0.master.20170723141021.git463826a.el7.centos.noarch
vdsm-4.20.1-271.gitac81a4d.el7.centos.x86_64


Verified with the following scenario:
-------------------------------------------
Steps to Reproduce:
1. Have more than one block domain in your data center.
2. Block the connection between the SPM and the block domains (using iptables, for example).
3. Wait for those domains to become inactive.
4. Revive the connection between the host and the domains.
5. Read the engine's log.


There is only one GetDeviceList query and one SyncLunsInfoForBlockStorageDomainCommand per block domain. See the log below:

Moving to VERIFIED!


From engine-log:
-----------------------
2017-08-02 15:10:02,572+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [3307f87b] START, GetDeviceListVDSCommand(HostNam
e = green-vdsb.qa.lab.tlv.redhat.com, GetDeviceListVDSCommandParameters:{hostId='4b917c14-d73b-4513-8df0-2f3cd9bc259e', storageType='UNKNOWN', checkStatus='false', lunIds='null'}), log id: 2720f5c0
2017-08-02 15:10:03,082+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [3307f87b] FINISH, GetDeviceListVDSCommand, retur
n: [LUNs:{id='3514f0c5a51600676', physicalVolumeId='VdQjO2-fOng-dvNP-ZzDD-xfHx-X95H-r3Dvwt', volumeGroupId='1SwxnY-rqJN-EJ1t-0vN3-m9nk-zyGU-3mIEk2', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='1', ven
dorId='XtremIO', productId='XtremApp', lunConnections='[StorageServerConnections:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOption
s='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-5
14f0c50023f6c01', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', 
iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnect
ions:{id='null', connection='10.35.146.225', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null
', netIfaceName='null'}]', deviceSize='50', pvSize='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdb=true, sdf=true, sdj=true, sdn=true]', pathsCapacity='[sdb=50, sdf=50,
 sdj=50, sdn=50]', lunType='ISCSI', status='Unknown', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}, LUNs:{id='3514f0c5a51
600677', physicalVolumeId='cDYeL3-9AKc-Nudz-Cx0e-Jcke-a2p2-iFynDx', volumeGroupId='ymnxX2-RM9i-c6q7-kYog-fPZM-EGsT-m3YMrS', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='2', vendorId='XtremIO', productI
d='XtremApp', lunConnections='[StorageServerConnections:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOptions='null', nfsVersion='nul
l', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01', vfsType
='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', iqn='iqn.2008-05.com.xtre
mio:xio00153500071-514f0c50023f6c04', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connecti
on='10.35.146.225', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'
, deviceSize='50', pvSize='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdc=true, sdg=true, sdk=true, sdo=true]', pathsCapacity='[sdc=50, sdg=50, sdk=50, sdo=50]', lunTyp
e='ISCSI', status='Unknown', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}, LUNs:{id='3514f0c5a51600675', physicalVolumeId
='WPkpUt-F0WY-uWgp-MoLt-wtza-7ruY-rASa19', volumeGroupId='S8n4Pd-IagN-mByN-lipB-zVip-7OZO-JhR91n', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='3', vendorId='XtremIO', productId='XtremApp', lunConnecti
ons='[StorageServerConnections:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nf
sTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01', vfsType='null', mountOptions='nu
ll', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c
50023f6c04', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.225', iqn='
iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='50', pvSize
='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdd=true, sdh=true, sdl=true, sdp=true]', pathsCapacity='[sdd=50, sdh=50, sdl=50, sdp=50]', lunType='ISCSI', status='Unknow
n', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}, LUNs:{id='3514f0c5a51600672', physicalVolumeId='Vy0x01-qmzE-v1zh-mQ1f-e
IIC-5iKy-DWpdbi', volumeGroupId='S8n4Pd-IagN-mByN-lipB-zVip-7OZO-JhR91n', serial='SXtremIO_XtremApp_XIO00153500071', lunMapping='4', vendorId='XtremIO', productId='XtremApp', lunConnections='[StorageServerConnec
tions:{id='null', connection='10.35.146.129', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='nul
l', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.161', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01', vfsType='null', mountOptions='null', nfsVersion='null', n
fsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.193', iqn='iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04', vfsType='nul
l', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='null', connection='10.35.146.225', iqn='iqn.2008-05.com.xtremio:x
io00153500071-514f0c50023f6c05', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', deviceSize='50', pvSize='49', peCount='null', peAllocatedCount='null', vendorName='XtremIO', pathsDictionary='[sdq=true, sde=true, sdi=true, sdm=true]', pathsCapacity='[sdq=50, sde=50, sdi=50, sdm=50]', lunType='ISCSI', status='Unknown', diskId='null', diskAlias='null', storageDomainId='null', storageDomainName='null', discardMaxSize='8388608', discardZeroesData='true'}], log id: 2720f5c0
2017-08-02 15:10:03,108+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [5b22b501] Lock Acquired to object 'EngineLock:{exclusiveLocks='[b71445d9-7f47-4706-958f-16b5c7c068f6=STORAGE]', sharedLocks=''}'
2017-08-02 15:10:03,137+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [5b22b501] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected :  ID: b71445d9-7f47-4706-958f-16b5c7c068f6 Type: Storage
2017-08-02 15:10:03,185+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [5b22b501] START, HSMGetStorageDomainInfoVDSCommand(HostName = green-vdsb.qa.lab.tlv.redhat.com, HSMGetStorageDomainInfoVDSCommandParameters:{hostId='4b917c14-d73b-4513-8df0-2f3cd9bc259e', storageDomainId='b71445d9-7f47-4706-958f-16b5c7c068f6'}), log id: 37d2509
2017-08-02 15:10:03,487+03 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetStorageDomainInfoVDSCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [5b22b501] FINISH, HSMGetStorageDomainInfoVDSCommand, return: <StorageDomainStatic:{name='block11', id='b71445d9-7f47-4706-958f-16b5c7c068f6'}, 554f5132-0233-44df-8266-1827c26a2731>, log id: 37d2509
2017-08-02 15:10:03,491+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [5b22b501] Lock freed to object 'EngineLock:{exclusiveLocks='[b71445d9-7f47-4706-958f-16b5c7c068f6=STORAGE]', sharedLocks=''}'
2017-08-02 15:10:03,509+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [7982a8e1] Lock Acquired to object 'EngineLock:{exclusiveLocks='[8820b01d-dc2a-4994-93dd-4f0d2c92400a=STORAGE]', sharedLocks=''}'
2017-08-02 15:10:03,533+03 INFO  [org.ovirt.engine.core.bll.storage.domain.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.EE-ManagedThreadFactory-default-Thread-44) [7982a8e1] Running command: SyncL:
Comment 2 Sandro Bonazzola 2017-12-20 05:50:41 EST
This bugzilla is included in oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be
resolved in oVirt 4.2.0 release, published on Dec 20th 2017, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.
