| Summary: | Response time of the engine is significantly longer with RHVH hosts connected vs. regular RHEL hosts | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | Nelly Credi <ncredi> |
| Component: | Backend.Core | Assignee: | Roy Golan <rgolan> |
| Status: | CLOSED WONTFIX | QA Contact: | meital avital <mavital> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.0.5.5 | CC: | bugs, cshao, dguo, eberman, fdeutsch, mperina, ncredi, oourfali, pstehlik, ycui |
| Target Milestone: | --- | Keywords: | Performance |
| Target Release: | --- | Flags: | sbonazzo: ovirt-4.2- |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-07-04 07:53:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description (Nelly Credi, 2016-11-17 11:23:50 UTC)
Created attachment 1221525 [details]
execution with rhevh
Created attachment 1221526 [details]
execution with rhels
Can you try to narrow it down and check whether specific parts took longer? Any other differences between the two runs?

We see it in each REST API call; the time until the engine answers is simply longer.

Please supply the engine.log with DEBUG enabled so we can spot where the time is spent.

Created attachment 1223056 [details]
debug logs

From the log analysis, I counted 1204 occurrences where "The exclusive lock for key *** is released and lock is removed from map" freezes for about ~3 seconds before moving to the next operation:
2016-11-16 03:06:11,838 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler10) [62e8b35a] The exclusive lock for key 'e8933e9f-665e-4186-80bf-f17e94c44e8cVDS_INIT' is released and lock is removed from map
2016-11-16 03:06:14,139 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [36c519c3] Before acquiring lock 'EngineLock:{exclusiveLocks='[fec8135e-5910-4412-bc9a-bae12f1fa757=<VDS_INIT, >]', sharedLocks='null'}'
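As a rough way to quantify those freezes, the gaps can be measured directly from engine.log. The sketch below is not part of the original analysis; it assumes the stock timestamp format shown in the excerpts above, and the marker string and the 2.5 s threshold are illustrative choices.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: count gaps of several seconds between a lock-release
line and the next timestamped line in engine.log."""
import re
import sys
from datetime import datetime

TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})")
TS_FORMAT = "%Y-%m-%d %H:%M:%S,%f"
RELEASE_MARKER = "is released and lock is removed from map"
GAP_SECONDS = 2.5  # flag anything close to the ~3 s freeze described above

def parse_ts(line):
    """Return the leading timestamp of a log line, or None for continuation lines."""
    m = TS_RE.match(line)
    return datetime.strptime(m.group(1), TS_FORMAT) if m else None

def main(path):
    freezes = 0
    prev_line, prev_ts = None, None
    with open(path, errors="replace") as f:
        for line in f:
            ts = parse_ts(line)
            if ts is None:
                continue
            # Measure the delay between a lock-release line and whatever was logged next.
            if prev_ts is not None and RELEASE_MARKER in prev_line:
                gap = (ts - prev_ts).total_seconds()
                if gap >= GAP_SECONDS:
                    freezes += 1
                    print(f"{gap:5.1f}s gap after: {prev_line.strip()}")
            prev_line, prev_ts = line, ts
    print(f"Total freezes after lock release: {freezes}")

if __name__ == "__main__":
    main(sys.argv[1])
```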
Could it be related to gluster? In the logs I see:
2016-11-15 06:12:45,946 DEBUG [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler3) [4f6836ab] Refreshing Gluster Data [lightweight]
2016-11-15 06:12:45,946 DEBUG [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresJdbcTemplate] (DefaultQuartzScheduler3) [4f6836ab] Executing prepared SQL query
2016-11-15 06:12:45,946 DEBUG [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresJdbcTemplate] (DefaultQuartzScheduler3) [4f6836ab] Executing prepared SQL statement [select * from getallfromcluster(?, ?)]
2016-11-15 06:12:46,548 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [66ca7030] Before acquiring lock 'EngineLock:{exclusiveLocks='[e8933e9f-665e-4186-80bf-f17e94c44e8c=<VDS_INIT, >]', sharedLocks='null'}'
2016-11-15 06:12:46,548 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [66ca7030] Success acquiring lock 'EngineLock:{exclusiveLocks='[e8933e9f-665e-4186-80bf-f17e94c44e8c=<VDS_INIT, >]', sharedLocks='null'}' succeeded
2016-11-15 06:12:46,548 DEBUG [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresJdbcTemplate] (DefaultQuartzScheduler8) [66ca7030] Executing prepared SQL query
2016-11-15 06:12:46,548 DEBUG [org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresJdbcTemplate] (DefaultQuartzScheduler8) [66ca7030] Executing prepared SQL statement [select * from getvdsbyvdsid(?, ?, ?)]
2016-11-15 06:12:46,554 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [66ca7030] Before releasing a lock 'EngineLock:{exclusiveLocks='[e8933e9f-665e-4186-80bf-f17e94c44e8c=<VDS_INIT, >]', sharedLocks='null'}'
2016-11-15 06:12:46,554 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) [66ca7030] The exclusive lock for key 'e8933e9f-665e-4186-80bf-f17e94c44e8cVDS_INIT' is released and lock is removed from map
2016-11-15 06:12:49,555 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler5) [7535ab8a] Before acquiring lock 'EngineLock:{exclusiveLocks='[e8933e9f-665e-4186-80bf-f17e94c44e8c=<VDS_INIT, >]', sharedLocks='null'}'
2016-11-15 06:12:49,555 DEBUG [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler5) [7535ab8a] Success acquiring lock 'EngineLock:{exclusiveLocks='[e8933e9f-665e-4186-80bf-f17e94c44e8c=<VDS_INIT, >]', sharedLocks='null'}' succeeded
So the locking mentioned in comment 7 seems to be accompanied by the gluster refresh.
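One way to test that hypothesis against the same debug log would be to extract the timestamps of the GlusterSyncJob refresh lines and of the lock-release lines and check how often a refresh immediately precedes a release. The script below is only a sketch under those assumptions; the marker strings are taken from the excerpts above, but the 5-second window is an illustrative choice, not something stated in the report.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: correlate lock releases with preceding gluster refreshes."""
import re
import sys
from datetime import datetime

TS_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})")
TS_FORMAT = "%Y-%m-%d %H:%M:%S,%f"
RELEASE_MARKER = "is released and lock is removed from map"
GLUSTER_MARKER = "Refreshing Gluster Data"
WINDOW = 5.0  # seconds before a release in which a refresh counts as "nearby"

def timestamps(path, marker):
    """Collect the timestamps of all lines containing `marker`."""
    out = []
    with open(path, errors="replace") as f:
        for line in f:
            if marker in line:
                m = TS_RE.match(line)
                if m:
                    out.append(datetime.strptime(m.group(1), TS_FORMAT))
    return out

def main(path):
    releases = timestamps(path, RELEASE_MARKER)
    refreshes = timestamps(path, GLUSTER_MARKER)
    # Count releases that have at least one gluster refresh shortly before them.
    near = sum(
        1 for r in releases
        if any(0 <= (r - g).total_seconds() <= WINDOW for g in refreshes)
    )
    print(f"{near}/{len(releases)} lock releases had a gluster refresh "
          f"within {WINDOW:.0f}s before them")

if __name__ == "__main__":
    main(sys.argv[1])
```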
Nelly, could you please provide the logs of a good run? (That is, the RHEL logs for the same run you used to gather the logs in comment 6.)
We have the same gluster version on the RHEL and RHVH hosts: glusterfs-3.7.9-12.el7.x86_64.

Created attachment 1226750 [details]
Engine log on rhels only
Not sure whether this still reproduces, but we haven't identified any change that might cause it. Closing as WONTFIX.