Bug 741103 - VDSM: the Thread for getStoragePoolInfo should be diminished -> it creates noise in the log every 10 seconds
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: oVirt
Classification: Retired
Component: vdsm
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.3.4
Assignee: Dan Kenigsberg
QA Contact:
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2011-09-25 11:31 UTC by Dafna Ron
Modified: 2015-01-24 10:34 UTC
CC: 8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-03-12 09:38:03 UTC
oVirt Team: ---


Attachments (Terms of Use)
vdsm log attached (494.84 KB, application/x-gzip)
2011-09-25 11:31 UTC, Dafna Ron

Description Dafna Ron 2011-09-25 11:31:56 UTC
Created attachment 524783 [details]
vdsm log attached

Description of problem:

The thread for getStoragePoolInfo logs about 19 lines each time it runs (which is every 10 seconds); of these 19 lines, about 10 come from the ResourceManager.
Can you please drop some of these lines from the logger to clean up the log a bit?
Since this query runs every 10 seconds, it seems some of these actions might not need to be logged.
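vdsm's logging is built on Python's stdlib `logging` module, and the `::ResourceManager::` token in the log lines suggests a logger of that name. A minimal sketch of one way to diminish the chatter, assuming (this is an inference, not confirmed by the report) that the logger is obtained via `logging.getLogger("ResourceManager")`:

```python
import logging

# Assumption: vdsm obtains this logger via logging.getLogger("ResourceManager"),
# as the "::ResourceManager::" token in the log format suggests.
rm_logger = logging.getLogger("ResourceManager")

# Keep WARNING and above (still useful for deadlock debugging) but drop the
# per-query DEBUG chatter that repeats every 10 seconds.
rm_logger.setLevel(logging.WARNING)

rm_logger.debug("Trying to release resource ...")            # suppressed
rm_logger.warning("resource still held, possible deadlock")  # still emitted
```

The trade-off is exactly the one debated in comment 3: the suppressed DEBUG lines are also the evidence one would want when chasing a deadlock in the field.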


Version-Release number of selected component (if applicable):

vdsm-4.9-96.1.el6.x86_64

How reproducible:

100% 

Steps to Reproduce:
1. Search for getStoragePoolInfo in the vdsm log.
  
Actual results:

The query runs every 10 seconds and logs about 19 lines each time.

Expected results:

Can we please remove some of these lines from the vdsm log? It seems like a lot of noise for a single query.

Additional info:

Comment 1 Yaniv Kaul 2011-09-25 13:45:39 UTC
Specifically, the issue is with the lines from the ResourceManager. They account for 10 of the command's 19 log lines. Example:
Thread-2000::DEBUG::2011-09-25 13:01:02,104::resourceManager::821::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2000::DEBUG::2011-09-25 13:01:02,104::resourceManager::517::ResourceManager::(releaseResource) Trying to release resource 'Storage.0b8809e9-1f75-40d7-9d32-8b66969ff552'
Thread-2000::DEBUG::2011-09-25 13:01:02,104::resourceManager::532::ResourceManager::(releaseResource) Released resource 'Storage.0b8809e9-1f75-40d7-9d32-8b66969ff552' (0 active users)
Thread-2000::DEBUG::2011-09-25 13:01:02,105::resourceManager::537::ResourceManager::(releaseResource) Resource 'Storage.0b8809e9-1f75-40d7-9d32-8b66969ff552' is free, finding out if anyone is waiting for it.
Thread-2000::DEBUG::2011-09-25 13:01:02,105::resourceManager::544::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.0b8809e9-1f75-40d7-9d32-8b66969ff552', Clearing records.

Comment 3 Dan Kenigsberg 2011-09-25 14:55:59 UTC
I'm very glad that you are confident in our bugless deadlockless resource subsystem. I'm a little less confident, and would like to keep the log lines, just in case a complex deadlock crops up in a customer site. Let's reconsider in 3.1, and use `grep -v ::ResourceManager::` until then.
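Until then, the workaround can be scripted. A sketch using two real lines from comment 1 plus one hypothetical non-ResourceManager line (labeled below, invented for illustration); note that dropping the trailing `::` from Dan's pattern also catches the `ResourceManager.Owner` variant seen in the cancelAll line:

```shell
# Two real lines from comment 1 plus one HYPOTHETICAL non-ResourceManager
# line (the last one) to show what survives the filter.
cat > vdsm-sample.log <<'EOF'
Thread-2000::DEBUG::2011-09-25 13:01:02,104::resourceManager::821::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-2000::DEBUG::2011-09-25 13:01:02,104::resourceManager::517::ResourceManager::(releaseResource) Trying to release resource 'Storage.0b8809e9-1f75-40d7-9d32-8b66969ff552'
Thread-2001::DEBUG::2011-09-25 13:01:02,200::task::100::TaskManager.Task::(prepare) hypothetical non-ResourceManager line
EOF

# Without the trailing '::', the pattern matches both "::ResourceManager::"
# and "::ResourceManager.Owner::" lines.
grep -v '::ResourceManager' vdsm-sample.log
```

Only the hypothetical TaskManager line survives the filter; the two ResourceManager lines are dropped.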

Comment 4 Itamar Heim 2013-03-12 09:38:03 UTC
Closing old bugs. If this issue is still relevant/important in current version, please re-open the bug.

