This bug seems to be a regression of https://bugzilla.redhat.com/show_bug.cgi?id=920074, which says it was fixed in vdsm-4.10.2-22.0.el6ev per comment#37. On a RHEL host with vdsm-4.10.2-27.0.el6ev.x86_64, the /var/log/messages file and vdsm.log are flooded with the same messages again (customer case #00978965):

Nov 11 16:00:59 host02 vdsm Storage.PersistentDict WARNING data has no embedded checksum - trust it as it is
Nov 11 16:01:00 host02 vdsm Storage.StorageDomain WARNING Resource namespace cbf5910f-9993-4bd4-97e4-ccc23656404f_imageNS already registered
Nov 11 16:01:00 host02 vdsm Storage.StorageDomain WARNING Resource namespace cbf5910f-9993-4bd4-97e4-ccc23656404f_volumeNS already registered
Nov 11 16:01:00 host02 vdsm Storage.StorageDomain WARNING Resource namespace cbf5910f-9993-4bd4-97e4-ccc23656404f_lvmActivationNS already registered
Nov 11 16:03:39 host02 vdsm Storage.StorageDomain WARNING Resource namespace 072b0933-1497-4368-a78c-36c2059a44a4_imageNS already registered
Nov 11 16:03:39 host02 vdsm Storage.StorageDomain WARNING Resource namespace 072b0933-1497-4368-a78c-36c2059a44a4_volumeNS already registered
Nov 11 16:03:45 host02 vdsm Storage.StorageDomain WARNING Resource namespace 03a23919-b1e8-4542-8287-8cb855aba8cd_imageNS already registered
Nov 11 16:03:45 host02 vdsm Storage.StorageDomain WARNING Resource namespace 03a23919-b1e8-4542-8287-8cb855aba8cd_volumeNS already registered
Nov 11 16:06:00 host02 <11>vdsm Storage.StorageDomainCache ERROR looking for unfetched domain cbf5910f-9993-4bd4-97e4-ccc23656404f
Nov 11 16:06:00 host02 <11>vdsm Storage.StorageDomainCache ERROR looking for domain cbf5910f-9993-4bd4-97e4-ccc23656404f
The "I am the actual vdsm" messages are INFO messages. There are no INFO messages at all in the vdsm log, which means logger.conf was changed manually by the customer. When logger.conf is changed manually, it is not overridden automatically on upgrade, and that is where the fix for the original bug was. To apply the fix, they should replace the following block in logger.conf:

-------------------------
[logger_Storage]
level=DEBUG
handlers=syslog,logfile
qualname=Storage
propagate=0
-------------------------

with this one:

-------------------------
[logger_Storage]
level=DEBUG
handlers=logfile
qualname=Storage
propagate=0
-------------------------

Please keep me updated; I'm leaving this bug open for now.
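For reference, logger.conf uses Python's logging.config.fileConfig format. A minimal, self-contained sketch (not vdsm's actual config machinery; file paths and the formatter section are invented for the demo) showing how a [logger_Storage] section with only the logfile handler behaves - Storage messages go to the log file, and nothing is handed to syslog because no syslog handler is attached:

```python
# Minimal sketch of how fileConfig interprets the recommended
# [logger_Storage] block. Paths and formatter are demo-only assumptions.
import logging
import logging.config
import os
import tempfile
import textwrap

logdir = tempfile.mkdtemp()
conf_path = os.path.join(logdir, "logger.conf")

with open(conf_path, "w") as f:
    f.write(textwrap.dedent("""
        [loggers]
        keys=root,Storage

        [handlers]
        keys=logfile

        [formatters]
        keys=simple

        [logger_root]
        level=DEBUG
        handlers=logfile

        [logger_Storage]
        level=DEBUG
        handlers=logfile
        qualname=Storage
        propagate=0

        [handler_logfile]
        class=FileHandler
        level=DEBUG
        formatter=simple
        args=('%(logdir)s/vdsm.log',)

        [formatter_simple]
        format=%(name)s %(levelname)s %(message)s
    """))

logging.config.fileConfig(conf_path, defaults={"logdir": logdir})

# Child loggers under the "Storage" qualname inherit this configuration.
log = logging.getLogger("Storage.StorageDomain")
log.warning("Resource namespace already registered")

with open(os.path.join(logdir, "vdsm.log")) as f:
    print(f.read().strip())
```

Because the Storage logger's level stays at DEBUG, INFO messages should still reach vdsm.log with this block; only the syslog destination is dropped.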
Vered, I do indeed see the difference between the code and the customer's settings. What I can say is:

1. The vdsm upgrade should handle this: if changes were applied to the logger config in one of the versions, those should be applied to the file on the upgraded environment as well, correct?

2. Why would this configuration matter? Please explain. I see it just removes the syslog handler. It does not change the log level, which is DEBUG and should include INFO as well.

Thank you, Marina.
(In reply to Marina from comment #6)

> 1. vdsm upgrade should handle this, and if there changes applied to the
> logger config on one of the versions, those should be applied to the file on
> the upgraded environment as well, correct?

I don't understand what you mean by "one of the versions". If changes are made to logger.conf in an upgrade, they should normally override the older logger.conf - UNLESS it has been modified by the user, in which case the changes in the new version (is this what you meant?) should be applied manually by that user to his older logger.conf.

> 2. Why would this configuration matter, please explain. I see it just
> removes the syslog handler.

syslog is actually /var/log/messages. When it is removed, storage messages are no longer logged there. BTW, this was the main fix in the original bug.

> It does not change the log level, which is DEBUG
> and should contain INFO as well.

You are correct, I'll look into that further. I hate to say
My mouse pad went crazy; the above comment was sent mid-sentence. So - I'm looking into it anyway, but let's make sure the user manually removes the syslog handler from the Storage logger, and then we can close this bug. I'll open another bug for the missing INFO messages and cc you.
Actually, looking at the release notes of the original bug, https://bugzilla.redhat.com/show_bug.cgi?id=920074, they say that logger.conf would not be updated. And after thinking about it again, that makes sense to me. I can ask the customer to replace this, and then indeed we can close the bug. But do we need to restart vdsmd for this change to take effect? On the other hand, we need to understand what is happening in their environment and why all those messages are there.
> But do we need to restart vdsmd to take affect of this change?

Yes.

> On the other hand, we need to understand what is happening on his
> environment and why all those messages are there.

Don't you mean missing? Please look at bug 1034330 and see if this is what you meant. Putting you on needinfo until we know this fixes the issue at the customer.
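To summarize the procedure for the customer, a hedged sketch of the edit itself (the real file lives at /etc/vdsm/logger.conf on the host - an assumed path - and vdsmd must be restarted afterwards, e.g. `service vdsmd restart`; this demo edits a temporary copy so it is safe to run anywhere):

```python
# Sketch: drop the syslog handler from [logger_Storage] only, leaving the
# DEBUG level and the logfile handler untouched. Demo operates on a temp
# file; on a real host point CONF at /etc/vdsm/logger.conf (assumed path)
# and restart vdsmd for the change to take effect.
import configparser
import tempfile

CONF = tempfile.NamedTemporaryFile(mode="w", suffix=".conf", delete=False).name
with open(CONF, "w") as f:
    f.write(
        "[logger_Storage]\n"
        "level=DEBUG\n"
        "handlers=syslog,logfile\n"
        "qualname=Storage\n"
        "propagate=0\n"
    )

cp = configparser.ConfigParser()
cp.read(CONF)

# Remove "syslog" from the handlers list of the Storage logger section only.
handlers = [h.strip() for h in cp["logger_Storage"]["handlers"].split(",")]
cp["logger_Storage"]["handlers"] = ",".join(h for h in handlers if h != "syslog")

with open(CONF, "w") as f:
    cp.write(f)

print(cp["logger_Storage"]["handlers"])  # -> logfile
```

Restricting the change to the [logger_Storage] section matters: other loggers in the file may legitimately keep their syslog handler.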
You are right. Based on the contents of the customer's logger.conf file, I think we should close this bug as not a bug. I will ask the customer to update their conf file and restart vdsmd, and if the problem reproduces, I will reopen the bug. Thank you!