Description of problem:
From the UI I see that no storage domains are problematic; everything is up and running fine. But I see traceback errors in the vdsm logs, as below:

...skippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 0e219131-57b3-4e25-a50b-eb4963fe2fce (cwd None)
Thread-8668::DEBUG::2016-04-27 10:09:37,690::lvm::290::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Volume group "0e219131-57b3-4e25-a50b-eb4963fe2fce" not found\n  Cannot process volume group 0e219131-57b3-4e25-a50b-eb4963fe2fce\n'; <rc> = 5
Thread-8668::WARNING::2016-04-27 10:09:37,692::lvm::375::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', '  Volume group "0e219131-57b3-4e25-a50b-eb4963fe2fce" not found', '  Cannot process volume group 0e219131-57b3-4e25-a50b-eb4963fe2fce']
Thread-8668::DEBUG::2016-04-27 10:09:37,692::lvm::415::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-8668::ERROR::2016-04-27 10:09:37,709::sdc::145::Storage.StorageDomainCache::(_findDomain) domain 0e219131-57b3-4e25-a50b-eb4963fe2fce not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'0e219131-57b3-4e25-a50b-eb4963fe2fce',)
Thread-8668::ERROR::2016-04-27 10:09:37,709::monitor::276::Storage.Monitor::(_monitorDomain) Error monitoring domain 0e219131-57b3-4e25-a50b-eb4963fe2fce
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 264, in _monitorDomain
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 769, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 323, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 100, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 173, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'0e219131-57b3-4e25-a50b-eb4963fe2fce',)
jsonrpc.Executor/6::DEBUG::2016-04-27 10:09:37,864::task::595::Storage.TaskManager.Task::(_updateState) Task=`8c64e3b3-c3d7-485e-9f9f-cfce4e1e7a86`::moving from state init -> state preparing

Version-Release number of selected component (if applicable):
vdsm-4.17.23.2-1.1.el7ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Install an HC setup and bring up all storage domains.

Actual results:
Traceback errors saying "Storage domain does not exist" are found in vdsm.log even though all the storage domains are up and running fine.

Expected results:
No traceback errors should be found.

Additional info:
vdsm and supervdsm logs from all the machines can be found in the link below. http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/1330827/
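The traceback path above (monitor thread -> sdCache.produce() -> _findUnfetchedDomain(), which raises StorageDomainDoesNotExist) can be sketched roughly as follows. This is a hypothetical, heavily simplified illustration, not the actual vdsm code: class and method names are modeled on the file paths in the traceback, and the backend scan is replaced by a plain set lookup.

```python
# Simplified sketch of vdsm's storage-domain cache lookup (hypothetical;
# names modeled on /usr/share/vdsm/storage/sdc.py from the traceback).

class StorageDomainDoesNotExist(Exception):
    """Raised when no backend can resolve the requested sdUUID."""


class StorageDomainCache(object):
    def __init__(self, known_domains):
        # known_domains: sdUUIDs that the (here, faked) backends can resolve.
        self._known = set(known_domains)

    def _findUnfetchedDomain(self, sdUUID):
        # Real vdsm scans each storage backend for the domain; if none of
        # them has it, this exception propagates up to the monitor thread
        # and is what appears in vdsm.log.
        if sdUUID not in self._known:
            raise StorageDomainDoesNotExist(sdUUID)
        return sdUUID

    def produce(self, sdUUID):
        # The domain monitor calls produce() periodically for every
        # monitored domain; a cache miss triggers the lookup above.
        return self._findUnfetchedDomain(sdUUID)


if __name__ == "__main__":
    cache = StorageDomainCache({"0e219131-57b3-4e25-a50b-eb4963fe2fce"})
    # A known domain resolves normally.
    print(cache.produce("0e219131-57b3-4e25-a50b-eb4963fe2fce"))
    # An unknown domain reproduces the error seen in the log.
    try:
        cache.produce("00000000-0000-0000-0000-000000000000")
    except StorageDomainDoesNotExist as e:
        print("StorageDomainDoesNotExist:", e)
```

The point of the sketch is that the error is raised inside the periodic monitor loop, which is why it can recur in vdsm.log even while the engine UI reports every domain as up.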
Moving to the first RC, since things should not be targeted to the second one at this point.
This happened right after creating a domain? If so, it's a dup (need to find the original bug).
What I observed is that these messages start appearing in vdsm.log after the domains are created in the UI.
(In reply to RamaKasturi from comment #4)
> What I observed is that these messages start appearing in vdsm.log after
> the domains are created in the UI.

Dup of bug 1344314 then?
I looked at the vdsm logs there and see the same traceback that I reported. I agree to close this as a dup of https://bugzilla.redhat.com/show_bug.cgi?id=1344314
*** This bug has been marked as a duplicate of bug 1344314 ***