Bug 1277570 - NullPointer exception reported when vdsm erroneously report storage pools as R/O
Status: CLOSED CURRENTRELEASE
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
Assigned To: Fred Rolland
QA Contact: Aharon Canan
Whiteboard: storage
Depends On:
Blocks:
Reported: 2015-11-03 09:57 EST by Kevin Alon Goldblatt
Modified: 2016-02-10 11:47 EST (History)
12 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-01-13 09:40:20 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: ovirt-3.6.z+
ylavi: planning_ack+
tnisan: devel_ack+
rule-engine: testing_ack+


Attachments
engine, vdsm, server, journal logs (3.14 MB, application/x-gzip)
2015-11-03 09:58 EST, Kevin Alon Goldblatt
no flags Details


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 48258 master MERGED engine: NPE in GLUSTERFSStorageHelper Never
oVirt gerrit 48378 ovirt-engine-3.6 MERGED engine: NPE in GLUSTERFSStorageHelper Never

None
Description Kevin Alon Goldblatt 2015-11-03 09:57:05 EST
Description of problem:
The engine reports a NullPointerException when VDSM erroneously reports all storage pools as read-only and moves all storage domains to Down.

Version-Release number of selected component (if applicable):
vdsm-4.17.8-1.el7ev.noarch
rhevm-3.6.0-0.18.el6.noarch


How reproducible:
Happened on at least 2 occasions over the last few days

Steps to Reproduce:
1. Have a RHEV-M setup with at least 12 storage domains (4 NFS including 2 masters, 8 Gluster domains and 2 block domains)
2. During a test session that included creating Gluster domains, while the system was idle for about an hour, VDSM erroneously reported errors monitoring each of the storage pools and claimed the file system was R/O. All the storage domains were set to Down.
3. Upon investigation on the host, all file systems are actually R/W and no error is reported when accessing them.
4. The engine reports a NullPointerException.


Actual results:
All the storage pools are reported to be read-only, and the engine reports a NullPointerException

Expected results:
All the storage pools should be accessible

Additional info:




ENGINE LOG
-----------------------------
2015-11-03 11:47:25,276 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-40) [] domain 'd4f33bae-ab63-4b46-9330-cd98544e9a6b:gluster4444' in problem. vds: 'blond-vdsf'
2015-11-03 11:47:25,277 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-40) [] Host '42c269ef-04fd-4815-9482-fb9f463c336d' has reported new storage access problem to the following domains 'd4f33bae-ab63-4b46-9330-cd98544e9a6b' marking it for storage connections and pool metadata refresh (report id: 'c9775a64-7e00-4bfc-aa6c-953649a97700')
2015-11-03 11:47:31,076 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-45) [2f93570f] Running storage connections refresh for hosts '[42c269ef-04fd-4815-9482-fb9f463c336d]'
2015-11-03 11:47:31,078 INFO  [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-7-thread-29) [7b74054d] Running command: ConnectHostToStoragePoolServersCommand internal: true. Entities affected :  ID: c5f8a645-549e-4e3f-8a65-c669b4665852 Type: StoragePool
2015-11-03 11:47:31,133 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-7-thread-29) [7b74054d] START, ConnectStorageServerVDSCommand(HostName = blond-vdsf, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='42c269ef-04fd-4815-9482-fb9f463c336d', storagePoolId='c5f8a645-549e-4e3f-8a65-c669b4665852', storageType='ISCSI', connectionList='[StorageServerConnections:{id='f0d2bce9-c866-4488-a9c3-bae29ca29056', connection='10.35.160.104', iqn='iqn.1992-04.com.emc:cx.ckm00121000438.a6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='e0426226-afd3-4bf7-a296-a2f30f643f73', connection='10.35.160.105', iqn='iqn.1992-04.com.emc:cx.ckm00121000438.a7', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='b8acb774-afa6-42d1-8217-b24b49f5b43a', connection='10.35.160.106', iqn='iqn.1992-04.com.emc:cx.ckm00121000438.b6', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='221242ef-cb2c-4fa2-aa89-e6b9ae9a3bc3', connection='10.35.160.107', iqn='iqn.1992-04.com.emc:cx.ckm00121000438.b7', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 36b4ceed
2015-11-03 11:47:40,367 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-32) [] domain '00ed0b5c-4fa3-4e12-98c2-47511ac6d2c7:gluster446' in problem. vds: 'blond-vdsf'
2015-11-03 11:47:40,371 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-32) [] domain '67d3271b-b1c9-4401-8fa0-0b9ced7c53e7:gluster4448' in problem. vds: 'blond-vdsf'
2015-11-03 11:47:40,372 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-32) [] domain 'bb9854bb-798b-4143-b68c-fda3879eb474:gluster4443' in problem. vds: 'blond-vdsf'
2015-11-03 11:47:40,373 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-32) [] Host '42c269ef-04fd-4815-9482-fb9f463c336d' has reported new storage access problem to the following domains '00ed0b5c-4fa3-4e12-98c2-47511ac6d2c7,d4f33bae-ab63-4b46-9330-cd98544e9a6b,67d3271b-b1c9-4401-8fa0-0b9ced7c53e7,bb9854bb-798b-4143-b68c-fda3879eb474' marking it for storage connections and pool metadata refresh (report id: '31384c34-0c53-4746-8ca7-d00552a0b9fb')
2015-11-03 11:47:44,097 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-7-thread-29) [7b74054d] FINISH, ConnectStorageServerVDSCommand, return: {221242ef-cb2c-4fa2-aa89-e6b9ae9a3bc3=0, f0d2bce9-c866-4488-a9c3-bae29ca29056=0, e0426226-afd3-4bf7-a296-a2f30f643f73=0, b8acb774-afa6-42d1-8217-b24b49f5b43a=0}, log id: 36b4ceed
2015-11-03 11:47:44,098 ERROR [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-7-thread-29) [7b74054d] Command 'org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand' failed: null
2015-11-03 11:47:44,098 ERROR [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-7-thread-29) [7b74054d] Exception: java.lang.NullPointerException
        at org.ovirt.engine.core.bll.storage.GLUSTERFSStorageHelper.prepareConnectHostToStoragePoolServers(GLUSTERFSStorageHelper.java:54) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand.connectStorageServersByType(ConnectHostToStoragePoolServersCommand.java:64) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand.connectStorageServer(ConnectHostToStoragePoolServersCommand.java:55) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand.executeCommand(ConnectHostToStoragePoolServersCommand.java:42) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1211) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1355) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1979) [bll.jar:]
        at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:174) [utils.jar:]
        at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:116) [utils.jar:]
        at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1392) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:374) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:475) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:457) [bll.jar:]
        at org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:404) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source) [:1.8.0_51]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_51]
        at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_51]
        at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:374) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:74) [jboss-as-weld.jar:7.5.3.Final-redhat-2]
        at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:84) [jboss-as-weld.jar:7.5.3.Final-redhat-2]
        at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:97) [jboss-as-weld.jar:7.5.3.Final-redhat-2]
        at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:93) [jboss-as-weld.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:52) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:266) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:377) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:246) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59) [jboss-as-ejb3.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:185) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.2.Final-redhat-1]
        at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73) [jboss-as-ee.jar:7.5.3.Final-redhat-2]
        at org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view2.runInternalAction(Unknown Source) [bll.jar:]
        at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source) [:1.8.0_51]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_51]
        at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_51]
        at org.jboss.weld.util.reflection.SecureReflections$13.work(SecureReflections.java:267) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.jboss.weld.util.reflection.SecureReflectionAccess.run(SecureReflectionAccess.java:52) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.jboss.weld.util.reflection.SecureReflectionAccess.runAsInvocation(SecureReflectionAccess.java:137) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.jboss.weld.util.reflection.SecureReflections.invoke(SecureReflections.java:263) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:115) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:105) [weld-core.jar:1.1.30.Final-redhat-1]
        at org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$142795634$Proxy$_$$_Weld$Proxy$.runInternalAction(BackendCommandObjectsHandler$BackendInternal$BackendLocal$142795634$Proxy$_$$_Weld$Proxy$.java) [bll.jar:]
        at org.ovirt.engine.core.bll.VdsEventListener.connectHostToDomainsInActiveOrUnknownStatus(VdsEventListener.java:344) [bll.jar:]
        at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData$9.call(IrsProxyData.java:1537) [vdsbroker.jar:]
        at org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData$9.call(IrsProxyData.java:1533) [vdsbroker.jar:]
        at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:114) [utils.jar:]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_51]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_51]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_51]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_51]

VDSM LOG
---------------------------

Thread-498::DEBUG::2015-11-03 12:47:37,693::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n339 bytes (339 B) copied, 0.00478487 s, 70.8 kB/s\n'; <rc> = 0
Thread-498::INFO::2015-11-03 12:47:37,697::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring host id for domain 67d3271b-b1c9-4401-8fa0-0b9ced7c53e7 (id: 1)
Thread-498::DEBUG::2015-11-03 12:47:37,698::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for domain 67d3271b-b1c9-4401-8fa0-0b9ced7c53e7 successfully acquired (id: 1)
Thread-473::DEBUG::2015-11-03 12:47:37,963::monitor::288::Storage.Monitor::(_refreshDomain) Refreshing domain d4f33bae-ab63-4b46-9330-cd98544e9a6b
Thread-473::DEBUG::2015-11-03 12:47:37,987::fileSD::157::Storage.StorageDomainManifest::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/10.35.160.6:_gluster4444/d4f33bae-ab63-4b46-9330-cd98544e9a6b
Thread-473::DEBUG::2015-11-03 12:47:37,992::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-473::ERROR::2015-11-03 12:47:37,999::monitor::250::Storage.Monitor::(_monitorDomain) Error monitoring domain d4f33bae-ab63-4b46-9330-cd98544e9a6b
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 246, in _monitorDomain
    self._performDomainSelftest()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 774, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 313, in _performDomainSelftest
    self.domain.selftest()
  File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
    return getattr(self.getRealDomain(), attrName)
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 123, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/glusterSD.py", line 32, in findDomain
    return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/fileSD.py", line 198, in __init__
    validateFileSystemFeatures(manifest.sdUUID, manifest.mountpoint)
  File "/usr/share/vdsm/storage/fileSD.py", line 93, in validateFileSystemFeatures
    oop.getProcessPool(sdUUID).directTouch(testFilePath)
  File "/usr/share/vdsm/storage/outOfProcess.py", line 350, in directTouch
    ioproc.touch(path, flags, mode)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 507, in touch
    self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 391, in _sendCommand
    raise OSError(errcode, errstr)
OSError: [Errno 30] Read-only file system
Thread-473::INFO::2015-11-03 12:47:38,000::monitor::273::Storage.Monitor::(_notifyStatusChanges) Domain d4f33bae-ab63-4b46-9330-cd98544e9a6b became INVALID
Thread-265864::DEBUG::2015-11-03 12:47:38,001::misc::777::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
Thread-265864::DEBUG::2015-11-03 12:47:38,002::misc::787::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Calling registered method `contEIOVms`
Thread-265864::DEBUG::2015-11-03 12:47:38,003::misc::797::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Event emitted
Reactor thread::INFO::2015-11-03 12:47:38,317::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:34022
Reactor thread::DEBUG::2015-11-03 12:47:38,329::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-03 12:47:38,330::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:34022
Reactor thread::DEBUG::2015-11-03 12:47:38,331::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 34022)
BindingXMLRPC::INFO::2015-11-03 12:47:38,331::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:34022
Thread-265866::INFO::2015-11-03 12:47:38,332::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:34022 started
Thread-265866::INFO::2015-11-03 12:47:38,346::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:34022 stopped
Comment 1 Kevin Alon Goldblatt 2015-11-03 09:58 EST
Created attachment 1089062 [details]
engine, vdsm, server, journal logs

Added Logs
Comment 4 Red Hat Bugzilla Rules Engine 2015-11-15 10:37:58 EST
Fixed bug tickets must have version flags set prior to fixing them. Please set the correct version flags and move the bugs back to the previous status after this is corrected.
Comment 5 Red Hat Bugzilla Rules Engine 2015-11-15 10:37:58 EST
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset.
Please set the correct milestone or add the z-stream flag.
Comment 6 Red Hat Bugzilla Rules Engine 2015-11-15 10:51:45 EST
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Comment 7 Aharon Canan 2015-12-29 04:20:12 EST
Can you please provide verification steps ?
Comment 8 Fred Rolland 2016-01-03 02:33:41 EST
The null pointer occurs when a Gluster storage domain comes back to Active from another state. The trigger is a VDSM status report.
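In other words, GLUSTERFSStorageHelper.prepareConnectHostToStoragePoolServers dereferences a host lookup that can return null when the domain transitions back to Active. A minimal sketch of the failing pattern and the defensive guard (class, method, and field names here are illustrative stand-ins, not the actual engine code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a helper that looks up a host by ID before
// connecting it to storage pool servers. The lookup can return null
// (e.g. during a status transition), which previously caused the NPE.
public class NpeGuardSketch {
    // Stand-in for the DAO that resolves hosts by ID.
    static final Map<String, String> vdsById = new HashMap<>();

    static boolean canConnectHost(String vdsId) {
        String vds = vdsById.get(vdsId);
        if (vds == null) {
            // Guard in the spirit of the fix: bail out instead of
            // dereferencing a null host and throwing NullPointerException.
            return false;
        }
        return vds.startsWith("Up");
    }

    public static void main(String[] args) {
        vdsById.put("42c269ef", "Up:blond-vdsf");
        System.out.println(canConnectHost("42c269ef")); // true
        System.out.println(canConnectHost("missing"));  // false, no NPE
    }
}
```

The actual fix (gerrit 48258/48378) lives in the engine's BLL storage helpers; this fragment only illustrates why a null check around the host lookup suppresses the stack trace seen above.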
Comment 9 Aharon Canan 2016-01-03 05:28:56 EST
Verified, 

vdsm-4.17.15-0.el7ev.noarch
rhevm-3.6.2-0.1.el6.noarch


Verification steps
==================
1. Block Gluster SD from hosts (using iptables)

2016-01-03 10:21:50,458 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-41) [5457e38d] Correlation ID: 5457e38d, Job ID: db2822d4-c837-495d-a1f3-a31ea0d591e4, Call Stack: null, Custom Event ID: -1, Message: Storage Domain Gluster (Data Center Default) was deactivated by system because it's not visible by any of the hosts.

2. Unblock the connection

2016-01-03 10:24:43,953 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-31) [] Storage Domain '7f98a0a8-cafe-4986-8025-b275edf0a52c:Gluster' was reported by Host 'green-vdsc' as Active in Pool '00000001-0001-0001-0001-0000000000a8', moving to active status

3. No NPE on engine.
Comment 10 Sandro Bonazzola 2016-01-13 09:40:20 EST
oVirt 3.6.1 has been released, closing current release
