Description of problem:
-----------------------
A few gluster geo-rep sync job errors are seen in engine.log, even though no geo-rep sessions were created or used.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHV manager 4.4.1-11

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create the hyperconverged setup with self-hosted engine
2. Observe engine.log for GlusterGeoRepSyncJob

Actual results:
---------------
Errors related to GlusterGeoRepSyncJob are seen in engine.log

Expected results:
-----------------
No errors related to GlusterGeoRepSyncJob should be seen in engine.log

--- Additional comment from SATHEESARAN on 2020-07-17 10:28:48 UTC ---

[root@hostedenginesm3 ovirt-engine]# grep GlusterGeoRepSyncJob engine.log
2020-07-17 03:39:58,085Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 04:39:58,192Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 05:39:58,310Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 06:39:58,417Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 07:39:58,521Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-93) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 08:39:58,630Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 09:39:58,738Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-59) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
Created attachment 1701534: engine.log
This is a vdsm gluster bug; please move the bug to vdsm.
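The giveaway in the log lines above is that {self.cmd}, {self.rc} and friends appear literally in the error text: the vdsm-side error message looks like a Python format string that was never interpolated (e.g. missing its f-prefix). A hypothetical minimal reproduction of the symptom; the class name and constructor below are made up and this is not vdsm's actual code:

# Hypothetical sketch, not vdsm code: reproduces the unformatted
# message seen in engine.log.
class GeoRepStatusError(Exception):
    def __init__(self, cmd, rc, out, err):
        self.cmd, self.rc, self.out, self.err = cmd, rc, out, err
        # Bug: a plain string, so the {...} placeholders are never
        # substituted and show up verbatim in the error text.
        self.message = 'Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}'
        # Fix: make it an f-string so the values are interpolated:
        # self.message = f'Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}'
        super().__init__(self.message)

err = GeoRepStatusError(['gluster', 'volume', 'geo-replication', 'status'], 2, (), [])
print(err.message)
# -> Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}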
Does that impact the customer in any way?
It won't impact the customer. It just piles up the engine log whenever we check whether a geo-rep session is configured. oVirt performs this check via a scheduler whose default interval is 60 minutes, so you will see the error on an hourly basis. The error does not occur if geo-rep sessions are configured.
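For illustration, a rough Python sketch of that polling behavior. Assumptions: the real check runs in the engine's Java scheduled executor and delegates to vdsm/gluster; the interval constant and function name below are invented for the sketch.

import subprocess
import threading

CHECK_INTERVAL_SECONDS = 60 * 60  # assumed default: one check per hour

def check_geo_rep_status():
    # Illustrative: ask gluster for geo-rep session status.
    result = subprocess.run(
        ['gluster', 'volume', 'geo-replication', 'status'],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # With no sessions configured, the command exits non-zero
        # (rc=2 in the log above) and gets recorded as an ERROR.
        print(f'Geo Rep status failed: rc={result.returncode}')
    # Re-arm the timer, yielding one such log line per hour.
    threading.Timer(CHECK_INTERVAL_SECONDS, check_geo_rep_status).start()

if __name__ == '__main__':
    check_geo_rep_status()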
(In reply to Yaniv Kaul from comment #4)
> Does that impact the customer in any way?

I do not see any customer impact. The logs grow, but not fast: only about 24 lines per day.

Also, I see there is a patch that Ritesh pointed out, where the issue is being fixed in gluster.

I'm inclined to CLOSE this bug as WON'T FIX for now; when gluster fixes the issue, we can make use of that version of gluster with RHVH for RHHI-V.

What do you suggest?
(In reply to SATHEESARAN from comment #6)
> (In reply to Yaniv Kaul from comment #4)
> > Does that impact the customer in any way?
> I do not see any customer impact. The logs grow, but not fast: only about
> 24 lines per day.

That's not a big deal.

> Also, I see there is a patch that Ritesh pointed out, where the issue is
> being fixed in gluster.

The patch is abandoned - exactly because it was not important enough to work on.

> I'm inclined to CLOSE this bug as WON'T FIX for now; when gluster fixes the
> issue, we can make use of that version of gluster with RHVH for RHHI-V.
>
> What do you suggest?

CLOSED-DEFERRED, at this point. If we ever get to fix it and it is backported downstream, we'll be happy to re-open it!
Removing the target milestone and release_ack