Bug 1858236 - Errors related to GlusterGeoRepSyncJob are seen in engine.log, though no geo-rep session used
Summary: Errors related to GlusterGeoRepSyncJob are seen in engine.log, though no geo-rep session used
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: vdsm
Classification: oVirt
Component: Gluster
Version: ---
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Ritesh Chikatwar
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1858235
 
Reported: 2020-07-17 10:30 UTC by SATHEESARAN
Modified: 2020-10-28 11:45 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1858235
Environment:
Last Closed: 2020-10-20 08:06:00 UTC
oVirt Team: Gluster
Embargoed:


Attachments
engine.log (1.21 MB, application/octet-stream)
2020-07-17 10:32 UTC, SATHEESARAN
no flags

Description SATHEESARAN 2020-07-17 10:30:16 UTC
Description of problem:
-----------------------
A few Gluster geo-rep sync job related errors appear in engine.log, even though no geo-rep sessions have been created or used.


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHV manager 4.4.1-11

How reproducible:
-----------------
Always


Steps to Reproduce:
-------------------
1. Create the hyperconverged setup with self-hosted engine
2. Observe engine.log for GlusterGeoRepSyncJob

Actual results:
---------------
Errors related to GlusterGeoRepSyncJob are seen in engine.log

Expected results:
-----------------
No errors related to GlusterGeoRepSyncJob should be seen in engine.log

--- Additional comment from SATHEESARAN on 2020-07-17 10:28:48 UTC ---

[root@hostedenginesm3 ovirt-engine]# grep GlusterGeoRepSyncJob engine.log
2020-07-17 03:39:58,085Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-58) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 04:39:58,192Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 05:39:58,310Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 06:39:58,417Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 07:39:58,521Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-93) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 08:39:58,630Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-07-17 09:39:58,738Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterGeoRepSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-59) [] VDS error Geo Rep status failed: rc=2 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
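
Note on the err text above: it still contains literal "{self.cmd}"-style placeholders, which indicates that a message template reached the log without being formatted. Below is a minimal, hypothetical Python sketch of that pattern (not the actual vdsm or engine code; the class and attribute names are assumptions):

# Hypothetical sketch: how unformatted "{self.cmd}" placeholders can end up
# in a log line. If the raw template is passed around instead of str(error),
# the braces are never substituted.
class CommandError(Exception):
    # Template meant to be expanded with str.format(self=self).
    msg = "Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}"

    def __init__(self, cmd, rc, out, err):
        self.cmd = cmd
        self.rc = rc
        self.out = out
        self.err = err

    def __str__(self):
        return self.msg.format(self=self)

e = CommandError(["gluster", "volume", "geo-replication", "status"], 2, (), "")
print(str(e))   # formatted: Command ['gluster', ...] failed with rc=2 ...
print(e.msg)    # unformatted template, as seen in engine.log above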

Comment 1 SATHEESARAN 2020-07-17 10:32:07 UTC
Created attachment 1701534 [details]
engine.log

Comment 2 Nir Soffer 2020-10-05 14:22:41 UTC
This is a vdsm gluster bug; please move the bug to vdsm.

Comment 4 Yaniv Kaul 2020-10-15 13:33:27 UTC
Does that impact the customer in any way?

Comment 5 Ritesh Chikatwar 2020-10-19 06:21:21 UTC
It won't impact the customer. It just piles up the engine logs whenever we check whether a geo-rep session is configured.
oVirt runs this check on a scheduler whose default interval is 60 minutes, so the error appears on an hourly basis.

The error does not occur if geo-rep sessions are configured.
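
As a rough illustration of the behavior described above, here is a minimal hypothetical Python sketch of such an hourly status poll (not the actual engine/vdsm code; the command output handling and the "no session" message text are assumptions) that treats a missing geo-rep session as an empty result instead of an hourly ERROR:

# Hypothetical sketch of the hourly geo-rep status poll described above.
import logging
import subprocess

POLL_INTERVAL_MINUTES = 60  # default scheduler interval mentioned above

log = logging.getLogger("GlusterGeoRepSyncJob")

def geo_rep_status():
    cmd = ["gluster", "volume", "geo-replication", "status", "--xml"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # The gluster CLI exits non-zero when no session exists; without a
        # check like this, every poll logs an ERROR. The exact message text
        # checked here is an assumption, not taken from gluster sources.
        if "no active geo-replication sessions" in (result.stdout + result.stderr).lower():
            return []  # nothing configured, nothing to sync; not an error
        log.error("Geo Rep status failed: rc=%s err=%r", result.returncode, result.stderr)
        return None
    return result.stdout  # XML status for the caller to parse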

Comment 6 SATHEESARAN 2020-10-20 06:18:14 UTC
(In reply to Yaniv Kaul from comment #4)
> Does that impact the customer in any way?
I do not see any customer impact.
Only the logs grow, though not fast: just 24 lines per day.

Also, I see the patch that Ritesh pointed out, where the issue is being fixed in Gluster.
I'm inclined to close this bug as WON'T FIX for now; once Gluster fixes the issue, we can
use that version of Gluster with RHVH for RHHI-V.

What do you suggest?

Comment 7 Yaniv Kaul 2020-10-20 08:06:00 UTC
(In reply to SATHEESARAN from comment #6)
> (In reply to Yaniv Kaul from comment #4)
> > Does that impact the customer in any way?
> I do not see any customer impact.
> Only the logs grow, though not fast: just 24 lines per day.

That's not a big deal.

> 
> Also, I see the patch that Ritesh pointed out, where the issue is being fixed
> in Gluster.

The patch is abandoned - exactly because it was not important enough to work on.

> I'm inclined to close this bug as WON'T FIX for now; once Gluster fixes the
> issue, we can use that version of Gluster with RHVH for RHHI-V.
> 
> What do you suggest?

CLOSED-DEFERRED, at this point. If we ever get to fix it and it is backported downstream, we'll be happy to re-open it!

Comment 8 SATHEESARAN 2020-10-28 11:45:16 UTC
Removing the target milestone and release_ack

