Bug 809724
Summary: | In RHS 2.0, geo-rep is faulty since it is trying to locate gsyncd on the wrong path. | ||
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Vijaykumar Koppad <vkoppad> |
Component: | geo-replication | Assignee: | Csaba Henk <csaba> |
Status: | CLOSED NOTABUG | QA Contact: | |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | ||
Version: | mainline | CC: | bbandari, enakai, gluster-bugs |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2012-04-05 00:09:30 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Vijaykumar Koppad 2012-04-04 08:15:49 UTC
Csaba Henk:

Which gsyncd is not being found (local or remote), and what kind of setup are you using (ssh-based or not)?

Vijaykumar Koppad:

I am doing an ssh-based setup. This is the error message it shows in the log file:

```
[2012-04-04 03:41:20.2859] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2012-04-04 03:41:20.3421] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2012-04-04 03:41:20.85971] I [gsyncd:355:main_i] <top>: syncing: gluster://localhost:doa -> ssh://172.17.251.152:/root/geo
[2012-04-04 03:41:40.323651] E [syncdutils:173:log_raise_exception] <top>: connection to peer is broken
[2012-04-04 03:41:40.324121] E [resource:166:errfail] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-FO87rp/gsycnd-ssh-%r@%h:%p 172.17.251.152 /usr/local/libexec/glusterfs/gsyncd --session-owner 4bae81ac-ce57-402f-a619-f5e71c8ef1e2 -N --listen --timeout 120 file:///root/geo" returned with 127, saying:
[2012-04-04 03:41:40.324336] E [resource:169:errfail] Popen: ssh> Warning: Identity file /var/lib/glusterd/geo-replication/secret.pem not accessible: No such file or directory.
[2012-04-04 03:41:40.324496] E [resource:169:errfail] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
[2012-04-04 03:41:40.324800] I [syncdutils:142:finalize] <top>: exiting.
[2012-04-04 03:41:50.336358] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2012-04-04 03:41:50.336667] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2012-04-04 03:41:50.380790] I [gsyncd:355:main_i] <top>: syncing: gluster://localhost:doa -> ssh://172.17.251.152:/root/geo
[2012-04-04 03:42:00.603565] E [syncdutils:173:log_raise_exception] <top>: connection to peer is broken
[2012-04-04 03:42:00.603945] E [resource:166:errfail] Popen: command "ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-h6m4Fl/gsycnd-ssh-%r@%h:%p 172.17.251.152 /usr/local/libexec/glusterfs/gsyncd --session-owner 4bae81ac-ce57-402f-a619-f5e71c8ef1e2 -N --listen --timeout 120 file:///root/geo" returned with 127, saying:
[2012-04-04 03:42:00.604176] E [resource:169:errfail] Popen: ssh> Warning: Identity file /var/lib/glusterd/geo-replication/secret.pem not accessible: No such file or directory.
[2012-04-04 03:42:00.604329] E [resource:169:errfail] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
[2012-04-04 03:42:00.604624] I [syncdutils:142:finalize] <top>: exiting.
```

Csaba Henk:

Yes, it is what I thought: you use ssh, and the remote gsyncd is not found. You can see that the "No such file or directory" statement comes from ssh (it appears under an "ssh>" prompt), so it is an issue on the remote side.

This is not a bug, it's a feature. You get this because you did not follow the documentation, i.e. the command enforcement with ssh key authentication described in "Red Hat Storage" section 9.2.5.1. It is always good when deviation from the suggested practice becomes clearly manifest! So just set up command enforcement and this will go away.

Actually, at the moment it is not as clear-cut as I put it above: in the current state of the docs, command enforcement is presented as optional, but in fact there is no reason not to merge it into 9.2.4, "To setup Geo-replication for SSH". I'll drop a mail about that to whom it concerns.
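For context, "command enforcement" here is sshd's forced-command mechanism: when an authorized_keys entry carries a command="..." option, sshd runs that command instead of whatever the client requests, so the wrong gsyncd path sent by the master stops mattering. The exit status 127 in the log above is the shell's "command not found", consistent with the "bash: ... No such file or directory" line. A minimal sketch of such an entry, assuming the slave's gsyncd was installed at /usr/libexec/glusterfs/gsyncd (the actual path depends on the build; the key material and trailing comment are placeholders):

```
# /root/.ssh/authorized_keys on the slave -- one line per key.
# With the command= option set, sshd always execs this gsyncd for
# connections authenticated by this key, regardless of the command
# (and gsyncd path) the master puts on the ssh command line.
command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3NzaC1yc2E... root@master-node
```

Alternatively, the master can be pointed at the slave's real gsyncd location through the geo-replication config interface. A hedged example using the volume and slave from the log above, assuming the tunable is named remote-gsyncd on this build (verify the exact key name with `gluster volume geo-replication doa 172.17.251.152:/root/geo config`):

```
# On the master: override the gsyncd path invoked on the remote side.
gluster volume geo-replication doa 172.17.251.152:/root/geo \
    config remote-gsyncd /usr/libexec/glusterfs/gsyncd
```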
*** Bug 820142 has been marked as a duplicate of this bug. ***