Bug 832941
| Summary: | hostname resolve fails on remote peer on 'peer probe' command | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | fous <honza801> |
| Component: | cli | Assignee: | Kaushal <kaushal> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.3.0 | CC: | gluster-bugs |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-06-19 13:33:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

fous 2012-06-18 08:57:29 UTC

After probing 'thathost' from 'thishost', run "gluster peer probe 'thishost'" on 'thathost'. This command should output something saying "'thishost' is already part of the cluster", but it should also update the peer status to use the hostname of 'thishost' instead of its IPv6 address.

fous

Hi, your solution works, and restarting glusterd also works, but it is a dirty workaround. Let's assume I have 10 peers: it is really uncomfortable to run the command on all of them for each other peer. (By the way, this is 10 x 9 = 90 commands!) Please fix the problem; this should definitely be considered a bug. Thanks, fous

Kaushal

Hi fous, you have made a wrong assumption. 10 peers don't require 90 commands to be executed, just 10: 9 probe commands from peer1 to the other peers, and 1 reverse probe from any one of those 9 peers back to peer1. glusterd handles syncing the hostname across the peers. For N peers, only N commands need to be issued for the cluster to be in sync with respect to the peers' hostnames. Also, AFAIK restarting glusterd does not update the IPs to hostnames. If you still feel that this is uncomfortable, leave this open; otherwise, please close it. Kaushal

fous

Hi, thanks for all the details. I still find this a little uncomfortable, but good enough. Thanks for all of your replies; marking closed. fous
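Kaushal's "N commands for N peers" scheme can be sketched as a small shell script. This is only an illustration with hypothetical peer names (`peer1` through `peer10`); it prints the commands rather than running them, so drop the surrounding `echo`/function machinery to execute for real:

```shell
#!/bin/sh
# Hypothetical peer names; peer1 is the node the forward probes run from.
PEERS="peer2 peer3 peer4 peer5 peer6 peer7 peer8 peer9 peer10"

# Print the N-1 forward probe commands (to be run on peer1).
forward_probes() {
    for host in $PEERS; do
        echo "gluster peer probe $host"
    done
}

forward_probes

# Plus exactly 1 reverse probe, run on any ONE of the other peers, so
# that glusterd replaces peer1's IP address with its hostname cluster-wide.
echo "gluster peer probe peer1   # run this single command on e.g. peer2"
```

For 10 peers this emits 9 forward probes plus 1 reverse probe, i.e. 10 commands total rather than 90, matching the explanation above.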