Bug 961703
| Summary: | glusterd: gluster peer status shows the node itself (from where the command is given) in the peer list, and because of that gluster volume create/stop/delete/status commands always fail | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rachana Patel <racpatel> |
| Component: | glusterd | Assignee: | krishnan parthasarathi <kparthas> |
| Status: | CLOSED ERRATA | QA Contact: | amainkar |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.1 | CC: | amarts, nsathyan, rhs-bugs, sdharane, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.8rhs-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-09-23 22:39:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Rachana Patel
2013-05-10 10:41:53 UTC
verified on 3.4.0.8rhs-1.el6rhs.x86_64

Had a cluster of 3 nodes; on one node:

```
[root@mia ~]# ifconfig | grep inet
inet addr:10.70.34.92
[root@mia ~]# hostname
mia.lab.eng.blr.redhat.com
[root@mia ~]# gluster peer status
Number of Peers: 3

Hostname: fan.lab.eng.blr.redhat.com
Uuid: c6dfd028-d46f-4d20-a9c6-17c04e7fb919
State: Peer in Cluster (Connected)

Hostname: mia.lab.eng.blr.redhat.com
Uuid: 1698dc55-2245-4b20-9b8c-60fbe77a06ff
State: Peer in Cluster (Connected)

Hostname: 10.70.34.80
Uuid: ababf76c-a741-4e27-a6bb-93da035d8fd7
State: Peer in Cluster (Connected)
```

----> it shows mia in the peer list when queried from mia itself. As a result of this, gluster volume create/stop/delete/status commands always fail, e.g.:

```
[root@mia ~]# gluster volume create abc mia.lab.eng.blr.redhat.com:/rhs/brick1/ll fred.lab.eng.blr.redhat.com:/rhs/brick1/ll
volume create: abc: failed: Another transaction could be in progress. Please try again after sometime.
```

Update from Kaushal on the RPM issue faced until 3.4.0.8rhs (which has all the right fixes):

------------------

Hi all,

Another small update on the steps to be taken when updating from build 7 to any newer release:

1. Backup /var/lib/glusterd
2. Upgrade
3. Stop gluster
4. Restore /var/lib/glusterd
5. Delete the /var/lib/glusterd/options file if it is empty. It will be recreated by glusterd.
6. Start gluster and continue with your testing.

The /var/lib/glusterd/options file being empty causes syncing problems on glusterd restart. Build 7 cleared this file; if you hadn't done any server-quorum tests with build 7, this file is most probably still empty. So, if anyone is facing volume syncing issues, do step 5 and restart glusterd.

Thanks,
Kaushal

----- Original Message -----
> From: "Kaushal M" <kaushal>
> To: storage-qa
> Sent: Wednesday, May 15, 2013 12:10:08 PM
> Subject: Re: Warning on upgrade from gluster v3.4.0.7 to v3.4.0.8
>
> A small clarification. The upgrade will not delete all the files in
> /var/lib/glusterd.
> Only some files/directories, like glusterd.info and the nfs
> directory, can be deleted. This is due to a packaging bug in build 7, in
> which these files/directories were a part of the package itself.
> This may be avoided by uninstalling and installing, instead of an upgrade (I
> haven't tested this). But to be on the safer side, backup and restore the
> /var/lib/glusterd directory.
>
> - Kaushal
>
> ----- Original Message -----
>> From: "Kaushal M" <kaushal>
>> To: storage-qa
>> Sent: Wednesday, May 15, 2013 11:48:05 AM
>> Subject: Warning on upgrade from gluster v3.4.0.7 to v3.4.0.8
>>
>> Hi all,
>>
>> Because of bugs in the packaging of build 7, an upgrade from build 7 to build 8
>> will cause files in /var/lib/glusterd/ to be deleted. As you can probably guess,
>> this will lead to all sorts of problems.
>> So, before upgrading, back up your /var/lib/glusterd directory. Follow the
>> steps below to make sure you don't break your existing setup:
>>
>> 1. Backup /var/lib/glusterd
>> 2. Upgrade
>> 3. Stop gluster
>> 4. Restore /var/lib/glusterd
>> 5. Start gluster and continue with your testing.
>>
>> Regards,
>> Kaushal

verified on 3.4.0.8rhs-1.el6rhs.x86_64, without rpm upgrade (removed the old and installed the new rpm), and it works fine, so changing status to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html
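The backup/restore workaround from Kaushal's mail above (backup, upgrade, restore, drop the empty options file) can be sketched as a shell simulation. This is a safe-to-run illustration against a scratch directory, not the real /var/lib/glusterd: the file names glusterd.info, nfs/ and options come from the comments above, while the "upgrade" step that deletes files merely simulates the reported build-7 packaging bug.

```shell
#!/bin/sh
# Simulation of the build-7 -> build-8 upgrade workaround, using scratch
# directories so it cannot touch a real glusterd installation.
set -e

GLUSTERD_DIR=$(mktemp -d)/glusterd
BACKUP_DIR=$(mktemp -d)/backup

# Fake pre-upgrade state: UUID file, nfs directory, and the options file
# that build 7 left empty (the reported cause of volume-sync problems).
mkdir -p "$GLUSTERD_DIR/nfs"
echo "UUID=1698dc55-2245-4b20-9b8c-60fbe77a06ff" > "$GLUSTERD_DIR/glusterd.info"
: > "$GLUSTERD_DIR/options"

# 1. Backup /var/lib/glusterd.
cp -a "$GLUSTERD_DIR" "$BACKUP_DIR"

# 2.+3. Upgrade, then stop gluster. Per the comments, the build-7 packaging
# bug can delete glusterd.info and the nfs directory during the RPM upgrade:
rm -f "$GLUSTERD_DIR/glusterd.info"
rm -rf "$GLUSTERD_DIR/nfs"

# 4. Restore /var/lib/glusterd from the backup.
rm -rf "$GLUSTERD_DIR"
cp -a "$BACKUP_DIR" "$GLUSTERD_DIR"

# 5. Delete the options file only if it is empty; glusterd recreates it.
[ -s "$GLUSTERD_DIR/options" ] || rm -f "$GLUSTERD_DIR/options"

# 6. Start gluster and continue testing. Here we just list the restored state.
ls "$GLUSTERD_DIR"
```

On a real node, steps 2, 3 and 6 would be the actual RPM upgrade and the glusterd service stop/start; the key point the simulation shows is that the restore brings back glusterd.info and nfs/, while the empty options file is dropped so glusterd can recreate it.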