| Summary: | When creating a VM from a template, peer UUIDs are the same | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Amaya Rosa Gil Pippino <amaya> |
| Component: | glusterfs | Assignee: | Raghavendra Bhat <rabhat> |
| Status: | CLOSED ERRATA | QA Contact: | Rejy M Cyriac <rcyriac> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 1.0 | CC: | amarts, amaya, gluster-bugs, grajaiya, hateya, nsathyan, pablo.iranzo, psriniva, rabhat, rcyriac, sdharane, shaines |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0qa5-1 | Doc Type: | Bug Fix |
| Doc Text: | Previously, when a Red Hat Storage server virtual machine that had a generated glusterd UUID was used as a template for Red Hat Storage server virtual machines in a Red Hat Enterprise Virtualization environment, the template inherited the glusterd UUID. All the virtual machines that were created from this template would get the same glusterd UUID, preventing them from being peer probed. Now a 'gluster system:: uuid reset' command is available to reset the UUID of the local glusterd in the Red Hat Storage servers, thus enabling a proper 'peer probe'. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-09-16 03:22:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description Amaya Rosa Gil Pippino 2012-04-11 09:19:19 UTC
Maybe a patch to initscripts and sys-unconfig (verified during boot at rc.sysinit) could do the trick, but IMHO a cleaner approach would be to create /etc/unconfigured.d/ scripts, so that each program could drop in the required steps to be executed during the boot process.

Yes, actually, I performed sys-unconfig prior to creating the template. Or maybe a command to change the UUID, like 'gluster peer resetuid <HOST>'.

More of a feature request for now, as this is a known limitation. We will come up with a proposal to solve this soon.

*** Bug 829342 has been marked as a duplicate of this bug. ***

http://review.gluster.org/3637 (with the patch, it will be possible to get out of this case).

CHANGE: http://review.gluster.org/3637 (glusterd, cli: implement gluster system uuid reset command) merged in master by Anand Avati (avati)

*** Bug 874565 has been marked as a duplicate of this bug. ***

Testing was done on the following environment:

Red Hat Storage: 2.1 (glusterfs-3.4.0.8rhs-1.el6rhs.x86_64)
RHEVM: 3.2 (3.2.0-10.21.master.el6ev)
Hypervisor: RHEL 6.4

---------------------------------------------------------

I found that there are some discrepancies between the information available in this BZ and what is currently available in RHS 2.1. I thought it would be useful to have them outlined here for confirmation, and possibly for later documentation.

1) The BZ description shows that the local glusterd UUID is stored in the file '/etc/glusterd/glusterd.info'. Currently the location is '/var/lib/glusterd/glusterd.info'.

2) The BZ provides information that the 'glusterd.info' file is created at the first run of the glusterd daemon. I understand that this has now changed to the first gluster operation that requires a unique local glusterd UUID.

3) The BZ provides information that the failure due to duplicate UUIDs among the RHS clones happens at the 'add-brick' stage of a volume. Currently the failure happens much earlier, at the 'peer probe' stage itself.

4) The BZ provides the link http://review.gluster.org/3637 for the change implemented to fix the reported issue. The commit message for this includes "To handle it gluster peer reset command is implemented which upon execution changes the uuid of local glusterd." But there is no 'gluster peer reset' command available in RHS 2.1. Instead, I believe that the command implemented to fix the issue is 'gluster system:: uuid reset'.

5) The recommended way to go about cloning RHS VM systems may need to be documented, to avoid the possible pitfalls (see the sketch after this comment).

I need confirmation that the above statements are as expected, and that they may be used to verify the fix.
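As an illustration of the documentation suggested in item 5, here is a minimal sketch of one way a Red Hat Storage server could be prepared before being converted into a template. The script name is hypothetical, and the sketch assumes, per the behaviour described in this bug, that glusterd regenerates '/var/lib/glusterd/glusterd.info' on the first operation that needs a unique local UUID.

```sh
#!/bin/sh
# seal-rhs-template.sh (hypothetical): run on a Red Hat Storage server
# VM just before converting it into a template, so that clones do not
# inherit the template's glusterd UUID.

# Stop glusterd so it does not recreate the state file while we work.
service glusterd stop

# Remove the stored UUID; a fresh one is generated on the first gluster
# operation that requires it (e.g. 'gluster peer probe').
rm -f /var/lib/glusterd/glusterd.info
```

Clones created from a template prepared this way should each generate their own UUID on first use, making a later 'gluster system:: uuid reset' unnecessary.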
(In reply to Rejy M Cyriac from comment #9)
> 1) The BZ description shows that the local glusterd UUID is stored in the
> file '/etc/glusterd/glusterd.info'. Currently the location is
> '/var/lib/glusterd/glusterd.info'.

Yes, that's right.

> 2) The BZ provides information that the 'glusterd.info' file is created at
> the first run of the glusterd daemon. I understand that this has now changed
> to the first gluster operation that requires a unique local glusterd UUID.

Yes, that's right.

> 3) The BZ provides information that the failure due to duplicate UUIDs among
> the RHS clones happens at the 'add-brick' stage of a volume. Currently the
> failure happens much earlier, at the 'peer probe' stage itself.

Yes.

> 4) The BZ provides the link http://review.gluster.org/3637 for the change
> implemented to fix the reported issue. The commit message for this includes
> "To handle it gluster peer reset command is implemented which upon execution
> changes the uuid of local glusterd."
> But there is no 'gluster peer reset' command available in RHS 2.1. Instead,
> I believe that the command implemented to fix the issue is 'gluster system::
> uuid reset'.

Yes, it is 'gluster system:: uuid reset'. (At the time the patch was started, the command was to be 'gluster peer reset', but it was later decided to change it to 'gluster system:: uuid reset'.)

> 5) The recommended way to go about cloning RHS VM systems may need to be
> documented, to avoid the possible pitfalls.

Yes.

> I need confirmation that the above statements are as expected, and that they
> may be used to verify the fix.

Verified - the 'gluster system:: uuid reset' command may be used to reset the UUID of the local glusterd in Red Hat Storage server virtual machines that have been created from a template of a Red Hat Storage server with a previously generated UUID.

Previously, the 'glusterd.info' file was created at the first run of the glusterd daemon. This has now changed to the first gluster operation that requires a unique local glusterd UUID, so there is now less chance that a Red Hat Storage server created specifically for the purpose of template creation would already have a generated UUID.

Again, previously the failure due to duplicate UUIDs among the Red Hat Storage virtual machine clones happened only at the 'add-brick' stage of a volume. Currently the failure happens much earlier, at the 'peer probe' stage itself.

----------------------------------------------------------------------------

Red Hat Storage template system:

```
[root@localhost ~]# service glusterd status
glusterd (pid 1342) is running...
[root@localhost ~]# ls /var/lib/glusterd/
geo-replication  glustershd  groups  hooks  nfs  options  peers  vols
[root@localhost ~]# gluster peer status
Number of Peers: 0
[root@localhost ~]# ls /var/lib/glusterd/
geo-replication  glustershd  groups  hooks  nfs  options  peers  vols
[root@localhost ~]# gluster peer probe rhs-client10.lab.eng.blr.redhat.com
peer probe: success
[root@localhost ~]# ls /var/lib/glusterd/
geo-replication  glustershd  hooks  options  vols
glusterd.info    groups      nfs    peers
[root@localhost ~]# gluster peer detach rhs-client10.lab.eng.blr.redhat.com
peer detach: success
[root@localhost ~]# gluster peer status
Number of Peers: 0
[root@localhost ~]# cat /var/lib/glusterd/glusterd.info
UUID=a82a6129-db95-4751-be88-357d9287e9f7
operating-version=1
```

From Clone VM01:

```
[root@localhost ~]# cat /var/lib/glusterd/glusterd.info
UUID=a82a6129-db95-4751-be88-357d9287e9f7
operating-version=1
[root@localhost ~]# gluster peer probe 10.70.37.113
peer probe: failed: Peer uuid (host 10.70.37.113) is same as local uuid
[root@localhost ~]# gluster system:: uuid reset
Resetting uuid changes the uuid of local glusterd. Do you want to continue? (y/n) y
resetting the peer uuid has been successful
[root@localhost ~]# cat /var/lib/glusterd/glusterd.info
UUID=c5dc3631-371f-42bd-a605-a225ed9e5698
operating-version=1
[root@localhost ~]# gluster peer probe 10.70.37.113
peer probe: success
[root@localhost ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.37.113
Port: 24007
Uuid: 0d201a74-d335-4a20-8964-8843801cadcf
State: Peer in Cluster (Connected)
[root@localhost ~]#
```

From Clone VM02:

```
[root@localhost ~]# cat /var/lib/glusterd/glusterd.info
UUID=a82a6129-db95-4751-be88-357d9287e9f7
operating-version=1
[root@localhost ~]# gluster peer probe 10.70.37.215
peer probe: failed: Peer uuid (host 10.70.37.215) is same as local uuid
[root@localhost ~]# gluster system:: uuid reset
Resetting uuid changes the uuid of local glusterd. Do you want to continue? (y/n) y
resetting the peer uuid has been successful
[root@localhost ~]# cat /var/lib/glusterd/glusterd.info
UUID=0d201a74-d335-4a20-8964-8843801cadcf
operating-version=1
[root@localhost ~]# gluster peer status
Number of Peers: 1

Hostname: 10.70.37.215
Port: 24007
Uuid: c5dc3631-371f-42bd-a605-a225ed9e5698
State: Peer in Cluster (Connected)
```

----------------------------------------------------------------------------
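When a batch of already-cloned servers has to be fixed, the interactive (y/n) prompt shown in the transcripts above gets in the way of automation. A minimal sketch, assuming the gluster CLI's script mode ('--mode=script') suppresses the confirmation here as it does for other prompted commands:

```sh
# Reset the local glusterd UUID non-interactively, then display the
# regenerated UUID (the same check used in the transcripts above).
gluster --mode=script system:: uuid reset
cat /var/lib/glusterd/glusterd.info
```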
Kindly review the edited doc text for technical accuracy and sign off.

My take on how the doc text might be worded better:

Previously, when a Red Hat Storage server virtual machine that has a generated glusterd UUID is used as a template for Red Hat Storage server virtual machines in a Red Hat Enterprise Virtualization environment, the template inherits the glusterd UUID. All the virtual machines that are created from this template then get the same glusterd UUID, and Red Hat Storage servers with the same glusterd UUID cannot be added to the same trusted pool. A 'gluster system:: uuid reset' command is now available to reset the UUID of the local glusterd in Red Hat Storage servers.

Cheers, rejy (rmc)

Pavithra, looking at it again, I think we need to take out the word 'Previously' from the beginning. This is because the specified behaviour has not changed, and is not going to be changed, as it is not a bug per se. Rather, we have provided a tool to resolve the behaviour's possible impact on RHS servers. So how about:

When a Red Hat Storage server virtual machine that has a generated glusterd UUID is used as a template for Red Hat Storage server virtual machines in a Red Hat Enterprise Virtualization environment, the template inherits the glusterd UUID. All the virtual machines that are created from this template then get the same glusterd UUID, and Red Hat Storage servers with the same glusterd UUID cannot be added to the same trusted pool. A 'gluster system:: uuid reset' command is now available to reset the UUID of the local glusterd in Red Hat Storage servers.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html