| Summary: | Glusterfs installation on our cluster | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Basavanagowda Kanur <gowda> |
| Component: | core | Assignee: | Vijay Bellur <vbellur> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | pre-2.0 | CC: | amarts, gluster-bugs, guru, pavan, rabhat |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | RTNR | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
tejas/chida/vijay/avati, any thoughts on this? Should we keep this bug open? Have some discussion? Some procedure?

--------------------------------------------------------------------------------

Installation done.
--------------------------------------------------------------------------------
# Thu Oct 23 11:11:26 2008 guru - Ticket created

This is what needs to be done (and a general guideline to glusterfs installation/upgrade for later use) on our cluster:

* glusterfs is *NOT TO BE* installed in a central location (/usr/local, /usr, etc.)
* glusterfs is to be installed in /opt (which is exported from gnu.zresearch.com and is a shared directory in the cluster)
* glusterfs is to be compiled on client01 (in the directory /opt/glusterfs/<glusterfs-version>). Make sure that the node has all the required packages/dependencies: fuse, db, lighttpd, apache, etc.
* Since the installation is going to be done on only one node, a configure/make/make install approach will be sufficient and efficient
* However, RPMs (and later debs) should be generated, both to eliminate possible errors and to make sure that the SPEC files are kept in sync with the releases
* Each version of glusterfs is installed in a separate directory (glusterfs version 1.4.0qa21, for example, will be installed in /opt/glusterfs/glusterfs-1.4.0qa21)
* All mount points shall be created within the installation directory of that version. In the previous example, all mount points for the version shall be in /opt/glusterfs/glusterfs-1.4.0qa21/mnt

--------------------------------------------------------------------------------
# Sat Dec 06 02:04:17 2008 guru - Correspondence added

* Two scripts are available on the cluster for this purpose.
(Right now, only source install is handled.) The following two commands need to be executed:

    /opt/qa/tools/get_glusterfs.sh <glusterfs-url>
    /opt/qa/tools/build_glfs.sh <downloaded-tarball>

* By default, glusterfs is installed in /opt/glusterfs/<glusterfs-version> (e.g. /opt/glusterfs/1.4.0qa42)
* glusterfs *WILL NOT* be available in the PATH (to ensure that there is no accidental run of an ambiguous version of glusterfs)
* While the scripts are available in the aforementioned directory, I propose that installation/administration of the cluster be done solely by the QA team. If developers need something installed, they should get it done through someone on the QA team.

--------------------------------------------------------------------------------
# Tue Apr 07 18:12:47 2009 gowda - Correspondence added

guru,

If the discussion is finalized and the rules for installing glusterfs on the cluster are set, please put them into documentation and display it on every login to the client01 machine. Resolve the ticket if there is nothing left to discuss on this front.

-- gowda

--------------------------------------------------------------------------------
# Thu Apr 09 13:10:11 2009 guru - Correspondence added

On Tue Apr 07 18:12:47 2009, gowda wrote:
> guru,
> if the discussion is finalized and rules are set about installation of
> glusterfs on cluster. please put the same into a documentation and
> display the same on for any login into the client01 machine.
> resolve the ticket, if there is nothing to be discussed on this front.

The only thing that has not yet been addressed is the user policy. It was discussed and decided that we need normal-user support on the cluster, and everybody is expected to be non-root by default. I will close the ticket once a strategy has been finalized.
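For reference, the per-version layout described in the ticket (install under /opt/glusterfs/<glusterfs-version>, mount points under that directory's mnt/) could be sketched as follows. This is a minimal illustration, not the contents of get_glusterfs.sh or build_glfs.sh, whose internals are not shown in the ticket; the tarball name and configure flags are assumptions.

```shell
#!/bin/sh
# Sketch of the per-version install convention from this ticket.
# Assumes the tarball follows the glusterfs-<version>.tar.gz naming pattern.
set -e

TARBALL="glusterfs-1.4.0qa21.tar.gz"   # example version cited in the ticket
VERSION="${TARBALL%.tar.gz}"           # strip the suffix: glusterfs-1.4.0qa21
PREFIX="/opt/glusterfs/${VERSION}"     # each version gets its own install root
MOUNT_ROOT="${PREFIX}/mnt"             # mount points live inside the install dir

echo "install root: ${PREFIX}"
echo "mount points: ${MOUNT_ROOT}"

# The actual build steps (not executed here) would be along the lines of:
#   tar xzf "$TARBALL" && cd "$VERSION"
#   ./configure --prefix="$PREFIX"
#   make && make install
#   mkdir -p "$MOUNT_ROOT"
# Note: nothing is added to PATH, per the policy above.
```

Keeping the prefix versioned means several glusterfs builds can coexist on the shared /opt export, and removing a version is just deleting its directory.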