Bug 1080241
Summary: | rhs-hadoop-install deletes files in the gluster volume and the volume itself. | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Jeff Vance <jvance> |
Component: | rhs-hadoop-install | Assignee: | Jeff Vance <jvance> |
Status: | CLOSED ERRATA | QA Contact: | Martin Bukatovic <mbukatov> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | ||
Version: | unspecified | CC: | ashetty, bchilds, eboyd, esammons, jvance, matt, mbukatov, mkudlej, nlevinki |
Target Milestone: | Release Candidate | Keywords: | UpcomingRelease, ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | 0.82-1 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2014-11-24 11:54:15 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1159155 |
Description
Jeff Vance
2014-03-25 02:20:44 UTC
With this fix, the cleanup() function is no longer executed in the standard workflow of the script. Therefore, the user must ensure that the brick is unmounted, etc., before running ./install.sh. The undocumented --clean option remains and, if specified, regardless of the -y setting, the user is prompted TWICE to confirm the deletes.

Fixed in 0.79-1. Build id: https://brewweb.devel.redhat.com//buildinfo?buildID=345653

(In reply to Jeff Vance from comment #2)
> The undocumented --clean option remains and, if specified, regardless of
> the -y setting, the user is prompted TWICE to confirm the deletes.

I don't understand why we would want to leave this undocumented. Could you elaborate?

Trying on RHSS-2.1.bd-20140219.n.0 with the latest rhs-hadoop-install from brew:
rhs-hadoop-install-0_79-1.el6rhs.noarch
Running the installer for the first time (which works fine):
~~~
./install.sh /dev/mapper/TestVolume002-export_bricks
~~~
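As an aside to the --clean behaviour described in the first comment, the double confirmation (enforced even when -y is given) might look something like the following bash sketch; `confirm_clean` and its prompts are illustrative assumptions, not the script's actual code:

```shell
# Illustrative only: a guard that always asks twice before destructive
# deletes, deliberately ignoring any auto-confirm (-y) setting.
# Function name and prompt wording are assumptions, not install.sh code.
confirm_clean() {
  local ans
  read -r -p "Delete bricks and the gluster volume? (yes/no) " ans
  [ "$ans" = "yes" ] || return 1
  read -r -p "This is irreversible. Confirm again (yes/no) " ans
  [ "$ans" = "yes" ] || return 1
}
```

The point of the second prompt is that a destructive path never completes on a single accidental "yes", even in otherwise non-interactive (-y) runs.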
Then rerunning the same command, which fails with:
~~~
----------------------------------------
-- Begin cluster configuration --
----------------------------------------
-- Setting up brick and volume mounts, creating and starting volume
-- on all nodes:
mkfs.xfs on brick-device
mkdir /mnt/brick1, /mnt/glusterfs and /mnt/brick1/mapredlocal...
append mount entries to /etc/fstab...
mount /mnt/brick1...
On mrg-qe-vm-c4-402.lab.eng.brq.redhat.com:
ERROR: mrg-qe-vm-c4-402.lab.eng.brq.redhat.com: mkfs.xfs on brick /dev/RHS_vg1/RHS_lv1: mkfs.xfs: /dev/RHS_vg1/RHS_lv1 contains a mounted filesystem
Usage: mkfs.xfs
/* blocksize */ [-b log=n|size=num]
/* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
(sunit=value,swidth=value|su=num,sw=num),
sectlog=n|sectsize=num
/* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
projid32bit=0|1]
/* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
sunit=value|su=num,sectlog=n|sectsize=num,
lazy-count=0|1]
/* label */ [-L label (maximum 12 characters)]
/* naming */ [-n log=n|size=num,version=2|ci]
/* prototype file */ [-p fname]
/* quiet */ [-q]
/* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */ [-s log=n|size=num]
/* version */ [-V]
devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
~~~
This is expected behaviour: nothing happens when the volume is already mounted.
But I have a problem with the message which the installer shows at the end of a
successful volume setup:
~~~
**** This script can be re-run anytime! ****
~~~
Since this change conflicts with this message, could you remove it?
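A mount pre-check before mkfs.xfs would turn the re-run above into a clean no-op instead of an error. This is a hedged sketch of the idea, not the installer's actual code; `safe_mkfs` is an assumed name, and the /proc/mounts check is Linux-specific:

```shell
# Sketch (assumption, not install.sh code): skip mkfs.xfs when the device
# already holds a mounted filesystem, so rerunning the installer does not
# fail with "contains a mounted filesystem".
safe_mkfs() {
  dev="$1"
  if grep -q "^$dev " /proc/mounts; then
    echo "skip: $dev is mounted"
    return 0
  fi
  mkfs.xfs -f "$dev"   # destructive: only reached for unmounted devices
}
```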
>>> ASSIGNED
There are several undocumented options to install.sh: --clean, --setup, --mkdirs, --vol, --users, --perf. In the next version --_prep will be added, and the options above were going to be renamed to --_clean, --_setup, --_hadoop-dirs, --_vol, --_users, --_perf to make them less likely to be accidentally invoked.

Reasons for leaving them undocumented: 1) I used these options in complex debugging cases; 2) they had minimal testing; 3) I don't want to place an extra burden on QE and extra effort on documentation (probably just the readme file); 4) some of these options can be seen as hard to use, e.g. --mkdirs, which creates the hadoop-specific dirs and requires the volume to be set up first, i.e. use --vol first.

I am willing to document these options (and not change the names to --_xxxx) if QE recommends that action.

(In reply to Jeff Vance from comment #6)
> There are several undocumented options to install.sh: --clean, --setup,
> --mkdirs, --vol, --users, --perf. In the next version --_prep will be added,
> and the list above was going to be renamed to: --_clean, --_setup,
> --_hadoop-dirs, --_vol, --_users, --_perf to make these options less likely
> to be accidentally invoked.
> [...]
> I am willing to doc these options (and not change the names to --_xxxx) if
> QE recommends that action.

I see your point and don't like the additional complexity either. I would rather see devel-only options completely disabled, though (this is not a request to implement it right now, because we would first need to decide which options should be disabled).
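The proposed rename can be pictured as an option dispatch like the following sketch; the option names come from the comment above, but the function and its actions are placeholders, not the real install.sh parser:

```shell
# Illustrative sketch of the proposed underscore-prefixed devel options.
# Option names are from the discussion above; parse_opt and its output
# are assumptions for illustration only.
parse_opt() {
  case "$1" in
    --_clean|--_setup|--_hadoop-dirs|--_vol|--_users|--_perf|--_prep)
      echo "devel-only option: $1" ;;
    --*)
      echo "unknown option: $1"; return 1 ;;
    *)
      echo "brick-dev: $1" ;;
  esac
}
```

With this scheme the old spellings (--clean, --vol, ...) fall through to the "unknown option" branch, which is exactly what makes accidental invocation harder.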
Speaking about the --mkdirs function, I believe this should be in a dedicated script, as described in BZ 1062401. See also BZ 1082695.

Just tried with 0.85 and it looks like it is fixed now.

Since the problem described in this bug report should be resolved by a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2014-1275.html