Description of problem:

If the brick directory = the XFS mountpoint and the XFS mountpoint is not mounted, then the rhs-* tuned profiles can fail. I'll attach a log showing what happens in this case.

Version-Release number of selected component (if applicable):

RHS 2.1 Gold, rpm is redhat-storage-server-2.1.0.3-1.el6rhs.noarch

How reproducible:

Every time, but only if certain conditions are met.

Steps to Reproduce:
1. define 2 gluster volumes on a server
2. stop the first of the two volumes
3. unmount the XFS brick

Actual results:

tuned script aborts before tuning the 2nd volume

Expected results:

tuned script should report that there is a problem with the first volume and continue, so that it tunes the 2nd volume.

Additional info:

Here's a patch that addresses part of it at least.

[ben@ben-england-laptop virt]$ diff -u k-orig.sh k.sh | more
--- k-orig.sh   2013-10-03 12:37:23.446447854 -0400
+++ k.sh        2013-10-03 12:36:55.780522782 -0400
@@ -37,8 +37,8 @@
     # assumption: brick filesystem mountpoint is either the brick directory or parent of the brick directory
     next_dev=`grep " $brickdir " /proc/mounts | awk '{ print $1 }'`
     if [ -z "$next_dev" ] ; then
-      brickdir=`dirname $brickdir`
-      next_dev=`grep $brickdir /proc/mounts | awk '{ print $1 }'`
+      next_dir=`dirname $brickdir`
+      next_dev=`grep $next_dir /proc/mounts | awk '{ print $1 }'`
     fi
     if [ -z "$next_dev" ] ; then
       echo "ERROR: could not find mountpoint for brick directory $brickdir"
@@ -51,7 +51,14 @@
     fi
     device_name=`basename $device_path`
     echo -n " $device_name"
-    echo $ra > /sys/block/$device_name/queue/read_ahead_kb
+    sysfs_ra="/sys/block/$device_name/queue/read_ahead_kb"
+    if [ ! -f $sysfs_ra ] ; then
+      echo
+      echo "ERROR: did not find path $sysfs_ra for brick $brickdir"
+      echo
+      continue
+    fi
+    echo $ra > $sysfs_ra
  done
  echo
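For anyone trying to reproduce this, here is a rough sketch of the steps above as commands. The volume names, server name, and mountpoints are placeholders I made up, not from the original report:

   # two volumes whose brick directories are XFS mountpoints
   gluster volume create vol1 server1:/bricks/b1
   gluster volume create vol2 server1:/bricks/b2
   gluster volume start vol1
   gluster volume start vol2

   # stop the first volume and unmount its brick filesystem
   gluster volume stop vol1
   umount /bricks/b1

   # re-apply the profile; without the patch, the tuning script aborts
   # on vol1's missing mountpoint and never tunes vol2's brick device
   tuned-adm profile rhs-virtualization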
Patch is under review at https://code.engineering.redhat.com/gerrit/#/c/14239/
Tested on:

[root@boo ~]# rpm -qa | grep redhat-storage-server
redhat-storage-server-2.1.2.0-2.el6rhs.noarch

Followed the steps from Comment 0 and tested both the rhs-high-throughput and rhs-virtualization profiles. No errors were reported.
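For reference, the re-test amounts to applying each profile in turn and watching for the script error. This is standard tuned-adm usage; nothing here beyond the profile names is specific to this bug:

   tuned-adm profile rhs-high-throughput
   tuned-adm active    # confirm the profile took effect
   tuned-adm profile rhs-virtualization
   tuned-adm active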
Ben, could you provide doctext?
This enhancement to the rhs-high-throughput and rhs-virtualization tuned profiles makes these scripts less vulnerable to configuration-related failures. Previously, if the first brick's XFS mountpoint was not mounted, the script failed on that brick and did not proceed to tune the subsequent bricks. With this fix, the script logs an error for the problem brick and continues on to the remaining bricks. This is important for multi-brick configurations (example: three 12-disk RAID6 drives per server).

Note that the system administrator is expected to add the bricks' XFS mountpoints to /etc/fstab so that they mount automatically at boot.

We also recommend (this is not a requirement) that the XFS mountpoint itself not be used as the Gluster brick directory; use a subdirectory of the mountpoint instead. Gluster only knows about directories, not mountpoints, so if the brick directory is the mountpoint and the XFS filesystem for the brick is not mounted, Gluster will proceed to access files on the system disk (or wherever the mountpoint directory is located). With a subdirectory, the brick directory simply does not exist while the filesystem is unmounted, so Gluster will not attempt to use the brick. In addition, using a subdirectory inside the XFS mountpoint gives you the option to create multiple "bricks" that share the same XFS filesystem, allowing multiple Gluster volumes to share the same physical storage.
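To make the subdirectory recommendation concrete, a sketch follows; the device name, mountpoint, server name, and volume name are placeholders:

   # /etc/fstab entry so the brick filesystem mounts automatically at boot
   /dev/sdb1  /bricks/b1  xfs  defaults  0 0

   mount /bricks/b1
   mkdir /bricks/b1/vol1      # brick directory is a subdirectory, not the mountpoint
   gluster volume create vol1 server1:/bricks/b1/vol1

   # if /bricks/b1 is ever unmounted, /bricks/b1/vol1 no longer exists,
   # so Gluster fails fast instead of silently writing to the root filesystem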