Bug 1015231 - rhs-virtualization and rhs-high-throughput profiles not robust enough
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: storage-server-tools
Version: 2.1
Hardware: Unspecified OS: Unspecified
Priority: low Severity: low
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Bala.FA
QA Contact: Sachidananda Urs
Keywords: ZStream
Depends On:
Blocks: 1165441
Reported: 2013-10-03 13:16 EDT by Ben England
Modified: 2015-11-22 21:58 EST (History)
CC List: 4 users

See Also:
Fixed In Version: redhat-storage-server-2.1.2.0-1.el6rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1165441
Environment:
Last Closed: 2015-08-10 03:46:16 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ben England 2013-10-03 13:16:57 EDT
Description of problem:

If the brick directory is the same as the XFS mountpoint and that filesystem is not mounted, the rhs-* tuned profiles can fail.  I'll attach a log showing what happens in this case.

Version-Release number of selected component (if applicable):

RHS 2.1 Gold; rpm is redhat-storage-server-2.1.0.3-1.el6rhs.noarch.

How reproducible:

Every time, but only when the conditions in the steps below are met.

Steps to Reproduce:
1. define 2 gluster volumes on a server
2. stop the first of the two volumes
3. unmount the first volume's XFS brick filesystem
4. re-apply one of the rhs-* tuned profiles (see the command sketch below)
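
A minimal sketch of the reproduction, assuming two one-brick volumes vol1 and vol2 with XFS bricks mounted at /bricks/b1 and /bricks/b2 (all names are illustrative):

  gluster volume stop vol1               # stop the first volume
  umount /bricks/b1                      # its brick filesystem is now missing
  tuned-adm profile rhs-virtualization   # profile script aborts at vol1's brick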

Actual results:

tuned script aborts before tuning the 2nd volume

Expected results:

tuned script should report the problem with the first volume and continue, so that the 2nd volume still gets tuned.

Additional info:

Here's a patch that addresses at least part of the problem.

[ben@ben-england-laptop virt]$ diff -u k-orig.sh k.sh | more
--- k-orig.sh   2013-10-03 12:37:23.446447854 -0400
+++ k.sh        2013-10-03 12:36:55.780522782 -0400
@@ -37,8 +37,8 @@
     # assumption: brick filesystem mountpoint is either the brick directory or parent of the brick directory
     next_dev=`grep " $brickdir " /proc/mounts | awk '{ print $1 }'`
     if [ -z "$next_dev" ] ; then
-      brickdir=`dirname $brickdir`
-      next_dev=`grep $brickdir /proc/mounts | awk '{ print $1 }'`
+      next_dir=`dirname $brickdir`
+      next_dev=`grep $next_dir /proc/mounts | awk '{ print $1 }'`
     fi
     if [ -z "$next_dev" ] ; then
       echo "ERROR: could not find mountpoint for brick directory $brickdir"
@@ -51,7 +51,14 @@
     fi
     device_name=`basename $device_path`
     echo -n " $device_name"
-    echo $ra > /sys/block/$device_name/queue/read_ahead_kb
+    sysfs_ra="/sys/block/$device_name/queue/read_ahead_kb"
+    if [ ! -f "$sysfs_ra" ] ; then
+        echo
+        echo "ERROR: did not find path $sysfs_ra for brick $brickdir"
+        echo
+        continue
+    fi
+    echo $ra > $sysfs_ra
   done
   echo
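
For reference, one way to confirm the value the script wrote for a given brick device (the device name sdb is just an example) is:

  cat /sys/block/sdb/queue/read_ahead_kb   # prints the readahead the profile set ($ra above)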
Comment 2 Bala.FA 2013-10-18 05:36:04 EDT
Patch is under review at
https://code.engineering.redhat.com/gerrit/#/c/14239/
Comment 3 Sachidananda Urs 2013-12-31 05:51:21 EST
Tested on:

[root@boo ~]# rpm -qa | grep redhat-storage-server
redhat-storage-server-2.1.2.0-2.el6rhs.noarch

Followed the steps from Comment 0 and tested both the rhs-high-throughput and rhs-virtualization profiles.

No errors reported.
Comment 4 Bala.FA 2014-01-07 04:17:19 EST
Ben, could you provide doctext?
Comment 5 Ben England 2014-01-07 07:59:55 EST
This enhancement to the rhs-high-throughput and rhs-virtualization tuned profiles makes the scripts less vulnerable to configuration-related failures.  Previously, if the first brick's XFS mountpoint was not mounted, the script failed on the first brick and consequently did not proceed to tune the subsequent bricks.  With this fix, the script logs an error but proceeds to the subsequent bricks.  This is important for multi-brick configurations (example: three 12-disk RAID6 drives per server).

Note that the system administrator is expected to add the bricks' XFS mountpoints to /etc/fstab so that they are mounted automatically on reboot.
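
For example, an /etc/fstab entry along these lines (device path, mountpoint, and options are illustrative):

  /dev/vg_bricks/lv_brick1  /bricks/b1  xfs  inode64,noatime  0 0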

We also recommend (though it is not a requirement) that the XFS mountpoint not be used as the Gluster brick directory.  Use a subdirectory instead, so that Gluster will not attempt to use the brick if the XFS filesystem is not mounted.  If the XFS mountpoint is the brick directory and the brick's filesystem is not mounted, Gluster will proceed to access files on the system disk (or wherever the mountpoint directory is located), because Gluster only knows about directories, not mountpoints.
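
A sketch of the recommended layout (all paths and the volume name are illustrative):

  mount /dev/vg_bricks/lv_brick1 /bricks/b1             # XFS mountpoint
  mkdir /bricks/b1/brick                                # subdirectory used as the brick
  gluster volume create vol1 server1:/bricks/b1/brick

If /bricks/b1 is ever not mounted, /bricks/b1/brick does not exist, so Gluster fails fast instead of silently writing to the root filesystem.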

In addition, using a subdirectory inside the XFS mountpoint gives you the option to create multiple "bricks" that share the same XFS filesystem, allowing multiple Gluster volumes to share the same physical storage.
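
For instance, two subdirectories of the same XFS filesystem could back bricks of two different volumes (again, illustrative names):

  mkdir /bricks/b1/vol1 /bricks/b1/vol2
  gluster volume create vol1 server1:/bricks/b1/vol1
  gluster volume create vol2 server1:/bricks/b1/vol2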
