Bug 970055 - Applying rhs-high-throughput tuned profile does not set the "read_ahead_kb" correctly when volume is created using IP addresses
Status: CLOSED DUPLICATE of bug 910566
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: storage-server-tools
Version: 2.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Assigned To: Bala.FA
QA Contact: Sudhir D
Depends On:
Blocks:
Reported: 2013-06-03 07:55 EDT by Neependra Khare
Modified: 2015-12-02 19:39 EST (History)
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-06-04 02:11:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Neependra Khare 2013-06-03 07:55:07 EDT
Description of problem:
Applying the rhs-high-throughput tuned profile does not set the "read_ahead_kb" of the underlying disk to 64 MB when the volume is created using IP addresses.

Version-Release number of selected component (if applicable):
appliance-base-1.7.1-1.el6rhs.noarch

How reproducible:

Steps to Reproduce:
1. Install Anshi
2. Create a volume using IP addresses:

[root@gprfs033 nkhare]# gluster v i 
Volume Name: test
Type: Distributed-Replicate
Volume ID: 3f96485c-463d-4482-8268-3f50d0902720
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.40.33:/brick/gluster
Brick2: 172.17.40.34:/brick/gluster
Brick3: 172.17.40.35:/brick/gluster
Brick4: 172.17.40.36:/brick/gluster

3. Apply rhs-high-throughput profile 
$ tuned-adm profile rhs-high-throughput

Actual results:
- read_ahead_kb for the underlying disk is not set to 65536:
$ cat /sys/block/sdb/queue/read_ahead_kb 
128

Expected results:
- It should be set to 65536.


Additional info:
In "/etc/tune-profiles/rhs-high-throughput/ktune.sh" we grep the brick directory entries for the output of `hostname -s`. If the volume was created using IP addresses, the entries never match the hostname, so no brick devices are found.

for d in `find /var/lib/glusterd/vols -name bricks -type d 2>/tmp/e ` ; do
    (cd $d ; ls | grep `hostname -s` | awk -F: '{ print $2 }' | sed 's/\-/\//g' \
        >> $bricklist 2>> /tmp/bricks.err)
done
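A minimal sketch of one possible fix (not the shipped ktune.sh): match brick entries named by either the short hostname or any local IP address, assuming entries look like "<host-or-ip>:-path-components" as in the loop above. The helper names brick_path and host_pattern are hypothetical.

```shell
#!/bin/sh
# Convert one brick directory entry like "172.17.40.33:-brick-gluster"
# into its filesystem path "/brick/gluster" (same awk/sed transform as
# the original ktune.sh; note it also rewrites real hyphens in paths).
brick_path() {
    echo "$1" | awk -F: '{ print $2 }' | sed 's/\-/\//g'
}

# Build a grep alternation of the short hostname plus every address
# reported by `hostname -i` (which may fail or list several; dots in
# IPs are left unescaped in this sketch).
host_pattern() {
    printf '%s\n' $(hostname -s) $(hostname -i 2>/dev/null) | paste -sd'|' -
}

bricklist=/tmp/bricks.list
: > "$bricklist"
for d in $(find /var/lib/glusterd/vols -name bricks -type d 2>/dev/null); do
    ( cd "$d" && ls | grep -E "^($(host_pattern)):" | while read -r e; do
          brick_path "$e"
      done >> "$bricklist" )
done
```

On a node without /var/lib/glusterd the loop simply produces an empty brick list; on a storage node it collects the paths of local bricks whether they were registered by hostname or by IP.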
Comment 2 Bala.FA 2013-06-03 21:43:10 EDT
Could you try appliance-1.7.1-3.el6rhs? The tuned profiles were updated in that build.
Comment 3 Neependra Khare 2013-06-04 00:08:51 EDT
I tried appliance-1.7.1-3.el6rhs from the following build, but it did not fix the problem:
https://brewweb.devel.redhat.com/buildinfo?buildID=274082
Comment 4 Bala.FA 2013-06-04 01:01:15 EDT
I believe DNS and NTP configuration on the RHS node is mandatory. I am not sure whether we consider this use case.

Ben, could you share your thoughts here?
Comment 5 Bala.FA 2013-06-04 02:11:06 EDT
This issue is already tracked at bz#910566.  Closing as duplicate.

*** This bug has been marked as a duplicate of bug 910566 ***
Comment 6 Ben England 2013-08-06 18:30:18 EDT
I thought people routinely created gluster volumes using IP addresses, some of which would not be in DNS (example: a customer using a private VLAN for routing gluster traffic on a 2nd NIC). So we really do need to support this. But it appears that there is code to handle it in the tuned-adm profile rhs-high-throughput in appliance-base-1.7.3-1.el6rhs.noarch, and it works:

[root@gprfs017 rhs-high-throughput]# lvcreate --name t --size 1T vg_brick0
  Logical volume "t" created
[root@gprfs017 rhs-high-throughput]# mkfs -t xfs /dev/vg_brick0/t
meta-data=/dev/vg_brick0/t       isize=256    agcount=4, agsize=67108864 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=268435456, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=131072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@gprfs017 rhs-high-throughput]# mount -t xfs /dev/vg
vga_arbiter  vg_brick0/   vg_gprfs017/ 
[root@gprfs017 rhs-high-throughput]# mkdir /mnt/t
[root@gprfs017 rhs-high-throughput]# mount -t xfs /dev/vg_brick0/t /mnt/t
[root@gprfs017 rhs-high-throughput]# mkdir /mnt/t/brick
[root@gprfs017 rhs-high-throughput]# gluster volume create t 172.17.40.17:/mnt/t/brick
volume create: t: success: please start the volume to access data

[root@gprfs017 rhs-high-throughput]# tuned-adm profile default
Stopping tuned:                                            [  OK  ]
Switching to profile 'default'
Applying ktune sysctl settings:
/etc/ktune.d/tunedadm.conf:                                [  OK  ]
Applying sysctl settings from /etc/sysctl.conf
Starting tuned:                                            [  OK  ]
[root@gprfs017 rhs-high-throughput]# tuned-adm profile rhs-high-throughput
Stopping tuned:                                            [  OK  ]
Switching to profile 'rhs-high-throughput'
Applying ktune sysctl settings:
/etc/ktune.d/tunedadm.conf:                                [  OK  ]
Calling '/etc/ktune.d/tunedadm.sh start': setting readahead to 65536 on brick devices:  dm-3 dm-6
                                                           [  OK  ]
Applying sysctl settings from /etc/sysctl.conf
Applying deadline elevator: dm-0 dm-1 dm-2 dm-3 dm-4 dm-5 d[  OK  ]sdb sdc sdd 
Starting tuned:                                            [  OK  ]

[root@gprfs017 rhs-high-throughput]# gluster volume info t
 
Volume Name: t
Type: Distribute
Volume ID: 70e510c4-1ad6-4be6-841b-c7ea43f1d1bd
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.17.40.17:/mnt/t/brick

[root@gprfs017 rhs-high-throughput]# cat /sys/block/dm-6/queue/read_ahead_kb 
65536
