Bug 970055
Summary: | Applying rhs-high-throughput tuned profile does not set the "read_ahead_kb" correctly when volume is created using IP addresses | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Neependra Khare <nkhare>
Component: | storage-server-tools | Assignee: | Bala.FA <barumuga>
Status: | CLOSED DUPLICATE | QA Contact: | Sudhir D <sdharane>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | 2.1 | CC: | bengland, dpati, dshaks, rhs-bugs
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2013-06-04 06:11:06 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | |
Description
Neependra Khare 2013-06-03 11:55:07 UTC
Could you try using appliance-1.7.1-3.el6rhs, as the tuned profiles were updated in it?

I tried appliance-1.7.1-3.el6rhs from the following build, but it did not fix the problem: https://brewweb.devel.redhat.com/buildinfo?buildID=274082

I believe DNS and NTP configuration on the RHS node is mandatory. I am not sure whether we consider this use case. Ben, could you share your thoughts here?

This issue is already tracked at bz#910566. Closing as duplicate.

*** This bug has been marked as a duplicate of bug 910566 ***

I thought people routinely created gluster volumes using IP addresses, some of which would not be in DNS (for example, if a customer uses a private VLAN for routing gluster traffic on a second NIC). So we really do need to support this. But it appears that there is code to do it in the tuned-adm profile rhs-high-throughput in appliance-base-1.7.3-1.el6rhs.noarch. It works:

```
[root@gprfs017 rhs-high-throughput]# lvcreate --name t --size 1T vg_brick0
  Logical volume "t" created
[root@gprfs017 rhs-high-throughput]# mkfs -t xfs /dev/vg_brick0/t
meta-data=/dev/vg_brick0/t       isize=256    agcount=4, agsize=67108864 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=268435456, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=131072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@gprfs017 rhs-high-throughput]# mount -t xfs /dev/vg
vga_arbiter  vg_brick0/   vg_gprfs017/
[root@gprfs017 rhs-high-throughput]# mkdir /mnt/t
[root@gprfs017 rhs-high-throughput]# mount -t xfs /dev/vg_brick0/t /mnt/t
[root@gprfs017 rhs-high-throughput]# mkdir /mnt/t/brick
[root@gprfs017 rhs-high-throughput]# gluster volume create t 172.17.40.17:/mnt/t/brick
volume create: t: success: please start the volume to access data
[root@gprfs017 rhs-high-throughput]# tuned-adm profile default
Stopping tuned:                                            [  OK  ]
Switching to profile 'default'
Applying ktune sysctl settings:
/etc/ktune.d/tunedadm.conf:                                [  OK  ]
Applying sysctl settings from /etc/sysctl.conf
Starting tuned:                                            [  OK  ]
[root@gprfs017 rhs-high-throughput]# tuned-adm profile rhs-high-throughput
Stopping tuned:                                            [  OK  ]
Switching to profile 'rhs-high-throughput'
Applying ktune sysctl settings:
/etc/ktune.d/tunedadm.conf:                                [  OK  ]
Calling '/etc/ktune.d/tunedadm.sh start':
setting readahead to 65536 on brick devices: dm-3 dm-6     [  OK  ]
Applying sysctl settings from /etc/sysctl.conf
Applying deadline elevator: dm-0 dm-1 dm-2 dm-3 dm-4 dm-5 d[  OK  ]sdb sdc sdd
Starting tuned:                                            [  OK  ]
[root@gprfs017 rhs-high-throughput]# gluster volume info t
Volume Name: t
Type: Distribute
Volume ID: 70e510c4-1ad6-4be6-841b-c7ea43f1d1bd
Status: Created
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 172.17.40.17:/mnt/t/brick
[root@gprfs017 rhs-high-throughput]# cat /sys/block/dm-6/queue/read_ahead_kb
65536
```
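The crux of the report is whether the profile script recognizes a brick as local when the volume was created with a raw IP instead of a DNS name. A minimal sketch of that matching step is below; `is_local_brick` is a hypothetical helper, not the shipped /etc/ktune.d/tunedadm.sh, and the list of local identifiers is passed in explicitly here, whereas a real script would build it from `hostname` and `ip addr`.

```shell
#!/bin/sh
# Hypothetical sketch: decide whether a brick belongs to this node by
# comparing the host part of the brick spec (e.g. "172.17.40.17" from
# "172.17.40.17:/mnt/t/brick") against every identifier the node is
# known by, hostname or IP alike. Only bricks judged "local" would then
# have read_ahead_kb set on their backing devices.

# is_local_brick BRICK_HOST "id1 id2 ..." -> prints "local" or "remote"
is_local_brick() {
    brick_host=$1
    # $2 is intentionally unquoted so it word-splits into identifiers
    for id in $2; do
        [ "$brick_host" = "$id" ] && { echo local; return; }
    done
    echo remote
}

# Example: a node known as gprfs017 with address 172.17.40.17 (values
# taken from the transcript above)
is_local_brick 172.17.40.17 "gprfs017 172.17.40.17"   # prints "local"
is_local_brick otherhost    "gprfs017 172.17.40.17"   # prints "remote"
```

A hostname-only comparison (the suspected failure mode tracked in bz#910566) would report "remote" for the IP-based brick and skip the readahead tuning, which matches the behavior described in the summary.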