Bug 479421 - kernel: gfs_tool: page allocation failure. order:4, mode:0xd0
Summary: kernel: gfs_tool: page allocation failure. order:4, mode:0xd0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs-kmod
Version: 5.4
Hardware: i386
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Assignee: Abhijith Das
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On: 245264
Blocks:
 
Reported: 2009-01-09 14:22 UTC by Steve Whitehouse
Modified: 2010-01-12 03:30 UTC (History)
5 users

Fixed In Version: gfs-kmod-0.1.33-2.el5
Doc Type: Bug Fix
Doc Text:
Clone Of: 245264
Environment:
Last Closed: 2009-09-02 11:03:16 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2009:1338 0 normal SHIPPED_LIVE gfs-kmod bug-fix update 2009-09-01 10:42:14 UTC

Description Steve Whitehouse 2009-01-09 14:22:52 UTC
+++ This bug was initially created as a clone of Bug #245264 +++

I am getting the following errors on my GFS cluster; my log files are full of
these messages.
Thanks 
-Anand

 uname -arsv
Linux pa-dev201.eng.vmware.com 2.6.9-42.0.10.ELsmp #1 SMP Fri Feb 16 17:17:21
EST 2007 i686 athlon i386 GNU/Linux


################################################################################
Jun 20 17:10:31 pa-dev201 kernel: gfs_tool: page allocation failure. order:4,
mode:0xd0
Jun 20 17:10:31 pa-dev201 kernel:  [<c0144273>] __alloc_pages+0x28b/0x29d
Jun 20 17:10:31 pa-dev201 kernel:  [<c014429d>] __get_free_pages+0x18/0x24
Jun 20 17:10:31 pa-dev201 kernel:  [<c0146d78>] kmem_getpages+0x1c/0xbb
Jun 20 17:10:31 pa-dev201 kernel:  [<c01478c6>] cache_grow+0xab/0x138
Jun 20 17:10:31 pa-dev201 kernel:  [<c0147ab8>] cache_alloc_refill+0x165/0x19d
Jun 20 17:10:31 pa-dev201 kernel:  [<c0147e8c>] __kmalloc+0x76/0x88
Jun 20 17:10:31 pa-dev201 kernel:  [<f8e9634c>] gi_skeleton+0x4c/0xd3 [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<f8e96dbb>] gi_get_counters+0x0/0xb72 [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<f8e9a15d>] gfs_ioctl_i+0x1b4/0x507 [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<c015a300>] filp_open+0x1f/0x70
Jun 20 17:10:31 pa-dev201 kernel:  [<f8ea5e34>] gfs_ioctl+0x75/0x7f [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<c016ada2>] sys_ioctl+0x227/0x269
Jun 20 17:10:31 pa-dev201 kernel:  [<c02d4903>] syscall_call+0x7/0xb
Jun 20 17:10:31 pa-dev201 kernel: Mem-info:
Jun 20 17:10:31 pa-dev201 kernel: DMA per-cpu:
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: Normal per-cpu:
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: HighMem per-cpu:
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: 
Jun 20 17:10:31 pa-dev201 kernel: Free pages:    14073364kB (14044160kB HighMem)
Jun 20 17:10:31 pa-dev201 kernel: Active:127028 inactive:354704 dirty:70710
writeback:1 unstable:0 free:3518341 slab:108269 mapped:48711 pagetables:1938
Jun 20 17:10:31 pa-dev201 kernel: DMA free:12564kB min:16kB low:32kB high:48kB
active:0kB inactive:0kB present:16384kB pages_scanned:5026 all_unreclaimable? yes
Jun 20 17:10:31 pa-dev201 kernel: protections[]: 0 0 0
Jun 20 17:10:31 pa-dev201 kernel: Normal free:16640kB min:928kB low:1856kB
high:2784kB active:624kB inactive:299080kB present:901120kB pages_scanned:0
all_unreclaimable? no
Jun 20 17:10:31 pa-dev201 kernel: protections[]: 0 0 0
Jun 20 17:10:31 pa-dev201 kernel: HighMem free:14044160kB min:512kB low:1024kB
high:1536kB active:507544kB inactive:1119736kB present:15859708kB
pages_scanned:0 all_unreclaimable? no
Jun 20 17:10:31 pa-dev201 kernel: protections[]: 0 0 0
Jun 20 17:10:31 pa-dev201 kernel: DMA: 5*4kB 4*8kB 4*16kB 3*32kB 3*64kB 1*128kB
1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12564kB
Jun 20 17:10:31 pa-dev201 kernel: Normal: 3294*4kB 223*8kB 95*16kB 5*32kB 0*64kB
0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 16640kB
Jun 20 17:10:31 pa-dev201 kernel: HighMem: 10648*4kB 17708*8kB 10766*16kB
5097*32kB 2572*64kB 1046*128kB 734*256kB 161*512kB 24*1024kB 0*2048kB
3157*4096kB = 14044096kB
Jun 20 17:10:31 pa-dev201 sshd(pam_unix)[18680]: session closed for user mts
Jun 20 17:10:31 pa-dev201 kernel: Swap cache: add 0, delete 0, find 0/0, race 0+0
Jun 20 17:10:31 pa-dev201 kernel: 0 bounce buffer pages
Jun 20 17:10:31 pa-dev201 kernel: Free swap:       2096440kB
Jun 20 17:10:31 pa-dev201 kernel: 4194303 pages of RAM
Jun 20 17:10:31 pa-dev201 kernel: 3921909 pages of HIGHMEM
Jun 20 17:10:31 pa-dev201 kernel: 77350 reserved pages
Jun 20 17:10:31 pa-dev201 kernel: 486424 pages shared
Jun 20 17:10:31 pa-dev201 kernel: 0 pages swap cached
Jun 20 17:10:31 pa-dev201 sshd(pam_unix)[18735]: session opened for user mts by
(uid=0)
Jun 20 17:10:31 pa-dev201 kernel: gfs_tool: page allocation failure. order:4,
mode:0xd0
Jun 20 17:10:31 pa-dev201 kernel:  [<c0144273>] __alloc_pages+0x28b/0x29d
Jun 20 17:10:31 pa-dev201 kernel:  [<c014429d>] __get_free_pages+0x18/0x24
Jun 20 17:10:31 pa-dev201 kernel:  [<c0146d78>] kmem_getpages+0x1c/0xbb
Jun 20 17:10:31 pa-dev201 kernel:  [<c01478c6>] cache_grow+0xab/0x138
Jun 20 17:10:31 pa-dev201 kernel:  [<c0147ab8>] cache_alloc_refill+0x165/0x19d
Jun 20 17:10:31 pa-dev201 kernel:  [<c0147e8c>] __kmalloc+0x76/0x88
Jun 20 17:10:31 pa-dev201 kernel:  [<f8e9634c>] gi_skeleton+0x4c/0xd3 [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<f8e96dbb>] gi_get_counters+0x0/0xb72 [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<f8e9a15d>] gfs_ioctl_i+0x1b4/0x507 [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<c015a300>] filp_open+0x1f/0x70
Jun 20 17:10:31 pa-dev201 kernel:  [<f8ea5e34>] gfs_ioctl+0x75/0x7f [gfs]
Jun 20 17:10:31 pa-dev201 kernel:  [<c016ada2>] sys_ioctl+0x227/0x269
Jun 20 17:10:31 pa-dev201 kernel:  [<c02d4903>] syscall_call+0x7/0xb
Jun 20 17:10:31 pa-dev201 kernel: Mem-info:
Jun 20 17:10:31 pa-dev201 kernel: DMA per-cpu:
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 hot: low 2, high 6, batch 1
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 cold: low 0, high 2, batch 1
Jun 20 17:10:31 pa-dev201 kernel: Normal per-cpu:
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: HighMem per-cpu:
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 sshd(pam_unix)[18735]: session closed for user mts
Jun 20 17:10:31 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 20 17:10:31 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 20 17:10:31 pa-dev201 kernel: 
Jun 20 17:10:31 pa-dev201 kernel: Free pages:    14074964kB (14047872kB HighMem)
Jun 20 17:10:31 pa-dev201 kernel: Active:126042 inactive:355352 dirty:70823
writeback:0 unstable:0 free:3518741 slab:108273 mapped:47616 pagetables:1929
Jun 20 17:10:31 pa-dev201 kernel: DMA free:12564kB min:16kB low:32kB high:48kB
active:0kB inactive:0kB present:16384kB pages_scanned:5026 all_unreclaimable? yes
Jun 20 17:10:31 pa-dev201 kernel: protections[]: 0 0 0
Jun 20 17:10:31 pa-dev201 kernel: Normal free:14528kB min:928kB low:1856kB
high:2784kB active:636kB inactive:301268kB present:901120kB pages_scanned:0
all_unreclaimable? no
Jun 20 17:10:31 pa-dev201 kernel: protections[]: 0 0 0
Jun 20 17:10:31 pa-dev201 kernel: HighMem free:14047872kB min:512kB low:1024kB
high:1536kB active:503588kB inactive:1120140kB present:15859708kB
pages_scanned:0 all_unreclaimable? no
Jun 20 17:10:31 pa-dev201 kernel: protections[]: 0 0 0
Jun 20 17:10:31 pa-dev201 kernel: DMA: 5*4kB 4*8kB 4*16kB 3*32kB 3*64kB 1*128kB
1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12564kB
Jun 20 17:10:31 pa-dev201 kernel: Normal: 2766*4kB 223*8kB 95*16kB 5*32kB 0*64kB
0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 14528kB
Jun 20 17:10:31 pa-dev201 kernel: HighMem: 11576*4kB 17708*8kB 10766*16kB
5097*32kB 2572*64kB 1046*128kB 734*256kB 161*512kB 24*1024kB 0*2048kB
3157*4096kB = 14047808kB
Jun 20 17:10:31 pa-dev201 kernel: Swap cache: add 0, delete 0, find 0/0, race 0+0
Jun 20 17:10:31 pa-dev201 kernel: 0 bounce buffer pages
Jun 20 17:10:31 pa-dev201 kernel: Free swap:       2096440kB
Jun 20 17:10:31 pa-dev201 kernel: 4194303 pages of RAM
Jun 20 17:10:31 pa-dev201 kernel: 3921909 pages of HIGHMEM
Jun 20 17:10:31 pa-dev201 kernel: 77350 reserved pages
Jun 20 17:10:31 pa-dev201 kernel: 486420 pages shared
Jun 20 17:10:31 pa-dev201 kernel: 0 pages swap cached
Jun 20 17:10:58 pa-dev201 mountd[4358]: authenticated unmount request from
ravi-lx2.eng.vmware.com:618 for /mts/dbc2 (/mts/dbc2)
Jun 20 17:10:58 pa-dev201 mountd[4358]: authenticated unmount request from
ravi-lx2.eng.vmware.com:620 for /mts/dbc2 (/mts/dbc2)
Jun 20 18:17:10 pa-dev201 sshd(pam_unix)[30655]: session closed for user puneetz
Jun 21 17:10:07 pa-dev201 mountd[4358]: authenticated unmount request from
ravi-lx2.eng.vmware.com:927 for /mts/dbc2 (/mts/dbc2)
Jun 21 17:10:07 pa-dev201 mountd[4358]: authenticated unmount request from
ravi-lx2.eng.vmware.com:929 for /mts/dbc2 (/mts/dbc2)
Jun 21 17:10:25 pa-dev201 mountd[4358]: authenticated mount request from
office2-dhcp216.eng.vmware.com:744 for /mts/dbc2 (/mts/dbc2)
Jun 21 17:10:32 pa-dev201 su(pam_unix)[20987]: session opened for user erik by
root(uid=0)
Jun 21 17:10:38 pa-dev201 sshd(pam_unix)[21085]: session opened for user mts by
(uid=0)
Jun 21 17:10:40 pa-dev201 sshd(pam_unix)[21085]: session closed for user mts
Jun 21 17:10:40 pa-dev201 sshd(pam_unix)[21113]: session opened for user mts by
(uid=0)
Jun 21 17:10:41 pa-dev201 sshd(pam_unix)[21113]: session closed for user mts
Jun 21 17:10:41 pa-dev201 sshd(pam_unix)[21154]: session opened for user mts by
(uid=0)
Jun 21 17:10:43 pa-dev201 sshd(pam_unix)[21154]: session closed for user mts
Jun 21 17:10:43 pa-dev201 sshd(pam_unix)[21194]: session opened for user mts by
(uid=0)
Jun 21 17:10:44 pa-dev201 sshd(pam_unix)[21194]: session closed for user mts
Jun 21 17:10:44 pa-dev201 sshd(pam_unix)[21223]: session opened for user mts by
(uid=0)
Jun 21 17:10:46 pa-dev201 sshd(pam_unix)[21223]: session closed for user mts
Jun 21 17:10:47 pa-dev201 sshd(pam_unix)[21268]: session opened for user mts by
(uid=0)
Jun 21 17:10:48 pa-dev201 sshd(pam_unix)[21268]: session closed for user mts
Jun 21 17:10:48 pa-dev201 sshd(pam_unix)[21313]: session opened for user mts by
(uid=0)
Jun 21 17:10:48 pa-dev201 kernel: gfs_tool: page allocation failure. order:4,
mode:0xd0
Jun 21 17:10:48 pa-dev201 kernel:  [<c0144273>] __alloc_pages+0x28b/0x29d
Jun 21 17:10:48 pa-dev201 kernel:  [<c014429d>] __get_free_pages+0x18/0x24
Jun 21 17:10:48 pa-dev201 kernel:  [<c0146d78>] kmem_getpages+0x1c/0xbb
Jun 21 17:10:48 pa-dev201 kernel:  [<c01478c6>] cache_grow+0xab/0x138
Jun 21 17:10:48 pa-dev201 kernel:  [<c0147ab8>] cache_alloc_refill+0x165/0x19d
Jun 21 17:10:48 pa-dev201 kernel:  [<c0147e8c>] __kmalloc+0x76/0x88
Jun 21 17:10:48 pa-dev201 kernel:  [<f8e9634c>] gi_skeleton+0x4c/0xd3 [gfs]
Jun 21 17:10:48 pa-dev201 kernel:  [<f8e96dbb>] gi_get_counters+0x0/0xb72 [gfs]
Jun 21 17:10:48 pa-dev201 kernel:  [<f8e9a15d>] gfs_ioctl_i+0x1b4/0x507 [gfs]
Jun 21 17:10:48 pa-dev201 kernel:  [<c015a300>] filp_open+0x1f/0x70
Jun 21 17:10:48 pa-dev201 kernel:  [<f8ea5e34>] gfs_ioctl+0x75/0x7f [gfs]
Jun 21 17:10:48 pa-dev201 kernel:  [<c016ada2>] sys_ioctl+0x227/0x269
Jun 21 17:10:48 pa-dev201 kernel:  [<c02d4903>] syscall_call+0x7/0xb
Jun 21 17:10:48 pa-dev201 kernel: Mem-info:
Jun 21 17:10:48 pa-dev201 kernel: DMA per-cpu:
Jun 21 17:10:48 pa-dev201 kernel: cpu 0 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 0 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 1 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 1 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 2 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 2 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 3 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 3 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 4 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 4 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 5 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 5 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 6 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 6 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 7 hot: low 2, high 6, batch 1
Jun 21 17:10:48 pa-dev201 kernel: cpu 7 cold: low 0, high 2, batch 1
Jun 21 17:10:48 pa-dev201 kernel: Normal per-cpu:
Jun 21 17:10:48 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: HighMem per-cpu:
Jun 21 17:10:48 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 21 17:10:48 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 21 17:10:48 pa-dev201 kernel: 
Jun 21 17:10:48 pa-dev201 kernel: Free pages:    14643996kB (14414400kB HighMem)
Jun 21 17:10:48 pa-dev201 kernel: Active:142520 inactive:210346 dirty:1260
writeback:1 unstable:0 free:3660999 slab:94182 mapped:42012 pagetables:1993
Jun 21 17:10:48 pa-dev201 kernel: DMA free:12564kB min:16kB low:32kB high:48kB
active:0kB inactive:0kB present:16384kB pages_scanned:7313 all_unreclaimable? yes
Jun 21 17:10:48 pa-dev201 kernel: protections[]: 0 0 0
Jun 21 17:10:48 pa-dev201 kernel: Normal free:217032kB min:928kB low:1856kB
high:2784kB active:3244kB inactive:152100kB present:901120kB pages_scanned:0
all_unreclaimable? no
Jun 21 17:10:48 pa-dev201 kernel: protections[]: 0 0 0
Jun 21 17:10:48 pa-dev201 kernel: HighMem free:14414400kB min:512kB low:1024kB
high:1536kB active:566836kB inactive:689284kB present:15859708kB pages_scanned:0
all_unreclaimable? no
Jun 21 17:10:48 pa-dev201 sshd(pam_unix)[21313]: session closed for user mts
Jun 21 17:10:48 pa-dev201 kernel: protections[]: 0 0 0
Jun 21 17:10:48 pa-dev201 kernel: DMA: 5*4kB 4*8kB 4*16kB 3*32kB 3*64kB 1*128kB
1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12564kB
Jun 21 17:10:48 pa-dev201 kernel: Normal: 23724*4kB 12497*8kB 1235*16kB 75*32kB
0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 217032kB
Jun 21 17:10:48 pa-dev201 kernel: HighMem: 12252*4kB 15764*8kB 11891*16kB
3562*32kB 5201*64kB 3811*128kB 1102*256kB 667*512kB 272*1024kB 81*2048kB
2941*4096kB = 14414400kB
Jun 21 17:10:48 pa-dev201 kernel: Swap cache: add 0, delete 0, find 0/0, race 0+0
Jun 21 17:10:48 pa-dev201 kernel: 0 bounce buffer pages
Jun 21 17:10:48 pa-dev201 kernel: Free swap:       2096440kB
Jun 21 17:10:48 pa-dev201 kernel: 4194303 pages of RAM
Jun 21 17:10:48 pa-dev201 kernel: 3921909 pages of HIGHMEM
Jun 21 17:10:48 pa-dev201 kernel: 77350 reserved pages
Jun 21 17:10:48 pa-dev201 kernel: 361534 pages shared
Jun 21 17:10:48 pa-dev201 kernel: 0 pages swap cached
Jun 21 17:10:48 pa-dev201 sshd(pam_unix)[21334]: session opened for user mts by
(uid=0)
Jun 21 17:10:49 pa-dev201 kernel: gfs_tool: page allocation failure. order:4,
mode:0xd0
Jun 21 17:10:49 pa-dev201 kernel:  [<c0144273>] __alloc_pages+0x28b/0x29d
Jun 21 17:10:49 pa-dev201 kernel:  [<c014429d>] __get_free_pages+0x18/0x24
Jun 21 17:10:49 pa-dev201 kernel:  [<c0146d78>] kmem_getpages+0x1c/0xbb
Jun 21 17:10:49 pa-dev201 kernel:  [<c01478c6>] cache_grow+0xab/0x138
Jun 21 17:10:49 pa-dev201 kernel:  [<c0147ab8>] cache_alloc_refill+0x165/0x19d
Jun 21 17:10:49 pa-dev201 kernel:  [<c0147e8c>] __kmalloc+0x76/0x88
Jun 21 17:10:49 pa-dev201 kernel:  [<f8e9634c>] gi_skeleton+0x4c/0xd3 [gfs]
Jun 21 17:10:49 pa-dev201 kernel:  [<f8e96dbb>] gi_get_counters+0x0/0xb72 [gfs]
Jun 21 17:10:49 pa-dev201 kernel:  [<f8e9a15d>] gfs_ioctl_i+0x1b4/0x507 [gfs]
Jun 21 17:10:49 pa-dev201 kernel:  [<c015a300>] filp_open+0x1f/0x70
Jun 21 17:10:49 pa-dev201 kernel:  [<f8ea5e34>] gfs_ioctl+0x75/0x7f [gfs]
Jun 21 17:10:49 pa-dev201 kernel:  [<c016ada2>] sys_ioctl+0x227/0x269
Jun 21 17:10:49 pa-dev201 kernel:  [<c02d4903>] syscall_call+0x7/0xb
Jun 21 17:10:49 pa-dev201 kernel: Mem-info:
Jun 21 17:10:49 pa-dev201 kernel: DMA per-cpu:
Jun 21 17:10:49 pa-dev201 kernel: cpu 0 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 0 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 1 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 1 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 2 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 2 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 3 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 3 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 4 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 4 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 5 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 5 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 6 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 6 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 7 hot: low 2, high 6, batch 1
Jun 21 17:10:49 pa-dev201 kernel: cpu 7 cold: low 0, high 2, batch 1
Jun 21 17:10:49 pa-dev201 kernel: Normal per-cpu:
Jun 21 17:10:49 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: HighMem per-cpu:
Jun 21 17:10:49 pa-dev201 kernel: cpu 0 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 0 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 1 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 1 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 2 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 2 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 3 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 3 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 4 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 4 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 5 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 5 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 6 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 6 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 7 hot: low 32, high 96, batch 16
Jun 21 17:10:49 pa-dev201 kernel: cpu 7 cold: low 0, high 32, batch 16
Jun 21 17:10:49 pa-dev201 kernel: 
Jun 21 17:10:49 pa-dev201 kernel: Free pages:    14643004kB (14413952kB HighMem)
Jun 21 17:10:49 pa-dev201 kernel: Active:142526 inactive:210561 dirty:1255
writeback:1 unstable:0 free:3660751 slab:94247 mapped:42013 pagetables:1991
Jun 21 17:10:49 pa-dev201 kernel: DMA free:12564kB min:16kB low:32kB high:48kB
active:0kB inactive:0kB present:16384kB pages_scanned:7313 all_unreclaimable? yes
Jun 21 17:10:49 pa-dev201 kernel: protections[]: 0 0 0
Jun 21 17:10:49 pa-dev201 kernel: Normal free:216488kB min:928kB low:1856kB
high:2784kB active:3244kB inactive:152560kB present:901120kB pages_scanned:0
all_unreclaimable? no
Jun 21 17:10:49 pa-dev201 kernel: protections[]: 0 0 0
Jun 21 17:10:49 pa-dev201 kernel: HighMem free:14413952kB min:512kB low:1024kB
high:1536kB active:566860kB inactive:689684kB present:15859708kB pages_scanned:0
all_unreclaimable? no
Jun 21 17:10:49 pa-dev201 kernel: protections[]: 0 0 0
Jun 21 17:10:49 pa-dev201 kernel: DMA: 5*4kB 4*8kB 4*16kB 3*32kB 3*64kB 1*128kB
1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12564kB
Jun 21 17:10:49 pa-dev201 kernel: Normal: 23582*4kB 12502*8kB 1236*16kB 74*32kB
0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 216488kB
Jun 21 17:10:49 pa-dev201 kernel: HighMem: 12140*4kB 15764*8kB 11891*16kB
3562*32kB 5201*64kB 3811*128kB 1102*256kB 667*512kB 272*1024kB 81*2048kB
2941*4096kB = 14413952kB
Jun 21 17:10:49 pa-dev201 sshd(pam_unix)[21334]: session closed for user mts
Jun 21 17:10:49 pa-dev201 kernel: Swap cache: add 0, delete 0, find 0/0, race 0+0
Jun 21 17:10:49 pa-dev201 kernel: 0 bounce buffer pages
Jun 21 17:10:49 pa-dev201 kernel: Free swap:       2096440kB
Jun 21 17:10:49 pa-dev201 kernel: 4194303 pages of RAM
Jun 21 17:10:49 pa-dev201 kernel: 3921909 pages of HIGHMEM
Jun 21 17:10:49 pa-dev201 kernel: 77350 reserved pages
Jun 21 17:10:49 pa-dev201 kernel: 361751 pages shared
Jun 21 17:10:49 pa-dev201 kernel: 0 pages swap cached

--- Additional comment from adas on 2007-06-22 12:00:16 EDT ---

It looks like you're hitting a page allocation failure in the kernel. You might
be running out of memory. A gfs tunable 'lockdump_size' determines the minimum
amount of kernel memory requested with each 'gfs_tool getXXX' command. The
default is 32 pages (131072 bytes). I don't know if this is set very high on
your system. Do a 'gfs_tool gettune /gfs/mount/point | grep lockdump_size' to
get the current setting. You can set it to a lower value using 'gfs_tool settune
/gfs/mount/point lockdump_size 16384' or something. See if that helps.
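The "order:4" in the trace above is a buddy-allocator order: the request needs 2^4 = 16 physically contiguous 4 KiB pages, i.e. 64 KiB. A minimal sketch of that arithmetic (an editor's illustration, not part of the report; 4 KiB is the i386 page size used here):

```python
# Sketch: map a contiguous kmalloc request size to a buddy-allocator
# order (order:N means 2**N physically contiguous 4 KiB pages).
PAGE_SIZE = 4096

def alloc_order(nbytes, page_size=PAGE_SIZE):
    """Smallest order whose block (2**order pages) can hold nbytes."""
    pages = -(-nbytes // page_size)  # ceiling division
    order = 0
    while (1 << order) < pages:
        order += 1
    return order

# The default lockdump_size of 131072 bytes is 32 pages (order 5);
# the failing allocation in the trace is order 4, i.e. 64 KiB.
print(alloc_order(131072))  # 5
print(alloc_order(65536))   # 4
print(alloc_order(4096))    # 0
```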

--- Additional comment from adas on 2007-06-22 13:27:31 EDT ---

Oops. I'm sorry, I was wrong about the minimum amount of kernel memory
requested. The default is 32 pages, but it's _not_ the minimum. If you do a
'gfs_tool counters' , you only request 4096 bytes, so your machine must be
really out of memory. I don't think changing the lockdump_size is going to help you.

--- Additional comment from anandab on 2007-06-22 14:15:49 EDT ---

The output of the gfs_tool command is:
gfs_tool gettune /mts/dbc2 | grep lockdump
lockdump_size = 131072

I also have my drop_count set to 0.

The machine has 16 GB of memory; does the error have something to do with a
large drop_count setting?
--- Additional comment from wcheng on 2007-06-22 15:11:57 EDT ---

The memory is *very* fragmented based on the memory output. Any kmalloc of
size >= 64K will most likely fail, even though the system still has a large
amount of memory inside smaller slab-cache buckets:

Normal: 23724*4kB 12497*8kB 1235*16kB 75*32kB
0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 217032kB

This is a known Linux VMM design issue. For newer 2.6-based kernels,
there is a command that allows you to purge the slab cache. For RHEL 4 (2.6.9
base), we're out of luck here. Hopefully "umount", then "remount" could
alleviate the symptoms, but there is no guarantee.

If this is a repeated GFS issue, the GFS RHEL 4.5-based RPM has a glock-trimming
patch that allows you to trim a glock percentage (which will subsequently
return GFS slab cache back to the central pool). Give it a try to see whether
it helps:

shell> gfs_tool settune /mnt/gfs1 glock_purge 30

(this will purge 30% of glocks back to the central pool on a 5-second interval).
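The fragmentation claim can be checked directly from the per-order free-list line quoted in this comment. A small sketch (an editor's illustration, not from the original report) that tallies the quoted Normal-zone line and confirms there are no free blocks of 64 KiB or larger, so any order-4 request in that zone must fail:

```python
# Sketch: parse a buddy free-list line ("count*sizekB" fields) and
# check whether any block of at least 64 KiB (order 4) is free.
line = ("Normal: 23724*4kB 12497*8kB 1235*16kB 75*32kB "
        "0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB")

free = {}
for field in line.split()[1:]:          # skip the "Normal:" zone label
    count, size = field.split("*")
    free[int(size.rstrip("kB"))] = int(count)

total_kb = sum(count * size for size, count in free.items())
big_blocks = sum(count for size, count in free.items() if size >= 64)

print(total_kb)    # 217032 -> matches the "= 217032kB" total in the log
print(big_blocks)  # 0 -> no 64 KiB+ blocks, so an order-4 kmalloc fails
```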

--- Additional comment from adas on 2007-06-22 15:28:32 EDT ---

Ok... take a look at bug 229461.
Without this fix, you're still looking at allocating 64K of contiguous kernel
memory, which might not be available in all cases.
There are still places in the gfs_tool code that request 64K for various ioctl
calls. I'm going to take a look at those and commit fixes.

--- Additional comment from anandab on 2007-06-22 15:34:38 EDT ---

Wendy: I am using dlm, not glock. But your analysis was great and it seems to
fit the problems we are seeing.

Abhijith: Will you be providing instructions for getting the updates?


--- Additional comment from wcheng on 2007-06-22 17:14:32 EDT ---

Fixing gfs_tool to use a smaller buffer (though it takes longer to complete)
is a good thing to do. However, when a system has fragmented memory like
that, general performance will be bad. VM-tuning knowledge and tools
are a must for system administrators. In simple words, when the gfs_tool lock
dump (which takes a smaller buffer) is fixed, your problem will be shifted to
other parts of the system. A certain level of VM tuning will be required
to avoid getting the system into this state.

GFS's "glock" component calls DLM to carry out inter-node locking. There is a
one-to-one correspondence between glocks and DLM locks. When glocks start to
accumulate to an unacceptable level, so do DLM locks.


--- Additional comment from wcheng on 2007-06-22 17:22:01 EDT ---

Actually, it is interesting to read the glock and dlm comments :) .. Are you
aware that GFS actually uses "glock" to do locking (and that glock then calls DLM)?

--- Additional comment from swhiteho on 2008-12-10 10:52:27 EDT ---

The simple fix for this looks like using vmalloc rather than kmalloc. Abhi, was this fixed in the end? If so, please close this bug.

--- Additional comment from swhiteho on 2009-01-09 09:19:44 EDT ---

Abhi, please can you turn this allocation into a vmalloc and do the same for 5.4 & upstream too? It's a simple fix and the current code is too ghastly for words.

Comment 1 Steve Whitehouse 2009-06-05 07:39:24 UTC
Abhi, can you take a look at this one please? It should be trivial to fix, and we ought to do it for 5.4.

Comment 2 Abhijith Das 2009-06-08 01:20:06 UTC
Checked in the patch to change kmalloc to vmalloc in gi_skeleton (ioctl.c) to RHEL5, RHEL54, STABLE2, STABLE3 and master.

Comment 5 errata-xmlrpc 2009-09-02 11:03:16 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1338.html

