Description of problem:
System is Core 2 Duo based on a Gigabyte GA-965P-S3 motherboard. The hard drive is a 500 GB SATA II disk. The system operated without problems for several months on 2.6.18-1.2869.fc6.

After upgrading to 2.6.19-1.2895.fc6, I started getting kernel errors similar to the ones discussed here:
http://www.uwsg.iu.edu/hypermail/linux/kernel/0612.1/0441.html

I reverted to 2.6.18-1.2869.fc6 and the problems vanished. When 2.6.19-1.2911.fc6 appeared on updates, I tried upgrading again. I get errors like the following at random intervals:

Feb 19 13:00:00 hotblue kernel: do_get_write_access: OOM for frozen_buffer
Feb 19 13:00:00 hotblue kernel: ext3_reserve_inode_write: aborting transaction: Out of memory in __ext3_journal_get_write_access
Feb 19 13:00:00 hotblue kernel: EXT3-fs error (device sda5) in ext3_reserve_inode_write: Out of memory
Feb 19 13:00:00 hotblue kernel: EXT3-fs error (device sda5) in ext3_dirty_inode: Out of memory

Despite these errors, the system continued to function with no visible ill effect. Device sda5 is mounted as /home. Unmounting it and running 'e2fsck -f -v /dev/sda5' found no problems. Reverting to 2.6.18-1.2869.fc6 stopped the errors once again. I am staying there for the moment, out of fear of data corruption.
What is the block size for this filesystem? Are all the filesystems on this machine using the same block size?
dumpe2fs -h /dev/sda5 reports a block size of 4096. /dev/sda2 (/) has the same. I also have an NTFS filesystem mounted with fuse. Full output of mount and dumpe2fs below:

/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda5 on /home type ext3 (rw)
/dev/sda1 on /mnt/XP type fuse (rw,nosuid,nodev,noatime,allow_other)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

dumpe2fs 1.39 (29-May-2006)
Filesystem volume name:   /home1
Last mounted on:          <not available>
Filesystem UUID:          b5870a32-ab1c-4de3-a74a-3d2448468172
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              52690944
Block count:              52667086
Reserved block count:     2633354
Free blocks:              31724981
Free inodes:              52303928
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1011
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   1024
Filesystem created:       Sun Jan  7 07:34:33 2007
Last mount time:          Sat Feb 24 19:05:54 2007
Last write time:          Sat Feb 24 19:05:54 2007
Mount count:              3
Maximum mount count:      -1
Last checked:             Mon Feb 19 23:21:40 2007
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   tea
Directory Hash Seed:      0a4686c9-be96-49ca-a133-d1b3dbc12cb8
Journal backup:           inode blocks
Journal size:             128M
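For anyone else checking whether their filesystems share a block size: a minimal sketch of a non-root check using stat(1), which asks the kernel rather than reading the superblock as dumpe2fs does. The mount points here are the ones from this report; substitute your own.

```shell
# Print the fundamental block size (%S) the kernel reports for each
# mounted filesystem. Unlike dumpe2fs, stat -f needs no root access
# and works on non-ext3 filesystems (e.g. the fuse mount) as well.
for mnt in / /home; do
    printf '%s: %s bytes\n' "$mnt" "$(stat -f -c %S "$mnt")"
done
```

On this machine both ext3 filesystems should report 4096, matching the dumpe2fs output above.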
Created attachment 149722 [details] output from df -hl and dumpe2fs

At the time of the error, there was heavy NFS activity on the partition in question.
I am seeing the same issues as reported by Brian -

Mar  9 12:45:42 bonham kernel: do_get_write_access: OOM for frozen_buffer
Mar  9 12:45:42 bonham kernel: ext3_new_blocks: aborting transaction: Out of memory in __ext3_journal_get_write_access
Mar  9 12:45:42 bonham kernel: EXT3-fs error (device md2) in ext3_new_blocks: Out of memory
Mar  9 12:45:43 bonham kernel: EXT3-fs error (device md2) in ext3_reserve_inode_write: Readonly filesystem
Mar  9 12:45:44 bonham kernel: EXT3-fs error (device md2) in ext3_dirty_inode: Out of memory
Mar  9 12:45:44 bonham kernel: EXT3-fs error (device md2) in ext3_prepare_write: Out of memory

[cph@bonham ~]$ uname -a
Linux bonham.stata.com 2.6.19-1.2895.fc6 #1 SMP Wed Jan 10 18:50:56 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
I have been running 2.6.20-1.2933.fc6 since Mar 27th without the problem recurring on my system.