Bug 185316
| Summary: | VFS: brelse: Trying to free free buffer |
|---|---|
| Product: | Red Hat Enterprise Linux 4 |
| Version: | 4.0 |
| Component: | kernel |
| Hardware: | ia64 |
| OS: | Linux |
| Status: | CLOSED DUPLICATE |
| Severity: | high |
| Priority: | high |
| Reporter: | Doug Chapman <dchapman> |
| Assignee: | Tom Coughlan <coughlan> |
| QA Contact: | Brian Brock <bbrock> |
| CC: | aviro, jbaron, staubach |
| Doc Type: | Bug Fix |
| Bug Blocks: | 198694 |
| Last Closed: | 2006-09-01 17:25:24 UTC |
| Attachments: | various unique stack traces seen (attachment 126054) |
Description
Doug Chapman
2006-03-13 17:33:00 UTC
Created attachment 126054 [details]
various unique stack traces seen
FYI, I am now able to reproduce this on a much smaller system. I have a 4-CPU ia64 system in my private rack in the Red Hat lab, connected to a single MSA1000, and I can hit these stack traces on it (although not nearly as often as on the 64-CPU system with 8 MSA1000s). I filed this quite some time back, when I was the only one seeing it; however, we are now seeing it more often in other testing inside HP. It is no longer seen only on massive systems like the one I originally reported it on, so I am increasing the severity. It has been reported to be easily reproducible on a 2-socket dual-core ia64 system.
Here is a stack trace as seen on the RHEL4 U4 partner beta:
```
VFS: brelse: Trying to free free buffer
Badness in __brelse at fs/buffer.c:1372
Call Trace:
 [<a000000100016da0>] show_stack+0x80/0xa0
        sp=e00000003d997940 bsp=e00000003d991058
 [<a000000100016df0>] dump_stack+0x30/0x60
        sp=e00000003d997b10 bsp=e00000003d991040
 [<a000000100129990>] __brelse+0xd0/0x100
        sp=e00000003d997b10 bsp=e00000003d991020
 [<a0000002001de770>] __try_to_free_cp_buf+0x1b0/0x220 [jbd]
        sp=e00000003d997b10 bsp=e00000003d990ff0
 [<a0000002001de930>] __journal_clean_checkpoint_list+0x150/0x180 [jbd]
        sp=e00000003d997b10 bsp=e00000003d990f98
 [<a0000002001d9090>] journal_commit_transaction+0x6d0/0x3080 [jbd]
        sp=e00000003d997b10 bsp=e00000003d990ea0
 [<a0000002001e18d0>] kjournald+0x170/0x580 [jbd]
        sp=e00000003d997d80 bsp=e00000003d990e38
 [<a000000100018c70>] kernel_thread_helper+0x30/0x60
        sp=e00000003d997e30 bsp=e00000003d990e10
 [<a000000100008c60>] start_kernel_thread+0x20/0x40
        sp=e00000003d997e30 bsp=e00000003d990e1
```
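For context on what the kernel is complaining about: the "VFS: brelse: Trying to free free buffer" line is printed by __brelse() when it is asked to release a buffer_head whose reference count is already zero, and the "Badness in __brelse" line comes from the WARN_ON that follows. A rough sketch of the 2.6-era check (paraphrased, not the exact RHEL4 source) looks like this:

```c
#include <linux/buffer_head.h>
#include <linux/kernel.h>

/*
 * Paraphrased sketch of the 2.6-era __brelse() in fs/buffer.c (not the
 * exact RHEL4 source). __brelse() drops one reference on a buffer_head.
 */
void __brelse(struct buffer_head *buf)
{
	if (atomic_read(&buf->b_count)) {
		put_bh(buf);	/* normal case: drop one reference */
		return;
	}
	/*
	 * b_count is already zero: the buffer has been released more times
	 * than it was acquired, which is what the log message reports.
	 */
	printk(KERN_ERR "VFS: brelse: Trying to free free buffer\n");
	WARN_ON(1);		/* emits the "Badness in __brelse" trace */
}
```

The jbd frames above (__try_to_free_cp_buf and __journal_clean_checkpoint_list, called from journal_commit_transaction) show the extra release happening while kjournald cleans the checkpoint list during a commit, which is consistent with either a reference-count imbalance in the jbd/ext3 path or corruption introduced elsewhere (see the controller question below).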
This request was evaluated by Red Hat Product Management for inclusion in a Red Hat Enterprise Linux maintenance release. Product Management has requested further review of this enhancement by Red Hat Engineering, for potential inclusion in a Red Hat Enterprise Linux Update release for currently deployed products. This enhancement is not yet committed for inclusion in an Update release.

Er... the obvious question: can you reproduce it with a different controller? I.e., is this a memory corruptor in SCSI that happens to hit the buffer cache under that specific load, or is it a bug in fs/buffer.c and/or VM and/or fs code? Is it dependent on the fs type, while we are at it?

Alexander, the one common card in all of the systems we have seen this on is a QLogic 4GB Fibre Channel card. I have asked people back at HP to see if they can reproduce this with other cards.

Did you intend to remove the issue tracker link when you updated this BZ? You removed IT 96777 on your last update.

Looks like a duplicate of bug 168301.

*** This bug has been marked as a duplicate of 168301 ***

Pulling in ack from 168301.