Bug 147853 - Missed wakeup in kiobuf_wait_for_io()
Status: CLOSED DEFERRED
Product: Red Hat Enterprise Linux 2.1
Classification: Red Hat
Component: kernel
Version: 2.1
Hardware: All
OS: Linux
Priority: medium
Severity: high
Assigned To: Larry Woodman
QA Contact: Brian Brock
Blocks: 143573
Reported: 2005-02-11 16:14 EST by Joel Becker
Modified: 2007-11-30 17:06 EST
CC: 4 users

Doc Type: Bug Fix
Last Closed: 2006-09-13 16:37:36 EDT

Attachments
Stack trace of processes stuck in kiobuf_wait_for_io(), taken at short intervals (591.36 KB, text/plain)
2005-02-11 16:14 EST, Joel Becker
Diagnostic patch for the sleep in kiobuf_wait_for_io() (618 bytes, patch)
2005-02-11 16:16 EST, Joel Becker
This is the output from the diagnostic patch (attachment 110996). (1.72 MB, text/plain)
2005-02-11 16:21 EST, Joel Becker
Newer diagnostic patch (3.48 KB, patch)
2005-02-25 20:06 EST, Joel Becker
Output from newer diagnostic patch (attachment 111451) (2.06 KB, text/plain)
2005-02-25 20:09 EST, Joel Becker
buffer_head tracing patch (9.55 KB, patch)
2005-03-01 14:49 EST, Joel Becker
Output from diagnostic and buffer_head tracing patches (896.50 KB, text/plain)
2005-03-01 14:53 EST, Joel Becker

Description Joel Becker 2005-02-11 16:14:09 EST
Description of problem:
This is the same customer experiencing bug #147656.  After applying the patch
attached to that bug, the __wait_on_buffer() problem (described in 147656) goes
away, but the machine still hangs.

Looking at the traces, it is pretty obvious they are hanging in
kiobuf_wait_for_io(), called from brw_kiovec().

I provided them with a diagnostic patch to determine the problem.  The patch
changes the schedule() to schedule_timeout(300*HZ).  Sure enough, their workload
no longer hangs, as the timeout is reached.  When the timeout is reached, the
patch prints the value of kiobuf->io_count.  It is *always* zero.  But if it is
zero, the process should have been woken, and it shouldn't be hanging in schedule().

I'm continuing to look at the interaction of brw_kiovec(), end_kiobuf_io(), and
kiobuf_wait_for_io() to see where the race could be.  Given the mb() in
set_task_state(), I cannot see how the wakeup in end_kiobuf_io() could race
kiobuf_wait_for_io(), but I'm still looking.
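
For reference, the sleep/wake pair under suspicion looks roughly like the
following in the stock 2.4 tree (a sketch from memory of fs/iobuf.c; the
actual 2.4.9-e.57enterprise source may differ in names and details, and the
AS2.1 completion path may use a differently named wrapper).  The diagnostic
change is marked in a comment:

void kiobuf_wait_for_io(struct kiobuf *kiobuf)
{
	struct task_struct *tsk = current;
	DECLARE_WAITQUEUE(wait, tsk);

	if (atomic_read(&kiobuf->io_count) == 0)
		return;

	add_wait_queue(&kiobuf->wait_queue, &wait);
repeat:
	set_task_state(tsk, TASK_UNINTERRUPTIBLE);
	if (atomic_read(&kiobuf->io_count) != 0) {
		run_task_queue(&tq_disk);
		schedule();	/* diagnostic patch: schedule_timeout(300*HZ),
				 * printk kiobuf->io_count on timeout */
	}
	if (atomic_read(&kiobuf->io_count) != 0)
		goto repeat;
	remove_wait_queue(&kiobuf->wait_queue, &wait);
	tsk->state = TASK_RUNNING;
}

/* Completion side (stock 2.4 helper; called once per completed buffer_head
 * via the kiobuf's b_end_io handler): */
void end_kio_request(struct kiobuf *kiobuf, int uptodate)
{
	if (!uptodate && !kiobuf->errno)
		kiobuf->errno = -EIO;

	if (atomic_dec_and_test(&kiobuf->io_count)) {
		if (kiobuf->end_io)
			kiobuf->end_io(kiobuf);
		wake_up(&kiobuf->wait_queue);
	}
}

The sleeper sets TASK_UNINTERRUPTIBLE (with the barrier in set_task_state())
before re-checking io_count, and the waker decrements io_count before calling
wake_up(), so the classic lost-wakeup window looks closed on paper; that is
what makes a zero io_count at timeout so puzzling.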

Version-Release number of selected component (if applicable):
RHAS 2.1 kernel 2.4.9-e.57enterprise

How reproducible:
Always.

Steps to Reproduce:
1. Run a big rman backup

Additional info:
Same machine as bug 147656
Dual 3+GHz Xeons, 8GB RAM
Comment 1 Joel Becker 2005-02-11 16:14:09 EST
Created attachment 110995 [details]
Stack trace of processes stuck in kiobuf_wait_for_io(), taken at short intervals
Comment 2 Joel Becker 2005-02-11 16:16:31 EST
Created attachment 110996 [details]
Diagnostic patch for the sleep in kiobuf_wait_for_io()

This is the patch used to determine that wakeup never happens.
Comment 3 Joel Becker 2005-02-11 16:21:26 EST
Created attachment 110999 [details]
This is the output from the diagnostic patch (attachment 110996).

Just search for "timeout".  Note that all timeout lines slept 300 seconds (the
patch mistakenly prints the value in jiffies, 300 * HZ). Also note that kiobuf->io_count is *always*
zero.  This is what leads me to believe that there is a missed wakeup (or that
the wakeup never happens).
Comment 4 Joel Becker 2005-02-11 16:25:22 EST
Oh, in the first comment where I mention end_kiobuf_io() I really meant
end_kiobuf_request().
Comment 5 Joel Becker 2005-02-11 17:43:56 EST
Issue tracker issue 65797 created.
Comment 6 Joel Becker 2005-02-25 20:06:31 EST
Created attachment 111451 [details]
Newer diagnostic patch

This is a newer patch that prints more details for the hang.  It shows that
io_count hasn't reached 0.  As such, I'm not sure this is a missed wakeup
anymore.  Somehow, those buffer_heads aren't finishing.

I've attached to /proc/kcore with gdb, and the io_count buffers are still
BH_Locked.  So I have no idea where they are in the process of I/O other than
that they were set up in brw_kiovec().
Comment 7 Joel Becker 2005-02-25 20:09:07 EST
Created attachment 111452 [details]
Output from newer diagnostic patch (attachment 111451 [details])

This output shows the io_count is nonzero.
Comment 8 Joel Becker 2005-03-01 14:49:43 EST
Created attachment 111545 [details]
buffer_head tracing patch

Ok, since the diagnostic patch (attachment 111451 [details]) was showing incomplete I/Os,
I added tracing to buffer_heads.  The addition is a b_trace bitfield, with a
bit for each place of interest.  There is also a b_bounce field to point to an
associated bounce buffer if there is one, as a pending I/O isn't going to set bits
on the parent buffer (i.e., in scsi_request_fn() all it has is the bounce
buffer).  Finally, when the timeout from the diagnostic patch is reached, the
buffer_heads on the kiobuf are dumped.
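
Roughly, the addition has this shape (a sketch only; b_trace, b_bounce, and the
BHT_ScsiQueued/BHT_ScsiEnd/BHT_EndIO names are the ones used in this bug, the
remaining names and bit numbering are illustrative, and the real change is
attachment 111545):

/* New fields in struct buffer_head: */
	unsigned long		b_trace;	/* bitmask of code paths this bh has passed through */
	struct buffer_head	*b_bounce;	/* bounce buffer created for this bh, if any */

/* Trace bits, one per place of interest (illustrative numbering): */
#define BHT_Bounced	0	/* a bounce buffer was created for this bh */
#define BHT_ReqQueued	1	/* added to the request_queue */
#define BHT_ScsiQueued	2	/* bottom of scsi_request_fn() */
#define BHT_ScsiEnd	3	/* __scsi_end_request() */
#define BHT_EndIO	4	/* the bh's b_end_io function ran */

/* Example marker, e.g. at the bottom of scsi_request_fn(): */
	set_bit(BHT_ScsiQueued, &bh->b_trace);

/* When the schedule_timeout() in kiobuf_wait_for_io() expires, the kiobuf's
 * buffer_heads are walked and each one's b_trace (plus its state bits) is
 * printk'd; that is what the dumps in the later attachment show. */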
Comment 9 Joel Becker 2005-03-01 14:53:39 EST
Created attachment 111547 [details]
Output from diagnostic and buffer_head tracing patches

This is the output of a run with the patches in attachments 111451 and 111545. 
Again, we see a kiobuf fail to complete within five minutes.  The first process
to have this problem, pid 1440, shows a buffer_head that was bounced, added to
the request_queue, and queued in SCSI (BHT_ScsiQueued is set at the bottom of
scsi_request_fn()).  However, it never gets to the end of SCSI processing
(BHT_ScsiEnd would be set in __scsi_end_request(), BHT_EndIO would be set in
the b_end_io function).

The next two processes, 1421 and 1908, show that their kiobufs have a
buffer_head that is bounced and added to the request_queue, but never queued in
SCSI.
Comment 10 Joel Becker 2005-03-01 16:56:39 EST
I logged onto the machine, still in the state shown in attachment
111547 [details], and checked the output of /proc/scsi/qla2x00/[34].  Sure
enough, the qla2300 driver sees no active or pending I/Os.  So, where
did that BHT_ScsiQueued I/O go?

Would it be worthwhile to engage QLogic here?
Comment 11 Larry Woodman 2005-04-12 13:36:06 EDT
I built an AS2.1 kernel that includes the missed wakeup patch.
Please try it out and see if it fixes this problem:

It's located here: http://people.redhat.com/~lwoodman/.for_oracle/


Thanks, Larry


Comment 12 Joel Becker 2005-04-12 16:46:43 EDT
Larry, when you say "missed wakeup patch," do you mean the patch I posted in the
__wait_on_buffer() bug (#147656), or do you mean something else?

I'm currently querying the customer to see if the environment is still available.
Comment 13 Larry Woodman 2005-05-04 13:56:18 EDT
Joel, did you have the same stability issues with the previous kernel on this
issue as you reported in Bug 147656?  If yes, the official AS2.1-U7 kernel is
located in:

>>>http://people.redhat.com/~lwoodman/AS2.1/

It does not contain the wait_on_buffer patch, but I need to know whether this
kernel fixes the stability issues you experienced with the previous kernel that
you tried.

Larry
Comment 14 AJ Johnson 2005-05-04 17:13:59 EDT
Can you let me know the status of this bug... I am seeing something very similar on
RHEL3 up3 (which has the __wait_on_buffer patch as well).  Please see RIT71999
for additional details.
Comment 15 Larry Woodman 2005-05-11 15:38:44 EDT
AJ, if you are experiencing a similar problem on RHEL3, please open a new bug
for that issue.  The code is totally different between AS2.1 and RHEL3, so it
really can't be the exact same issue.

Thanks, Larry Woodman
Comment 16 AJ Johnson 2005-05-11 16:24:41 EDT
Yeah, I think our situation is either in the SCSI or SCSI adapter/multipath areas.

Thanks, AJ
Comment 18 Bob Johnson 2006-03-29 13:27:59 EST
Joel, any updates regarding Larry's test kernel (see comment #13)?
Comment 19 Joel Becker 2006-03-29 13:46:47 EST
The customer had long since moved on with our kernel.  They were in production,
and couldn't spend downtime testing Larry's kernel.  I believe I told Larry this
offline.

I'm not sure what to do.  The problem still exists, so closing the bug seems
wrong, but I know that this particular customer isn't going to be helping us
reproduce it.
