Bug 603938 - Task kjournald blocked for more than 120 seconds, leading to I/O timeout
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.1
Hardware: x86_64 Linux
Priority: low  Severity: medium
Target Milestone: rc
Assigned To: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
Reported: 2010-06-14 18:43 EDT by Sean Stewart
Modified: 2010-07-15 11:54 EDT
CC: 6 users

Doc Type: Bug Fix
Last Closed: 2010-07-15 11:54:58 EDT

Attachments
messages file from the host during the test run (179.28 KB, application/octet-stream)
2010-06-14 18:43 EDT, Sean Stewart

Description Sean Stewart 2010-06-14 18:43:48 EDT
Created attachment 423989
messages file from the host during the test run

Description of problem:
Running two LSI arrays. The test case calls for extending the boot sequence of the controllers. When this occurs, a timer in the LSI MPP driver appears to get reset, causing I/O to several devices to time out before they transfer to the alternate controller. Several minutes after pausing the controller boot sequence, a soft panic (hung-task warning) can be observed in /var/log/messages:

Jun 14 16:45:26 kswc-kenpachi kernel: INFO: task kjournald:10576 blocked for more than 120 seconds.
Jun 14 16:45:26 kswc-kenpachi kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 14 16:45:26 kswc-kenpachi kernel: kjournald     D 0000000000000002     0 10576      2 0x00000080
Jun 14 16:45:26 kswc-kenpachi kernel: ffff8803e8e8fc40 0000000000000046 0000000000000000 000000000000036a
Jun 14 16:45:26 kswc-kenpachi kernel: ffff880415216240 ffff880415216240 0000000000000050 ffff880416a43620
Jun 14 16:45:26 kswc-kenpachi kernel: ffff8803fba4fab0 ffff8803e8e8ffd8 00000000000103b8 ffff8803fba4fab0
Jun 14 16:45:26 kswc-kenpachi kernel: Call Trace:
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8109a449>] ? ktime_get_ts+0xa9/0xe0
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff811962e0>] ? sync_buffer+0x0/0x50
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff814d4543>] io_schedule+0x73/0xc0
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff81196320>] sync_buffer+0x40/0x50
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff814d4dcf>] __wait_on_bit+0x5f/0x90
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff811962e0>] ? sync_buffer+0x0/0x50
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff814d4e78>] out_of_line_wait_on_bit+0x78/0x90
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8108ffc0>] ? wake_bit_function+0x0/0x50
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff811962d6>] __wait_on_buffer+0x26/0x30
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffffa04a5aeb>] journal_commit_transaction+0x5fb/0x1240 [jbd]
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8107b94c>] ? lock_timer_base+0x3c/0x70
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8107c3cb>] ? try_to_del_timer_sync+0x7b/0xe0
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffffa04a97f4>] kjournald+0xe4/0x230 [jbd]
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8108ff80>] ? autoremove_wake_function+0x0/0x40
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffffa04a9710>] ? kjournald+0x0/0x230 [jbd]
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8108fc26>] kthread+0x96/0xa0
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff810141ca>] child_rip+0xa/0x20
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff8108fb90>] ? kthread+0x0/0xa0
Jun 14 16:45:26 kswc-kenpachi kernel: [<ffffffff810141c0>] ? child_rip+0x0/0x20
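
The 120-second threshold in the warning above comes from the kernel's hung-task watchdog, which can be inspected or silenced via /proc, as the log message itself notes. A minimal sketch (reading is harmless; writing requires root):

```shell
#!/bin/sh
# Read the current hung-task timeout (120 s by default on RHEL 6).
timeout=$(cat /proc/sys/kernel/hung_task_timeout_secs 2>/dev/null || echo unknown)
echo "hung_task_timeout_secs: $timeout"
# To silence the warning entirely (root only), as the message suggests:
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```

Note that disabling the watchdog only hides the symptom; the blocked I/O itself is unaffected.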

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Map 64 LUNs to the host running the LSI MPP driver, 32 on each of two separate arrays
2. Start I/O and reboot the A controller on each array
3. Wait 10 minutes (the application timeout)
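
A sketch of step 2's I/O load, assuming hypothetical device names (on a real host the MPP virtual devices may appear under different nodes); the dd commands are only printed here, not executed, since they would overwrite the LUNs:

```shell
#!/bin/sh
# Generate one sequential writer per LUN (64 total: 32 per array).
# /dev/mpp_lunN is a hypothetical name; substitute the real MPP device nodes.
for i in $(seq 0 63); do
    echo "dd if=/dev/zero of=/dev/mpp_lun$i bs=1M oflag=direct &"
done
```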
  
Actual results:
Some volumes transferred when "Controller IO time expired" was logged, after 5 minutes. On several devices, however, the timer somehow got reset, so it does not expire until the 10-minute mark, when the application itself times out.

Expected results:
"Controller IO time expired" should be observed after 5 minutes, and the volumes should all transfer to the alternate controller.

Additional info:
If I run with only a single array (32 LUNs), this problem does not occur. If I reboot controller A of the first array, wait several minutes, then reboot controller A of the second array, this problem also does not occur. This seems to imply some problem within the OS handling all of the LUNs. In the cases where I/O does not time out, there is still a soft panic with LinuxSmash, but not for kjournald.
Comment 2 RHEL Product and Program Management 2010-06-14 19:02:58 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.
Comment 3 Eric Sandeen 2010-06-15 10:01:48 EDT
As near as I can tell, this is not an ext3/kjournald/jbd bug - it looks like the underlying hardware is sitting on an IO for an extended period of time, and jbd is just waiting for it.  In that case the softlockup messages are expected after 2 min / 120s.

The actual bug is that the transfer to the alternate controller is taking too long, I think.  Have I read it right?

Thanks,
-Eric
Comment 4 Sean Stewart 2010-06-15 10:22:58 EDT
That's essentially right. There are 64 LUNs, and the transfer takes place as expected for most devices. The problem is that the timer seems to be reset, somehow, for four or five devices, and those devices take five minutes longer than they should to transfer. After talking with our failover team, they thought it has to do with the kjournald hang. The problem seems to be dependent on the number of LUNs, as it is not observed with 32 LUNs but is with 64.
Comment 5 Eric Sandeen 2010-06-15 10:34:14 EDT
(In reply to comment #4)
> After talking with our failover team, they
> thought it has to do with the kjournald hang.

Hm, and I'm inclined to think that kjournald is the victim here.  :)

The transfer doesn't have anything to do with host IO does it?

Is this slow transfer behavior also seen when no IO is happening, and/or when no filesystem is mounted?  What about testing with IO going directly to the block device, rather than through the fs?
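
One way to run the raw-device test suggested here is dd with O_DIRECT, which bypasses the page cache and jbd entirely. A sketch with a hypothetical device path (the command is printed rather than run, because the write is destructive to the target LUN):

```shell
#!/bin/sh
# DEV is hypothetical; point it at a scratch LUN only -- this write destroys data.
DEV=${DEV:-/dev/mpp_lun0}
echo "dd if=/dev/zero of=$DEV bs=1M count=1024 oflag=direct"
```

If the slow failover reproduces with this, the filesystem and kjournald are ruled out as causes.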
Comment 6 Tom Coughlan 2010-06-20 13:49:50 EDT
In addition to the questions in comment 5, can you reproduce this without the closed source (therefore unsupported) LSI MPP driver? Is there support for this hardware in dm-multipath? If not, how about developing and offering that as an option?
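
For reference, LSI/Engenio-class arrays are typically driven through the kernel's RDAC device handler under dm-multipath. A hypothetical /etc/multipath.conf fragment as a starting point (the vendor and product strings are placeholders and must match what the array actually reports in its SCSI INQUIRY data):

```
devices {
    device {
        vendor                "LSI"          # placeholder: match INQUIRY vendor
        product               "VirtualDisk"  # placeholder: match INQUIRY product
        path_grouping_policy  group_by_prio
        prio                  rdac
        path_checker          rdac
        hardware_handler      "1 rdac"
        failback              immediate
    }
}
```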
Comment 7 RHEL Product and Program Management 2010-07-15 10:36:48 EDT
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release. It has
been denied for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
