Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you are a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you are not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Migrated Bugzilla bugs will be moved to status "CLOSED" with resolution "MIGRATED", and "MigratedToJIRA" will be added to "Keywords". The link to the successor Jira issue appears under "Links", has a little "two-footprint" icon next to it, and leads to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link is also shown in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 681817

Summary: [ext4] Reproducer for bug 546700 got stuck
Product: Red Hat Enterprise Linux 6
Component: kernel
Version: 6.1
Hardware: x86_64
OS: Unspecified
Status: CLOSED DUPLICATE
Severity: medium
Priority: unspecified
Reporter: Eryu Guan <eguan>
Assignee: Red Hat Kernel Manager <kernel-mgr>
QA Contact: Red Hat Kernel QE team <kernel-qe>
Docs Contact:
CC: esandeen, jmoyer, kzhang, rwheeler
Target Milestone: rc
Target Release: ---
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-04-01 14:40:41 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: sysrq-w output (flags: none)

Description Eryu Guan 2011-03-03 11:12:58 UTC
Description of problem:
When running the reproducer for bug 546700, the reproducer gets stuck and the kernel reports tasks blocked for more than 120 seconds.

Version-Release number of selected component (if applicable):
[root@ibm-x3550m3-02 bz546700]# uname -a
Linux ibm-x3550m3-02.rhts.eng.nay.redhat.com 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

How reproducible:
always

Steps to Reproduce:
1. yum install rh-tests-kernel-filesystems-bz546700
2. cd /mnt/tests/kernel/filesystems/bz546700
3. make run
  
Actual results:
INFO: task jbd2/dm-0-8:404 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
jbd2/dm-0-8   D 0000000000000001     0   404      2 0x00000000 
 ffff8801f8fa9b20 0000000000000046 0000000000000000 ffffffffa000158d 
 0000000000000001 ffff8801fa43d8c0 ffff8801f8fa9ad0 0000000000000282 
 ffff8801f93b5af8 ffff8801f8fa9fd8 000000000000f558 ffff8801f93b5af8 
Call Trace: 
 [<ffffffffa000158d>] ? __map_bio+0xad/0x130 [dm_mod] 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff8110cdc0>] ? sync_page+0x0/0x50 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff8110cdfd>] sync_page+0x3d/0x50 
 [<ffffffff814d9ecf>] __wait_on_bit+0x5f/0x90 
 [<ffffffff8110cfb3>] wait_on_page_bit+0x73/0x80 
 [<ffffffff8108dd20>] ? wake_bit_function+0x0/0x50 
 [<ffffffff81122d05>] ? pagevec_lookup_tag+0x25/0x40 
 [<ffffffff8110d3cb>] wait_on_page_writeback_range+0xfb/0x190 
 [<ffffffff81247bef>] ? submit_bio+0x8f/0x120 
 [<ffffffff8110d48f>] filemap_fdatawait+0x2f/0x40 
 [<ffffffffa0064e80>] jbd2_journal_commit_transaction+0x7f0/0x1490 [jbd2] 
 [<ffffffff81079efb>] ? try_to_del_timer_sync+0x7b/0xe0 
 [<ffffffffa006a958>] kjournald2+0xb8/0x220 [jbd2] 
 [<ffffffff8108dce0>] ? autoremove_wake_function+0x0/0x40 
 [<ffffffffa006a8a0>] ? kjournald2+0x0/0x220 [jbd2] 
 [<ffffffff8108d976>] kthread+0x96/0xa0 
 [<ffffffff8100c1ca>] child_rip+0xa/0x20 
 [<ffffffff8108d8e0>] ? kthread+0x0/0xa0 
 [<ffffffff8100c1c0>] ? child_rip+0x0/0x20 
INFO: task beah-beaker-bac:1686 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
beah-beaker-b D 0000000000000001     0  1686      1 0x00000080 
 ffff8801f8d259a8 0000000000000082 0000000000da3d02 ffff8801f8d25918 
 ffff8801f8d259a8 ffffffff814da710 ffff8800f5d93919 ffff8801fa983918 
 ffff8801f8dd45f8 ffff8801f8d25fd8 000000000000f558 ffff8801f8dd45f8 
Call Trace: 
 [<ffffffff814da710>] ? schedule_hrtimeout_range+0xd0/0x160 
 [<ffffffff8108dfce>] ? prepare_to_wait+0x4e/0x80 
 [<ffffffffa00640dd>] do_get_write_access+0x29d/0x500 [jbd2] 
 [<ffffffff8108dd20>] ? wake_bit_function+0x0/0x50 
 [<ffffffffa0064491>] jbd2_journal_get_write_access+0x31/0x50 [jbd2] 
 [<ffffffffa00b8328>] __ext4_journal_get_write_access+0x38/0x80 [ext4] 
 [<ffffffffa0094843>] ext4_reserve_inode_write+0x73/0xa0 [ext4] 
 [<ffffffffa00948bc>] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4] 
 [<ffffffffa0094bb0>] ext4_dirty_inode+0x40/0x60 [ext4] 
 [<ffffffff81199aab>] __mark_inode_dirty+0x3b/0x160 
 [<ffffffff8118a3c2>] file_update_time+0xf2/0x170 
 [<ffffffff814257f5>] ? neigh_resolve_output+0x105/0x370 
 [<ffffffff8110efe0>] __generic_file_aio_write+0x220/0x480 
 [<ffffffff8145607c>] ? ip_finish_output+0x13c/0x310 
 [<ffffffff8145525f>] ? __ip_local_out+0x9f/0xb0 
 [<ffffffff8110f2af>] generic_file_aio_write+0x6f/0xe0 
 [<ffffffffa008e2a1>] ext4_file_write+0x61/0x1e0 [ext4] 
 [<ffffffff8117095a>] do_sync_write+0xfa/0x140 
 [<ffffffff8108dce0>] ? autoremove_wake_function+0x0/0x40 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff8121037b>] ? selinux_file_permission+0xfb/0x150 
 [<ffffffff812037e6>] ? security_file_permission+0x16/0x20 
 [<ffffffff81170c58>] vfs_write+0xb8/0x1a0 
 [<ffffffff810d1652>] ? audit_syscall_entry+0x272/0x2a0 
 [<ffffffff81171752>] sys_pwrite64+0x82/0xa0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12142 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000001     0 12142  12125 0x00000080 
 ffff8800f63e59a8 0000000000000082 0000000000000000 ffffffff81123a11 
 ffff880000000002 ffffffff81140cbd 00000000faa1c360 000000010146bffa 
 ffff8800f56b1ab8 ffff8800f63e5fd8 000000000000f558 ffff8800f56b1ab8 
Call Trace: 
 [<ffffffff81123a11>] ? lru_cache_add_lru+0x21/0x40 
 [<ffffffff81140cbd>] ? page_add_new_anon_rmap+0x9d/0xf0 
 [<ffffffffa00640dd>] do_get_write_access+0x29d/0x500 [jbd2] 
 [<ffffffff8111c756>] ? __rmqueue+0x156/0x490 
 [<ffffffff8108dd20>] ? wake_bit_function+0x0/0x50 
 [<ffffffffa0064491>] jbd2_journal_get_write_access+0x31/0x50 [jbd2] 
 [<ffffffffa00b8328>] __ext4_journal_get_write_access+0x38/0x80 [ext4] 
 [<ffffffffa0094843>] ext4_reserve_inode_write+0x73/0xa0 [ext4] 
 [<ffffffffa00948bc>] ext4_mark_inode_dirty+0x4c/0x1d0 [ext4] 
 [<ffffffffa0094bb0>] ext4_dirty_inode+0x40/0x60 [ext4] 
 [<ffffffff81199aab>] __mark_inode_dirty+0x3b/0x160 
 [<ffffffff8118a3c2>] file_update_time+0xf2/0x170 
 [<ffffffff8110efe0>] __generic_file_aio_write+0x220/0x480 
 [<ffffffff8110f2af>] generic_file_aio_write+0x6f/0xe0 
 [<ffffffffa008e2a1>] ext4_file_write+0x61/0x1e0 [ext4] 
 [<ffffffff8117095a>] do_sync_write+0xfa/0x140 
 [<ffffffff8108dce0>] ? autoremove_wake_function+0x0/0x40 
 [<ffffffff81092f62>] ? hrtimer_cancel+0x22/0x30 
 [<ffffffff8121037b>] ? selinux_file_permission+0xfb/0x150 
 [<ffffffff812037e6>] ? security_file_permission+0x16/0x20 
 [<ffffffff81170c58>] vfs_write+0xb8/0x1a0 
 [<ffffffff810d1652>] ? audit_syscall_entry+0x272/0x2a0 
 [<ffffffff81171691>] sys_write+0x51/0x90 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12150 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000002     0 12150  12125 0x00000080 
 ffff8801faa2beb8 0000000000000082 ffff8801fa121680 ffff8801faa2bf30 
 ffff8801faa2be28 ffffffff8110f34e ffff8801faa2be48 ffffffff8110f4b5 
 ffff8801f92d5af8 ffff8801faa2bfd8 000000000000f558 ffff8801f92d5af8 
Call Trace: 
 [<ffffffff8110f34e>] ? mempool_kfree+0xe/0x10 
 [<ffffffff8110f4b5>] ? mempool_free+0x95/0xa0 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12154 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000002     0 12154  12125 0x00000080 
 ffff8801f9ddfeb8 0000000000000082 ffff8801fa121800 ffff8801f9ddff30 
 ffff8801f9ddfe28 ffffffff8110f34e ffff8801f9ddfe48 ffffffff8110f4b5 
 ffff8801f9203a78 ffff8801f9ddffd8 000000000000f558 ffff8801f9203a78 
Call Trace: 
 [<ffffffff8110f34e>] ? mempool_kfree+0xe/0x10 
 [<ffffffff8110f4b5>] ? mempool_free+0x95/0xa0 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12157 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000001     0 12157  12125 0x00000080 
 ffff8800e59e5eb8 0000000000000082 0000000000000000 ffff8800e59e5e7c 
 ffff8800e59e5e28 ffff8800fb823280 ffff8800e59e5e68 ffffffff81098889 
 ffff8800f4d85038 ffff8800e59e5fd8 000000000000f558 ffff8800f4d85038 
Call Trace: 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12158 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000002     0 12158  12125 0x00000080 
 ffff8800f5265eb8 0000000000000082 0000000000000000 0000000000000000 
 0000000000001008 0000000000000046 ffff8800f5265e68 000000010146a565 
 ffff8800f4fa4638 ffff8800f5265fd8 000000000000f558 ffff8800f4fa4638 
Call Trace: 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12159 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000003     0 12159  12125 0x00000080 
 ffff8800f6231eb8 0000000000000082 ffff8801fa680cc0 ffff8800f6231f30 
 ffff8800f6231e28 ffffffff8110f34e ffff8800f6231e48 ffffffff8110f4b5 
 ffff8800f5ffdb38 ffff8800f6231fd8 000000000000f558 ffff8800f5ffdb38 
Call Trace: 
 [<ffffffff8110f34e>] ? mempool_kfree+0xe/0x10 
 [<ffffffff8110f4b5>] ? mempool_free+0x95/0xa0 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12162 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000002     0 12162  12125 0x00000080 
 ffff8800f5199eb8 0000000000000082 0000000000000000 ffff8800f5199e7c 
 ffff8800f5199e28 ffff8800fb823480 ffff8800f5199e68 ffffffff81098889 
 ffff8800f5245b38 ffff8800f5199fd8 000000000000f558 ffff8800f5245b38 
Call Trace: 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff81098889>] ? ktime_get_ts+0xa9/0xe0 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
INFO: task aio_wq_hang:12163 blocked for more than 120 seconds. 
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
aio_wq_hang   D 0000000000000003     0 12163  12125 0x00000080 
 ffff8800f5f15eb8 0000000000000082 0000000000000000 ffff8800f5f15f30 
 ffff8800f5f15e28 ffffffff8110f34e ffff8800f5f15e68 0000000101469f34 
 ffff880037b370f8 ffff8800f5f15fd8 000000000000f558 ffff880037b370f8 
Call Trace: 
 [<ffffffff8110f34e>] ? mempool_kfree+0xe/0x10 
 [<ffffffff814d9663>] io_schedule+0x73/0xc0 
 [<ffffffff811b67e2>] wait_for_all_aios+0xd2/0x110 
 [<ffffffff8105c8e0>] ? default_wake_function+0x0/0x20 
 [<ffffffff811b68a7>] io_destroy+0x87/0xe0 
 [<ffffffff811b6962>] sys_io_destroy+0x62/0xb0 
 [<ffffffff8100b172>] system_call_fastpath+0x16/0x1b 
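
As the watchdog message itself notes, these warnings come from the kernel's hung-task detector, which is controlled through sysctls. A small sketch for inspecting the relevant knob (the 120-second default matches the messages above; reading is unprivileged, writing requires root):

```shell
# Inspect the hung-task watchdog that emitted the messages above.
# The 2>/dev/null fallback covers kernels built without
# CONFIG_DETECT_HUNG_TASK, where the sysctl file does not exist.
timeout=$(cat /proc/sys/kernel/hung_task_timeout_secs 2>/dev/null || echo 120)
echo "hung_task_timeout_secs=${timeout}"
# To silence the warnings temporarily (as the message suggests, root only):
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```

Note that setting the timeout to 0 only suppresses the diagnostic; the underlying hang remains.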

Expected results:
The test passes.

Additional info:
This only happens on x86_64 hosts. Here is a failed job in Beaker:
https://beaker.engineering.redhat.com/recipes/116713
http://beaker-archive.app.eng.bos.redhat.com/beaker-logs/2011/02/571/57169/116713///console.log

This test also fails on the 6.0 GA kernel.

Comment 1 Eryu Guan 2011-03-03 11:19:05 UTC
Created attachment 482048 [details]
sysrq-w output

The call trace is not similar to the one in bug 546700, but it does resemble bug 681439; I'm not sure whether this is a duplicate.

I have attached the sysrq-w output captured while the test was stuck.
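
To pull just the blocked tasks and their PIDs out of a dump like the attached sysrq-w output, a small filter such as the following can help (a sketch; the field positions assume the RHEL 6 task-header format shown in this report, i.e. `<comm> D <wchan> 0 <pid> <ppid> <flags>`):

```shell
# List distinct D-state (uninterruptible) tasks and their PIDs from a
# hung-task/sysrq-w dump given as a file argument or on stdin.
grep -E '^[[:alnum:]_/.:-]+[[:space:]]+D[[:space:]]' "${1:-/dev/stdin}" \
    | awk '{ printf "%-16s pid=%s\n", $1, $5 }' \
    | sort -u
```

The leading-anchor regex skips the indented stack and register lines, which all start with whitespace, and keeps only the task-header lines.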

Comment 5 Jeff Moyer 2011-04-01 14:40:41 UTC

*** This bug has been marked as a duplicate of bug 587402 ***