Bug 1254449

Summary: Kernel crashes in bitmap_file_set_bit
Product: [Fedora] Fedora
Reporter: Zdenek Kabelac <zkabelac>
Component: lvm2
Assignee: Heinz Mauelshagen <heinzm>
Status: CLOSED UPSTREAM
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 24
CC: agk, bmarzins, bmr, dwysocha, heinzm, jonathan, lvm-team, msnitzer, prajnoha, prockai, zkabelac
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-22 16:01:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Zdenek Kabelac 2015-08-18 07:41:52 UTC
Description of problem:

Recent kernel 4.2.0-0.rc6.git1.1.fc24.x86_64 crashes during the lvm2 test suite (flavour ndev-vanilla, test shell/lvcreate-cache.sh):

[  663.050088] BUG: unable to handle kernel paging request at ffff8800951107c8
[  663.051009] IP: [<ffffffff815ed2cf>] bitmap_file_set_bit+0xbf/0x120
[  663.051009] PGD 203c067 PUD 0
[  663.051009] Oops: 0002 [#1] SMP
[  663.051009] Modules linked in: raid1 raid10 dm_raid raid456 async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq dm_cache_cleaner dm_cache_smq dm_cache_mq dm_cache dm_delay dm_thin_pool dm_persistent_data dm_bio_prison libcrc32c loop crct10dif_pclmul crc32_pclmul virtio_net virtio_balloon acpi_cpufreq crc32c_intel ghash_clmulni_intel ppdev parport_pc parport joydev i2c_piix4 cirrus drm_kms_helper ttm virtio_blk drm serio_raw virtio_pci virtio_ring virtio ata_generic pata_acpi
[  663.051009] CPU: 1 PID: 29332 Comm: lvm Tainted: G        W       4.2.0-0.rc6.git1.1.fc24.x86_64 #1
[  663.051009] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
[  663.051009] task: ffff880000060c80 ti: ffff880074b90000 task.ti: ffff880074b90000
[  663.051009] RIP: 0010:[<ffffffff815ed2cf>]  [<ffffffff815ed2cf>] bitmap_file_set_bit+0xbf/0x120
[  663.051009] RSP: 0018:ffff880074b93648  EFLAGS: 00010083
[  663.051009] RAX: 00000000ffffffc4 RBX: ffff880077393a00 RCX: 0000000000000002
[  663.051009] RDX: ffff8800751107d0 RSI: 0000000000000800 RDI: ffff880077393a00
[  663.051009] RBP: ffff880074b93668 R08: 0000000000000000 R09: 0000000000000000
[  663.051009] R10: ffff880050634000 R11: 0000000000000008 R12: ffffea0001d6ab00
[  663.051009] R13: 0000000000000800 R14: ffff880077393ac0 R15: ffff880074b936b0
[  663.051009] FS:  00007f9e349e2880(0000) GS:ffff88007a300000(0000) knlGS:0000000000000000
[  663.051009] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  663.051009] CR2: ffff8800951107c8 CR3: 0000000075155000 CR4: 00000000000006e0
[  663.051009] Stack:
[  663.051009]  0000000000000000 ffff880077393a00 0000000000000000 0000000000000008
[  663.051009]  ffff880074b936f8 ffffffff815edaa4 ffff880000060c80 ffff880050634000
[  663.051009]  0000000000000001 0000000000100000 0000000000000000 0000000000000000
[  663.051009] Call Trace:
[  663.051009]  [<ffffffff815edaa4>] bitmap_startwrite+0xf4/0x1d0
[  663.051009]  [<ffffffffa01c47f3>] add_stripe_bio+0x333/0x630 [raid456]
[  663.051009]  [<ffffffffa01cc5b2>] make_request+0x1f2/0xc50 [raid456]
[  663.051009]  [<ffffffffa01c3ac0>] ? release_stripe_list+0x70/0x70 [raid456]
[  663.051009]  [<ffffffff811a2eed>] ? mempool_alloc_slab+0x1d/0x30
[  663.051009]  [<ffffffff810df0a0>] ? wake_atomic_t_function+0x70/0x70
[  663.051009]  [<ffffffffa01e1018>] raid_map+0x18/0x20 [dm_raid]
[  663.051009]  [<ffffffff815f121e>] __map_bio+0x3e/0x100
[  663.051009]  [<ffffffff815f3105>] __split_and_process_bio+0x285/0x3f0
[  663.051009]  [<ffffffff815f32dd>] dm_make_request+0x6d/0xc0
[  663.051009]  [<ffffffff8136b106>] generic_make_request+0xd6/0x110
[  663.051009]  [<ffffffff8136b1b6>] submit_bio+0x76/0x170
[  663.051009]  [<ffffffff81258f17>] do_blockdev_direct_IO+0x22e7/0x2b20
[  663.051009]  [<ffffffff810ce96c>] ? __enqueue_entity+0x6c/0x70
[  663.051009]  [<ffffffff810c6c58>] ? check_preempt_curr+0x88/0xa0
[  663.051009]  [<ffffffff81253ee0>] ? I_BDEV+0x20/0x20
[  663.051009]  [<ffffffff81259793>] __blockdev_direct_IO+0x43/0x50
[  663.051009]  [<ffffffff81247cea>] ? __mark_inode_dirty+0x27a/0x300
[  663.051009]  [<ffffffff812547ec>] blkdev_direct_IO+0x4c/0x50
[  663.051009]  [<ffffffff811a1c89>] generic_file_direct_write+0xb9/0x180
[  663.051009]  [<ffffffff811a1e10>] __generic_file_write_iter+0xc0/0x1f0
[  663.051009]  [<ffffffff8125534b>] blkdev_write_iter+0x8b/0x120
[  663.051009]  [<ffffffff810d467e>] ? set_next_entity+0x6e/0x400
[  663.051009]  [<ffffffff8121b09c>] __vfs_write+0xcc/0x100
[  663.051009]  [<ffffffff8121b986>] vfs_write+0xa6/0x1a0
[  663.051009]  [<ffffffff8102229b>] ? do_audit_syscall_entry+0x4b/0x70
[  663.051009]  [<ffffffff8121c675>] SyS_write+0x55/0xc0
[  663.051009]  [<ffffffff8177712e>] entry_SYSCALL_64_fastpath+0x12/0x71
[  663.051009] Code: 25 00 b9 00 00 83 a8 58 09 00 00 01 8b 80 58 09 00 00 85 c0 78 37 f6 05 09 83 73 00 04 75 41 49 8b 44 24 10 48 8b 53 60 c1 e0 02 <f0> 48 0f ab 02 48 83 c4 08 5b 41 5c 41 5d 5d c3 48 89 f0 48 c1
[  663.051009] RIP  [<ffffffff815ed2cf>] bitmap_file_set_bit+0xbf/0x120
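Aside (my reading of the dump, not part of the original report): the faulting bytes in the Code: line, "f0 48 0f ab 02", decode to `lock bts %rax,(%rdx)`. For x86-64 BTS with a memory operand, the written 64-bit word sits at RDX + 8 * (RAX >> 6), so the register values above fully account for the faulting address in CR2 and point to an out-of-range bit index reaching bitmap_file_set_bit. A minimal arithmetic sketch, with the values copied verbatim from the oops:

```python
# Values copied from the oops registers above.
# "f0 48 0f ab 02" = lock bts %rax,(%rdx): with a memory operand, BTS
# writes the 64-bit word at rdx + 8 * (rax >> 6).
RAX = 0x00000000FFFFFFC4  # bit index; 0xffffffc4 is -60 as a signed 32-bit value
RDX = 0xFFFF8800751107D0  # bitmap storage address the bts targets
CR2 = 0xFFFF8800951107C8  # faulting address reported by the oops

target = RDX + 8 * (RAX >> 6)
assert target == CR2  # the huge bit index lands ~512 MiB past the bitmap
print(hex(target))
```

The match suggests the crash is a bogus (effectively negative) chunk/bit offset handed to the md bitmap code rather than a corrupted pointer.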

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Jan Kurik 2016-02-24 13:37:54 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 24 development cycle.
Changing version to '24'.

More information and reason for this action is here:
https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/Fedora24#Rawhide_Rebase