Bug 1461586 - Mirror to RAID1 TAKEOVER: GPF kmem_cache_alloc
Status: NEW
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: x86_64 Linux
Priority: unspecified   Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Depends On:
Blocks:
Reported: 2017-06-14 17:12 EDT by Corey Marthaler
Modified: 2018-07-20 11:50 EDT (History)
7 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Corey Marthaler 2017-06-14 17:12:15 EDT
Description of problem:
Not much to go on here; hopefully I'll be able to reproduce this.

================================================================================
Iteration 0.5 started at Wed Jun 14 15:42:08 CDT 2017
================================================================================
Scenario mirror: Convert Mirrored volume
********* Take over hash info for this scenario *********
* from type:    mirror
* to type:      raid1
* from legs:    4
* to legs:      1
* from region:  256.00k
* to region:    16384.00k
* contiguous:   0
* snapshot:     1
******************************************************
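The scenario parameters above translate into roughly the following command sequence (a condensed sketch assembled from the commands logged below; the VG name `centipede2`, LV name `takeover`, and sizes come from this run, and the commands need root plus dedicated scratch PVs, so the sketch defaults to a dry run that only prints them):

```shell
#!/usr/bin/env bash
# Sketch of the mirror -> raid1 takeover sequence from this test run.
# DRY_RUN=1 (the default) only prints each command instead of running it,
# since the real commands are destructive and require root.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$*"
    else
        "$@"
    fi
}

# 1. Create a 5-way (-m 4) mirror-segtype LV with a 256k region size.
run lvcreate --type mirror -R 256.00k -m 4 -n takeover -L 4G centipede2
# 2. Wait for Cpy%Sync to reach 100%, then extend past the spacer LVs
#    so the new extents land in a separate allocation.
run lvextend -L +50M centipede2/takeover
# 3. (In the original run: mkfs.xfs, mount, and start checkit I/O here.)
# 4. Take over to raid1 with a different (16M) region size -- the step
#    after which the GPF in kmem_cache_alloc was observed.
run lvconvert --yes -R 16384.00k --type raid1 centipede2/takeover
```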


Creating original volume on mckinley-04...
mckinley-04: lvcreate  --type mirror -R 256.00k -m 4 -n takeover -L 4G centipede2
  WARNING: Not using lvmetad because a repair command was run.
Waiting until all mirror|raid volumes become fully synced...
   0/1 mirror(s) are fully synced: ( 15.38% )
   0/1 mirror(s) are fully synced: ( 34.29% )
   0/1 mirror(s) are fully synced: ( 57.90% )
   0/1 mirror(s) are fully synced: ( 81.75% )
   0/1 mirror(s) are fully synced: ( 98.96% )
   1/1 mirror(s) are fully synced: ( 100.00% )
Sleeping 15 sec

  WARNING: Not using lvmetad because a repair command was run.
  WARNING: Not using lvmetad because a repair command was run.
Placing a spacer on all raid image PVs so that expansion will have to be placed beyond
  WARNING: Not using lvmetad because a repair command was run.
  WARNING: Not using lvmetad because a repair command was run.
  WARNING: Not using lvmetad because a repair command was run.
  WARNING: Not using lvmetad because a repair command was run.
  WARNING: Not using lvmetad because a repair command was run.
  WARNING: Not using lvmetad because a repair command was run.
Extending raid beyond spacer
        lvextend -L +50M centipede2/takeover
  WARNING: Not using lvmetad because a repair command was run.

Current volume device structure:
  WARNING: Not using lvmetad because a repair command was run.
  LV                  Attr       LSize   Cpy%Sync Devices                                                                                                 
  lvol0               -wi-a-----  20.00m          /dev/mapper/mpatha1(1024)                                                                               
  lvol1               -wi-a-----  20.00m          /dev/mapper/mpathb1(1024)                                                                               
  lvol2               -wi-a-----  20.00m          /dev/mapper/mpathc1(1024)                                                                               
  lvol3               -wi-a-----  20.00m          /dev/mapper/mpathd1(1024)                                                                               
  lvol4               -wi-a-----  20.00m          /dev/mapper/mpathh1(1)                                                                                  
  lvol5               -wi-a-----  20.00m          /dev/nvme0n1p1(1024)                                                                                    
  takeover            mwi-a-m---   4.05g 98.94    takeover_mimage_0(0),takeover_mimage_1(0),takeover_mimage_2(0),takeover_mimage_3(0),takeover_mimage_4(0)
  [takeover_mimage_0] Iwi-aom---   4.05g          /dev/nvme0n1p1(0)                                                                                       
  [takeover_mimage_0] Iwi-aom---   4.05g          /dev/nvme0n1p1(1029)                                                                                    
  [takeover_mimage_1] Iwi-aom---   4.05g          /dev/mapper/mpatha1(0)                                                                                  
  [takeover_mimage_1] Iwi-aom---   4.05g          /dev/mapper/mpatha1(1029)                                                                               
  [takeover_mimage_2] Iwi-aom---   4.05g          /dev/mapper/mpathb1(0)                                                                                  
  [takeover_mimage_2] Iwi-aom---   4.05g          /dev/mapper/mpathb1(1029)                                                                               
  [takeover_mimage_3] Iwi-aom---   4.05g          /dev/mapper/mpathc1(0)                                                                                  
  [takeover_mimage_3] Iwi-aom---   4.05g          /dev/mapper/mpathc1(1029)                                                                               
  [takeover_mimage_4] Iwi-aom---   4.05g          /dev/mapper/mpathd1(0)                                                                                  
  [takeover_mimage_4] Iwi-aom---   4.05g          /dev/mapper/mpathd1(1029)                                                                               
  [takeover_mlog]     lwi-aom---   4.00m          /dev/mapper/mpathh1(0)                                                                                  


Creating xfs on top of mirror(s) on mckinley-04...
warning: device is not properly aligned /dev/centipede2/takeover
Mounting mirrored xfs filesystems on mckinley-04...

Writing verification files (checkit) to mirror(s) on...
        ---- mckinley-04 ----

<start name="mckinley-04_takeover"  pid="15643" time="Wed Jun 14 15:43:43 2017 -0500" type="cmd" />
Sleeping 15 seconds to get some outstanding I/O locks before the failure
Verifying files (checkit) on mirror(s) on...
        ---- mckinley-04 ----

TAKEOVER: lvconvert --yes -R 16384.00k  --type raid1 centipede2/takeover
  WARNING: Not using lvmetad because a repair command was run.
<fail name="mckinley-04_takeover"  pid="15643" time="Wed Jun 14 15:46:10 2017 -0500" type="cmd" duration="147" ec="127" />
ALL STOP!
Didn't receive heartbeat from mckinley-04 for 120 seconds


# after the system was back up, here's what the volume looks like:
[root@mckinley-04 spool]# lvs -a -o +devices
  LV                  VG         Attr       LSize    Log             Cpy%Sync Devices
  lvol0               centipede2 -wi-a-----  20.00m                           /dev/mapper/mpatha1(1024)
  lvol1               centipede2 -wi-a-----  20.00m                           /dev/mapper/mpathb1(1024)
  lvol2               centipede2 -wi-a-----  20.00m                           /dev/mapper/mpathc1(1024)
  lvol3               centipede2 -wi-a-----  20.00m                           /dev/mapper/mpathd1(1024)
  lvol4               centipede2 -wi-a-----  20.00m                           /dev/mapper/mpathh1(1)
  lvol5               centipede2 -wi-a-----  20.00m                           /dev/nvme0n1p1(1024)
  takeover            centipede2 mwi-a-m---   4.05g  [takeover_mlog] 100.00   takeover_mimage_0(0),takeover_mimage_1(0),takeover_mimage_2(0),takeover_mimage_3(0),takeover_mimage_4(0)
  [takeover_mimage_0] centipede2 iwi-aom---   4.05g                           /dev/nvme0n1p1(0)
  [takeover_mimage_0] centipede2 iwi-aom---   4.05g                           /dev/nvme0n1p1(1029)
  [takeover_mimage_1] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpatha1(0)
  [takeover_mimage_1] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpatha1(1029)
  [takeover_mimage_2] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpathb1(0)
  [takeover_mimage_2] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpathb1(1029)
  [takeover_mimage_3] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpathc1(0)
  [takeover_mimage_3] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpathc1(1029)
  [takeover_mimage_4] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpathd1(0)
  [takeover_mimage_4] centipede2 iwi-aom---   4.05g                           /dev/mapper/mpathd1(1029)
  [takeover_mlog]     centipede2 lwi-aom---   4.00m                           /dev/mapper/mpathh1(0)
  takeover_rmeta_0    centipede2 -wi-a-----   4.00m                           /dev/nvme0n1p1(1042)
  takeover_rmeta_1    centipede2 -wi-a-----   4.00m                           /dev/mapper/mpatha1(1042)
  takeover_rmeta_2    centipede2 -wi-a-----   4.00m                           /dev/mapper/mpathb1(1042)
  takeover_rmeta_3    centipede2 -wi-a-----   4.00m                           /dev/mapper/mpathc1(1042)
  takeover_rmeta_4    centipede2 -wi-a-----   4.00m                           /dev/mapper/mpathd1(1042)





[ 7643.995286] general protection fault: 0000 [#1] SMP
[ 7644.000853] Modules linked in: raid1 raid10 raid0 dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx iTCO_wdt iTCO_vendor_support sb_edac dcdbas edac_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel ipmi_ssif lrw gf128mul glue_helper ablk_helper cryptd dm_service_time pcspkr ipmi_si mei_me ipmi_devintf lpc_ich joydev mei shpchp acpi_pad ipmi_msghandler wmi acpi_power_meter dm_multipath sg nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sr_mod cdrom sd_mod crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops qla2xxx ttm ahci libahci drm crct10dif_pclmul tg3 crct10dif_common libata crc32c_intel scsi_transport_fc nvme ptp megaraid_sas nvme_core i2c_core scsi_tgt pps_core dm_mirror dm_region_hash dm_log dm_mod
[ 7644.089754] CPU: 7 PID: 90 Comm: kdevtmpfs Not tainted 3.10.0-681.el7.bz1443999a.x86_64 #1
[ 7644.098986] Hardware name: Dell Inc. PowerEdge R820/0RN9TC, BIOS 2.0.20 01/16/2014
[ 7644.107436] task: ffff880168c0cf10 ti: ffff880fffde4000 task.ti: ffff880fffde4000
[ 7644.115778] RIP: 0010:[<ffffffff811df3e4>]  [<ffffffff811df3e4>] kmem_cache_alloc+0x74/0x1e0
[ 7644.125217] RSP: 0018:ffff880fffde7bc0  EFLAGS: 00010282
[ 7644.131145] RAX: 0000000000000000 RBX: 0000000000000026 RCX: 0000000000232ea4
[ 7644.139107] RDX: 0000000000232ea3 RSI: 0000000000018020 RDI: ffff88017fc03b00
[ 7644.147068] RBP: ffff880fffde7bf0 R08: 0000000000019bc0 R09: ffffffff816a1b85
[ 7644.155036] R10: 0000000000000020 R11: ffff8810a94325d8 R12: dead000000000200
[ 7644.162998] R13: 0000000000018020 R14: ffff88017fc03b00 R15: ffff88017fc03b00
[ 7644.170971] FS:  0000000000000000(0000) GS:ffff881ffeac0000(0000) knlGS:0000000000000000
[ 7644.179999] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 7644.186409] CR2: 00000000004260d0 CR3: 00000000019ee000 CR4: 00000000000407e0
[ 7644.194373] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 7644.202335] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 7644.210296] Stack:
[ 7644.212539]  0000000077ad679d 0000000000000026 0000000000000001 ffff880fffde7cb4
[ 7644.220832]  000000000000000b 0000000000000000 ffff880fffde7c30 ffffffff816a1b85
[ 7644.229129]  000000000000001f 0000000000000026 0000000000000001 ffff880fffde7cb4
[ 7644.237424] Call Trace:
[ 7644.240160]  [<ffffffff816a1b85>] avc_alloc_node+0x24/0x123
[ 7644.246381]  [<ffffffff816a1d1c>] avc_compute_av+0x98/0x1b5
[ 7644.252604]  [<ffffffff812b4548>] avc_has_perm_flags+0xd8/0x1a0
[ 7644.259217]  [<ffffffff812c8fde>] ? security_compute_sid+0x4e/0x50
[ 7644.266119]  [<ffffffff812b8fd1>] may_create+0x101/0x130
[ 7644.272046]  [<ffffffff812b7c9e>] ? selinux_capable+0x2e/0x40
[ 7644.278461]  [<ffffffff812bb16a>] selinux_inode_mknod+0x7a/0x80
[ 7644.285064]  [<ffffffff812b1c3f>] security_inode_mknod+0x1f/0x30
[ 7644.291772]  [<ffffffff8120d599>] vfs_mknod+0xc9/0x160
[ 7644.297514]  [<ffffffff81445254>] handle_create.isra.2+0x84/0x220
[ 7644.304319]  [<ffffffff816a7b4d>] ? __schedule+0x39d/0x8b0
[ 7644.310442]  [<ffffffff81445545>] devtmpfsd+0x155/0x180
[ 7644.316273]  [<ffffffff814453f0>] ? handle_create.isra.2+0x220/0x220
[ 7644.323369]  [<ffffffff810b09cf>] kthread+0xcf/0xe0
[ 7644.328814]  [<ffffffff810b0900>] ? insert_kthread_work+0x40/0x40
[ 7644.335608]  [<ffffffff816b3ad8>] ret_from_fork+0x58/0x90
[ 7644.341625]  [<ffffffff810b0900>] ? insert_kthread_work+0x40/0x40
[ 7644.348423] Code: dd e2 7e 49 8b 50 08 4d 8b 20 49 8b 40 10 4d 85 e4 0f 84 20 01 00 00 48 85 c0 0f 84 17 01 00 00 49 63 46 20 48 8d 4a 01 4d 8b 06 <49> 8b 1c 04 4c 89 e0 65 49 0f c7 08 0f 94 c0 84 c0 74 ba 49 63
[ 7644.370138] RIP  [<ffffffff811df3e4>] kmem_cache_alloc+0x74/0x1e0
[ 7644.376951]  RSP <ffff880fffde7bc0>

Version-Release number of selected component (if applicable):
3.10.0-681.el7.bz1443999a.x86_64

lvm2-2.02.171-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
lvm2-libs-2.02.171-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
lvm2-cluster-2.02.171-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-libs-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-event-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-event-libs-1.02.140-5.el7    BUILT: Wed Jun 14 10:33:32 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017
Comment 2 Corey Marthaler 2017-06-14 19:27:44 EDT
Another reproduction.

[ 5879.791091] VFS: busy inodes on changed media or resized disk dm-27
[ 5879.813039] md: reshape of RAID array mdX
[ 5921.519129] md: mdX: reshape done.
[ 5921.582682] dm-27: detected capacity change from 4303355904 to 7172259840
[ 5921.590521] VFS: busy inodes on changed media or resized disk dm-27
[ 5963.707572] XFS (dm-27): Unmounting Filesystem
[ 6022.629666] BUG: unable to handle kernel paging request at 0000004600000090
[ 6022.637496] IP: [<ffffffff811df3e4>] kmem_cache_alloc+0x74/0x1e0
[ 6022.644243] PGD 0
[ 6022.646506] Oops: 0000 [#1] SMP
[ 6022.650140] Modules linked in: raid10 raid1 raid0 dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx sb_edac edac_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd iTCO_wdt iTCO_vendor_support dcdbas ipmi_ssif ipmi_si ipmi_devintf pcspkr mei_me dm_service_time lpc_ich joydev ipmi_msghandler mei shpchp acpi_power_meter wmi acpi_pad sg nfsd auth_rpcgss nfs_acl lockd grace dm_multipath sunrpc ip_tables xfs libcrc32c sd_mod sr_mod cdrom crc_t10dif crct10dif_generic mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm qla2xxx drm ahci libahci tg3 crct10dif_pclmul crct10dif_common libata scsi_transport_fc crc32c_intel i2c_core nvme ptp megaraid_sas nvme_core scsi_tgt pps_core dm_mirror dm_region_hash dm_log dm_mod
[ 6022.739485] CPU: 11 PID: 1028 Comm: systemd-udevd Not tainted 3.10.0-681.el7.bz1443999a.x86_64 #1
[ 6022.749397] Hardware name: Dell Inc. PowerEdge R820/0RN9TC, BIOS 2.0.20 01/16/2014
[ 6022.757853] task: ffff881ffb382f70 ti: ffff881ffe8ac000 task.ti: ffff881ffe8ac000
[ 6022.766212] RIP: 0010:[<ffffffff811df3e4>]  [<ffffffff811df3e4>] kmem_cache_alloc+0x74/0x1e0
[ 6022.775650] RSP: 0018:ffff881ffe8afc50  EFLAGS: 00010286
[ 6022.781582] RAX: 0000000000000000 RBX: ffff881ffe5e50e0 RCX: 0000000000071e9f
[ 6022.789543] RDX: 0000000000071e9e RSI: 00000000000000d0 RDI: ffff88017fc03b00
[ 6022.797509] RBP: ffff881ffe8afc80 R08: 0000000000019bc0 R09: ffffffff811bd488
[ 6022.805470] R10: 0000000000000029 R11: 0000000000000000 R12: 0000004600000090
[ 6022.813432] R13: 00000000000000d0 R14: ffff88017fc03b00 R15: ffff88017fc03b00
[ 6022.821402] FS:  00007f68e67228c0(0000) GS:ffff881ffeb40000(0000) knlGS:0000000000000000
[ 6022.830430] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6022.836848] CR2: 0000004600000090 CR3: 0000000ffbe30000 CR4: 00000000000407e0
[ 6022.844809] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6022.852774] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 6022.860734] Stack:
[ 6022.862976]  ffffffff811bd488 ffff881ffe5e50e0 0000000000000000 ffff880ffd70f6c0
[ 6022.871277]  ffff880ffc0ee998 ffff880ffd70f6c0 ffff881ffe8afcb8 ffffffff811bd488
[ 6022.879571]  00007f68e672e000 ffff881ffe5e50e0 ffff880ffd70f6c0 ffff880ffc0ee998
[ 6022.887867] Call Trace:
[ 6022.890606]  [<ffffffff811bd488>] ? anon_vma_prepare+0x48/0x130
[ 6022.897214]  [<ffffffff811bd488>] anon_vma_prepare+0x48/0x130
[ 6022.903620]  [<ffffffff811b23cd>] handle_mm_fault+0xc7d/0x1060
[ 6022.910142]  [<ffffffff816aeb74>] __do_page_fault+0x154/0x450
[ 6022.916560]  [<ffffffff8132dacb>] ? string.isra.7+0x3b/0xf0
[ 6022.922781]  [<ffffffff816aeea5>] do_page_fault+0x35/0x90
[ 6022.928806]  [<ffffffff816ab0c8>] page_fault+0x28/0x30
[ 6022.934552]  [<ffffffff8143aa4d>] ? show_uevent+0xed/0x110
[ 6022.940676]  [<ffffffff813300c0>] ? copy_user_generic_string+0x30/0x40
[ 6022.947981]  [<ffffffff812298cc>] ? simple_read_from_buffer+0x3c/0x90
[ 6022.955189]  [<ffffffff81280185>] sysfs_read_file+0xe5/0x1a0
[ 6022.961514]  [<ffffffff81200a0c>] vfs_read+0x9c/0x170
[ 6022.967151]  [<ffffffff812018cf>] SyS_read+0x7f/0xe0
[ 6022.972702]  [<ffffffff816b3b89>] system_call_fastpath+0x16/0x1b
[ 6022.979404] Code: dd e2 7e 49 8b 50 08 4d 8b 20 49 8b 40 10 4d 85 e4 0f 84 20 01 00 00 48 85 c0 0f 84 17 01 00 00 49 63 46 20 48 8d 4a 01 4d 8b 06 <49> 8b 1c 04 4c 89 e0 65 49 0f c7 08 0f 94 c0 84 c0 74 ba 49 63
[ 6023.001164] RIP  [<ffffffff811df3e4>] kmem_cache_alloc+0x74/0x1e0
[ 6023.007974]  RSP <ffff881ffe8afc50>
[ 6023.011873] CR2: 0000004600000090
Comment 3 Corey Marthaler 2017-06-23 13:45:37 EDT
In 3.10.0-685...


[27174.868064] md/raid:mdX: raid level 5 active with 5 out of 5 devices, algorithm 1
[27174.919298] dm-31: detected capacity change from 4362076160 to 3489660928
[27174.935032] md: reshape of RAID array mdX
[27192.403480] md: mdX: reshape done.
[27222.521477] attempt to access beyond end of device
[27222.526841] dm-31: rw=32, want=8518656, limit=6815744
[27222.532507] XFS (dm-31): last sector read failed
[27319.613194] BUG: unable to handle kernel NULL pointer dereference at 000000000000001d
[27319.621971] IP: [<ffffffff811df5b4>] kmem_cache_alloc+0x74/0x1e0
[27319.628714] PGD 0
[27319.630974] Oops: 0000 [#1] SMP
[27319.634595] Modules linked in: raid10 raid1 raid0 dm_raid raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio dlm iTCO_wdt iTCO_vendor_support dcdbas sb_edac edac_core intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass ipmi_ssif crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul ipmi_si glue_helper ablk_helper mei_me cryptd ipmi_devintf dm_service_time joydev pcspkr mei lpc_ich ipmi_msghandler wmi acpi_pad shpchp acpi_power_meter sg dm_multipath nfsd auth_rpcgss nfs_acl lockd grace sunrpc ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic sr_mod cdrom mgag200 i2c_algo_bit qla2xxx drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci tg3 libahci crct10dif_pclmul crct10dif_common libata nvme drm crc32c_intel scsi_transport_fc ptp megaraid_sas nvme_core i2c_core scsi_tgt pps_core dm_mirror dm_region_hash dm_log dm_mod
[27319.729635] CPU: 10 PID: 13624 Comm: systemd-cgroups Not tainted 3.10.0-685.el7.x86_64 #1
[27319.738771] Hardware name: Dell Inc. PowerEdge R820/0RN9TC, BIOS 2.0.20 01/16/2014
[27319.747221] task: ffff880fc1239fa0 ti: ffff880fdf2d4000 task.ti: ffff880fdf2d4000
[27319.755571] RIP: 0010:[<ffffffff811df5b4>]  [<ffffffff811df5b4>] kmem_cache_alloc+0x74/0x1e0
[27319.765016] RSP: 0018:ffff880fdf2d7a78  EFLAGS: 00010286
[27319.770948] RAX: 0000000000000000 RBX: ffff880ffef7a798 RCX: 00000000001bec51
[27319.778910] RDX: 00000000001bec50 RSI: 00000000000000d0 RDI: ffff88017fc03b00
[27319.786878] RBP: ffff880fdf2d7aa8 R08: 0000000000019bc0 R09: ffffffff811bd658
[27319.794841] R10: 0000000000000002 R11: 0000000000000000 R12: 000000000000001d
[27319.802817] R13: 00000000000000d0 R14: ffff88017fc03b00 R15: ffff88017fc03b00
[27319.810780] FS:  0000000000000000(0000) GS:ffff880fff340000(0000) knlGS:0000000000000000
[27319.819814] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[27319.826226] CR2: 000000000000001d CR3: 0000000ff56e3000 CR4: 00000000000407e0
[27319.834187] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[27319.842149] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[27319.850109] Stack:
[27319.852350]  ffffffff811bd658 ffff880ffef7a798 0000000000000000 ffff880f727dd290
[27319.860642]  0000000000000029 ffff880f81eb8c80 ffff880fdf2d7ae0 ffffffff811bd658
[27319.868935]  ffff880ffef7a798 000055c94a5d3024 ffff880f727dd290 0000000000000029
[27319.877225] Call Trace:
[27319.879964]  [<ffffffff811bd658>] ? anon_vma_prepare+0x48/0x130
[27319.886563]  [<ffffffff811bd658>] anon_vma_prepare+0x48/0x130
[27319.892996]  [<ffffffff811adb11>] do_cow_fault+0x41/0x290
[27319.899028]  [<ffffffff811b1afd>] handle_mm_fault+0x2bd/0x1010
[27319.905537]  [<ffffffff811b5258>] ? __vma_link_file+0x48/0x70
[27319.911950]  [<ffffffff811b5e0d>] ? vma_link+0x7d/0xc0
[27319.917692]  [<ffffffff811b704f>] ? vma_set_page_prot+0x3f/0x60
[27319.924305]  [<ffffffff816b0f74>] __do_page_fault+0x154/0x450
[27319.930718]  [<ffffffff816b12a5>] do_page_fault+0x35/0x90
[27319.936759]  [<ffffffff816ad4c8>] page_fault+0x28/0x30
[27319.942497]  [<ffffffff81331cd5>] ? __clear_user+0x25/0x50
[27319.948618]  [<ffffffff81331d30>] clear_user+0x30/0x40
[27319.954354]  [<ffffffff8125dc49>] padzero+0x29/0x40
[27319.959799]  [<ffffffff8125fd36>] load_elf_binary+0x8d6/0xe00
[27319.966214]  [<ffffffff812d4119>] ? ima_bprm_check+0x49/0x50
[27319.972520]  [<ffffffff8125f460>] ? load_elf_library+0x220/0x220
[27319.979228]  [<ffffffff81207e0d>] search_binary_handler+0xed/0x300
[27319.986127]  [<ffffffff81209336>] do_execve_common.isra.25+0x5b6/0x6c0
[27319.993405]  [<ffffffff81209458>] do_execve+0x18/0x20
[27319.999058]  [<ffffffff810a537c>] ____call_usermodehelper+0xfc/0x130
[27320.006148]  [<ffffffff810a53b0>] ? ____call_usermodehelper+0x130/0x130
[27320.013530]  [<ffffffff810a53ce>] call_helper+0x1e/0x20
[27320.019362]  [<ffffffff816b5ed8>] ret_from_fork+0x58/0x90
[27320.025389]  [<ffffffff810a53b0>] ? ____call_usermodehelper+0x130/0x130
[27320.032770] Code: db e2 7e 49 8b 50 08 4d 8b 20 49 8b 40 10 4d 85 e4 0f 84 20 01 00 00 48 85 c0 0f 84 17 01 00 00 49 63 46 20 48 8d 4a 01 4d 8b 06 <49> 8b 1c 04 4c 89 e0 65 49 0f c7 08 0f 94 c0 84 c0 74 ba 49 63
[27320.054537] RIP  [<ffffffff811df5b4>] kmem_cache_alloc+0x74/0x1e0
[27320.061348]  RSP <ffff880fdf2d7a78>
[27320.065238] CR2: 000000000000001d
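One side note on the "attempt to access beyond end of device" lines in this trace: the numbers are internally consistent with the logged shrink, i.e. XFS was still issuing I/O against the pre-reshape size of dm-31. A quick sanity check (my own arithmetic, not part of the original report):

```python
SECTOR = 512  # bytes per 512-byte sector, as used by the block layer limits

old_bytes = 4362076160  # dm-31 capacity before the change (from the log)
new_bytes = 3489660928  # dm-31 capacity after the change
want = 8518656          # sector the failed request targeted (rw=32, want=...)
limit = 6815744         # dm-31's new limit, in sectors

# The new limit is exactly the post-shrink capacity expressed in sectors,
assert new_bytes == limit * SECTOR
# and the failing request lies past the new end of device but would have
# been in range at the old size -- i.e. stale I/O against the shrunken LV.
assert limit <= want < old_bytes // SECTOR
print("want sector", want, "is beyond new limit", limit)
```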
