Bug 1480064 - vmcore due to tcmu-runner seen on gluster-block setup
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tcmu-runner
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Prasanna Kumar Kalever
QA Contact: krishnaram Karthick
Depends On:
Blocks:
Reported: 2017-08-10 00:17 EDT by krishnaram Karthick
Modified: 2017-10-31 11:12 EDT
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-10-31 11:12:06 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description krishnaram Karthick 2017-08-10 00:17:33 EDT
Description of problem:
A vmcore was generated on one of the CNS test setups. The crash looks like it is due to tcmu-runner.

I'm not sure what caused this issue, but I had run a series of gluster-block delete operations shortly beforehand (roughly of the shape shown below).
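
For context, the deletes were ordinary gluster-block CLI calls, along these lines. The volume and block names here are made up for illustration; only the fact that it was a series of delete operations comes from the actual run:

  # illustrative only -- names do not correspond to the real setup
  gluster-block delete blockvol/block1
  gluster-block delete blockvol/block2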

[184856.592494] Modules linked in: dm_round_robin iscsi_tcp libiscsi_tcp libiscsi iscsi_target_mod target_core_pscsi target_core_file target_core_iblock scsi_transport_iscsi dm_multipath target_core_user target_core_mod uio fuse xt_nat xt_conntrack iptable_filter ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat ip_tables xt_statistic veth xt_recent vport_vxlan vxlan ip6_udp_tunnel udp_tunnel xt_comment xt_mark openvswitch nf_conntrack_ipv6 nf_nat_ipv6 nf_defrag_ipv6 nf_nat_ipv4 xt_addrtype nf_nat br_netfilter bridge stp llc dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio vmw_vsock_vmci_transport vsock ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack sb_edac edac_core coretemp iosf_mbi crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd
[184856.599905]  ppdev vmw_balloon pcspkr joydev sg parport_pc parport vmw_vmci shpchp i2c_piix4 nfsd auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c sr_mod cdrom ata_generic pata_acpi sd_mod crc_t10dif crct10dif_generic vmwgfx drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci ata_piix libahci drm crct10dif_pclmul crct10dif_common libata crc32c_intel serio_raw vmxnet3 i2c_core vmw_pvscsi floppy dm_mirror dm_region_hash dm_log dm_mod [last unloaded: xt_conntrack]
[184856.606070] CPU: 17 PID: 8030 Comm: tcmu-runner Not tainted 3.10.0-693.el7.x86_64 #1
[184856.607572] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 09/21/2015
[184856.609083] task: ffff8803f8f76eb0 ti: ffff880aeaba0000 task.ti: ffff880aeaba0000
[184856.610578] RIP: 0010:[<ffffffffc06027c2>]  [<ffffffffc06027c2>] tcmu_vma_fault+0x72/0xf0 [target_core_user]
[184856.612104] RSP: 0000:ffff880aeaba3d58  EFLAGS: 00010246
[184856.613577] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff880000000000
[184856.615058] RDX: ffff8800000006f8 RSI: 00003ffffffff000 RDI: 0000000000000000
[184856.616504] RBP: ffff880aeaba3d68 R08: 0000000000000000 R09: ffff880aeaba3de8
[184856.617936] R10: 0000000000000002 R11: 0000000000000000 R12: ffff880aeaba3d80
[184856.619670] R13: ffff8809eaac76c8 R14: 0000000000000000 R15: ffff880c02fa5038
[184856.621126] FS:  00007f59535c2880(0000) GS:ffff880c0da40000(0000) knlGS:0000000000000000
[184856.622541] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[184856.623918] CR2: 0000000000000000 CR3: 00000003d7a55000 CR4: 00000000003407e0
[184856.625341] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[184856.626730] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[184856.628063] Stack:
[184856.629374]  0000000000000000 ffff880aeaba3de8 ffff880aeaba3dc8 ffffffff811ad162
[184856.630665]  0000000000000000 ffff8803000000a8 0000000000000000 00007f5940f52000
[184856.631961]  0000000000000000 0000000000000000 ffff880c02fa5038 00000000a40e4666
[184856.633213] Call Trace:
[184856.634449]  [<ffffffff811ad162>] __do_fault+0x52/0xe0
[184856.635665]  [<ffffffff811ad60b>] do_read_fault.isra.44+0x4b/0x130
[184856.636867]  [<ffffffff811b1f11>] handle_mm_fault+0x691/0xfa0
[184856.638060]  [<ffffffff811b8c6e>] ? do_mmap_pgoff+0x31e/0x3e0
[184856.639232]  [<ffffffff816affb4>] __do_page_fault+0x154/0x450
[184856.640369]  [<ffffffff816b02e5>] do_page_fault+0x35/0x90
[184856.641491]  [<ffffffff816ac508>] page_fault+0x28/0x30
[184856.642605] Code: d7 48 63 d2 48 8d 04 52 48 c1 e7 0c 48 c1 e0 04 48 01 c6 48 03 be 80 10 00 00 83 be 90 10 00 00 02 74 36 e8 01 dc bb c0 48 89 c3 <48> 8b 03 f6 c4 80 75 59 f0 ff 43 1c 48 8b 03 a9 00 00 00 80 74 
[184856.644905] RIP  [<ffffffffc06027c2>] tcmu_vma_fault+0x72/0xf0 [target_core_user]
[184856.646019]  RSP <ffff880aeaba3d58>
[184856.647107] CR2: 0000000000000000
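
Reading the oops: the faulting instruction in the Code line (<48> 8b 03, i.e. mov (%rbx),%rax) dereferences RBX, which is 0, and RBX was loaded from RAX immediately after a call; together with CR2: 0000000000000000 this is consistent with a helper inside tcmu_vma_fault returning NULL and that pointer then being dereferenced. For whoever picks up the vmcore, a first pass with the crash utility would look roughly like this (the vmcore path is a placeholder, and it assumes the matching kernel-debuginfo for 3.10.0-693.el7.x86_64 is installed):

  # open the vmcore against the matching debug vmlinux
  $ crash /usr/lib/debug/lib/modules/3.10.0-693.el7.x86_64/vmlinux \
        /var/crash/<host>-<date>/vmcore

  # inside crash: confirm the oops, get the backtrace, disassemble the fault site
  crash> log
  crash> bt
  crash> mod -s target_core_user
  crash> dis -l tcmu_vma_fault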

Version-Release number of selected component (if applicable):
rpm -qa | grep 'gluster'
glusterfs-fuse-3.8.4-35.el7rhgs.x86_64
glusterfs-server-3.8.4-35.el7rhgs.x86_64
gluster-block-0.2.1-6.el7rhgs.x86_64
glusterfs-libs-3.8.4-35.el7rhgs.x86_64
glusterfs-3.8.4-35.el7rhgs.x86_64
glusterfs-api-3.8.4-35.el7rhgs.x86_64
glusterfs-cli-3.8.4-35.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-35.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-35.el7rhgs.x86_64

How reproducible:
Yet to determine

Steps to Reproduce:
1.
2.
3.

Actual results:
A kernel crash (vmcore) was seen. This takes down any application pods running on the node.

Expected results:
No crashes should be seen

Additional info:
The core file will be attached.
Comment 9 Prasanna Kumar Kalever 2017-10-31 11:12:06 EDT
We have not hit this again, hence closing this for now. Please feel free to re-open if you hit this in the future.
