Bug 1473162
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | [Gluster-block]: VM core generated, with gluster-block (failed) create | | |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sweta Anandpara <sanandpa> |
| Component: | tcmu-runner | Assignee: | Prasanna Kumar Kalever <prasanna.kalever> |
| Status: | CLOSED ERRATA | QA Contact: | Sweta Anandpara <sanandpa> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.3 | CC: | amukherj, hchiramm, kramdoss, mchristi, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.3.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | tcmu-runner-1.2.0-14.el7rhgs | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1477959 1490350 (view as bug list) | Environment: | |
| Last Closed: | 2017-09-21 04:20:54 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1477959, 1488610 | | |
| Bug Blocks: | 1417151, 1474188, 1490350 | | |
Description
Sweta Anandpara
2017-07-20 07:14:25 UTC
Seen twice, on two different peer nodes, but I don't have straightforward steps to reproduce. I would like this bug to be discussed in the wider forum, as I am not completely sure of the likelihood and the repercussions of this happening in a CNS environment; hence, setting blocker to '?'.

I hit another VM crash today when the block-create command that I issued failed. It was not a negative test; I was expecting the block to be created successfully. The bug title looks the same, but the backtrace is different. Please advise if this is a different issue.

```
BUG: unable to handle kernel NULL pointer dereference at 00000000000001d0
IP: [<ffffffffc0623080>] uio_poll+0x20/0x70 [uio]
PGD 7d462067 PUD ce7b0067 PMD 0
Oops: 0000 [#1] SMP
Modules linked in: target_core_pscsi target_core_file target_core_iblock iscsi_target_mod target_core_user target_core_mod crc_t10dif crct10dif_generic uio crct10dif_common sctp_diag sctp dccp_diag dccp tcp_diag udp_diag inet_diag unix_diag af_packet_diag netlink_diag binfmt_misc fuse nf_conntrack_netbios_ns nf_conntrack_broadcast ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio ppdev pcspkr joydev sg virtio_balloon parport_pc i2c_piix4 parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc dm_multipath ip_tables xfs libcrc32c sr_mod cdrom ata_generic pata_acpi cirrus drm_kms_helper virtio_blk syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm serio_raw 8139too virtio_pci virtio_ring virtio ata_piix libata 8139cp mii i2c_core floppy dm_mirror dm_region_hash dm_log dm_mod 8021q garp mrp bridge stp llc bonding
CPU: 0 PID: 14320 Comm: tcmu-runner Not tainted 3.10.0-693.el7.x86_64 #1
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2007
task: ffff880049cf2f70 ti: ffff880117d94000 task.ti: ffff880117d94000
RIP: 0010:[<ffffffffc0623080>] [<ffffffffc0623080>] uio_poll+0x20/0x70 [uio]
RSP: 0018:ffff880117d97b08  EFLAGS: 00010202
RAX: 00000000fffffffb RBX: ffff880049c781e0 RCX: 0000000000000000
RDX: ffffffffc0623060 RSI: ffff880117d97c90 RDI: ffff8800c34c8d00
RBP: ffff880117d97b18 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff8800c866f560
R13: 0000000000000000 R14: 0000000000000000 R15: ffff880117d97b9c
FS:  00007fb730e3b700(0000) GS:ffff88011fc00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000000001d0 CR3: 000000009c696000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
 ffff880117d97ba4 0000000000000000 ffff880117d97f38 ffffffff81217297
 00007fb730e3ada0 ffff880117d97fd8 ffff880049cf2f70 0000000000000000
 0000000000000000 0000000000000000 0000000000000000 0000000000000000
Call Trace:
 [<ffffffff81217297>] do_sys_poll+0x327/0x580
 [<ffffffff810cd794>] ? update_curr+0x104/0x190
 [<ffffffff810c8f18>] ? __enqueue_entity+0x78/0x80
 [<ffffffff810cf90c>] ? enqueue_entity+0x26c/0xb60
 [<ffffffff810ce8d8>] ? check_preempt_wakeup+0x148/0x250
 [<ffffffff810c12d5>] ? check_preempt_curr+0x85/0xa0
 [<ffffffff81215dd0>] ? poll_select_copy_remaining+0x150/0x150
 [<ffffffff810cd794>] ? update_curr+0x104/0x190
 [<ffffffff810ca29e>] ? account_entity_dequeue+0xae/0xd0
 [<ffffffff810cdc7c>] ? dequeue_entity+0x11c/0x5d0
 [<ffffffff81062ede>] ? kvm_clock_read+0x1e/0x20
 [<ffffffff810ce54e>] ? dequeue_task_fair+0x41e/0x660
 [<ffffffff810cb62c>] ? set_next_entity+0x3c/0xe0
 [<ffffffff810cb72f>] ? pick_next_task_fair+0x5f/0x1b0
 [<ffffffff8133d9dd>] ? list_del+0xd/0x30
 [<ffffffff810b1671>] ? remove_wait_queue+0x31/0x40
 [<ffffffffc062394d>] ? uio_read+0x11d/0x180 [uio]
 [<ffffffff810c4810>] ? wake_up_state+0x20/0x20
 [<ffffffff812175f4>] SyS_poll+0x74/0x110
 [<ffffffff8111f5c6>] ? __audit_syscall_exit+0x1e6/0x280
 [<ffffffff816b4fc9>] system_call_fastpath+0x16/0x1b
Code: ff ff c3 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 b8 fb ff ff ff 48 89 e5 41 54 53 4c 8b a7 a8 00 00 00 49 8b 1c 24 48 8b 4b 40 <48> 83 b9 d0 01 00 00 00 75 06 5b 41 5c 5d c3 90 48 85 f6 74 19
RIP  [<ffffffffc0623080>] uio_poll+0x20/0x70 [uio]
 RSP <ffff880117d97b08>
```

The above trace is seen with glusterfs-3.8.4-35 and gluster-block-0.2.1-6.

The corresponding CNS bug is verified (https://bugzilla.redhat.com/show_bug.cgi?id=1490350#c3). We are good from the CNS verification perspective.

Tested and verified this on the builds tcmu-runner-1.2.0-15 and gluster-block-0.2.1-13. Executed multiple block creates and deletes, stopped the gluster-blockd service, and did node reboots. I do not see the mentioned VM crash in any of my attempts. I did see partially created blocks (on failed creates), for which bz 1490818 has been raised. Moving this bug to verified in RHGS 3.3.0.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773