Bug 883046 - VM with RHEL 6.3 that runs on NFS over POSIX storage crashes (kernel panic) when trying to hot plug/unplug a direct LUN disk.
Keywords:
Status: CLOSED DUPLICATE of bug 870344
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Asias He
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-03 16:59 UTC by Leonid Natapov
Modified: 2013-01-04 04:42 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-12-31 07:11:13 UTC
Target Upstream Version:
Embargoed:


Attachments
screen shot (16.07 KB, image/png), 2012-12-03 16:59 UTC, Leonid Natapov

Description Leonid Natapov 2012-12-03 16:59:24 UTC
Created attachment 656833 [details]
screen shot

A VM with RHEL 6.3 that runs on NFS over POSIX storage crashes (kernel panic) when trying to hot plug/unplug a direct LUN disk.

See the attached screenshot of the kernel panic.

How to reproduce:
1. RHEV-M setup with POSIX compliant FS over NFS storage.
2. Create a VM with RHEL 6.3 OS.
3. Attach a direct LUN disk to this VM.
4. Start the VM.
5. Plug/unplug the disk (see the virsh sketch below).

100% Reproducible.
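
For reference, a rough command-line equivalent of the activate/deactivate loop (a sketch only: the actual reproduction used the RHEV-M UI, and the domain name and LUN path below are taken from the dumpxml that follows):

# Hypothetical repro loop; attach-disk/detach-disk act on the running domain.
for i in $(seq 1 20); do
    virsh attach-disk GLUSTER_VM1 /dev/mapper/3514f0c561000000c vdh
    sleep 5
    virsh detach-disk GLUSTER_VM1 vdh
    sleep 5
done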



[root@purple-vds3 ~]# virsh -r dumpxml 34
<domain type='kvm' id='34'>
  <name>GLUSTER_VM1</name>
  <uuid>64affa70-3ce1-4788-a824-214a4224b916</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Red Hat</entry>
      <entry name='product'>RHEV Hypervisor</entry>
      <entry name='version'>6Server-6.3.0.3.el6</entry>
      <entry name='serial'>3A1792D7-9AD9-3F7F-BCEA-D468EC51E653_00:1A:64:10:2E:18</entry>
      <entry name='uuid'>64affa70-3ce1-4788-a824-214a4224b916</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Conroe</model>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='-43200'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/d0861b3f-898d-4500-b5fa-37af273953e7/c87b9039-a04e-4607-900d-a1c3a6e7edf1/images/857ea95a-51f9-4743-a0a4-66104c7b8a17/1c1c1c6b-8195-47f1-a728-2d12b7fdba80'>
        <seclabel relabel='no'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <serial>857ea95a-51f9-4743-a0a4-66104c7b8a17</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads'/>
      <source file='/rhev/data-center/d0861b3f-898d-4500-b5fa-37af273953e7/c87b9039-a04e-4607-900d-a1c3a6e7edf1/images/ce11927b-2b40-422a-84f5-5d024d883233/74bfbfe5-fa9e-4b1d-8d14-7ae6b3e2b8c2'>
        <seclabel relabel='no'/>
      </source>
      <target dev='vdg' bus='virtio'/>
      <shareable/>
      <serial>ce11927b-2b40-422a-84f5-5d024d883233</serial>
      <alias name='virtio-disk6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/dev/mapper/3514f0c561000000c'/>
      <target dev='vdh' bus='virtio'/>
      <shareable/>
      <serial></serial>
      <alias name='virtio-disk7'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:23:a8:c6'/>
      <source bridge='rhevm'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/GLUSTER_VM1.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/GLUSTER_VM1.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' port='5900' tlsPort='5901' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2012-12-03T16:35:10' connected='disconnect'>
      <listen type='address' address='0'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' vram='65536' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c55,c292</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c55,c292</imagelabel>
  </seclabel>
</domain>

Comment 2 Asias He 2012-12-05 07:10:21 UTC
(In reply to comment #0)
> How to reproduce:
> 1. RHEV-M setup with POSIX compliant FS over NFS storage.
> 2. Create a VM with RHEL 6.3 OS.
> 3. Attach a direct LUN disk to this VM.
> 4. Start the VM.
> 5. Plug/unplug the disk.

Is the disk mounted when it is unplugged?

Comment 3 RHEL Program Management 2012-12-14 07:50:42 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

Comment 4 Chao Yang 2012-12-28 09:01:50 UTC
Cannot reproduce with either an NFS DC or a POSIX compliant FS.
vdsm-4.10.2-1.0.el6.x86_64
vdsm-gluster-4.10.2-0.101.26.el6.noarch
libvirt-0.10.2-11.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
glusterfs-3.3.1-6.el6.x86_64
rhevm-3.1.0-32.el6ev.noarch

Steps:
1. create a DC (NFS/POSIX compliant FS)
2. enable the gluster service in the Cluster tab
3. add a host into the cluster
4. add a New Domain by choosing "Data/POSIX compliant FS", VFS type: NFS, Mount Options: vers=3
5. create a rhel6.3 VM
6. hot plug/unplug a direct LUN on this VM by activating/deactivating it repeatedly

Questions:
1. How many hot plug/unplug loops are needed to reproduce this issue?
2. How many nodes were added in this cluster? Are all the nodes connected by gluster?
3. Did you add volumes through the "Volumes" tab?
4. Can you please provide the versions of the related packages (as listed above)? A one-liner for collecting them is sketched below.
5. If anything is missing, please point it out.
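
Something like this on the host should collect the versions in question (a sketch; adjust the list to the packages actually installed):

# Query the RPM database for the components listed above.
rpm -q vdsm vdsm-gluster libvirt qemu-kvm-rhev glusterfs rhevm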

Comment 5 Chao Yang 2012-12-28 09:52:05 UTC
Reproduced with the disk mounted in the guest by repeatedly hot plugging/unplugging this direct LUN:
      KERNEL: /usr/lib/debug/lib/modules/2.6.32-279.el6.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2012-12-27-16:33:11/vmcore  [PARTIAL DUMP]
        CPUS: 1
        DATE: Thu Dec 27 16:33:08 2012
      UPTIME: 00:59:30
LOAD AVERAGE: 0.01, 0.02, 0.00
       TASKS: 149
    NODENAME: localhost.localdomain
     RELEASE: 2.6.32-279.el6.x86_64
     VERSION: #1 SMP Wed Jun 13 18:24:36 EDT 2012
     MACHINE: x86_64  (2659 Mhz)
      MEMORY: 2 GB
       PANIC: ""
         PID: 4805
     COMMAND: "blkid"
        TASK: ffff88007ac69500  [THREAD_INFO: ffff880037bb2000]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)

crash> bt
PID: 4805   TASK: ffff88007ac69500  CPU: 0   COMMAND: "blkid"
 #0 [ffff880037bb3b00] machine_kexec at ffffffff8103281b
 #1 [ffff880037bb3b60] crash_kexec at ffffffff810ba662
 #2 [ffff880037bb3c30] oops_end at ffffffff81501290
 #3 [ffff880037bb3c60] die at ffffffff8100f26b
 #4 [ffff880037bb3c90] do_general_protection at ffffffff81500e22
 #5 [ffff880037bb3cc0] general_protection at ffffffff815005f5
    [exception RIP: virtio_check_driver_offered_feature+27]
    RIP: ffffffffa00540cb  RSP: ffff880037bb3d78  RFLAGS: 00010206
    RAX: ffffffff81103350  RBX: ffff88007e379540  RCX: 000000004cf0758b
    RDX: 4ce86d8b4ce0658b  RSI: 0000000000000007  RDI: ffffffff81a971a0
    RBP: ffff880037bb3d78   R8: ffffffffa006b220   R9: 0000000000000000
    R10: 000000000000006e  R11: 0000000000000001  R12: 000000000000101d
    R13: 0000000000005331  R14: ffffffff81a971a0  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #6 [ffff880037bb3d80] virtblk_ioctl at ffffffffa006a65c [virtio_blk]
 #7 [ffff880037bb3dc0] __blkdev_driver_ioctl at ffffffff8125e357
 #8 [ffff880037bb3e00] blkdev_ioctl at ffffffff8125e7dd
 #9 [ffff880037bb3e50] block_ioctl at ffffffff811b381c
#10 [ffff880037bb3e60] vfs_ioctl at ffffffff8118dec2
#11 [ffff880037bb3ea0] do_vfs_ioctl at ffffffff8118e064
#12 [ffff880037bb3f30] sys_ioctl at ffffffff8118e5e1
#13 [ffff880037bb3f80] system_call_fastpath at ffffffff8100b0f2
    RIP: 00007f3715c9c7b7  RSP: 00007fffa5a711f0  RFLAGS: 00010202
    RAX: 0000000000000010  RBX: ffffffff8100b0f2  RCX: 0000000000000008
    RDX: 0000000000000000  RSI: 0000000000005331  RDI: 0000000000000003
    RBP: 000000050091ea00   R8: 00007f3715f49580   R9: 0000000000000100
    R10: 00007fffa5a71830  R11: 0000000000000246  R12: 0000000000000000
    R13: 00007f3716373ba0  R14: 0000000000000003  R15: 00000000008cf030
    ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b
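
Reading the backtrace (an interpretation, not yet confirmed): blkid probes the virtio disk and issues ioctl 0x5331 (the value in RSI/R13), which is CDROM_GET_CAPABILITY; virtblk_ioctl passes it down, and virtio_check_driver_offered_feature then walks the feature table of a virtio_device that the hot-unplug has presumably already torn down, hence the fault on a garbage pointer. The ioctl value can be checked against the kernel headers:

# 0x5331 (RSI/R13 above) is the ioctl command blkid was issuing:
grep CDROM_GET_CAPABILITY /usr/include/linux/cdrom.h
# -> #define CDROM_GET_CAPABILITY 0x5331 /* get capabilities */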

Comment 7 Chao Yang 2012-12-31 06:45:51 UTC
Reproduced in a 6.4 guest (2.6.32-351.el6.x86_64) using qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64.
Steps:
1. create a DC (NFS/POSIX compliant FS)
2. enable the gluster service in the Cluster tab
3. add a host into the cluster
4. add a New Domain by choosing "Data/POSIX compliant FS", VFS type: NFS, Mount Options: vers=3
5. create a rhel6.4 VM
6. hot plug a direct LUN into this VM via activate, then mount it in the guest
7. hot plug/unplug repeatedly by clicking the activate/deactivate button without unmounting first in the guest

Actual result:
After roughly 10 loops, the guest crashed.

      KERNEL: /usr/lib/debug/lib/modules/2.6.32-351.el6.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2012-12-30-13:14:36/vmcore  [PARTIAL DUMP]
        CPUS: 1
        DATE: Sun Dec 30 13:14:32 2012
      UPTIME: 00:35:35
LOAD AVERAGE: 0.00, 0.00, 0.03
       TASKS: 238
    NODENAME: localhost.localdomain
     RELEASE: 2.6.32-351.el6.x86_64
     VERSION: #1 SMP Thu Dec 20 16:13:16 EST 2012
     MACHINE: x86_64  (2659 Mhz)
      MEMORY: 2 GB
       PANIC: "Oops: 0000 [#1] SMP " (check log for details)
         PID: 6300
     COMMAND: "blkid"
        TASK: ffff88007c779500  [THREAD_INFO: ffff880037b40000]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)

crash> bt
PID: 6300   TASK: ffff88007c779500  CPU: 0   COMMAND: "blkid"
 #0 [ffff880037b41940] machine_kexec at ffffffff81035b7b
 #1 [ffff880037b419a0] crash_kexec at ffffffff810c0d32
 #2 [ffff880037b41a70] oops_end at ffffffff81510d80
 #3 [ffff880037b41aa0] no_context at ffffffff81046bfb
 #4 [ffff880037b41af0] __bad_area_nosemaphore at ffffffff81046e85
 #5 [ffff880037b41b40] bad_area at ffffffff81046fae
 #6 [ffff880037b41b70] __do_page_fault at ffffffff81047760
 #7 [ffff880037b41c90] do_page_fault at ffffffff81512cce
 #8 [ffff880037b41cc0] page_fault at ffffffff81510085
    [exception RIP: virtio_check_driver_offered_feature+16]
    RIP: ffffffffa00550c0  RSP: ffff880037b41d78  RFLAGS: 00010286
    RAX: 0000000c9f3bcaae  RBX: ffff88007b1e6440  RCX: 0000000000000000
    RDX: 0000000000005331  RSI: 0000000000000007  RDI: ffff88007d358040
    RBP: ffff880037b41d78   R8: ffffffffa00881a0   R9: 0000000000000000
    R10: 000000000000007d  R11: 0000000000000001  R12: 000000000000101d
    R13: 0000000000005331  R14: ffff88007d358040  R15: 0000000000000000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff880037b41d80] virtblk_ioctl at ffffffffa008764b [virtio_blk]
#10 [ffff880037b41dc0] __blkdev_driver_ioctl at ffffffff812641e7
#11 [ffff880037b41e00] blkdev_ioctl at ffffffff8126466d
#12 [ffff880037b41e50] block_ioctl at ffffffff811bb67c
#13 [ffff880037b41e60] vfs_ioctl at ffffffff81194de2
#14 [ffff880037b41ea0] do_vfs_ioctl at ffffffff81194f84
#15 [ffff880037b41f30] sys_ioctl at ffffffff81195501
#16 [ffff880037b41f80] system_call_fastpath at ffffffff8100b072
    RIP: 00007fc110fb4a47  RSP: 00007fffbb434ea0  RFLAGS: 00010202
    RAX: 0000000000000010  RBX: ffffffff8100b072  RCX: 0000000000000008
    RDX: 0000000000000000  RSI: 0000000000005331  RDI: 0000000000000003
    RBP: 000000050091ea00   R8: 00007fc111261580   R9: 0000000000000100
    R10: 00007fffbb4354e0  R11: 0000000000000246  R12: 0000000000000000
    R13: 00007fc11168bba0  R14: 0000000000000003  R15: 0000000001112030
    ORIG_RAX: 0000000000000010  CS: 0033  SS: 002b

Comment 8 Chao Yang 2012-12-31 06:48:00 UTC
Hi Asias,
 Can you please comment?

Comment 10 Asias He 2012-12-31 07:11:13 UTC
I am pretty sure this is the same problem we have in BZ870344.

*** This bug has been marked as a duplicate of bug 870344 ***

