Bug 741317 - [RHEL6.2] PANIC when starting virt guests
Summary: [RHEL6.2] PANIC when starting virt guests
Keywords:
Status: CLOSED DUPLICATE of bug 740786
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
: ---
Assignee: Red Hat Kernel Manager
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-26 15:03 UTC by PaulB
Modified: 2011-09-27 12:35 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-09-27 12:35:03 UTC
Target Upstream Version:
Embargoed:


Attachments

Description PaulB 2011-09-26 15:03:27 UTC
Description of problem:
 While running Secondary Kernel Testing to confirm RHEL6.2 xen guest install on a RHEL5.7 DOM0, the RHEL6 xen guests PANIC during the /distribution/virt/start test. 

Version-Release number of selected component (if applicable):
 distro=RHEL6.2-20110921.1
 kernel=2.6.32-202.el6

How reproducible:
 Consistently (both i386 and x86_64 guests).

Steps to Reproduce:
1. Install RHEL5.7 x86_64 DOM0.
    distro=RHEL5-Server-U7
    xen kernel=2.6.18-274.el5

2. Create RHEL6.2 guests PARAVirt and FULLVirt 
    distro=RHEL6.2-20110921.1
    kernel=2.6.32-202.el6
   
3. Start guests.
  
Actual results:
 RHEL6.2 guests PANIC.

Expected results:
 RHEL6.2 guests install and start successfully. 

Additional info:
 The issue was seen here:
 [] https://beaker.engineering.redhat.com/jobs/135606
    http://lab2.rhts.eng.bos.redhat.com/beaker/logs/tasks/3072874//guest-guest-81-32.rhts.eng.bos.redhat.com.log
    <-SNIP->
    Kernel panic - not syncing: Attempted to kill init!
    PCI: Fatal: No config space access function found
    <-SNIP->
 
 I have attached the following captured core files:
 - 1613.06-guest-81-32.rhts.eng.bos.redhat.com.1.core
 - 1614.11-guest-80-38.rhts.eng.bos.redhat.com.2.core
 - 1613.51-guest-81-32.rhts.eng.bos.redhat.com.3.core

Best,
-pbunyan

Comment 2 PaulB 2011-09-26 16:05:06 UTC
Correction!!
The following files were listed as captured:
 - 1613.06-guest-81-32.rhts.eng.bos.redhat.com.1.core
 - 1614.11-guest-80-38.rhts.eng.bos.redhat.com.2.core
 - 1613.51-guest-81-32.rhts.eng.bos.redhat.com.3.core

However, only the 1613.51-guest-81-32.rhts.eng.bos.redhat.com.3.core file contained any data. The other two were empty files.

The guest-81-32.rhts.eng.bos.redhat.com.3.core has been copied to the following location:
http://file.bos.redhat.com/~pbunyan/BUGZILLA/guest-81-32.rhts.eng.bos.redhat.com.3.core.tgz


Best,
-pbunyan

Comment 4 Dave Anderson 2011-09-26 18:23:33 UTC
> However, only the 1613.51-guest-81-32.rhts.eng.bos.redhat.com.3.core file
> contained any data. The other two were empty files.
>
> The guest-81-32.rhts.eng.bos.redhat.com.3.core has been copied to the 
> following location:
> http://file.bos.redhat.com/~pbunyan/BUGZILLA/guest-
> 81-32.rhts.eng.bos.redhat.com.3.core.tgz

Unfortunately that core file is corrupt -- the first few bytes in
the header are correct for an ELF kdump, but it goes off into the
weeds shortly thereafter:

# readelf -a 2011-0923-1613.51-guest-81-32.rhts.eng.bos.redhat.com.3.core
ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 01 00 00 00 00 00 00 00 
  Class:                             ELF64
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       1
  Type:                              CORE (Core file)
  Machine:                           Advanced Micro Devices X86-64
  Version:                           0x1
  Entry point address:               0x0
  Start of program headers:          0 (bytes into file)
  Start of section headers:          64 (bytes into file)
  Flags:                             0x0
  Size of this header:               64 (bytes)
  Size of program headers:           56 (bytes)
  Number of program headers:         0
  Size of section headers:           64 (bytes)
  Number of section headers:         7
  Section header string table index: 1
readelf: Error: Unable to read in 0x48 bytes of string table

Section Headers:
  [Nr] Name              Type             Address           Offset
       Size              EntSize          Flags  Link  Info  Align
  [ 0] <no-name>         NULL             0000000000000000  00000000
       0000000000000000  0000000000000000           0     0     0
  [ 1] <no-name>         STRTAB           0000000000000000  40403000
       0000000000000048  0000000000000000           0     0     0
  [ 2] <no-name>         NOTE             0000000000000000  00000200
       0000000000000568  0000000000000000           0     0     0
  [ 3] <no-name>         PROGBITS         0000000000000000  00000768
       0000000000001430  0000000000001430           0     0     8
  [ 4] <no-name>         PROGBITS         0000000000000000  00001b98
       0000000000001000  0000000000001000           0     0     8
  [ 5] <no-name>         PROGBITS         0000000000000000  00002b98
       0000000000400000  0000000000000010           0     0     8
  [ 6] <no-name>         PROGBITS         0000000000000000  00403000
       0000000040000000  0000000000001000           0     0     4096
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings)
  I (info), L (link order), G (group), x (unknown)
  O (extra OS processing required) o (OS specific), p (processor specific)

There are no section groups in this file.

There are no program headers in this file.

There are no relocations in this file.

There are no unwind sections in this file.

No version information found in this file.
No note segments present in the core file.
$
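The readelf output above shows why the file is unusable: an ELF kdump vmcore describes the dumped memory through program headers (a PT_NOTE segment plus PT_LOAD segments), yet this file reports zero program headers while still claiming section headers it cannot back up. A quick header sanity check along those lines could look like the following sketch (not part of the original report; the function name is illustrative):

```python
import struct

ET_CORE = 4  # e_type value for core files in the ELF specification

def check_elf64_core(path):
    """Return a list of problems found in the ELF64 header of a vmcore."""
    problems = []
    with open(path, "rb") as f:
        header = f.read(64)  # ELF64 header is 64 bytes
    if len(header) < 64 or header[:4] != b"\x7fELF":
        return ["not an ELF file"]
    if header[4] != 2:  # e_ident[EI_CLASS]: 2 == ELFCLASS64
        problems.append("not ELF64")
    e_type, = struct.unpack_from("<H", header, 16)
    if e_type != ET_CORE:
        problems.append("e_type is not CORE")
    e_phnum, = struct.unpack_from("<H", header, 56)
    if e_phnum == 0:
        # A kdump ELF vmcore describes memory via PT_NOTE/PT_LOAD
        # program headers; zero here means the dump is unusable.
        problems.append("no program headers")
    return problems
```

Run against the corrupt core above, this would flag "no program headers", which matches readelf's "There are no program headers in this file."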

Comment 5 Dave Anderson 2011-09-26 19:34:40 UTC
OK, looking at the two sample vmcores that were intact, they both panicked
in the same manner, where the init task (PID 1) exits during system
boot -- which should obviously never happen:

crash> bt
PID: 1      TASK: ffff88003ef254c0  CPU: 1   COMMAND: "init"
 #0 [ffff88003ef27dd0] xen_panic_event at ffffffff810033c2
 #1 [ffff88003ef27df0] notifier_call_chain at ffffffff814f16d5
 #2 [ffff88003ef27e30] atomic_notifier_call_chain at ffffffff814f173a
 #3 [ffff88003ef27e40] panic at ffffffff814eb4f9
 #4 [ffff88003ef27ec0] do_exit at ffffffff8106ee52
 #5 [ffff88003ef27f40] do_group_exit at ffffffff8106eeb8
 #6 [ffff88003ef27f70] sys_exit_group at ffffffff8106ef47
 #7 [ffff88003ef27f80] system_call_fastpath at ffffffff8100b0b2
    RIP: 00007f6aa49c1ec8  RSP: 00007fff4f9c3a40  RFLAGS: 00010206
    RAX: 00000000000000e7  RBX: ffffffff8100b0b2  RCX: 0000000000400cb4
    RDX: 0000000000000001  RSI: 000000000000003c  RDI: 0000000000000001
    RBP: 0000000000000000   R8: 00000000000000e7   R9: ffffffffffffffa8
    R10: 00007fff4f9c39b0  R11: 0000000000000206  R12: ffffffff8106ef47
    R13: ffff88003ef27f78  R14: 0000000000000000  R15: 00007fff4f9c3f00
    ORIG_RAX: 00000000000000e7  CS: 0033  SS: 002b
crash>

And here is the log just prior to the panic:

crash> log
... [ cut ] ...
Freeing unused kernel memory: 1044k freed
Freeing unused kernel memory: 1760k freed
dracut: dracut-004-242.el6
dracut: rd_NO_LUKS: removing cryptoluks activation
device-mapper: uevent: version 1.0.3
device-mapper: ioctl: 4.21.6-ioctl (2011-07-06) initialised: dm-devel
udev: starting version 147
dracut: Starting plymouth daemon
dracut: rd_NO_DM: removing DM RAID activation
dracut: rd_NO_MD: removing MD RAID activation
xlblk_init: register_blkdev major: 202 
blkfront: xvda: barriers disabled
 xvda: xvda1 xvda2
dracut Warning: No root device "block:/dev/mapper/vg_dhcp4738-lv_root" found
dracut Warning: LVM vg_dhcp4738/lv_swap not found
dracut Warning: LVM vg_dhcp4738/lv_root not found
dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
dracut Warning: Signal caught!
dracut Warning: LVM vg_dhcp4738/lv_swap not found
dracut Warning: LVM vg_dhcp4738/lv_root not found
dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
Kernel panic - not syncing: Attempted to kill init!
Pid: 1, comm: init Not tainted 2.6.32-201.el6.x86_64 #1
Call Trace:
 [<ffffffff814eb4cb>] ? panic+0x78/0x143
 [<ffffffff8106ee52>] ? do_exit+0x852/0x860
 [<ffffffff81177c95>] ? fput+0x25/0x30
 [<ffffffff8106eeb8>] ? do_group_exit+0x58/0xd0
 [<ffffffff8106ef47>] ? sys_exit_group+0x17/0x20
 [<ffffffff8100b0b2>] ? system_call_fastpath+0x16/0x1b
crash>
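The log makes the failure sequence clear: dracut cannot find the LVM root and swap volumes, boot fails, init exits, and the kernel panics with "Attempted to kill init!". When triaging a batch of console logs like this one, the warnings immediately preceding the panic usually point at the root cause; a small helper to pull them out might look like this (a hypothetical sketch, not tooling from this report):

```python
def warnings_before_panic(log_lines):
    """Collect dracut warnings that precede the first kernel panic line."""
    warnings = []
    for line in log_lines:
        if line.startswith("Kernel panic"):
            break  # anything after the panic is noise for triage
        if line.startswith("dracut Warning:"):
            warnings.append(line)
    return warnings
```

Applied to the log above, this surfaces the "No root device" and "LVM ... not found" warnings that explain why init gave up.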

Comment 6 Jeff Burke 2011-09-27 12:35:03 UTC

*** This bug has been marked as a duplicate of bug 740786 ***

