Bug 612453 - [abrt] qemu-kvm-2:0.12.1.2-2.90.el6: qemu_memalign: Process /usr/libexec/qemu-kvm was killed by signal 6 (SIGABRT)
Summary: [abrt] qemu-kvm-2:0.12.1.2-2.90.el6: qemu_memalign: Process /usr/libexec/qemu...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: john cooper
QA Contact: Virtualization Bugs
URL:
Whiteboard: abrt_hash:841543466c446020eb7a11ccd0e...
Depends On:
Blocks: 580953 619168
 
Reported: 2010-07-08 09:24 UTC by Michal Hlavinka
Modified: 2014-07-25 03:46 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 619168
Environment:
Last Closed: 2010-08-03 05:16:12 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
backtrace (12.47 KB, text/plain), attached 2010-07-08 09:24 UTC by Michal Hlavinka

Description Michal Hlavinka 2010-07-08 09:24:45 UTC
abrt version: 1.1.8
architecture: x86_64
Attached file: backtrace
cmdline: /usr/libexec/qemu-kvm -hda rawhide.x86_64.img -net nic,macaddr=DE:AD:BE:EF:92:19 -net tap -m 1024
comment: happens randomly less than 2 seconds after start; it has crashed 3 times so far (out of 20 runs)
component: qemu-kvm
crash_function: qemu_memalign
executable: /usr/libexec/qemu-kvm
kernel: 2.6.32-43.el6.x86_64
package: qemu-kvm-2:0.12.1.2-2.90.el6
rating: 4
reason: Process /usr/libexec/qemu-kvm was killed by signal 6 (SIGABRT)
release: Red Hat Enterprise Linux Client release 6.0 Beta (Santiago)
How to reproduce: see the command line
time: 1278576323
uid: 0

Comment 1 Michal Hlavinka 2010-07-08 09:24:48 UTC
Created attachment 430280 [details]
File: backtrace

Comment 3 Dor Laor 2010-07-12 11:20:36 UTC
Note that you're not running it with the right parameters. Nevertheless, I don't see a reason for the failure.
What happens when you use libvirt/virt-manager?

Comment 4 Michal Hlavinka 2010-07-12 12:45:48 UTC
(In reply to comment #3)
> Note that you're not running it with the right parameters.

what is wrong with them? I've been using them for almost two years and there have been no problems so far.

> What happens when you use libvirt/virt-manager?    

I didn't have libvirt installed, because I was using only qemu-kvm with my own scripts. Anyway, I've installed it and tried to boot a new virtual machine a few times, and it worked fine. But this bug happens only occasionally, and I was not able to reproduce it (before I installed libvirt) using the qemu-kvm command from the reproducer either.

Comment 5 RHEL Program Management 2010-07-15 14:02:57 UTC
This issue was proposed at a time when we are considering only blocker
issues for the current Red Hat Enterprise Linux release. It has been
denied for the current Red Hat Enterprise Linux release.

** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **

Comment 6 Jes Sorensen 2010-07-22 20:49:37 UTC
This rawhide image you are using, is that something you installed yourself,
or was it downloaded from somewhere? Do you see this crash with other
image files, or only with this specific image?

It seems to be crashing in the early stage of launching qemu, in a call
to qemu_mem_alloc() from pc_init1().

Comment 7 Michal Hlavinka 2010-07-23 06:12:43 UTC
(In reply to comment #6)
> This rawhide image you are using, is that something you installed yourself,
> or was it downloaded from somewhere? 

I installed it using PXE onto a blank image.

> Do you see this crash with other image files, or only with this specific image?

I've seen it with a rhel5 image too (I don't use images other than rawhide and rhel5 very often).

Comment 8 Jes Sorensen 2010-07-23 06:41:57 UTC
Are you low on memory when this problem happens? It sounds like it is
unrelated to the image you use, which also matches where it crashes in
the code.

Comment 9 Michal Hlavinka 2010-07-23 07:24:45 UTC
(In reply to comment #8)
> Are you low on memory when this problem happens?

I'm not sure, but I don't think so. My machine has 4 GB of RAM and no swap. I'm not running anything that consumes much memory except Firefox. I reserve 1 GB for the VM, and I've found I can run 2 VMs without any problem, so I guess there should be enough memory when running only 1 VM. But I don't know whether qemu takes that 1 GB of memory immediately or allocates it only when needed.

Comment 10 john cooper 2010-07-23 07:29:39 UTC
Testing this with the most recent build:

http://qafiler.bos.redhat.com/redhat/nightly/RHEL6.0-20100722.n.0/6.0/Server/x86_64/iso/RHEL6.0-20100722.n.0-Server-x86_64-DVD1.iso

and with an installed guest of the same, I did not see the
failure reported above.  However a number of differences
exist including use of the latest rhel6.0 build, guest image,
and host environment.

Concerning the failure: puzzling over the dump and the source, it
appears posix_memalign(3) is returning a nonzero value to
osdep.c:qemu_memalign(), which then calls abort().  So this is an
internal integrity check rather than qemu stumbling on a bad
address, AFAICT.

The only documented return from posix_memalign() which appears
possible given the code is ENOMEM.  EINVAL appears not to be
possible, as the alignment is fixed at:

./exec.c:#define PREFERRED_RAM_ALIGN (2*1024*1024)

So this may have been a transient failure due to host memory
unavailability.  If you are still able to create the failure,
it would be helpful to check whether you are indeed hitting
a host memory limitation at the time it occurs.

Also if this problem persists, could you make the "rawhide.x86_64.img"
available?  That would help narrow down the problem space.

Comment 11 Michal Hlavinka 2010-07-23 07:41:30 UTC
I can't test it right now, so I will test it on Monday.

Comment 12 Jes Sorensen 2010-07-23 07:49:01 UTC
4 GB of RAM isn't a whole lot if you are using Firefox and GNOME or KDE.
Try adding some swap and see if the problem persists.

Per John's posting, it is the host that is out of suitable memory.

I don't think this is a bug.
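For reference, adding swap as suggested might look something like the following; the file path and size are examples only, and the commands must be run as root:

```shell
# Check free RAM and swap at the time of the failure.
free -m

# Create and enable an 8 GB swap file (example path).
dd if=/dev/zero of=/swapfile bs=1M count=8192
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Confirm the new swap is active.
swapon -s
```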

Comment 13 Michal Hlavinka 2010-07-28 11:48:17 UTC
Odd. I was able to run two machines doing yum update without swap and it worked fine, but the next time I was not able to start any VM at all, with the same other applications running. Anyway, I've created 8 GB of swap and was not able to reproduce this crash after swapon. So most probably some application just started using much more memory than before, because this used to work fine in F-12.

