Description of problem:
When hunting bug 922419, I could not see any reason why qemu had quit until I raised the libvirt log level - vdsm just knew that libvirt couldn't connect to the qemu monitor.
This is certainly not good for debugging; messages like the one in bug 922419 should be exposed at least at the vdsm level.
Version-Release number of selected component (if applicable):

How reproducible:
always
Steps to Reproduce:
1. do something with an image file that will prevent qemu from opening it
2. run the VM that uses that image for the disk
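The two steps above can be sketched as a shell session. The image path and domain name (`/var/lib/libvirt/images/rhel.img`, `rhel`) are assumptions for illustration, not taken from a real setup, and the sketch is guarded so it is a no-op on hosts without libvirt:

```shell
#!/bin/sh
# Sketch of the reproducer: make the disk image unreadable, then try to
# start the VM and see which log (if any) explains the failure.
IMG=/var/lib/libvirt/images/rhel.img   # hypothetical image path
VM=rhel                                # hypothetical domain name

if command -v virsh >/dev/null 2>&1; then
    chmod 000 "$IMG"                   # qemu can no longer open the image
    virsh start "$VM"                  # expected to fail
    # Check where (or whether) the real cause shows up:
    grep -i "could not open disk image" /var/log/libvirt/libvirtd.log
else
    echo "virsh not available; skipping"
fi
```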
Actual results:
No meaningful error is reported in the qemu, libvirt, or vdsm log.

Expected results:
An error message describing what went wrong is printed in all three involved log files.

Additional info:
Filing as a vdsm bug, since vdsm reconfigures libvirt when vdsm is installed or upgraded.
What log level did you define in libvirt to see the error?
I used:
log_level = 1
log_filters="1:keepalive 3:rpc 3:remote 3:util/json 3:event 3:udev 3:virobject 3:netlink"
in /etc/libvirt/libvirtd.conf, as recommended by libvirt developers in a different situation. The libvirtd.log is very verbose though.
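A hedged sketch of applying these settings, assuming a systemd-managed host and the config path named above (run as root; real deployments may prefer editing the file by hand):

```shell
#!/bin/sh
# Append the debug logging settings quoted above to libvirtd.conf and
# restart the daemon so they take effect. Sketch only.
CONF=/etc/libvirt/libvirtd.conf
if [ -w "$CONF" ] && command -v systemctl >/dev/null 2>&1; then
    cat >> "$CONF" <<'EOF'
log_level = 1
log_filters="1:keepalive 3:rpc 3:remote 3:util/json 3:event 3:udev 3:virobject 3:netlink"
EOF
    systemctl restart libvirtd
else
    echo "libvirtd.conf not writable or no systemctl; skipping"
fi
```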
(In reply to comment #2)
> I used:
> log_level = 1
> log_filters="1:keepalive 3:rpc 3:remote 3:util/json 3:event 3:udev
> 3:virobject 3:netlink"
> in /etc/libvirt/libvirtd.conf as recommended by libvirt developers in
> different situation. The libvirtd.log is very verbose though.
Then it sounds more like a libvirt issue (this error message should be logged at a lower logging level).
Eric, would you agree?
I can't reproduce this; perhaps something is wrong with my steps.
1. # chmod 000 rhel.img
// I don't know if this action is OK, or if you meant something like: # echo > rhel.img
2. # virsh start rhel
# cat /var/log/libvirt/libvirtd.log
qemu-kvm: -drive file=/var/lib/libvirt/images/rhel.img,if=none,id=drive-ide0-0-0,format=raw,cache=none: could not open disk image /var/lib/libvirt/images/rhel.img: Permission denied
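Since the libvirtd.log produced at log_level=1 is very verbose, a small sketch for pulling out just the qemu disk-open failure. The log path and message text are the ones shown above; the fallback sample line is synthetic, so the sketch stays runnable without a real log:

```shell
#!/bin/sh
# Grep the qemu "could not open disk image" error out of a verbose libvirtd log.
LOG=/var/log/libvirt/libvirtd.log
if [ ! -r "$LOG" ]; then
    # Fall back to a synthetic sample line when the real log is absent.
    LOG=$(mktemp)
    echo "qemu-kvm: -drive file=/var/lib/libvirt/images/rhel.img,if=none,id=drive-ide0-0-0,format=raw,cache=none: could not open disk image /var/lib/libvirt/images/rhel.img: Permission denied" > "$LOG"
fi
grep -i "could not open disk image" "$LOG"
```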
I hit this bug when using RHEV, which created quite uncommon conditions when dealing with snapshots. To reproduce the buggy behaviour, see the bug linked from comment #0 for my original reproducer, and the bug it was marked as a duplicate of.
This bug report is not about the bug which seems to be fixed by:
rather, it's that given the environment you had, you didn't get notified of
the failure without resorting to libvirtd debugging techniques. I'm not yet
familiar enough with the environments to install/use vdsm - plain usage of
virsh is easiest (especially when describing issues).
I tried to create a process using the entrails of the linked bz's in order
to reproduce, but was unsuccessful in my 0.10.2 RHEL environment. Here's
what I did:
1. Create a qcow image:
qemu-img create -f qcow2 -o preallocation=metadata /home/vm-images/rh64qcow.qcow2 15G
2. Create a VM (I used virt-install):
virt-install --name rh64qcow --ram 3000 --disk path=/home/vm-images/rh64qcow.qcow2,format=qcow2,bus=virtio,cache=none --cdrom /home/ISO/RHEL6.4-20130130.0-Server-x86_64-DVD1.iso --noautoconsole --vnc --os-variant rhel6
3. Start the VM: (virsh start rh64qcow)
4. Generate a snapshot (or two): (virsh snapshot-create rh64qcow
virsh snapshot-list rh64qcow)
Name Creation Time State
1370632292 2013-06-07 15:11:32 -0400 running
1370632650 2013-06-07 15:17:30 -0400 running
5. Stop the VM: (virsh destroy rh64qcow)
6. Try to start again: (virsh start rh64qcow)
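The six steps above can be collected into one script. The image path, domain name, and ISO path are the ones used in the steps and would need adjusting; the script is guarded so it is a no-op on hosts without the tools installed:

```shell
#!/bin/sh
# Script form of steps 1-6: create a qcow2 image, install a VM on it,
# snapshot it twice, destroy it, and try to start it again.
IMG=/home/vm-images/rh64qcow.qcow2
VM=rh64qcow
ISO=/home/ISO/RHEL6.4-20130130.0-Server-x86_64-DVD1.iso

if command -v virsh >/dev/null 2>&1 && command -v virt-install >/dev/null 2>&1; then
    qemu-img create -f qcow2 -o preallocation=metadata "$IMG" 15G
    virt-install --name "$VM" --ram 3000 \
        --disk path="$IMG",format=qcow2,bus=virtio,cache=none \
        --cdrom "$ISO" --noautoconsole --vnc --os-variant rhel6
    virsh start "$VM"
    virsh snapshot-create "$VM"
    virsh snapshot-create "$VM"
    virsh snapshot-list "$VM"
    virsh destroy "$VM"
    virsh start "$VM"    # the step that should fail in the buggy scenario
else
    echo "virsh/virt-install not available; skipping"
fi
```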
But that worked for me, so I'm a bit at a loss as to the next steps to try.
The original instructions assumed way too much regarding the steps to take...
While doing research on this, it seems a recent upstream change may be the
"magic elixir" required in order to get the output, as long as it's combined
with the fix from bug 903248. Until I'm able to reproduce, though, I cannot be sure.
Perhaps, using your knowledge of how vdsm puts VMs together, you can help me
with the steps to recreate this in a non-vdsm environment. Conversely, you could
be a bit more explicit about how I'd go about creating a vdsm environment in
which I could reproduce it.
This BZ is really a symptom of a more general problem of reporting qemu exit status, which is long-term work going on upstream, so I'm closing this BZ as a duplicate of the Fedora bug that tracks it.
*** This bug has been marked as a duplicate of bug 868575 ***