Description of problem:
guestfish fails to upload a file into a guest whose disks are Ceph RBD volumes, reporting that no operating system was found on the disk, even though virt-ls can list the guest's filesystem in the same environment.

Version-Release number of selected component (if applicable):
[root@cnode1:/root]
# guestfish --version
guestfish 1.28.1rhel=7,release=1.55.el7.centos.4,libvirt

OS: CentOS 7.2 x86_64
KVM:
qemu-kvm-1.5.3-105.el7_2.7.x86_64
qemu-kvm-common-1.5.3-105.el7_2.7.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
winsupport:
libguestfs-winsupport-7.2-1.el7.x86_64

How reproducible:
# guestfish --rw -i -d ceph-fs-7 upload /chost/guest/conf/ceph-fs-7/iface_0_static /cloudvminit.bat

Steps to Reproduce:
1. Define a VM from XML (disks on Ceph RBD).
2. Upload the init file to the VM with guestfish.

Actual results:
guestfish: no operating system was found on this disk

Expected results:
The file is uploaded into the guest.

Additional info:
virt-ls can list the root directory of the Windows guest in the same environment:

# virt-ls -d ceph-fs-7 /
$RECYCLE.BIN
PerfLogs
Program Files
Program Files (x86)
ProgramData
System Volume Information
Users
Windows
cloudvminit.bat
pagefile.sys
Please provide the full output of the guestfish command that fails, adding -v -x to it -- so:

$ guestfish --rw -i -d ceph-fs-7 -v -x \
      upload /chost/guest/conf/ceph-fs-7/iface_0_static /cloudvminit.bat
# guestfish --rw -i -d ceph-fs-7 -v -x \
>   upload /chost/guest/conf/ceph-fs-7/iface_0_static /cloudvminit.bat
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_domain "ceph-fs-7" "allowuuid:true" "readonlydisk:read"
libguestfs: opening libvirt handle: URI = NULL, auth = default+wrapper, flags = 1
libguestfs: successfully opened libvirt handle: conn = 0x5576ae9a5c70
libguestfs: error: error: domain is a live virtual machine.
Writing to the disks of a running virtual machine can cause disk corruption.
Either use read-only access, or if the guest is running the guestfsd daemon
specify live access. In most libguestfs tools these options are --ro or
--live respectively. Consult the documentation for further information.
libguestfs: trace: add_domain = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x5576ae9a56e0 (state 0)
Please see this:

# guestfish --rw -i -d ceph-rbd-linux -v -x upload /chost/guest/conf/ceph-fs/cloudvminit_full /cloudvminit.bat
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_domain "ceph-rbd-linux" "allowuuid:true" "readonlydisk:read"
libguestfs: opening libvirt handle: URI = NULL, auth = default+wrapper, flags = 1
libguestfs: successfully opened libvirt handle: conn = 0x559462d60c70
libguestfs: trace: clear_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: clear_backend_setting = 0
libguestfs: disk[0]: network device
libguestfs: disk[0]: protocol: rbd
libguestfs: disk[0]: username: libvirt
libguestfs: disk[0]: host: 172.1.1.10:6789
libguestfs: disk[0]: host: 172.1.1.11:6789
libguestfs: disk[0]: host: 172.1.1.12:6789
libguestfs: disk[0]: filename: ssd-pool1/ceph-fs.2106110215490000.root
libguestfs: trace: add_drive "ssd-pool1/ceph-fs.2106110215490000.root" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
libguestfs: trace: add_drive = 0
libguestfs: disk[1]: network device
libguestfs: disk[1]: protocol: rbd
libguestfs: disk[1]: username: libvirt
libguestfs: disk[1]: host: 172.1.1.10:6789
libguestfs: disk[1]: host: 172.1.1.11:6789
libguestfs: disk[1]: host: 172.1.1.12:6789
libguestfs: disk[1]: filename: ssd-pool1/ceph-fs.2016110215490000.data
libguestfs: trace: add_drive "ssd-pool1/ceph-fs.2016110215490000.data" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
libguestfs: trace: add_drive = 0
libguestfs: trace: add_domain = 2
libguestfs: trace: is_config
libguestfs: trace: is_config = 1
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version *>
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "libvirt"
libguestfs: launch: program=guestfish
libguestfs: launch: version=1.28.1rhel=7,release=1.55.el7.centos.4,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsEbgWfu
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 1002017 (1.2.17)
libguestfs: guest random name = guestfs-74df5z94olars8q6
libguestfs: [00000ms] connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x559462d665e0
libguestfs: qemu version (reported by libvirt) = 1005003 (1.5.3)
libguestfs: [00001ms] get libvirt capabilities
libguestfs: [00007ms] parsing capabilities XML
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_label"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_imagelabel"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: [00007ms] build appliance
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: [00008ms] begin building supermin appliance
libguestfs: [00008ms] run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.10
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: [00018ms] finished building supermin appliance
libguestfs: trace: disk_create "/tmp/libguestfsEbgWfu/overlay1" "qcow2" -1 "backingfile:/var/tmp/.guestfs-0/appliance.d/root" "backingformat:raw"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsEbgWfu/overlay1
Formatting '/tmp/libguestfsEbgWfu/overlay1', fmt=qcow2 size=4294967296 backing_file='/var/tmp/.guestfs-0/appliance.d/root' backing_fmt='raw' encryption=off cluster_size=65536 lazy_refcounts=off
libguestfs: trace: disk_create = 0
libguestfs: set_socket_create_context: getcon failed: (none): Invalid argument [you can ignore this UNLESS using SELinux + sVirt]
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this UNLESS using SELinux + sVirt]
libguestfs: [00048ms] create libvirt XML
libguestfs: error: could not auto-detect the format when using a non-file protocol.
If the format is known, pass the format to libguestfs, eg. using the
'--format' option, or via the optional 'format' argument to 'add-drive'.
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this UNLESS using SELinux + sVirt]
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x559462d606e0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsEbgWfu
The VM OS is Windows 2008:

# guestfish --rw -i -d ceph-rbd-win08 -v -x upload /chost/guest/conf/ceph-rbd-win08/cloudvminit_full_static /cloudvminit.bat
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_domain "ceph-rbd-win08" "allowuuid:true" "readonlydisk:read"
libguestfs: opening libvirt handle: URI = NULL, auth = default+wrapper, flags = 1
libguestfs: successfully opened libvirt handle: conn = 0x55d5edee5c70
libguestfs: trace: clear_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: clear_backend_setting = 0
libguestfs: disk[0]: network device
libguestfs: disk[0]: protocol: rbd
libguestfs: disk[0]: username: libvirt
libguestfs: disk[0]: host: 172.1.1.10:6789
libguestfs: disk[0]: host: 172.1.1.11:6789
libguestfs: disk[0]: host: 172.1.1.12:6789
libguestfs: disk[0]: filename: ssd-pool1/ceph-rbd-win08.2016110310320000.root
libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.root" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
libguestfs: trace: add_drive = 0
libguestfs: disk[1]: network device
libguestfs: disk[1]: protocol: rbd
libguestfs: disk[1]: username: libvirt
libguestfs: disk[1]: host: 172.1.1.10:6789
libguestfs: disk[1]: host: 172.1.1.11:6789
libguestfs: disk[1]: host: 172.1.1.12:6789
libguestfs: disk[1]: filename: ceph-rbd-win08.2016110310320000.data
libguestfs: trace: add_drive "ceph-rbd-win08.2016110310320000.data" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
libguestfs: trace: add_drive = 0
libguestfs: trace: add_domain = 2
libguestfs: trace: is_config
libguestfs: trace: is_config = 1
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version *>
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "libvirt"
libguestfs: launch: program=guestfish
libguestfs: launch: version=1.28.1rhel=7,release=1.55.el7.centos.4,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsDHUYi8
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 1002017 (1.2.17)
libguestfs: guest random name = guestfs-xkui2zpozipg3497
libguestfs: [00000ms] connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x55d5edee5f90
libguestfs: qemu version (reported by libvirt) = 1005003 (1.5.3)
libguestfs: [00001ms] get libvirt capabilities
libguestfs: [00007ms] parsing capabilities XML
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_label"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_imagelabel"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: [00007ms] build appliance
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: [00007ms] begin building supermin appliance
libguestfs: [00007ms] run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.10
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: [00023ms] finished building supermin appliance
libguestfs: trace: disk_create "/tmp/libguestfsDHUYi8/overlay1" "qcow2" -1 "backingfile:/var/tmp/.guestfs-0/appliance.d/root" "backingformat:raw"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsDHUYi8/overlay1
Formatting '/tmp/libguestfsDHUYi8/overlay1', fmt=qcow2 size=4294967296 backing_file='/var/tmp/.guestfs-0/appliance.d/root' backing_fmt='raw' encryption=off cluster_size=65536 lazy_refcounts=off
libguestfs: trace: disk_create = 0
libguestfs: set_socket_create_context: getcon failed: (none): Invalid argument [you can ignore this UNLESS using SELinux + sVirt]
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this UNLESS using SELinux + sVirt]
libguestfs: [00053ms] create libvirt XML
libguestfs: error: could not auto-detect the format when using a non-file protocol.
If the format is known, pass the format to libguestfs, eg. using the
'--format' option, or via the optional 'format' argument to 'add-drive'.
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this UNLESS using SELinux + sVirt]
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55d5edee56e0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsDHUYi8
(In reply to 395783748 from comment #2)
> # guestfish --rw -i -d ceph-fs-7 -v -x \
> >   upload /chost/guest/conf/ceph-fs-7/iface_0_static /cloudvminit.bat
> libguestfs: trace: set_pgroup true
> libguestfs: trace: set_pgroup = 0
> libguestfs: trace: add_domain "ceph-fs-7" "allowuuid:true"
> "readonlydisk:read"
> libguestfs: opening libvirt handle: URI = NULL, auth = default+wrapper,
> flags = 1
> libguestfs: successfully opened libvirt handle: conn = 0x5576ae9a5c70
> libguestfs: error: error: domain is a live virtual machine.
> Writing to the disks of a running virtual machine can cause disk corruption.
> Either use read-only access, or if the guest is running the guestfsd daemon
> specify live access. In most libguestfs tools these options are --ro or
> --live respectively. Consult the documentation for further information.

You cannot use guestfish in read-write mode on running libvirt domains. This is not an error.
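For reference, the read-only form that the error message points at is safe on a live guest -- a minimal sketch:

# --ro cannot corrupt the disks of the running domain, although an
# upload is of course impossible in this mode
$ guestfish --ro -i -d ceph-fs-7

To write to the disks you would either shut the guest down first, or run the guestfsd daemon inside the guest and use --live, as the message says.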
The other error seems to be:

libguestfs: error: could not auto-detect the format when using a non-file protocol.
If the format is known, pass the format to libguestfs, eg. using the
'--format' option, or via the optional 'format' argument to 'add-drive'.

That is a real bug and probably happens because the libvirt XML doesn't describe the format of the disk (or we aren't parsing the XML and finding that properly).

Try:

virsh dumpxml ceph-rbd-linux

Please don't truncate the output of commands. If the output is too large to put in a comment, then attach it to the bug using the "Add an attachment" link.
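(As an interim diagnostic -- not a fix -- the format can be forced by hand, per the error message's own suggestion to use '--format'. An untested sketch that bypasses -d and attaches the first RBD volume from the traces above directly; the monitor host is assumed reachable and cephx auth is not handled here:

# --format=raw applies to the -a drive that follows it;
# rbd://host:port/pool/image is the guestfish URI syntax for RBD drives
$ guestfish --rw --format=raw \
      -a rbd://172.1.1.10:6789/ssd-pool1/ceph-fs.2106110215490000.root -i

If that launches, it confirms that the only thing missing on the -d path is the format.)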
Created attachment 1216905 [details]
xml of ceph-rbd-linux
Created attachment 1216906 [details]
xml of ceph-rbd-win08
I believe this is a real bug. If you look at the libvirt XML for the Ceph drives:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    ...
    <source protocol='rbd' name='ssd-pool1/ceph-fs.2106110215490000.root'>
    ...
  </disk>

and

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    ...
    <source protocol='rbd' name='ssd-pool1/ceph-rbd-win08.2016110310320000.root'>
    ...
  </disk>

In both cases the format is present (type='raw') but we are not passing that to the add_drive API:

libguestfs: trace: add_drive "ssd-pool1/ceph-fs.2106110215490000.root" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
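Until that is fixed, a caller could state the format explicitly through the add-drive command instead of -d. A hedged sketch (the optional arguments mirror the add_drive API; all values are copied from the traces above, and the quoted space-separated server list follows guestfish's string-list syntax):

$ guestfish --rw
><fs> add-drive ssd-pool1/ceph-fs.2106110215490000.root readonly:false format:raw protocol:rbd "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" username:libvirt
><fs> run

The proper fix, of course, is for the -d/add_domain code path to propagate the type='raw' attribute from the libvirt XML into the format parameter of add_drive.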
It turns out this does work with the latest upstream version (1.35.14) and also with the latest RHEL version. Please try the RHEL 7.3 version of libguestfs: https://people.redhat.com/~rjones/libguestfs-RHEL-7.3-preview/
Created attachment 1217323 [details]
debug info of guestfish injection

I updated libguestfs to the RHEL 7.3 version but the same error persists. The installed libguestfs versions are:

[root@cnode1:/root]
# rpm -qa | grep libguest
libguestfs-tools-c-1.32.7-3.el7.x86_64
libguestfs-tools-1.32.7-3.el7.noarch
libguestfs-1.32.7-3.el7.x86_64
Hi,

I used the "guestfish -a" mode (adding the disk directly); the result shows that qemu-kvm can't connect to RBD. The details:

# guestfish --format=raw -a rbd:///ssd-pool1/ceph-rbd-win08.2016110310320000.root
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_backend "direct"
libguestfs: trace: set_backend = 0
libguestfs: create: flags = 0, handle = 0x55daa9e6bae0, program = guestfish
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.root" "format:raw" "protocol:rbd"
libguestfs: trace: add_drive = 0

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> run
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 32, release: 7, extra: rhel=7,release=3.el7,libvirt, >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
libguestfs: launch: program=guestfish
libguestfs: launch: version=1.32.7rhel=7,release=3.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsF9QCer
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.16
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: begin testing qemu features
libguestfs: command: run: /usr/libexec/qemu-kvm
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -help
libguestfs: qemu version 1.5
libguestfs: command: run: /usr/libexec/qemu-kvm
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -machine accel=kvm:tcg
libguestfs: command: run: \ -device ?
libguestfs: finished testing qemu features
libguestfs: trace: get_backend_setting "gdb"
libguestfs: trace: get_backend_setting = NULL (error)
[00144ms] /usr/libexec/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -nodefconfig \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine accel=kvm:tcg \
    -cpu host \
    -m 500 \
    -no-reboot \
    -rtc driftfix=slew \
    -no-hpet \
    -global kvm-pit.lost_tick_policy=discard \
    -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
    -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -device sga \
    -chardev socket,path=/tmp/libguestfsF9QCer/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append 'panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm'
qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none: error connecting
qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none: could not open disk image rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none: Could not open 'rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none': Operation not supported
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x55daa9e6bae0: child process died
libguestfs: sending SIGTERM to process 16978
libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
libguestfs: error: guestfs_launch failed, see earlier error messages
libguestfs: trace: launch = -1 (error)
><fs> quit
libguestfs: trace: shutdown
libguestfs: trace: shutdown = 0
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55daa9e6bae0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsF9QCer
(In reply to 395783748 from comment #12)
> I used the "guestfish -a" mode (adding the disk directly); the result
> shows that qemu-kvm can't connect to RBD.
> [...]
> # guestfish --format=raw -a rbd:///ssd-pool1/ceph-rbd-win08.2016110310320000.root
> [...]
> libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.root" "format:raw" "protocol:rbd"
> libguestfs: trace: add_drive = 0

The format is properly specified.

> [00144ms] /usr/libexec/qemu-kvm \
>     -global virtio-blk-pci.scsi=off \
>     -nodefconfig \
>     -enable-fips \
>     -nodefaults \
>     -display none \
>     -machine accel=kvm:tcg \
>     -cpu host \
>     -m 500 \
>     -no-reboot \
>     -rtc driftfix=slew \
>     -no-hpet \
>     -global kvm-pit.lost_tick_policy=discard \
>     -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
>     -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
>     -object rng-random,filename=/dev/urandom,id=rng0 \
>     -device virtio-rng-pci,rng=rng0 \
>     -device virtio-scsi-pci,id=scsi \
>     -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none \
>     -device scsi-hd,drive=hd0 \
>     -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
>     -device scsi-hd,drive=appliance \
>     -device virtio-serial-pci \
>     -serial stdio \
>     -device sga \
>     -chardev socket,path=/tmp/libguestfsF9QCer/guestfsd.sock,id=channel0 \
>     -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
>     -append 'panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm'
> qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none: error connecting
> qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none: could not open disk image rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none: Could not open 'rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none': Operation not supported

This looks like a qemu issue -- what's the exact version of it? (i.e. `rpm -q qemu-kvm`)
# rpm -q qemu-kvm
BDB2053 Freeing read locks for locker 0x171: 16436/139742737377344
BDB2053 Freeing read locks for locker 0x173: 16436/139742737377344
BDB2053 Freeing read locks for locker 0x174: 16436/139742737377344
BDB2053 Freeing read locks for locker 0x175: 16436/139742737377344
qemu-kvm-1.5.3-105.el7_2.7.x86_64
Pretty sure the non-rhev version of qemu does not support Ceph. You will need to use qemu-kvm-rhev (which is an extra subscription if you're using RHEL).
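A quick way to check, assuming qemu-img comes from the same qemu build as /usr/libexec/qemu-kvm: the "Supported formats" list at the end of its --help output only contains rbd when the Ceph block driver is compiled in.

# if this prints nothing, the installed qemu cannot open rbd: drives at all
$ qemu-img --help | grep 'Supported formats' | grep -o rbd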
Can you point me to a version of qemu-kvm-rhev that supports Ceph? I see many versions at http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHEV/SRPMS/
I have tried that, but the same problem occurs:

# guestfish --format=raw -a rbd:///ssd-pool1/ceph-rbd-win08.2016110310320000.root
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_backend "direct"
libguestfs: trace: set_backend = 0
libguestfs: create: flags = 0, handle = 0x559d8e065b00, program = guestfish
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.root" "format:raw" "protocol:rbd"
libguestfs: trace: add_drive = 0

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

><fs> run
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 32, release: 7, extra: rhel=7,release=3.el7,libvirt, >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
libguestfs: launch: program=guestfish
libguestfs: launch: version=1.32.7rhel=7,release=3.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsLP3rRE
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.16
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: begin testing qemu features
libguestfs: command: run: /usr/libexec/qemu-kvm
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -help
libguestfs: qemu version 2.6
libguestfs: command: run: /usr/libexec/qemu-kvm
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -machine accel=kvm:tcg
libguestfs: command: run: \ -device ?
libguestfs: finished testing qemu features
libguestfs: trace: get_backend_setting "gdb"
libguestfs: trace: get_backend_setting = NULL (error)
[00149ms] /usr/libexec/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -nodefconfig \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine accel=kvm:tcg \
    -cpu host \
    -m 500 \
    -no-reboot \
    -rtc driftfix=slew \
    -no-hpet \
    -global kvm-pit.lost_tick_policy=discard \
    -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
    -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -device sga \
    -chardev socket,path=/tmp/libguestfsLP3rRE/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append 'panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm'
qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none: error connecting: Operation not supported
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x559d8e065b00: child process died
libguestfs: sending SIGTERM to process 23481
libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
libguestfs: error: guestfs_launch failed, see earlier error messages
libguestfs: trace: launch = -1 (error)
><fs> quit
libguestfs: trace: shutdown
libguestfs: trace: shutdown = 0
libguestfs: trace: close
libguestfs: closing guestfs handle 0x559d8e065b00 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsLP3rRE

The qemu-kvm version is:

[root@cnode1:/root]
# rpm -qa | grep kvm
qemu-kvm-tools-rhev-2.6.0-27.el7.centos.x86_64
qemu-kvm-common-rhev-2.6.0-27.el7.centos.x86_64
qemu-kvm-rhev-2.6.0-27.el7.centos.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
qemu-kvm-rhev-debuginfo-2.6.0-27.el7.centos.x86_64
I have updated qemu-kvm to qemu-kvm-rhev, but the same problem still exists.
(In reply to 395783748 from comment #18)
> I have updated qemu-kvm to qemu-kvm-rhev, but the same problem still exists.

The original problem you reported is fixed -- and thus this bug will be closed again.

What you are facing now looks like a different issue -- please file a new bug about that, instead of changing an existing bug.