Bug 1392798 - secrets from libvirt domains are not read
Summary: secrets from libvirt domains are not read
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
Docs Contact: Yehuda Zimmerman
URL:
Whiteboard:
Depends On: 1359086
Blocks:
 
Reported: 2016-11-08 09:40 UTC by 395783748
Modified: 2017-08-01 22:11 UTC
CC: 9 users

Fixed In Version: libguestfs-1.36.1-1.el7
Doc Type: Bug Fix
Doc Text:
*libguestfs* can now correctly open *libvirt* domain disks that require authentication. Previously, when adding disks from a *libvirt* domain, *libguestfs* did not read any disk secrets. Consequently, *libguestfs* could not open disks that required authentication. With this update, *libguestfs* reads secrets of disks in *libvirt* domains, if present. As a result, *libguestfs* can now correctly open disks of *libvirt* domains that require authentication.
Clone Of:
Environment:
Last Closed: 2017-08-01 22:11:26 UTC
Target Upstream Version:


Attachments
domain xml (4.72 KB, text/plain)
2016-11-08 09:40 UTC, 395783748
the output of guestfish inject (16.13 KB, text/plain)
2016-11-15 09:42 UTC, 395783748
the xml of domain (3.82 KB, text/plain)
2016-11-15 09:43 UTC, 395783748
[PATCH] proposed fix + needed patches (16.56 KB, patch)
2016-11-22 12:03 UTC, Pino Toscano


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2023 0 normal SHIPPED_LIVE libguestfs bug fix and enhancement update 2017-08-01 19:32:01 UTC

Description 395783748 2016-11-08 09:40:04 UTC
Created attachment 1218461 [details]
domain xml

Description of problem:

I use guestfish to inject a file into a domain.

Version-Release number of selected component (if applicable):
libguestfs:
# rpm -qa|grep guest
libguestfs-winsupport-7.2-1.el7.x86_64
libguestfs-tools-c-1.32.7-3.el7.x86_64
libguestfs-tools-1.32.7-3.el7.noarch
libguestfs-1.32.7-3.el7.x86_64

kvm:
# rpm -qa|grep kvm
qemu-kvm-tools-rhev-2.6.0-27.el7.centos.x86_64
qemu-kvm-common-rhev-2.6.0-27.el7.centos.x86_64
qemu-kvm-rhev-2.6.0-27.el7.centos.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
qemu-kvm-rhev-debuginfo-2.6.0-27.el7.centos.x86_64

libvirt:
# rpm -qa|grep libvirt
libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
libvirt-client-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.5.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
libvirt-1.2.17-13.el7_2.5.x86_64

How reproducible:


Steps to Reproduce:
1. Define a domain from XML with a ceph-rbd image.
2. Use guestfish to inject a file into the domain:
  guestfish --rw -i -d ceph-rbd-win08 -v -x upload /chost/guest/conf/ceph-fs/cloudvminit_full /cloudvminit.bat
  
[root@cnode1:/root]
# guestfish --rw -i -d ceph-rbd-win08 -v -x upload /chost/guest/conf/ceph-fs/cloudvminit_full /cloudvminit.bat
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_domain "ceph-rbd-win08" "allowuuid:true" "readonlydisk:read"
libguestfs: opening libvirt handle: URI = NULL, auth = default+wrapper, flags = 1
libguestfs: successfully opened libvirt handle: conn = 0x561b4a3b2230
libguestfs: trace: add_libvirt_dom (virDomainPtr)0x561b4a3b23f0 "readonlydisk:read"
libguestfs: original domain XML:
<domain type='kvm'>
  <name>ceph-rbd-win08</name>
  <uuid>018ab772-c0b4-1525-8c99-171261ed261a</uuid>
  <description>ceph-rbd-win08</description>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memtune>
    <soft_limit unit='KiB'>4194304</soft_limit>
  </memtune>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>3072</shares>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
    <hyperv>
      <relaxed state='on'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'/>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='d3af8319-14cd-49ca-a4d6-909ff4ce147f'/>
      </auth>
      <source protocol='rbd' name='ssd-pool1/ceph-rbd-win08.2016110310320000.root'>
        <host name='172.1.1.10' port='6789'/>
        <host name='172.1.1.11' port='6789'/>
        <host name='172.1.1.12' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='d3af8319-14cd-49ca-a4d6-909ff4ce147f'/>
      </auth>
      <source protocol='rbd' name='ssd-pool1/ceph-rbd-win08.2016110310320000.data'>
        <host name='172.1.1.10' port='6789'/>
        <host name='172.1.1.11' port='6789'/>
        <host name='172.1.1.12' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='network'>
      <mac address='52:54:00:cb:10:cc'/>
      <source network='natnet'/>
      <model type='virtio'/>
      <filterref filter='clean-traffic'>
        <parameter name='IP' value='10.0.0.30'/>
      </filterref>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:8d:b4:2a'/>
      <source network='private'/>
      <model type='virtio'/>
      <filterref filter='clean-traffic'>
        <parameter name='IP' value='192.168.0.30'/>
      </filterref>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vga' vram='65536' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
libguestfs: trace: clear_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: clear_backend_setting = 0
libguestfs: disk[0]: network device
libguestfs: disk[0]: protocol: rbd
libguestfs: disk[0]: username: libvirt
libguestfs: disk[0]: hostname: 172.1.1.10 port: 6789
libguestfs: disk[0]: hostname: 172.1.1.11 port: 6789
libguestfs: disk[0]: hostname: 172.1.1.12 port: 6789
libguestfs: disk[0]: filename: ssd-pool1/ceph-rbd-win08.2016110310320000.root
libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.root" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
libguestfs: trace: add_drive = 0
libguestfs: disk[1]: network device
libguestfs: disk[1]: protocol: rbd
libguestfs: disk[1]: username: libvirt
libguestfs: disk[1]: hostname: 172.1.1.10 port: 6789
libguestfs: disk[1]: hostname: 172.1.1.11 port: 6789
libguestfs: disk[1]: hostname: 172.1.1.12 port: 6789
libguestfs: disk[1]: filename: ssd-pool1/ceph-rbd-win08.2016110310320000.data
libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.data" "readonly:false" "protocol:rbd" "server:172.1.1.10:6789 172.1.1.11:6789 172.1.1.12:6789" "username:libvirt"
libguestfs: trace: add_drive = 0
libguestfs: trace: add_libvirt_dom = 2
libguestfs: trace: add_domain = 2
libguestfs: trace: is_config
libguestfs: trace: is_config = 1
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 32, release: 7, extra: rhel=7,release=3.el7,libvirt, >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "libvirt"
libguestfs: launch: program=guestfish
libguestfs: launch: version=1.32.7rhel=7,release=3.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsAQ0Ov3
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 1002017 (1.2.17)
libguestfs: guest random name = guestfs-fjsgntn6z1xgpnrj
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x561b4a3b52c0
libguestfs: qemu version (reported by libvirt) = 2006000 (2.6.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_label"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_imagelabel"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: build appliance
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.16
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: trace: disk_create "/tmp/libguestfsAQ0Ov3/overlay1" "qcow2" -1 "backingfile:/var/tmp/.guestfs-0/appliance.d/root" "backingformat:raw"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsAQ0Ov3/overlay1
Formatting '/tmp/libguestfsAQ0Ov3/overlay1', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: trace: disk_create = 0
libguestfs: set_socket_create_context: getcon failed: (none): Invalid argument [you can ignore this message if you are not using SELinux + sVirt]
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this message if you are not using SELinux + sVirt]
libguestfs: create libvirt XML
libguestfs: error: could not auto-detect the format when using a non-file protocol.
If the format is known, pass the format to libguestfs, eg. using the
'--format' option, or via the optional 'format' argument to 'add-drive'.
libguestfs: clear_socket_create_context: setsockcreatecon failed: NULL: Invalid argument [you can ignore this message if you are not using SELinux + sVirt]
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x561b4a3b1ca0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsAQ0Ov3



When I use the 'guestfish -a /disk.img' mode, another error occurs.

# guestfish --format=raw -a rbd:///ssd-pool1/ceph-rbd-win08.2016110310320000.root -i
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_backend "direct"
libguestfs: trace: set_backend = 0
libguestfs: create: flags = 0, handle = 0x55795e995ca0, program = guestfish
libguestfs: trace: set_pgroup true
libguestfs: trace: set_pgroup = 0
libguestfs: trace: add_drive "ssd-pool1/ceph-rbd-win08.2016110310320000.root" "format:raw" "protocol:rbd"
libguestfs: trace: add_drive = 0
libguestfs: trace: is_config
libguestfs: trace: is_config = 1
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 32, release: 7, extra: rhel=7,release=3.el7,libvirt, >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "direct"
libguestfs: launch: program=guestfish
libguestfs: launch: version=1.32.7rhel=7,release=3.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfszmuu9v
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.16
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: begin testing qemu features
libguestfs: command: run: /usr/libexec/qemu-kvm
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -help
libguestfs: qemu version 2.6
libguestfs: command: run: /usr/libexec/qemu-kvm
libguestfs: command: run: \ -display none
libguestfs: command: run: \ -machine accel=kvm:tcg
libguestfs: command: run: \ -device ?
libguestfs: finished testing qemu features
libguestfs: trace: get_backend_setting "gdb"
libguestfs: trace: get_backend_setting = NULL (error)
[00146ms] /usr/libexec/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -nodefconfig \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine accel=kvm:tcg \
    -cpu host \
    -m 500 \
    -no-reboot \
    -rtc driftfix=slew \
    -no-hpet \
    -global kvm-pit.lost_tick_policy=discard \
    -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
    -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -device sga \
    -chardev socket,path=/tmp/libguestfszmuu9v/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append 'panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm'
qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,cache=writeback,format=raw,id=hd0,if=none: error connecting: Operation not supported
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x55795e995ca0: child process died
libguestfs: sending SIGTERM to process 5818
libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
libguestfs: error: guestfs_launch failed, see earlier error messages
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55795e995ca0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfszmuu9v

Additional info:

The rbd image "ceph-rbd-win08.2016110310320000.root" is the boot disk of the domain (ceph-rbd-win08).

Comment 3 Richard W.M. Jones 2016-11-08 12:46:55 UTC
I don't have a Ceph cluster at the moment.  Can you try testing
simpler qemu command lines to see what works, eg:

/usr/libexec/qemu-kvm \
    -drive snapshot=on,file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,format=raw,id=hd0,if=ide

Comment 4 Richard W.M. Jones 2016-11-08 13:01:15 UTC
Ignore that, I didn't spot that there were two runs.

The libvirt XML is wrong as was established on the other bug,
so you're going to have to fix that first.  Add:
  <driver name='qemu' type='raw'/>
into the ceph disk.

Then use the -d option, and see what it says.

Comment 5 395783748 2016-11-09 04:40:19 UTC
Hi. (In reply to Richard W.M. Jones from comment #3)
> I don't have a Ceph cluster at the moment.  Can you try testing
> simpler qemu command lines to see what works, eg:
> 
> /usr/libexec/qemu-kvm \
>     -drive
> snapshot=on,file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:
> auth_supported=none,format=raw,id=hd0,if=ide

[root@cnode1:/root]
# /usr/libexec/qemu-kvm -drive snapshot=on,file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,format=raw,id=hd0,if=ide
qemu-kvm: -drive snapshot=on,file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:auth_supported=none,format=raw,id=hd0,if=ide: error connecting: Operation not supported

Comment 6 395783748 2016-11-09 04:42:13 UTC
(In reply to Richard W.M. Jones from comment #4)
> Ignore that, I didn't spot that there were two runs.
> 
> The libvirt XML is wrong as was established on the other bug,
> so you're going to have to fix that first.  Add:
>   <driver name='qemu' type='raw'/>
> into the ceph disk.
> 
> Then use the -d option, and see what it says.


I have added the driver type; the same error occurs as when using the -a /disk.img option above.

Comment 7 395783748 2016-11-11 08:09:27 UTC
Hi,

Is there any update?

Comment 8 Richard W.M. Jones 2016-11-11 08:31:56 UTC
Please try some different command lines to find out what exactly
doesn't work, see comment 4.

Comment 9 Richard W.M. Jones 2016-11-11 08:32:37 UTC
I mean, see comment 5, not comment 4.

Comment 10 395783748 2016-11-11 10:33:45 UTC
Can you give me an example?

Comment 11 Richard W.M. Jones 2016-11-11 10:37:57 UTC
Please modify the libvirt XML by adding:
  <driver name='qemu' type='raw'/>
into the Ceph disk in the libvirt XML.
Then run the test again using the '-d' option and see what it says.

Comment 12 395783748 2016-11-14 03:16:17 UTC
My ceph disk already uses the raw type:


    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='d3af8319-14cd-49ca-a4d6-909ff4ce147f'/>
      </auth>
      <source protocol='rbd' name='ssd-pool1/ceph-rbd-win08.2016110310320000.root'>
        <host name='172.1.1.10' port='6789'/>
        <host name='172.1.1.11' port='6789'/>
        <host name='172.1.1.12' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='d3af8319-14cd-49ca-a4d6-909ff4ce147f'/>
      </auth>
      <source protocol='rbd' name='ssd-pool1/ceph-rbd-win08.2016110310320000.data'>
        <host name='172.1.1.10' port='6789'/>
        <host name='172.1.1.11' port='6789'/>
        <host name='172.1.1.12' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>


Still the same problem:

qemu-kvm: -drive file=rbd:ssd-pool1/ceph-rbd-win08.2016110310320000.root:mon_host=172.1.1.10\:6789\;172.1.1.11\:6789\;172.1.1.12\:6789:id=libvirt:auth_supported=cephx\;none,cache=writeback,format=raw,id=hd0,if=none: error connecting: Operation not supported
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x7f6f11d0e940: child process died
libguestfs: sending SIGTERM to process 30928
libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
libguestfs: error: guestfs_launch failed, see earlier error messages
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x7f6f11d0e940 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsBwHujS

Comment 13 395783748 2016-11-15 02:04:15 UTC
Any idea?

Comment 14 Richard W.M. Jones 2016-11-15 08:32:05 UTC
No, please follow the steps in comment 4, else there's nothing we
can do about this bug.

Comment 15 395783748 2016-11-15 09:41:04 UTC
I have already followed the steps in comment 4; the disk type has been changed to raw.

For the result, please see the attachments.

Comment 16 395783748 2016-11-15 09:42:11 UTC
Created attachment 1220776 [details]
the output of guestfish inject

Comment 17 395783748 2016-11-15 09:43:08 UTC
Created attachment 1220777 [details]
the xml of domain

Comment 18 Richard W.M. Jones 2016-11-15 10:05:07 UTC
The error is now completely different, and comes from libvirt.  Seems
to be something to do with the <secret> clause in the original guest
XML not matching any secret known by libvirt.

Original error from libvirt: XML error: missing auth secret uuid or usage attribute [code=27 int1=-1]

Comment 19 Ademar Reis 2016-11-15 13:37:37 UTC
Thanks for taking the time to enter a bug report with us. We use reports like yours to keep improving the quality of our products and releases. That said, we're not able to guarantee the timeliness or suitability of a resolution for issues entered here because this is not a mechanism for requesting support.
                                                                                
If this issue is critical or in any way time sensitive, please raise a ticket through your regular Red Hat support channels to make certain it receives the proper attention and prioritization that will result in a timely resolution.
                                                                                
For information on how to contact the Red Hat production support team, please visit: https://www.redhat.com/support/process/production/#howto

Comment 20 395783748 2016-11-16 02:16:19 UTC
(In reply to Richard W.M. Jones from comment #18)
> The error is now completely different, and comes from libvirt.  Seems
> to be something to do with the <secret> clause in the original guest
> XML not matching any secret known by libvirt.
> 
> Original error from libvirt: XML error: missing auth secret uuid or usage
> attribute [code=27 int1=-1]


But the domain starts and runs well; inside the OS, the rbd disk works fine.

This proves that the secret must be correct in both libvirt and the XML.

The libvirt secret UUID and the XML match:

[root@cnode1:/root]
# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 d3af8319-14cd-49ca-a4d6-909ff4ce147f  ceph client.libvirt secret


[root@cnode1:/root]
# virsh dumpxml ceph-rbd-win08|grep d3af
        <secret type='ceph' uuid='d3af8319-14cd-49ca-a4d6-909ff4ce147f'/>
        <secret type='ceph' uuid='d3af8319-14cd-49ca-a4d6-909ff4ce147f'/>

[root@cnode1:/root]
#

Comment 21 Pino Toscano 2016-11-16 12:04:09 UTC
OK, it looks clearer to me now: libguestfs does not read the authentication/secret parts of disks at all (only the username), so opening them later on will fail (since there are no credentials provided).

Just sent a couple of patches (one cleanup, and the actual implementation) that should change this:
https://www.redhat.com/archives/libguestfs/2016-November/msg00080.html
https://www.redhat.com/archives/libguestfs/2016-November/msg00081.html

Comment 22 395783748 2016-11-17 07:44:11 UTC
Thank you!

Which version of libguestfs are these two patches suitable for?

I use libguestfs-1.32.10.

I tried to apply them, but the patch failed:


2 out of 11 hunks FAILED -- saving rejects to file src/libvirt-domain.c.rej




# cat src/libvirt-domain.c.rej
--- src/libvirt-domain.c
+++ src/libvirt-domain.c
@@ -42,7 +44,7 @@
 #if defined(HAVE_LIBVIRT)

 static xmlDocPtr get_domain_xml (guestfs_h *g, virDomainPtr dom);
-static ssize_t for_each_disk (guestfs_h *g, virConnectPtr conn, xmlDocPtr doc, int (*f) (guestfs_h *g, const char *filename, const char *format, int readonly, const char *protocol, char *const *server, const char *username, void *data), void *data);
+static ssize_t for_each_disk (guestfs_h *g, virConnectPtr conn, xmlDocPtr doc, int (*f) (guestfs_h *g, const char *filename, const char *format, int readonly, const char *protocol, char *const *server, const char *username, const char *secret, void *data), void *data);
 static int libvirt_selinux_label (guestfs_h *g, xmlDocPtr doc, char **label_rtn, char **imagelabel_rtn);
 static char *filename_from_pool (guestfs_h *g, virConnectPtr conn, const char *pool_nane, const char *volume_name);
 static bool xPathObjectIsEmpty (xmlXPathObjectPtr obj);
@@ -580,8 +591,111 @@
         xpusername = xmlXPathEvalExpression (BAD_CAST "./auth/@username",
                                              xpathCtx);
         if (!xPathObjectIsEmpty (xpusername)) {
+          CLEANUP_XMLXPATHFREEOBJECT xmlXPathObjectPtr xpsecrettype = NULL;
+          CLEANUP_XMLXPATHFREEOBJECT xmlXPathObjectPtr xpsecretuuid = NULL;
+          CLEANUP_XMLXPATHFREEOBJECT xmlXPathObjectPtr xpsecretusage = NULL;
+          CLEANUP_FREE char *typestr = NULL;
+          unsigned char *value = NULL;
+          size_t value_size = 0;
+
           username = xPathObjectGetString (doc, xpusername);
           debug (g, "disk[%zu]: username: %s", i, username);
+
+          /* <secret type="...">.  Mandatory given <auth> is specified. */
+          xpsecrettype = xmlXPathEvalExpression (BAD_CAST "./auth/secret/@type",
+                                                 xpathCtx);
+          if (xPathObjectIsEmpty (xpsecrettype))
+            continue;
+          typestr = xPathObjectGetString (doc, xpsecrettype);
+
+          /* <secret uuid="..."> and <secret usage="...">.
+           * At least one of them is required.
+           */
+          xpsecretuuid = xmlXPathEvalExpression (BAD_CAST "./auth/secret/@uuid",
+                                                 xpathCtx);
+          xpsecretusage = xmlXPathEvalExpression (BAD_CAST "./auth/secret/@usage",
+                                                  xpathCtx);
+          if (!xPathObjectIsEmpty (xpsecretuuid)) {
+            CLEANUP_FREE char *uuidstr = NULL;
+            virSecretPtr sec;
+
+            uuidstr = xPathObjectGetString (doc, xpsecretuuid);
+            debug (g, "disk[%zu]: secret type: %s; UUID: %s",
+                   i, typestr, uuidstr);
+            sec = virSecretLookupByUUIDString (conn, uuidstr);
+            if (sec == NULL) {
+              err = virGetLastError ();
+              error (g, _("no secret with UUID '%s': %s"),
+                     uuidstr, err ? err->message : "(none)");
+              continue;
+            }
+
+            value = virSecretGetValue (sec, &value_size, 0);
+            if (value == NULL) {
+              err = virGetLastError ();
+              error (g, _("cannot get the value of the secret with UUID '%s': %s"),
+                     uuidstr, err->message);
+              virSecretFree (sec);
+              continue;
+            }
+
+            virSecretFree (sec);
+          } else if (!xPathObjectIsEmpty (xpsecretusage)) {
+            virSecretUsageType usageType;
+            CLEANUP_FREE char *usagestr = NULL;
+            virSecretPtr sec;
+
+            usagestr = xPathObjectGetString (doc, xpsecretusage);
+            debug (g, "disk[%zu]: secret type: %s; usage: %s",
+                   i, typestr, usagestr);
+            if (STREQ (usagestr, "none"))
+              usageType = VIR_SECRET_USAGE_TYPE_NONE;
+            else if (STREQ (usagestr, "volume"))
+              usageType = VIR_SECRET_USAGE_TYPE_VOLUME;
+            else if (STREQ (usagestr, "ceph"))
+              usageType = VIR_SECRET_USAGE_TYPE_CEPH;
+            else if (STREQ (usagestr, "iscsi"))
+              usageType = VIR_SECRET_USAGE_TYPE_ISCSI;
+            else
+              continue;
+            sec = virSecretLookupByUsage (conn, usageType, usagestr);
+            if (sec == NULL) {
+              err = virGetLastError ();
+              error (g, _("no secret for usage '%s': %s"),
+                     usagestr, err->message);
+              continue;
+            }
+
+            value = virSecretGetValue (sec, &value_size, 0);
+            if (value == NULL) {
+              err = virGetLastError ();
+              error (g, _("cannot get the value of the secret with usage '%s': %s"),
+                     usagestr, err->message);
+              virSecretFree (sec);
+              continue;
+            }
+
+            virSecretFree (sec);
+          } else {
+            continue;
+          }
+
+          assert (value != NULL);
+          assert (value_size > 0);
+
+          if (STREQ (typestr, "ceph")) {
+            const size_t res = base64_encode_alloc ((const char *) value,
+                                                    value_size, &secret);
+            free (value);
+            if (res == 0 || secret == NULL) {
+              error (g, "internal error: cannot encode the rbd secret as base64");
+              return -1;
+            }
+          } else {
+            secret = (char *) value;
+          }
+
+          assert (secret != NULL);
         }

         xphost = xmlXPathEvalExpression (BAD_CAST "./source/host",

Comment 23 Pino Toscano 2016-11-22 11:59:16 UTC
(In reply to 395783748 from comment #22)
> Thank you!
> 
> These two patches suitable for what version of libguestfs?

They apply on current git/master, which is currently 1.35.x. Also, there have been other commits on that file since 1.32.x, which widens the difference.
I'll provide a single diff of the proposed fix plus all the patches it needs, which applies on top of 1.32.x.

Comment 24 Pino Toscano 2016-11-22 12:03:38 UTC
Created attachment 1222686 [details]
[PATCH] proposed fix + needed patches

This patch is for libguestfs 1.32.x, and includes:
- 4c3968f262e8a45f65f8980d6af39144bd52f0ea (small parts, mostly the virConnect stuff)
- the two patches linked in comment 21

Comment 26 Pino Toscano 2016-12-14 10:15:11 UTC
(In reply to Pino Toscano from comment #25)
> Fixed with
> https://github.com/libguestfs/libguestfs/commit/bef838202b533aa008e62af3f78e0c4654b7c5e9 (cleanup)
> https://github.com/libguestfs/libguestfs/commit/a94d5513456d7255d6e562953ac163f2d7a816fb
> which are in libguestfs >= 1.35.15.

... and also a followup/fix:
https://github.com/libguestfs/libguestfs/commit/7bd6a73f0092cf1e23f9b0584a3212df5309367c
which is in libguestfs >= 1.35.18.

Comment 27 Xianghua Chen 2016-12-15 11:50:04 UTC
Can reproduce it with:
libguestfs-1.32.7-3.el7.x86_64

Steps:
1. Prepare the ceph server according to: https://drive.google.com/open?id=1ryfe-6D968kEiy2YzloMnLXC0tw1cfmeGeIKBndpbUg

In this bug, we use 10.66.10.242 as the ceph mon.
Create a pool: libvirt-pool

2. on mon node:
# cat /etc/ceph/ceph.conf
... ...
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

# ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
[client.libvirt]
    key = AQDnPkpYgg4hIRAAM3z67RZ1spc28zAVi0XC6w==

# ceph auth list
client.libvirt
    key: AQDnPkpYgg4hIRAAM3z67RZ1spc28zAVi0XC6w==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool

3. On client:
# vim secret.xml
<secret ephemeral='no' private='no'>
    <description>CEPH passphrase example</description>
        <usage type='ceph'>
          <name>client.libvirt secret</name>
        </usage>
</secret>

# virsh secret-define secret.xml
Secret 27be818b-9248-40e3-b0a9-706e3ae72925 created

#  virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
 27be818b-9248-40e3-b0a9-706e3ae72925  ceph client.libvirt secret

# virsh secret-set-value --secret 27be818b-9248-40e3-b0a9-706e3ae72925 --base64 AQDnPkpYgg4hIRAAM3z67RZ1spc28zAVi0XC6w==

4. On the client, prepare a guest image: rbd-secret.img
# qemu-img create -f raw rbd:libvirt-pool/rbd-secret.img:id=libvirt:key=AQDnPkpYgg4hIRAAM3z67RZ1spc28zAVi0XC6w==:mon_host=10.66.10.242 8G

Create and start a rhel7.2 guest with the following ceph-secret.xml:
... ...
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='libvirt'>
<secret type='ceph' usage='client.libvirt secret'/>
</auth>
<source protocol='rbd' name='libvirt-pool/rbd-secret.img'>
<host name='10.66.10.242' port='6789'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
</disk>
... ...

# virsh create ceph-secret.xml
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 23    ceph-secret                    running

5. On the client, use guestfish to access the guest image:
# guestfish -d ceph-secret --ro -i
libguestfs: error: qemu-img: /tmp/libguestfsBs0eXd/overlay1: qemu-img exited with error status 1.
To see full error messages you may need to enable debugging.
Do:
  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
and run the command again.  For further information, read:
  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs
You can also run 'libguestfs-test-tool' and post the *complete* output
into a bug report or message to the libguestfs mailing list.


So, this bug can be reproduced.

Comment 29 YongkuiGuo 2017-04-13 10:26:46 UTC
Verified with package:
libguestfs-1.36.3-1.el7.x86_64

Steps:
1. Prepare the ceph server according to: https://drive.google.com/open?id=1ryfe-6D968kEiy2YzloMnLXC0tw1cfmeGeIKBndpbUg

Here we use 10.66.144.75 as the ceph mon.


2. on mon node:
# ceph osd pool create libvirt-pool 128 128

# ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
[client.libvirt]
    key = AQCEcuhY4eicORAAR65g5TTjL9086ltA1Lbmfg==

# ceph auth list
client.libvirt
    key: AQCEcuhY4eicORAAR65g5TTjL9086ltA1Lbmfg==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool

3. On client:
# vim secret.xml
<secret ephemeral='no' private='no'>
    <description>CEPH passphrase example</description>
        <usage type='ceph'>
          <name>client.libvirt secret</name>
        </usage>
</secret>

# virsh secret-define secret.xml
Secret b710e6bf-de07-4cef-bef9-cad0ee06ee2e created

#  virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
 b710e6bf-de07-4cef-bef9-cad0ee06ee2e  ceph client.libvirt secret

# virsh secret-set-value --secret b710e6bf-de07-4cef-bef9-cad0ee06ee2e --base64 AQCEcuhY4eicORAAR65g5TTjL9086ltA1Lbmfg==

4. On the client, prepare a guest image: rbd-secret.img
# qemu-img create -f raw rbd:libvirt-pool/rbd-secret.img:id=libvirt:key=AQCEcuhY4eicORAAR65g5TTjL9086ltA1Lbmfg==:mon_host=10.66.144.75 8G

Create a rhel7.3 VM on rbd-secret.img with the following ceph-secret.xml:
... ...
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='libvirt'>
<secret type='ceph' usage='client.libvirt secret'/>
</auth>
<source protocol='rbd' name='libvirt-pool/rbd-secret.img'>
<host name='10.66.144.75' port='6789'/>
</source>
<backingStore/>
<target dev='hda' bus='ide'/>
</disk>
... ...

# virsh create ceph-secret.xml
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 23    ceph-secret                    running

5. On the client, use guestfish to access the guest image:
# guestfish -d ceph-secret --ro
><fs> run
><fs> list-filesystems
/dev/sda1: xfs
/dev/rhel/root: xfs
/dev/rhel/swap: swap

From the results above, the guest image can be inspected via guestfish, so this bug is verified.

Comment 32 errata-xmlrpc 2017-08-01 22:11:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2023

