Bug 2033247 - document encrypted RBD disk limitation
Summary: document encrypted RBD disk limitation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libguestfs
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Laszlo Ersek
QA Contact: YongkuiGuo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-16 10:57 UTC by YongkuiGuo
Modified: 2022-11-15 10:13 UTC (History)
5 users (show)

Fixed In Version: libguestfs-1.48.4-2.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:52:35 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-105988 0 None None None 2021-12-16 11:03:00 UTC
Red Hat Product Errata RHSA-2022:7958 0 None None None 2022-11-15 09:53:18 UTC

Description YongkuiGuo 2021-12-16 10:57:13 UTC
Description of problem:
Prepare an RBD environment (using three bare-metal machines on Beaker) and create an image: rhel8.6-secret.img. Use virt-inspector to detect the image info.

Version-Release number of selected component (if applicable):
libguestfs-1.46.1-1.el9.x86_64
guestfs-tools-1.46.1-6.el9.x86_64
libvirt-7.9.0-1.el9.x86_64

How reproducible:
100%


Steps:

1. On the RHEL 9 host:
# virt-inspector --format=raw -a rbd://libvirt:AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==@10.66.144.57:6789/libvirt-pool/rhel8.6-secret.img
libguestfs: trace: add_drive "libvirt-pool/rhel8.6-secret.img" "readonly:true" "format:raw" "protocol:rbd" "server:tcp:10.66.144.57:6789" "username:libvirt" "secret:AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw=="
libguestfs: creating COW overlay to protect original drive content
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = "/tmp"
libguestfs: trace: disk_create "/tmp/libguestfsgIpdNX/overlay1.qcow2" "qcow2" -1 "backingfile:rbd:libvirt-pool/rhel8.6-secret.img:mon_host=10.66.144.57\:6789:id=libvirt:auth_supported=cephx\;none:key=AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==" "backingformat:raw"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=rbd:libvirt-pool/rhel8.6-secret.img:mon_host=10.66.144.57\:6789:id=libvirt:auth_supported=cephx\;none:key=AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsgIpdNX/overlay1.qcow2
Formatting '/tmp/libguestfsgIpdNX/overlay1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=10737418240 backing_file=rbd:libvirt-pool/rhel8.6-secret.img:mon_host=10.66.144.57\:6789:id=libvirt:auth_supported=cephx\;none:key=AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw== backing_fmt=raw lazy_refcounts=off refcount_bits=16
libguestfs: trace: disk_create = 0
libguestfs: trace: get_backend_setting "internal_libvirt_imagelabel"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: add_drive = 0
libguestfs: trace: launch
libguestfs: trace: max_disks
libguestfs: trace: max_disks = 255
libguestfs: trace: version
libguestfs: trace: version = <struct guestfs_version = major: 1, minor: 46, release: 1, extra: rhel=9,release=1.el9,libvirt, >
libguestfs: trace: get_backend
libguestfs: trace: get_backend = "libvirt"
libguestfs: launch: program=virt-inspector
libguestfs: launch: version=1.46.1rhel=9,release=1.el9,libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: unix
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsgIpdNX
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 7009000 (7.9.0)
libguestfs: guest random name = guestfs-nhvu0nd6xtaaw2ed
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x55a0c37f6030
libguestfs: qemu version (reported by libvirt) = 6001000 (6.1.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: trace: get_backend_setting "force_kvm"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "force_tcg"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: parsing domcapabilities XML
libguestfs: trace: get_backend_setting "internal_libvirt_label"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_imagelabel"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: trace: get_backend_setting "internal_libvirt_norelabel_disks"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: build appliance
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.2.1
supermin: rpm: detected RPM version 4.16
supermin: rpm: detected RPM architecture x86_64
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: trace: disk_create "/tmp/libguestfsgIpdNX/overlay2.qcow2" "qcow2" -1 "backingfile:/var/tmp/.guestfs-0/appliance.d/root"
libguestfs: trace: disk_format "/var/tmp/.guestfs-0/appliance.d/root"
libguestfs: command: run: qemu-img --help | grep -sqE -- '\binfo\b.*-U\b'
libguestfs: command: run: qemu-img
libguestfs: command: run: \ info
libguestfs: command: run: \ -U
libguestfs: command: run: \ --output json
libguestfs: command: run: \ /var/tmp/.guestfs-0/appliance.d/root
libguestfs: parse_json: qemu-img info JSON output:\n{\n    "virtual-size": 4294967296,\n    "filename": "/var/tmp/.guestfs-0/appliance.d/root",\n    "format": "raw",\n    "actual-size": 301125632,\n    "dirty-flag": false\n}\n\n
libguestfs: trace: disk_format = "raw"
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsgIpdNX/overlay2.qcow2
Formatting '/tmp/libguestfsgIpdNX/overlay2.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw lazy_refcounts=off refcount_bits=16
libguestfs: trace: disk_create = 0
libguestfs: trace: get_sockdir
libguestfs: trace: get_sockdir = "/tmp"
libguestfs: libvirt secret XML:\n<?xml version="1.0"?>\n<secret ephemeral="yes" private="yes">\n  <description>guestfs secret associated with guestfs-nhvu0nd6xtaaw2ed libvirt-pool/rhel8.6-secret.img</description>\n</secret>\n
libguestfs: create libvirt XML
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-nhvu0nd6xtaaw2ed</name>\n  <memory unit="MiB">1280</memory>\n  <currentMemory unit="MiB">1280</currentMemory>\n  <cpu mode="maximum"/>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n    <timer name="hpet" present="no"/>\n  </clock>\n  <os>\n    <type>hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=UUID=65748fc4-206e-4f5f-adc2-031ebe0e803b selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n    <bios useserial="yes"/>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsgIpdNX/overlay1.qcow2"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsgIpdNX/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfsX7KQGZ/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfsX7KQGZ/guestfsd.sock"/>\n      <target type="virtio" 
name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: total 220
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0   4096 Dec 16 05:16 .
libguestfs: drwxrwxrwt. 9 root root system_u:object_r:tmp_t:s0            4096 Dec 16 05:16 ..
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0     46 Dec 14 21:54 appliance.d
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0      0 Dec  8 22:10 lock
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0   9935 Dec 14 07:23 qemu-18057384-1637309029.devices
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0  29565 Dec 14 07:23 qemu-18057384-1637309029.help
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 162656 Dec 14 07:23 qemu-18057384-1637309029.qmp-schema
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0     48 Dec 14 07:23 qemu-18057384-1637309029.query-kvm
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0     44 Dec 14 07:23 qemu-18057384-1637309029.stat
libguestfs:
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: total 310816
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0         46 Dec 14 21:54 .
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0       4096 Dec 16 05:16 ..
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0    6084608 Dec 16 05:16 initrd
libguestfs: -rwxr-xr-x. 1 root root unconfined_u:object_r:user_tmp_t:s0   11056856 Dec 16 05:16 kernel
libguestfs: -rw-r--r--. 1 qemu qemu system_u:object_r:virt_content_t:s0 4294967296 Dec 16 05:16 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsX7KQGZ
libguestfs: total 4
libguestfs: drwxr-xr-x.  2 root root unconfined_u:object_r:user_tmp_t:s0   47 Dec 16 05:16 .
libguestfs: drwxrwxrwt. 19 root root system_u:object_r:tmp_t:s0          4096 Dec 16 05:16 ..
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Dec 16 05:16 console.sock
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Dec 16 05:16 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: process exited while connecting to monitor: 2021-12-16T10:16:31.926461Z qemu-kvm: -blockdev {"driver":"rbd","pool":"libvirt-pool","image":"rhel8.6-secret.img","server":[{"host":"10.66.144.57","port":"6789"}],"node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}: error connecting: Operation not supported [code=1 int1=-1]
libguestfs: trace: launch = -1 (error)
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55a0c37ea5d0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsgIpdNX
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsX7KQGZ


Actual results:
As above

Expected results:
The virt-inspector command should run successfully.

Additional info:
1. The same issue occurs on RHEL 8.6.
2. The same error occurs when running virt-cat, virt-ls, etc.
3. virt-inspector works when using the direct backend:
LIBGUESTFS_BACKEND=direct virt-inspector --format=raw -a rbd://libvirt:AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==@10.66.144.57:6789/libvirt-pool/rhel8.6-secret.img

Comment 1 Laszlo Ersek 2021-12-16 11:27:10 UTC
The error message comes from QEMU:

qemu-kvm: -blockdev {"driver":"rbd","pool":"libvirt-pool","image":"rhel8.6-secret.img","server":[{"host":"10.66.144.57","port":"6789"}],"node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}: error connecting: Operation not supported [code=1 int1=-1]

Regarding the fact that it works with the direct backend: I think the QEMU command lines must be slightly different.

The QEMU command line should be in the libguestfs log when you use the direct backend, can you please upload that?

And when you use the libvirt backend, only the temporary domain XML is in the libguestfs log; the corresponding QEMU cmdline should be in the libvirtd log. Can you please upload that too? For comparison. Thanks.

Comment 4 Laszlo Ersek 2021-12-16 12:13:11 UTC
Thanks for the logs.

With both the libvirt backend (comment#0) and the direct backend (comment#2), libguestfs invokes qemu-img to create an overlay qcow2, setting the "backing file" field in the qcow2 image to "rbd:libvirt-pool/rhel8.6-secret.img...". The properties are:

libvirt-pool/rhel8.6-secret.img
mon_host=10.66.144.57:6789
id=libvirt
auth_supported=cephx;none
key=AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==
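The backing-file string that qemu-img embeds can be decomposed mechanically. As a small illustrative parser (Python, not libvirt's actual code) that handles the backslash-escaped ':' and ';' seen in the options above:

```python
def parse_rbd_colon_string(spec):
    """Split a qemu 'rbd:' backing-file spec into the image name and its
    key=value options, honouring backslash-escaped ':' and ';'.
    Illustrative sketch only -- not libvirt's parser."""
    assert spec.startswith("rbd:")
    body = spec[len("rbd:"):]
    parts, cur, i = [], "", 0
    while i < len(body):
        c = body[i]
        if c == "\\" and i + 1 < len(body):
            cur += body[i + 1]  # keep the escaped character literally
            i += 2
        elif c == ":":          # unescaped ':' separates fields
            parts.append(cur)
            cur = ""
            i += 1
        else:
            cur += c
            i += 1
    parts.append(cur)
    image = parts[0]
    opts = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
    return image, opts

# The exact string from the qemu-img trace in comment 0:
example = (r"rbd:libvirt-pool/rhel8.6-secret.img"
           r":mon_host=10.66.144.57\:6789:id=libvirt"
           r":auth_supported=cephx\;none"
           r":key=AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==")
image, opts = parse_rbd_colon_string(example)
```

This recovers exactly the property list above: mon_host, id, auth_supported, and key.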

Then, in the direct backend case (comment#2), libguestfs passes only one related option to QEMU, namely:

    -drive file=/tmp/libguestfssrNDCl/overlay1.qcow2,format=qcow2,cache=unsafe,id=hd0,if=none \

Consequently, the backing file on rbd is handled entirely internally to QEMU.

In the libvirt backend case, libguestfs again passes only one related XML element to libvirtd, namely:

    <disk device="disk" type="file">
      <source file="/tmp/libguestfsgIpdNX/overlay1.qcow2"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>

(overlay2 is for the appliance root). *However*, libvirtd turns this into a multitude of -blockdev options for QEMU. First layer:

-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":true},"driver":"qcow2","file":"libvirt-2-storage","backing":"libvirt-4-format"}' \
-blockdev '{"driver":"file","filename":"/tmp/libguestfsJHYFSp/overlay1.qcow2","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \

Second layer:

-blockdev '{"node-name":"libvirt-4-format","read-only":true,"cache":{"direct":false,"no-flush":true},"driver":"raw","file":"libvirt-4-storage"}' \
-blockdev '{"driver":"rbd","pool":"libvirt-pool","image":"rhel8.6-secret.img","server":[{"host":"10.66.144.57","port":"6789"}],"node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}' \

Thus (I think) libvirtd itself collects the backing file information first, and then passes that info, possibly with some modifications, to QEMU, explicitly. The rbd connection is thus not handled internally to QEMU.

At a minimum, the "discard":"unmap" part is unique to the libvirt backend -- the "-drive" option from the direct backend does not have it!
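The key observation can be shown in miniature with a toy reconstruction (Python, purely illustrative -- this is not libvirtd's code) of the rbd storage node libvirtd emits: whatever "id" and "key" it parsed from the backing-file string, no authentication material reaches the -blockdev JSON, and no secret object is defined either:

```python
def rbd_storage_blockdev(image, opts):
    """Toy reconstruction of the 'libvirt-4-storage' -blockdev node shown
    above. The auth keys parsed from the backing-file string ('id', 'key')
    are deliberately absent, mirroring the real libvirtd output."""
    pool, name = image.split("/", 1)
    host, port = opts["mon_host"].rsplit(":", 1)
    return {
        "driver": "rbd",
        "pool": pool,
        "image": name,
        "server": [{"host": host, "port": port}],
        "node-name": "libvirt-4-storage",
        "cache": {"direct": False, "no-flush": True},
        "auto-read-only": True,
        "discard": "unmap",
    }

node = rbd_storage_blockdev(
    "libvirt-pool/rhel8.6-secret.img",
    {"mon_host": "10.66.144.57:6789",
     "id": "libvirt", "key": "AQDv2Lph..."})  # key truncated; illustrative
```

Without a "key" (or a reference to a libvirt secret) in this node, qemu_rbd_connect() cannot authenticate, hence the "error connecting: Operation not supported" above.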

Comment 5 Laszlo Ersek 2021-12-16 12:24:31 UTC
I think this BZ should be further triaged by libvirt / qemu / rbd storage folks. Something fails in qemu_rbd_connect(), I think.

Comment 9 Peter Krempa 2021-12-16 13:33:35 UTC
IIRC the problem will be that in cases where a secret is required to access a storage volume, it can't be passed in via the backing storage string; at that point you must enter it into the XML and define the secret.

We

Comment 10 Peter Krempa 2021-12-16 13:56:14 UTC
I wanted to add that it's also generally not recommended, and actually dangerous, to pass secrets on the command line.

Comment 11 Klaus Heinrich Kiwi 2021-12-20 18:17:28 UTC
Correct me if I'm wrong, but I guess this is a new use case (using encrypted RBD volumes?) and not a regression? In that case, does this really qualify as an RFE to have the secret in the XML (as opposed to the command line)? Are there existing use cases where the secret was passed through command-line arguments that we would need to revisit as well?

Comment 12 YongkuiGuo 2021-12-21 03:14:08 UTC
(In reply to Klaus Heinrich Kiwi from comment #11)
> Correct me if I'm wrong, but I guess this is a new use case (using encrypted
> RBD volumes?) and not a regression? In that case, does this really
> qualify as an RFE to have the secret in the XML (as opposed to the
> command line)? Are there existing use cases where the secret was passed
> through command-line arguments that we would need to revisit as well?

It should be a regression. It's a manual test case for the libguestfs component. But I don't know whether any users or customers use libguestfs to inspect encrypted RBD images in this way (specifying the secret on the command line as in comment 0). In general, we test non-encrypted RBD images at high priority.

Comment 13 Laszlo Ersek 2022-05-13 16:41:56 UTC
(In reply to Peter Krempa from comment #9)
> IIRC the problem will be that in cases where a secret is required to
> access a storage volume, it can't be passed in via teh backing storage
> string, but at that point you must enter it into the XML, and define
> the secret.

After hours of analysis, I must confirm that this is indeed the correct
explanation.

As of current upstream libvirt (@7b0e2e4a558d), the
virStorageSourceParseRBDColonString() function in file
"src/storage_file/storage_source_backingstore.c" is responsible for
parsing the RBD URL details from the backing chain. As a reminder from
comment 4, in the specific case, we have the following key-value list,
for "libvirt-pool/rhel8.6-secret.img":

> mon_host=10.66.144.57:6789
> id=libvirt
> auth_supported=cephx;none
> key=AQDv2LpheIW2OhAASKiQ3x0Hz1Bpc3tdASjvcw==

The function in question does *not* parse the "key" key at all. The only
key, related to authentication, that the function parses, is "id":

>         if (STRPREFIX(p, "id=")) {
>             /* formulate authdef for src->auth */
>             if (src->auth) {
>                 virReportError(VIR_ERR_INTERNAL_ERROR,
>                                _("duplicate 'id' found in '%s'"), src->path);
>                 return -1;
>             }
>
>             authdef = g_new0(virStorageAuthDef, 1);
>
>             authdef->username = g_strdup(p + strlen("id="));
>
>             authdef->secrettype = g_strdup(virSecretUsageTypeToString(VIR_SECRET_USAGE_TYPE_CEPH));
>             src->auth = g_steal_pointer(&authdef);
>
>             /* Cannot formulate a secretType (eg, usage or uuid) given
>              * what is provided.
>              */
>         }

Compare this with the domain XML documentation
<https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms>:

>   <disk type='network'>
>     <driver name="qemu" type="raw"/>
>     <source protocol="rbd" name="image_name2">
>       <host name="hostname" port="7000"/>
>       <snapshot name="snapname"/>
>       <config file="/path/to/file"/>
>       <auth username='myuser'>
>         <secret type='ceph' usage='mypassid'/>
>       </auth>
>     </source>
>     <target dev="hdc" bus="ide"/>
>   </disk>

> auth
>
>     [Since libvirt 3.9.0] the auth element is supported for a disk
>     type "network" that is using a source element with the protocol
>     attributes "rbd" or "iscsi". If present, the auth element provides
>     the authentication credentials needed to access the source. It
>     includes a mandatory attribute username, which identifies the
>     username to use during authentication, as well as a sub-element
>     secret with mandatory attribute type, to tie back to a libvirt
>     secret object that holds the actual password or other credentials
>     (the domain XML intentionally does not expose the password, only
>     the reference to the object that does manage the password). Known
>     secret types are "ceph" for Ceph RBD network sources and "iscsi"
>     for CHAP authentication of iSCSI targets. Both will require either
>     a uuid attribute with the UUID of the secret object or a usage
>     attribute matching the key that was specified in the secret
>     object.

That is, libvirt represents the (userid, secret) pair via the <auth>
element:

[src/conf/storage_source_conf.h]

> typedef struct _virStorageAuthDef virStorageAuthDef;
> struct _virStorageAuthDef {
>     char *username;
>     char *secrettype; /* <secret type='%s' for disk source */
>     int authType;     /* virStorageAuthType */
>     virSecretLookupTypeDef seclookupdef;
> };

[src/conf/storage_source_conf.h]

> typedef enum {
>     VIR_STORAGE_AUTH_TYPE_NONE,
>     VIR_STORAGE_AUTH_TYPE_CHAP,
>     VIR_STORAGE_AUTH_TYPE_CEPHX,
>
>     VIR_STORAGE_AUTH_TYPE_LAST,
> } virStorageAuthType;

[src/util/virsecret.h]

> typedef struct _virSecretLookupTypeDef virSecretLookupTypeDef;
> struct _virSecretLookupTypeDef {
>     int type;   /* virSecretLookupType */
>     union {
>         unsigned char uuid[VIR_UUID_BUFLEN];
>         char *usage;
>     } u;
>
> };

and the virStorageSourceParseRBDColonString() can only fill in the
"username" and "secrettype" fields from the RBD key-value string, it
*cannot* fill in the "authType" and "seclookupdef" fields from the
key-value string. For that, it would have to create a separate secret
object somehow.

(The comment quoted higher up refers to the field name incorrectly -- it
says "Cannot formulate a secretType (eg, usage or uuid) given what is
provided", but what it means is "seclookupdef".)

Now, looking a bit at the libvirt git history, a very relevant commit is
6887af392cbb ("Utilize virDomainDiskAuth for domain disk", 2014-07-03).
That's when the current "virStorageAuthDef" structure was first
introduced -- except back then it was called "virDomainDiskAuth". That's
when the RBD key-value string parsing was also modified to produce this
central authentication structure, and that's when the comment in
question was added, too:

> @@ -2619,9 +2620,24 @@ static int qemuParseRBDString(virDomainDiskDefPtr disk)
>              *e = '\0';
>          }
>
> -        if (STRPREFIX(p, "id=") &&
> -            VIR_STRDUP(disk->src->auth.username, p + strlen("id=")) < 0)
> -            goto error;
> +        if (STRPREFIX(p, "id=")) {
> +            const char *secrettype;
> +            /* formulate authdef for disk->src->auth */
> +            if (VIR_ALLOC(authdef) < 0)
> +                goto error;
> +
> +            if (VIR_STRDUP(authdef->username, p + strlen("id=")) < 0)
> +                goto error;
> +            secrettype = virSecretUsageTypeToString(VIR_SECRET_USAGE_TYPE_CEPH);
> +            if (VIR_STRDUP(authdef->secrettype, secrettype) < 0)
> +                goto error;
> +            disk->src->auth = authdef;
> +            authdef = NULL;
> +
> +            /* Cannot formulate a secretType (eg, usage or uuid) given
> +             * what is provided.
> +             */
> +        }
>          if (STRPREFIX(p, "mon_host=")) {
>              char *h, *sep;

In other words, when parsing RBD key-value strings from the backing
chain, libvirt has *never* parsed the "key" entry. Over time it has
formalized and changed the authentication structure, and moved the
comment in question around (quite many times in fact), but this use case
was never supported by it.

So here's what I think:

- We prepare a valid qcow2 overlay with a backing file specification,
  but that specification is unsupported by libvirtd. As long as we use
  an overlay at all, the problem is unfixable in libguestfs.

- Not a regression in libvirtd -- this never worked in libvirtd, as far
  as I can tell.

- If it is *supposedly* a regression in libguestfs, then please precisely
  identify the version in which this use case (with the libvirtd
  backend) last *worked* [NEEDINFO]. Perhaps, at that time, we did not
  prepare an overlay at all.

  Note: the guestfish command in comment 6 works even with the libvirtd
  backend because no overlay is created there! But if you retry it with
  the "--ro" flag, an overlay will be created, and I expect that it will
  show the same symptom.

  https://libguestfs.org/guestfish.1.html#add-drive-opts

> readonly
>
>     If true then the image is treated as read-only. Writes are still
>     allowed, but they are stored in a temporary snapshot overlay which
>     is discarded at the end. The disk that you add is not modified.

I think the best we can do here is update the documentation. Example:

  https://libguestfs.org/guestfs.3.html#network-block-device

This section already has Notes, which list various backend-specific
limitations:

> * The libvirt backend requires that you set the format parameter of
>   "guestfs_add_drive_opts" accurately when you use writable NBD disks.
>
> * The libvirt backend has a bug that stops Unix domain socket
>   connections from working:
>   https://bugzilla.redhat.com/show_bug.cgi?id=922888
>
> * The direct backend does not support readonly connections because of
>   a bug in qemu: https://bugs.launchpad.net/qemu/+bug/1155677

A sibling section there is about Ceph:

  https://libguestfs.org/guestfs.3.html#ceph

We should add a similar Note there, explaining that an encrypted RBD
disk does not work with the libvirt backend *IF* it is specified as the
backing file of a QCOW2 image. With this particular scope (and this
scope only), i.e. for updating the documentation, I'm taking the bug.

Comment 14 Laszlo Ersek 2022-05-13 16:53:58 UTC
libguestfs commit 6d6644d52d80 ("launch: libvirt: Implement drive secrets (RHBZ#1159016).", 2014-10-31) is what makes the guestfish command in comment 6 work -- but, again, that's entirely different. In that case, we don't have an overlay, and libguestfs explicitly prepares secrets for libvirtd in advance.

In the problematic case, libvirtd itself parses the RBD params from the "backing file" field of the QCOW2 image, and it does not create a (temporary) secret for itself, from the "key" RBD param.

Comment 15 Laszlo Ersek 2022-05-13 17:00:27 UTC
(We create overlays ultimately with disk_create_qcow2() in libguestfs [lib/create.c], which invokes qemu-img.)
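As a sketch (assuming the option spelling seen in the comment 0 log), the overlay creation boils down to an argv like:

```python
def overlay_create_argv(overlay, backing_spec, backing_fmt="raw"):
    """Approximate qemu-img invocation issued by disk_create_qcow2()
    when protecting a drive with a COW overlay -- cf. the
    'command: run:' lines in comment 0. Sketch only."""
    return [
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={backing_spec},backing_fmt={backing_fmt}",
        overlay,
    ]

argv = overlay_create_argv("/tmp/overlay1.qcow2",
                           "rbd:libvirt-pool/rhel8.6-secret.img")
```

It is this backing_file= option value, carrying the rbd colon string, that libvirtd later re-parses from the qcow2 header.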

Comment 16 Laszlo Ersek 2022-05-18 08:31:44 UTC
[libguestfs PATCH] guestfs.pod: document encrypted RBD disk limitation
Message-Id: <20220518083014.9890-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-May/028903.html

Also, since there's been no answer to my NEEDINFO request in comment 13, I'm removing the Regression keyword.

Comment 17 YongkuiGuo 2022-05-18 09:17:18 UTC
(In reply to Laszlo Ersek from comment #16)
> [libguestfs PATCH] guestfs.pod: document encrypted RBD disk limitation
> Message-Id: <20220518083014.9890-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-May/028903.html
> 
> Also, since there's been no answer to my NEEDINFO request in comment 13, I'm
> removing the Regression keyword.

Sorry for the late response. Documenting the encrypted RBD disk limitation in the man page is ok for me.

Comment 18 Laszlo Ersek 2022-05-19 12:15:04 UTC
(In reply to Laszlo Ersek from comment #16)
> [libguestfs PATCH] guestfs.pod: document encrypted RBD disk limitation
> Message-Id: <20220518083014.9890-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-May/028903.html

Upstream commit 544bb0ff5079 ("guestfs.pod: document encrypted RBD disk limitation", 2022-05-19).

Comment 21 YongkuiGuo 2022-05-27 08:36:18 UTC
Verified with libguestfs-1.48.3-1.el9.x86_64:

Steps:

1. On the RHEL 9.1 host:

$ man 3 guestfs 

An encrypted RBD disk -- directly opening which would require the "username" and "secret" parameters -- cannot be accessed if the following conditions all hold:
•   the backend is libvirt,

•   the image specified by the "filename" parameter is different from the encrypted RBD disk,

•   the image specified by the "filename" parameter has qcow2 format,

•   the encrypted RBD disk is specified as a backing file at some level in the qcow2 backing chain.

This limitation is due to libvirt's (justified) separate handling of disks vs. secrets.  When the RBD username and secret are provided inside a qcow2 backing file specification, libvirt does not construct an ephemeral secret object from those, for Ceph authentication.  Refer to https://bugzilla.redhat.com/2033247.
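The four documented conditions can be expressed as a predicate (a hypothetical helper, not a libguestfs API; names are invented for illustration):

```python
def hits_encrypted_rbd_limitation(backend, filename, filename_format,
                                  backing_chain):
    """True when the documented limitation applies: libvirt backend, a
    qcow2 'filename' distinct from the encrypted RBD disk, and an
    encrypted RBD disk somewhere in the qcow2 backing chain.
    Illustrative only; 'key=' stands in for 'requires a secret'."""
    encrypted_rbd = [b for b in backing_chain
                     if b.startswith("rbd:") and "key=" in b]
    return (backend == "libvirt"
            and filename_format == "qcow2"
            and bool(encrypted_rbd)
            and filename not in encrypted_rbd)
```

So the failing case in comment 0 (libvirt backend, qcow2 overlay, encrypted rbd backing file) matches, while the direct backend or a directly-opened rbd disk does not.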

Comment 23 errata-xmlrpc 2022-11-15 09:52:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: libguestfs security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7958

