Description of problem:
When creating a VM image on an F30 virtualization server with a glusterfs storage pool, it fails with: virtio-disk0X: Unknown protocol 'gluster'
Normally one would just install the "qemu-block-gluster" package, but for whatever reason it's not being built for F30 o_O
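For context, the failure should be reproducible outside libvirt too; a minimal sketch, assuming a hypothetical gluster host and volume name:

  # Without the gluster block driver installed, qemu-img is expected to
  # fail the same way libvirt does (host/volume names are made up):
  qemu-img create -f qcow2 gluster://gluster.example.com/vmpool/test.qcow2 10G
  # expected: qemu-img: ... Unknown protocol 'gluster'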
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. dnf install qemu-block-gluster
2. Carefully inspect https://koji.fedoraproject.org/koji/buildinfo?buildID=1348234 for the package
Actual results:
No match for argument: qemu-block-gluster
Error: Unable to find a match: qemu-block-gluster
No qemu-block-gluster package is being built, as can be seen in Koji.

Expected results:
qemu-block-gluster is built and available for the 3.1.x release in F30, as it is for the 3.0.x branch in F29 and the 4.0.x branch in F31.
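For what it's worth, the missing subpackage can also be confirmed straight from the repos; a quick check along these lines (default F30 repos assumed):

  # List the qemu block driver subpackages known to the enabled repos:
  dnf repoquery 'qemu-block-*'
  # On F30 this lists e.g. qemu-block-curl and qemu-block-iscsi,
  # but no qemu-block-gluster.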
It was disabled a while back because of https://bugzilla.redhat.com/show_bug.cgi?id=1684298
and then re-enabled. However, I can see that F30 was branched off from Rawhide while
it was disabled, and it is still disabled in F30. (This may not necessarily be wrong;
it depends on whether bug 1684298 was also fixed in F30.)
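If anyone wants to double-check the state of the F30 branch themselves, something along these lines should do it (anonymous clone; the exact name of the conditional in qemu.spec may differ):

  fedpkg clone -a qemu
  cd qemu
  git checkout f30
  # Look for the build conditional that gates the gluster block driver:
  grep -n -i gluster qemu.spec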
There's a scratch build against F30 with the gluster block driver enabled here:
I'll test it if it builds ;)
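For reference, a scratch build like that is typically submitted from the modified dist-git checkout, roughly:

  # From the f30 branch of the qemu dist-git checkout, building an srpm
  # that includes local (uncommitted) spec changes:
  fedpkg scratch-build --srpm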
Thanks for the quick response.
Well, it failed to build on ppc64le, but that doesn't seem to have anything
to do with gluster (although the actual failure, in migration, does look
serious). I have pushed this change anyway and submitted a real build:
This will probably fail too, so I won't be able to submit an update.
FEDORA-2019-80e8403bf4 has been submitted as an update to Fedora 30. https://bodhi.fedoraproject.org/updates/FEDORA-2019-80e8403bf4
qemu-3.1.1-2.fc30 has been pushed to the Fedora 30 testing repository. If problems still persist, please make note of it in this bug report.
See https://fedoraproject.org/wiki/QA:Updates_Testing for
instructions on how to install test updates.
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2019-80e8403bf4
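For anyone following along, one way to pull in just this update from updates-testing (advisory ID from the Bodhi comment above):

  sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2019-80e8403bf4
  # Then install the now-available subpackage:
  sudo dnf install --enablerepo=updates-testing qemu-block-gluster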
qemu-3.1.1-2.fc30 has been pushed to the Fedora 30 stable repository. If problems still persist, please make note of it in this bug report.