Description of problem:
-----------------------
The qemu-img command hits a segmentation fault while creating a qcow2 image using qemu's native driver for GlusterFS. This happens only for the qcow2 image type; creating a raw image file works as expected.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-common 3.8.5-1
qemu-img version 2.6.2

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a gluster pure replicate volume of type replica 2
2. Optimize the volume for virt-store:
   gluster volume set <vol> group virt
3. Set the proper qemu ownership on the volume:
   gluster volume set <vol> storage.owner-uid 107
   gluster volume set <vol> storage.owner-gid 107
4. Start the volume
5. Create an image file:
   qemu-img create -f qcow2 gluster://<gluster-server>/<vol>/<image_file> 25G

Actual results:
---------------
The qemu-img command hits a segmentation fault.

Expected results:
-----------------
The qemu-img command should succeed in creating the image file.

Additional info:
----------------
glusterfs-common 3.8.4-1 is working fine. Here are the console logs:

Oct 21 09:34:19 muc1-vh3 kernel: [61102.400261] qemu-img[22528]: segfault at 18 ip 00007f85fdc36bd0 sp 00007f85fa5c0048 error 6
Oct 21 09:34:19 muc1-vh3 kernel: [61102.400269] qemu-img[22542]: segfault at 8 ip 00007f85f11da453 sp 00007f85eab76df0 error 4
Oct 21 09:34:19 muc1-vh3 kernel: [61102.400270] in libpthread-2.19.so[7f85fdc2a000+18000]
Oct 21 09:34:19 muc1-vh3 kernel: [61102.400272] in client.so[7f85f11a3000+56000]
Oct 21 09:34:19 muc1-vh3 kernel: [61102.400274]
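For convenience, the reproduction steps above can be consolidated into a shell sketch. The volume name, server name, and brick paths below are placeholders (not from the original report); the script only prints the commands, so it can be reviewed or piped to a shell on a host that actually has a gluster cluster.

```shell
#!/usr/bin/env bash
# Sketch of the reproduction steps. VOL, SERVER, and the brick paths are
# hypothetical placeholders; substitute values from your environment.
VOL=replvol
SERVER=gluster-server

repro_commands() {
    # 1. Create a gluster pure replicate volume of type replica 2
    echo "gluster volume create $VOL replica 2 $SERVER:/bricks/b1 $SERVER:/bricks/b2"
    # 2. Optimize the volume for virt-store
    echo "gluster volume set $VOL group virt"
    # 3. Set qemu ownership (uid/gid 107) on the volume
    echo "gluster volume set $VOL storage.owner-uid 107"
    echo "gluster volume set $VOL storage.owner-gid 107"
    # 4. Start the volume
    echo "gluster volume start $VOL"
    # 5. Create the qcow2 image -- the step that segfaults with 3.8.5-1
    echo "qemu-img create -f qcow2 gluster://$SERVER/$VOL/test.qcow2 25G"
}

repro_commands
```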
Hi Michael, I see that you are using glusterfs-3.8.5-1, which is the upstream GlusterFS, so I am changing the product accordingly.
I missed moving the bug to the correct component as per comment 2.
This bug is being closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.