Bug 1425293 - qemu_gluster_co_get_block_status gets SIGABRT when doing blockcommit continually
Summary: qemu_gluster_co_get_block_status gets SIGABRT when doing blockcommit continually
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: glusterfs
Version: 7.4
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Niels de Vos
QA Contact: SATHEESARAN
URL: http://lists.nongnu.org/archive/html/...
Whiteboard:
Depends On: 1454558
Blocks: 1425296
 
Reported: 2017-02-21 06:11 UTC by Han Han
Modified: 2021-01-15 07:32 UTC
CC List: 16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned As: 1425296, 1454558
Environment:
Last Closed: 2021-01-15 07:32:04 UTC
Target Upstream Version:
Embargoed:


Attachments
logs,scripts and backtrace (9.44 KB, application/x-gzip)
2017-02-21 06:11 UTC, Han Han
The log of script and bricks log (2.51 KB, application/x-gzip)
2017-03-03 08:05 UTC, Han Han

Description Han Han 2017-02-21 06:11:03 UTC
Created attachment 1255960 [details]
logs,scripts and backtrace

Description of problem:
Same as the summary: qemu_gluster_co_get_block_status() gets SIGABRT when blockcommit is run repeatedly.

Version-Release number of selected component (if applicable):
kernel-3.10.0-514.12.1.el7.x86_64
libvirt-3.0.0-2.el7.x86_64
libvirt-python-3.0.0-1.el7.x86_64
glusterfs-api-3.8.4-15.el7rhgs.x86_64
qemu-kvm-rhev-2.8.0-4.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up a glusterfs server.
2. Copy a qcow2 image with an OS installed onto the glusterfs server.

3. Create a VM whose disk is backed by the glusterfs server
# DOM=c16572
# cat c16572.xml|awk '/<disk/,/<\/disk/'
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2' />
      <source protocol='gluster' name='gluster-vol1/c16572.qcow2'>
        <host name='xx.xx.xx.xx'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
# virsh create $DOM.xml
Domain c16572 created from c16572.xml

4. Create 2 snapshots, one backed by glusterfs and one by a local file
# cat s1.xml 
<domainsnapshot>
<name>s1</name>
<disks>
<disk name='vda' type='network'>
<driver type='qcow2'/>
<source protocol='gluster' name='gluster-vol1/c16572.s1'>
<host name='xx.xx.xx.xx'/>
</source>
</disk>
</disks>
</domainsnapshot>

# virsh snapshot-create $DOM s1.xml --disk-only
Domain snapshot s1 created from 's1.xml'
# virsh snapshot-create-as $DOM s2 --disk-only --diskspec vda,file=/var/lib/libvirt/images/$DOM.s2
Domain snapshot s2 created

5. Run blockcommit and blockjob --pivot repeatedly
# virsh blockcommit $DOM vda --active --shallow --wait --verbose
Block commit: [100 %]
Now in synchronized phase
# virsh dumpxml $DOM|awk '/<disk/,/<\/disk/'
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/c16572.s2'/>
      <backingStore type='network' index='1'>
        <format type='qcow2'/>
        <source protocol='gluster' name='gluster-vol1/c16572.s1'>
          <host name='xx.xx.xx.xx'/>
        </source>
        <backingStore type='network' index='2'>
          <format type='qcow2'/>
          <source protocol='gluster' name='gluster-vol1/c16572.qcow2'>
            <host name='xx.xx.xx.xx'/>
          </source>
          <backingStore/>
        </backingStore>
      </backingStore>
      <mirror type='network' job='active-commit' ready='yes'>
        <format type='qcow2'/>
        <source protocol='gluster' name='gluster-vol1/c16572.s1'>
          <host name='xx.xx.xx.xx'/>
        </source>
      </mirror>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

# virsh blockjob $DOM vda --pivot

# virsh dumpxml $DOM|awk '/<disk/,/<\/disk/'
 <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='gluster-vol1/c16572.s1'>
        <host name='xx.xx.xx.xx'/>
      </source>
      <backingStore type='network' index='1'>
        <format type='qcow2'/>
        <source protocol='gluster' name='gluster-vol1/c16572.qcow2'>
          <host name='xx.xx.xx.xx'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

# virsh blockcommit $DOM vda --active --shallow --wait --verbose
error: internal error: child reported: Kernel does not provide mount namespace: No such file or directory
# virsh dumpxml $DOM|awk '/<disk/,/<\/disk/'
error: failed to get domain 'c16572'
error: Domain not found: no domain with matching name 'c16572'
# virsh blockjob $DOM vda --pivot
error: failed to get domain 'c16572'
error: Domain not found: no domain with matching name 'c16572'
# virsh dumpxml $DOM|awk '/<disk/,/<\/disk/'
error: failed to get domain 'c16572'
error: Domain not found: no domain with matching name 'c16572'

6. There is a coredump:
# id 98246f231004278367ef68284bed2d3823a699cb
reason:         qemu-kvm killed by SIGABRT
time:           Tue 21 Feb 2017 11:24:15 AM CST
cmdline:        /usr/libexec/qemu-kvm -name guest=16572,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-16572/master-key.aes -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid ad912fb3-9e5e-4426-9e13-a9ab5f72e207 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-8-16572/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=gluster://xx.xx.xx.xx/gluster-vol1/16572.qcow2,file.debug=4,format=qcow2,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:4e:f5:57,bus=pci.0,addr=0x9 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channel/target/domain-8-16572/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -spice port=5901,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
package:        qemu-kvm-rhev-2.8.0-4.el7
uid:            107 (qemu)
count:          9
Directory:      /var/spool/abrt/ccpp-2017-02-20-22:24:15-9498
Run 'abrt-cli report /var/spool/abrt/ccpp-2017-02-20-22:24:15-9498' for creating a case in Red Hat Customer Portal

The backtrace of coredump:
(gdb) bt
#0  0x00007ffb4483d1d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffb4483e8c8 in __GI_abort () at abort.c:90
#2  0x00007ffb44836146 in __assert_fail_base (fmt=0x7ffb44987428 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x7ffb5da443f4 "offs >= start", file=file@entry=0x7ffb5da49608 "block/gluster.c", line=line@entry=1285, function=function@entry=0x7ffb5da49d40 <__PRETTY_FUNCTION__.24238> "find_allocation") at assert.c:92
#3  0x00007ffb448361f2 in __GI___assert_fail (assertion=assertion@entry=0x7ffb5da443f4 "offs >= start", file=file@entry=0x7ffb5da49608 "block/gluster.c", line=line@entry=1285, function=function@entry=0x7ffb5da49d40 <__PRETTY_FUNCTION__.24238> "find_allocation") at assert.c:101
#4  0x00007ffb5d930ca2 in qemu_gluster_co_get_block_status (bs=0x7ffb61ed6800, hole=<synthetic pointer>, data=<synthetic pointer>, start=2555904) at block/gluster.c:1285
#5  0x00007ffb5d930ca2 in qemu_gluster_co_get_block_status (bs=0x7ffb61ed6800, sector_num=<optimized out>, nb_sectors=4224, pnum=0x7ffae34d8cfc, file=0x7ffae34d8d00) at block/gluster.c:1378
#6  0x00007ffb5d918236 in bdrv_co_get_block_status (bs=0x7ffb61ed6800, sector_num=sector_num@entry=4992, nb_sectors=4224, pnum=pnum@entry=0x7ffae34d8cfc, file=file@entry=0x7ffae34d8d00) at block/io.c:1749
#7  0x00007ffb5d91839a in bdrv_co_get_block_status (bs=bs@entry=0x7ffb61534800, sector_num=sector_num@entry=1054080, nb_sectors=<optimized out>, 
    nb_sectors@entry=4194303, pnum=pnum@entry=0x7ffae34d8e34, file=file@entry=0x7ffae34d8e00) at block/io.c:1783
#8  0x00007ffb5d91843b in bdrv_get_block_status_above_co_entry (file=0x7ffae34d8e00, pnum=0x7ffae34d8e34, nb_sectors=4194303, sector_num=1054080, base=0x7ffb5fd0e000, bs=<optimized out>) at block/io.c:1819
#9  0x00007ffb5d91843b in bdrv_get_block_status_above_co_entry (opaque=opaque@entry=0x7ffae34d8d90) at block/io.c:1835
#10 0x00007ffb5d918138 in bdrv_get_block_status_above (bs=0x7ffb61534800, base=<optimized out>, sector_num=sector_num@entry=1054080, nb_sectors=nb_sectors@entry=4194303, pnum=<optimized out>, file=file@entry=0x7ffae34d8e00) at block/io.c:1867
#11 0x00007ffb5d918516 in bdrv_is_allocated (file=0x7ffae34d8e00, pnum=0x7ffae34d8e34, nb_sectors=4194303, sector_num=1054080, bs=0x7ffb61534800) at block/io.c:1882
#12 0x00007ffb5d918516 in bdrv_is_allocated (bs=bs@entry=0x7ffb61534800, sector_num=sector_num@entry=1054080, nb_sectors=nb_sectors@entry=4194303, pnum=pnum@entry=0x7ffae34d8e34) at block/io.c:1890
#13 0x00007ffb5d9185b1 in bdrv_is_allocated_above (top=top@entry=0x7ffb61534800, base=base@entry=0x7ffb5fd0e000, sector_num=sector_num@entry=1054080, nb_sectors=nb_sectors@entry=4194303, pnum=pnum@entry=0x7ffae34d8f48) at block/io.c:1921
#14 0x00007ffb5d915ac6 in mirror_run (s=0x7ffb5fc2bee0) at block/mirror.c:604
#15 0x00007ffb5d915ac6 in mirror_run (opaque=0x7ffb5fc2bee0) at block/mirror.c:695
#16 0x00007ffb5d98a55a in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:79
#17 0x00007ffb4484ecf0 in __start_context () at /usr/lib64/libc-2.17.so
#18 0x00007ffcf69a1900 in  ()
#19 0x0000000000000000 in  ()



Actual results:
qemu gets SIGABRT

Expected results:
No SIGABRT.

Additional info:
Logs and reproduction scripts are in the attachment.
QEMU VM log:
# cat /var/log/libvirt/qemu/c16572.log                                                                                                                                                                  
2017-02-21 06:00:13.610+0000: starting up libvirt version: 3.0.0, package: 2.el7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2017-02-09-04:47:33, x86-037.build.eng.bos.redhat.com), qemu version: 2.8.0(qemu-kvm-rhev-2.8.0-4.el7), hostname: lab.work.me
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=c16572,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-32-c16572/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu Penryn -m 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 9abaaea3-e69b-4a9d-bce6-841f03763474 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-32-c16572/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=gluster://xx.xx.xx.xx/gluster-vol1/c16572.qcow2,file.debug=4,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=31,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:ad:71:b5,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -spice port=5901,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev spicevmc,id=charredir0,name=usbredir -device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=1 -chardev spicevmc,id=charredir1,name=usbredir -device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
2017-02-21 06:00:13.696+0000: 25103: debug : virFileClose:109 : Closed fd 32
2017-02-21 06:00:13.696+0000: 25103: debug : virFileClose:109 : Closed fd 37
2017-02-21 06:00:13.726+0000: 25103: debug : virFileClose:109 : Closed fd 3
2017-02-21 06:00:13.728+0000: 25104: debug : virExec:697 : Run hook 0x7f2e5ec78a80 0x7f2e78561840
2017-02-21 06:00:13.728+0000: 25104: debug : qemuProcessHook:2646 : Obtaining domain lock
2017-02-21 06:00:13.728+0000: 25104: debug : virSecuritySELinuxSetSocketLabel:2574 : Setting VM c16572 socket context system_u:system_r:svirt_t:s0:c232,c920
2017-02-21 06:00:13.729+0000: 25104: debug : virDomainLockProcessStart:179 : plugin=0x7f2e141276e0 dom=0x7f2e689d4c60 paused=1 fd=0x7f2e78561370
2017-02-21 06:00:13.729+0000: 25104: debug : virDomainLockManagerNew:134 : plugin=0x7f2e141276e0 dom=0x7f2e689d4c60 withResources=1
2017-02-21 06:00:13.729+0000: 25104: debug : virLockManagerPluginGetDriver:281 : plugin=0x7f2e141276e0
2017-02-21 06:00:13.729+0000: 25104: debug : virLockManagerNew:305 : driver=0x7f2e1edc4000 type=0 nparams=5 params=0x7f2e78561220 flags=1
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerLogParams:98 :   key=uuid type=uuid value=9abaaea3-e69b-4a9d-bce6-841f03763474
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerLogParams:91 :   key=name type=string value=c16572
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerLogParams:79 :   key=id type=uint value=32
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerLogParams:79 :   key=pid type=uint value=25104
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerLogParams:94 :   key=uri type=cstring value=qemu:///system
2017-02-21 06:00:13.730+0000: 25104: debug : virDomainLockManagerNew:146 : Adding leases
2017-02-21 06:00:13.730+0000: 25104: debug : virDomainLockManagerNew:151 : Adding disks
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerAcquire:350 : lock=0x7f2e68a89990 state='<null>' flags=3 action=0 fd=0x7f2e78561370
2017-02-21 06:00:13.730+0000: 25104: debug : virLockManagerSanlockAcquire:935 : Register sanlock 3
2017-02-21 06:00:13.731+0000: 25104: debug : virLockManagerSanlockAcquire:1029 : Acquire completed fd=3
2017-02-21 06:00:13.731+0000: 25104: debug : virLockManagerFree:387 : lock=0x7f2e68a89990
2017-02-21 06:00:13.731+0000: 25104: info : virObjectRef:296 : OBJECT_REF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.731+0000: 25104: info : virObjectRef:296 : OBJECT_REF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.731+0000: 25104: debug : virFileGetMountSubtreeImpl:1921 : prefix=/dev
2017-02-21 06:00:13.732+0000: 25104: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.732+0000: 25104: debug : qemuDomainSetupDev:7194 : Setting up /dev/ for domain c16572
2017-02-21 06:00:13.732+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.732+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu mode=0777
2017-02-21 06:00:13.732+0000: 25104: debug : virFileSetupDev:3595 : Mount devfs on /var/run/libvirt/qemu/c16572.dev type=tmpfs flags=2, opts=mode=755,size=65536
2017-02-21 06:00:13.732+0000: 25104: info : virObjectRef:296 : OBJECT_REF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.732+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/null
2017-02-21 06:00:13.732+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.733+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/full
2017-02-21 06:00:13.733+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/zero
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/random
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/urandom
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/ptmx
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/kvm
2017-02-21 06:00:13.734+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/rtc
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/rtc0
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/hpet
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakeParentPath:2982 : path=/var/run/libvirt/qemu/c16572.dev/vfio/vfio
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev/vfio mode=0777
2017-02-21 06:00:13.735+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.dev mode=0777
2017-02-21 06:00:13.736+0000: 25104: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.hugepages mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.mqueue mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.pts mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu/c16572.shm mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/var/run/libvirt/qemu mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllDisks:7257 : Setting up disks
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllDisks:7266 : Setup all disks
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllHostdevs:7305 : Setting up hostdevs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllHostdevs:7312 : Setup all hostdevs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllChardevs:7336 : Setting up chardevs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllChardevs:7344 : Setup all chardevs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllInputs:7412 : Setting up inputs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllInputs:7419 : Setup all inputs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllRNGs:7451 : Setting up RNGs
2017-02-21 06:00:13.736+0000: 25104: debug : qemuDomainSetupAllRNGs:7459 : Setup all RNGs
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev/hugepages mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev/mqueue mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev/pts mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev/shm mode=0777
2017-02-21 06:00:13.736+0000: 25104: debug : virFileMakePathHelper:2910 : path=/dev mode=0777
2017-02-21 06:00:13.736+0000: 25104: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.736+0000: 25104: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f2e141b83d0
2017-02-21 06:00:13.736+0000: 25104: debug : qemuProcessHook:2690 : Hook complete ret=0
2017-02-21 06:00:13.737+0000: 25104: debug : virExec:699 : Done hook 0
2017-02-21 06:00:13.737+0000: 25104: debug : virExec:706 : Setting child security label to system_u:system_r:svirt_t:s0:c232,c920
2017-02-21 06:00:13.737+0000: 25104: debug : virExec:736 : Setting child uid:gid to 107:107 with caps 0
2017-02-21 06:00:13.737+0000: 25104: debug : virCommandHandshakeChild:435 : Notifying parent for handshake start on 34
2017-02-21 06:00:13.737+0000: 25104: debug : virCommandHandshakeChild:443 : Waiting on parent for handshake complete on 35
2017-02-21 06:00:14.905+0000: 25104: debug : virFileClose:109 : Closed fd 34
2017-02-21 06:00:14.905+0000: 25104: debug : virFileClose:109 : Closed fd 35
2017-02-21 06:00:14.905+0000: 25104: debug : virCommandHandshakeChild:463 : Handshake with parent is done
char device redirected to /dev/pts/41 (label charserial0)
[2017-02-21 06:00:15.055594] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 6c61622e-776f-726b-2e6d-652d32353130 (0) coming up
[2017-02-21 06:00:15.055654] I [MSGID: 114020] [client.c:2356:notify] 0-gluster-vol1-client-0: parent translators are ready, attempting connect on transport
[2017-02-21 06:00:15.059810] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-gluster-vol1-client-0: changing port to 49152 (from 0)
[2017-02-21 06:00:15.063487] I [MSGID: 114057] [client-handshake.c:1439:select_server_supported_programs] 0-gluster-vol1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-02-21 06:00:15.064025] I [MSGID: 114046] [client-handshake.c:1215:client_setvolume_cbk] 0-gluster-vol1-client-0: Connected to gluster-vol1-client-0, attached to remote volume '/br1'.
[2017-02-21 06:00:15.064041] I [MSGID: 114047] [client-handshake.c:1226:client_setvolume_cbk] 0-gluster-vol1-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-02-21 06:00:15.086593] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-gluster-vol1-client-0: Server lk version = 1
[2017-02-21 06:00:15.087550] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol] 0-gluster-vol1: switched to graph 6c61622e-776f-726b-2e6d-652d32353130 (0)
[2017-02-21 06:00:15.089021] W [MSGID: 114031] [client-rpc-fops.c:2211:client3_3_seek_cbk] 0-gluster-vol1-client-0: remote operation failed [No such device or address]
Formatting 'gluster://xx.xx.xx.xx/gluster-vol1/c16572.s1', fmt=qcow2 size=10737418240 backing_file=gluster://xx.xx.xx.xx/gluster-vol1/c16572.qcow2 backing_fmt=qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
Formatting '/var/lib/libvirt/images/c16572.s2', fmt=qcow2 size=10737418240 backing_file=gluster://xx.xx.xx.xx/gluster-vol1/c16572.s1 backing_fmt=qcow2 encryption=off cluster_size=65536 lazy_refcounts=off refcount_bits=16
qemu-kvm: block/gluster.c:1285: find_allocation: Assertion `offs >= start' failed.
2017-02-21 06:00:30.320+0000: shutting down, reason=crashed

Comment 1 Han Han 2017-02-21 06:16:25 UTC
I didn't encounter the bug when testing with libvirt-2.5.0-1.el7 and qemu-kvm-rhev-2.6.0-29.el7 one month ago. Moreover, it blocks a libvirt test case. Marking as a regression and test blocker.

Comment 4 Jeff Cody 2017-02-21 17:46:37 UTC
From the backtrace, this looks to be aborting when trying to do an lseek via the gluster API:

From qemu block/gluster.c:
1270     /*
1271      * SEEK_DATA cases:
1272      * D1. offs == start: start is in data
1273      * D2. offs > start: start is in a hole, next data at offs
1274      * D3. offs < 0, errno = ENXIO: either start is in a trailing hole
1275      *                              or start is beyond EOF
1276      *     If the latter happens, the file has been truncated behind
1277      *     our back since we opened it.  All bets are off then.
1278      *     Treating like a trailing hole is simplest.
1279      * D4. offs < 0, errno != ENXIO: we learned nothing
1280      */
1281     offs = glfs_lseek(s->fd, start, SEEK_DATA);
1282     if (offs < 0) {
1283         return -errno;          /* D3 or D4 */
1284     }
1285     assert(offs >= start);
1286 
1287     if (offs > start) {
1288         /* D2: in hole, next data at offs */
1289         *hole = start;
1290         *data = offs;
1291         return 0;
1292     }

Gluster indicated an error attempting the seek, from the logs:

[2017-02-21 06:00:15.089021] W [MSGID: 114031] [client-rpc-fops.c:2211:client3_3_seek_cbk] 0-gluster-vol1-client-0: remote operation failed [No such device or address]

A failure of glfs_lseek() should mean a value of -1 is returned, with errno set appropriately.  But if for some reason glfs_lseek() silently failed (returning the last offset, or 0), that could be the path that triggered the assert.

The fact that there was a seek failure right before the assertion failure leads me to believe there is indeed a path in the gluster library that returns a bogus value on glfs_lseek() failure.

Assuming that glfs_lseek() mimics Linux lseek() behavior, there should be no way the assert can fire, as the only two values that can be returned are offs >= start (success) or -1 (failure):



"
LSEEK(2)

[...]

SEEK_DATA
Adjust the file offset to the next location in the file greater than or equal to offset containing data.  If offset points to data, then the file offset is set to offset.

[...]

RETURN VALUE
Upon successful completion, lseek() returns the resulting offset location as measured in bytes from the beginning of the file.  On error, the value (off_t) -1 is returned and errno is set to indicate the error."
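
To make the failure mode concrete, a defensive guard of roughly the following shape would turn such a bogus glfs_lseek() result into an I/O error instead of killing the process. This is only a sketch around the block/gluster.c excerpt above, not the actual QEMU patch (comment 16 below mentions the real workaround):

offs = glfs_lseek(s->fd, start, SEEK_DATA);
if (offs < 0) {
    return -errno;          /* D3 or D4 */
}

/* Sketch, not the actual fix: an offs in [0, start) violates the
 * lseek(2) contract, so fail this request with -EIO instead of
 * asserting and taking down the whole qemu-kvm process. */
if (offs < start) {
    return -EIO;
}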


Based on the above, re-assigning to the gluster team.

Comment 5 Niels de Vos 2017-03-03 06:22:21 UTC
Please provide the output of 'gluster volume info gluster-vol1' and if possible the logs from the bricks at the time of the problem (note that time is in UTC in Gluster logs). If we know how the volume is configured, we may be able to reproduce this.

Comment 6 Han Han 2017-03-03 07:56:21 UTC
Hi Niels, here is my configuration to set up the glusterfs server:
# mkdir -p /br1
# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
#manual added "rpc-auth-allow-insecure"
    option rpc-auth-allow-insecure on
#   option transport.address-family inet6
#   option base-port 49152
end-volume

# service glusterd restart

# gluster volume create gluster-vol1 xx.xx.xx.xx:/br1

# gluster volume set gluster-vol1 server.allow-insecure on

# gluster volume start gluster-vol1

# gluster volume set gluster-vol1 nfs.disable on


The gluster volume info
# gluster volume info gluster-vol1
 
Volume Name: gluster-vol1
Type: Distribute
Volume ID: 93004af1-e4bc-4ac6-a105-dfed6ec10b62
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: xx.xx.xx.xx:/br1
Options Reconfigured:
server.allow-insecure: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on


Prepare an image with an OS on the glusterfs server:
# qemu-img convert xxx.qcow2 gluster://xx.xx.xx.xx/gluster-vol1/c16572.qcow2 -O qcow2

Comment 7 Han Han 2017-03-03 08:05:40 UTC
Created attachment 1259422 [details]
The log of script and bricks log

The attachment contains the script log and the brick log; you can debug by correlating their timestamps.
Note that you should remove the following XML from the c16572.xml file if you cannot create the domain:
 <interface type='bridge'>
      <mac address='52:54:00:ad:71:b5'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

Comment 8 Han Han 2017-03-03 08:24:13 UTC
Version:
libvirt-3.0.0-2.el7.x86_64
glusterfs-3.8.4-15.el7rhgs.x86_64
qemu-kvm-rhev-2.8.0-4.el7.x86_64

Comment 9 Niels de Vos 2017-05-19 19:58:28 UTC
When looking through bugfixes that went in Gluster Community 3.8, I came across https://review.gluster.org/15943 . This change does not seem to have been backported to RHGS. Without this change, I think it is possible that the offset is incorrectly reset to 0, and that would cause the assertion that Jeff pointed out in comment #4 (assuming 'start' is > 0):

1281     offs = glfs_lseek(s->fd, start, SEEK_DATA);
1282     if (offs < 0) {
1283         return -errno;          /* D3 or D4 */
1284     }
1285     assert(offs >= start);

Han, is it easy for you to verify this workflow with a test-package that contains that particular patch? It is a change that is only relevant on the glusterfs-server (brick) environment. If you would be able to find time for a test, I can provide you with the packages.
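
For anyone who wants to check the suspected server behaviour without involving QEMU, the glfs_lseek() contract can be probed directly over libgfapi. A minimal sketch, with the volume, host, and image path from this report as placeholders; it assumes glusterfs-api-devel is installed and builds with 'gcc seekprobe.c -o seekprobe -lgfapi':

/* seekprobe.c - sketch: walk a file on a gluster volume and flag any
 * glfs_lseek(SEEK_DATA) result that falls below the requested offset,
 * which would violate the lseek(2) contract. */
#define _GNU_SOURCE             /* for SEEK_DATA */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("gluster-vol1");  /* volume name: placeholder */
    if (!fs || glfs_set_volfile_server(fs, "tcp", "xx.xx.xx.xx", 24007) < 0 ||
        glfs_init(fs) < 0) {
        fprintf(stderr, "failed to connect to volume\n");
        return EXIT_FAILURE;
    }

    glfs_fd_t *fd = glfs_open(fs, "c16572.qcow2", O_RDONLY);
    if (!fd) {
        perror("glfs_open");
        glfs_fini(fs);
        return EXIT_FAILURE;
    }

    off_t end = glfs_lseek(fd, 0, SEEK_END);
    for (off_t start = 0; start < end; start += 64 * 1024) {
        off_t offs = glfs_lseek(fd, start, SEEK_DATA);
        if (offs < 0)
            continue;           /* trailing hole (ENXIO) or real error */
        if (offs < start)       /* the bogus case behind the assert */
            printf("BUG: start=%lld offs=%lld\n",
                   (long long)start, (long long)offs);
    }

    glfs_close(fd);
    glfs_fini(fs);
    return EXIT_SUCCESS;
}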

Comment 10 Han Han 2017-05-22 02:21:43 UTC
No problem. Please provide me the scratch build.

Comment 13 Jeff Cody 2017-05-22 21:08:57 UTC
(In reply to Niels de Vos from comment #9)
> When looking through bugfixes that went in Gluster Community 3.8, I came
> across https://review.gluster.org/15943 . This change does not seem to have
> been backported to RHGS. Without this change, I think it is possible that
> the offset is incorrectly reset to 0, and that would cause the assertion
> that Jeff pointed out in comment #4 (assuming 'start' is > 0):
> 
> 1281     offs = glfs_lseek(s->fd, start, SEEK_DATA);
> 1282     if (offs < 0) {
> 1283         return -errno;          /* D3 or D4 */
> 1284     }
> 1285     assert(offs >= start);
> 
> Han, is it easy for you to verify this workflow with a test-package that
> contains that particular patch? It is a change that is only relevant on the
> glusterfs-server (brick) environment. If you would be able to find time for
> a test, I can provide you with the packages.

Hi,

I am able to reproduce this (I have a duplicate bug, BZ #1451191, that I have not re-assigned or closed yet).  What is being returned is not offs = 0, but rather (offs > 0 && offs < start).  This does not seem to be a legitimate return value for lseek for SEEK_DATA or SEEK_HOLE.

Here is an example of a bad return:

start == 7608336384
offs == 7607877632

I am able to reproduce this easily by using qemu-img convert with a larger image size (> 6GB or so).  

For instance:
qemu-img convert -f qcow2 -O raw gluster://192.168.15.180/gv0/stock-fed-i686.qcow2 convert.img

Comment 16 Niels de Vos 2017-05-24 10:04:04 UTC
Moving this back to the 'qemu-kvm' component; Jeff sent a patch to prevent QEMU from aborting. I suggest getting this change included in the RHEL/RHV package(s). Assigning to Jeff for now, hope that's ok.

The missing backport (mentioned in comment #9) in glusterfs-server will be included through bug 1454558. Note that glusterfs-server is not part of RHEL, but only of the Red Hat Gluster Storage layered product.

Comment 17 Jeff Cody 2017-05-24 20:52:09 UTC
(In reply to Niels de Vos from comment #16)
> Moving this back to the 'qemu-kvm' component; Jeff sent a patch to prevent
> QEMU from aborting. I suggest getting this change included in the RHEL/RHV
> package(s). Assigning to Jeff for now, hope that's ok.
> 
> The missing backport (mentioned in comment #9) in glusterfs-server will be
> included through bug 1454558. Note that glusterfs-server is not part of
> RHEL, but only of the Red Hat Gluster Storage layered product.

The bugfix mentioned in comment #9 references 3.8.  I am able to reproduce this bug with a gluster server-side version of 3.11.0rc0.

Comment 18 Niels de Vos 2017-05-28 17:38:23 UTC
I cannot reproduce the problem mentioned in comment #9; this is what I have:

# rpm -q qemu-img glusterfs
qemu-img-2.9.0-1.fc27.x86_64
glusterfs-3.11.0-0.4.rc1.fc25.x86_64

The configured volume consists of a single brick (default volume options).

The .qcow2 image that I use for testing was created like this:
- download a Fedora cloud image (in .raw format)
- convert the .raw to .qcow2
- resize the 1st partition of the image, adding 8GB
- copy a 6GB randomly filled file into the image

Running "qemu-img convert -f qcow2 -O raw gluster://..." does not cause crashes. Inspection with "ltrace -x glfs_lseek ..." does not show a return 'offset < start'. No segfaults occur.

Could you provide the steps and Gluster configuration that you use to reproduce this problem?

Comment 19 Jeff Cody 2017-05-30 19:53:37 UTC
I built glusterfs from git, commit id 787d224:

[root@localhost ~]# /usr/local/sbin/glusterfsd --version
glusterfs 3.11.0rc0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

[root@localhost ~]# /usr/local/sbin/glusterd --version
glusterfs 3.11.0rc0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


[root@localhost ~]# ps auxww|grep gluster
root      1006  0.0  1.1 605992 11712 ?        Ssl  15:35   0:00 /usr/local/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      1141  0.0  0.9 749672 10040 ?        Ssl  15:35   0:00 /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glusters9b88bef28cc6cabc23f1141afa9afb2.socket --xlator-option *replicate*.node-uuid=f9952781-1b78-42dc-9726-2d5df825a27f
root      1156  6.7  1.7 2009320 18260 ?       Ssl  15:35   0:50 /usr/local/sbin/glusterfsd -s 192.168.15.180 --volfile-id gv0.192.168.15.180.mnt-brick1-brick -p /var/lib/glusterd/vols/gv0/run/192.168.15.180-mnt-brickter/8a0f4b8c54a8ce692977310eb42baf7f.socket --brick-name /mnt/brick1/brick -l /var/log/glusterfs/bricks/mnt-brick1-brick.log --xlator-option *-posix.glusterd-uuid=f9952781-1b78-42dc-9726-2d5df825a27f --brick-port 4915.listen-port=49152
root      1162  0.0  1.4 1022380 14264 ?       Ssl  15:35   0:00 /usr/local/sbin/glusterfsd -s 192.168.15.180 --volfile-id gv0.192.168.15.180.mnt-brick2-brick -p /var/lib/glusterd/vols/gv0/run/192.168.15.180-mnt-brickter/299e5b31e75f9554e4abf0d5073268a7.socket --brick-name /mnt/brick2/brick -l /var/log/glusterfs/bricks/mnt-brick2-brick.log --xlator-option *-posix.glusterd-uuid=f9952781-1b78-42dc-9726-2d5df825a27f --brick-port 4915.listen-port=49153
root      1170  0.0  1.3 1022380 13948 ?       Ssl  15:35   0:00 /usr/local/sbin/glusterfsd -s 192.168.15.180 --volfile-id gv1.192.168.15.180.mnt-gv1-brick-small-1-brick -p /var/lib/glusterd/vols/gv1/run/192.168.15.18k.pid -S /var/run/gluster/9610d012ec3c2db9b2acb6f872acdd1e.socket --brick-name /mnt/gv1-brick-small-1/brick -l /var/log/glusterfs/bricks/mnt-gv1-brick-small-1-brick.log --xlator-option *-posix.glusterd-uuid=f9952781-1-brick-port 49154 --xlator-option gv1-server.listen-port=49154
root      1176  0.0  1.3 1022380 13872 ?       Ssl  15:35   0:00 /usr/local/sbin/glusterfsd -s 192.168.15.180 --volfile-id gv1.192.168.15.180.mnt-gv1-brick-small-2-brick -p /var/lib/glusterd/vols/gv1/run/192.168.15.18k.pid -S /var/run/gluster/edd3ca7e2f7da3d71ed515b2b2e3d2d9.socket --brick-name /mnt/gv1-brick-small-2/brick -l /var/log/glusterfs/bricks/mnt-gv1-brick-small-2-brick.log --xlator-option *-posix.glusterd-uuid=f9952781-1-brick-port 49155 --xlator-option gv1-server.listen-port=49155

[root@localhost ~]# gluster volume info gv0
 
Volume Name: gv0
Type: Replicate
Volume ID: 6bcb7964-0594-4801-a60b-22dae7f871f6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.15.180:/mnt/brick1/brick
Brick2: 192.168.15.180:/mnt/brick2/brick
Options Reconfigured:
performance.readdir-ahead: on



# qemu-img info gluster://192.168.15.180/gv0/stock-fed-i686.qcow2
image: gluster://192.168.15.180/gv0/stock-fed-i686.qcow2
file format: qcow2
virtual size: 256G (274877906944 bytes)
disk size: 7.5G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false


# ./qemu-img convert -f qcow2 -O raw gluster://192.168.15.180/gv0/stock-fed-i686.qcow2 convert.img
qemu-img: block/gluster.c:1278: find_allocation: Assertion `offs >= start' failed.
Aborted (core dumped)


I don't always get the abort on each run of the convert; the larger the qcow2 image file (in actual disk size), the more likely I am to hit it.

Comment 20 Jeff Cody 2017-05-30 19:56:15 UTC
Re-assigning back to glusterfs; there already exists a BZ #1451191 (now on POST) for the QEMU workaround, so this BZ is just for the glusterfs component.  Go ahead and set it to POST (or the appropriate status) if it is fixed in glusterfs.  Thanks!

Comment 21 Han Han 2017-06-15 05:30:46 UTC
Now it works with libvirt-3.2.0-10.el7.x86_64, qemu-kvm-rhev-2.9.0-10.el7.x86_64, and glusterfs-3.8.4-28.el7rhgs.x86_64, and I can use an older glusterfs-server on the server side, so I am removing the TestBlocker flag.

Comment 24 Jeff Cody 2018-04-03 14:44:53 UTC
The invalid lseek return value is also seen on Gluster FUSE mounts, as reported in BZ #1536636.
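
Since the FUSE path goes through plain lseek(2), the same contract can be probed there without libgfapi. A minimal sketch, with the mount point and file name as placeholders:

/* fuseseek.c - sketch: flag SEEK_DATA results below the requested
 * offset on a FUSE-mounted gluster file. */
#define _GNU_SOURCE             /* for SEEK_DATA */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/glusterfuse/c16572.qcow2", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t end = lseek(fd, 0, SEEK_END);
    for (off_t start = 0; start < end; start += 64 * 1024) {
        off_t offs = lseek(fd, start, SEEK_DATA);
        /* Valid results are offs >= start, or -1 with errno set. */
        if (offs >= 0 && offs < start)
            printf("BUG: start=%lld offs=%lld\n",
                   (long long)start, (long long)offs);
    }
    close(fd);
    return 0;
}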

Comment 26 RHEL Program Management 2021-01-15 07:32:04 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

