I am getting the same error:
[2016-08-16 18:48:48.668513] E [glfs-fops.c:746:glfs_io_async_cbk] (-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_writev_cbk+0x24c) [0x7fb5df07d31c] -->/lib64/libgfapi.so.0(+0xb81d) [0x7fb5fa17681d] -->/lib64/libgfapi.so.0(+0xb736) [0x7fb5fa176736] ) 0-gfapi: invalid argument: iovec [Invalid argument]
This error was encountered simply by using qemu-img to create a qcow2 image:
qemu-img create -f qcow2 gluster://192.168.15.180/gv0/public/bz.qcow2 1G
This looks to be an error in the libgfapi library.
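For anyone trying to reproduce this without a full guest, the same async write path can be exercised directly from the host with qemu-io; a minimal sketch, reusing the server address and volume from the command above with a throwaway file name:

qemu-img create -f raw gluster://192.168.15.180/gv0/public/repro.raw 64M
qemu-io -f raw -c 'write 0 4k' -c 'flush' gluster://192.168.15.180/gv0/public/repro.raw

On affected 3.7.x client builds these operations may trigger the same "invalid argument: iovec" message or hang.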
Comment 3 by Prasanna Kumar Kalever
2016-08-23 10:33:20 UTC
This bug actually needs two patches.
Cause:
It was a side effect of:
1. Neglecting the count in the glfs_io struct: gio->count is not updated, hence
invalid addresses are dereferenced. (http://review.gluster.org/#/c/14859/)
2. In all async ops such as write, fsync, and ftruncate (except for read), the value of "iovec" is NULL, yet glfs_io_async_cbk checks that value in the common routine, which may end up in failures. (http://review.gluster.org/#/c/14779/)
Consequence:
Invalid addresses are dereferenced while performing fops, so qemu-img and qemu-system-x86_64 do not behave as expected: when the qemu block driver invokes the glfs APIs, we see 'iovec [Invalid argument]', which may cause a hang or failure.
Fix:
This bug needs two patches
http://review.gluster.org/#/c/14779/ (BZ#1350880)
http://review.gluster.org/#/c/14859/ (BZ#1352482)
Both of them went into the gluster 3.7.13 release.
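A quick way to check whether a given deployment carries both fixes is to confirm the client and server packages are at 3.7.13 or later (or a downstream build with the backports) and re-run the previously failing operations; a hedged sketch, with the server and volume names as placeholders:

rpm -q glusterfs-libs glusterfs-api    # client-side packages
gluster --version | head -n 1          # server-side version
qemu-img create -f qcow2 gluster://server/gv0/fixcheck.qcow2 1G
qemu-io -f qcow2 -c 'write 0 64k' -c 'discard 0 64k' gluster://server/gv0/fixcheck.qcow2

The write and discard commands cover the two async callback paths (io_stats_writev_cbk and io_stats_discard_cbk) seen in the tracebacks in this bug.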
Tested with RHEL 7.3 and the RHGS 3.1.3 async build (glusterfs-3.7.9-12.el7rhgs on the server and glusterfs-3.7.9-12.el7 on the hypervisor)
With qemu-kvm-rhev-2.6.0-22.el7.x86_64 and qemu-kvm-img-2.6.0-22.el7.x86_64
1. Created an image using qemu-img (with both raw and qcow2 formats)
2. Installed RHEL 7 on a VM backed by that image
3. Also attached additional disks from the gluster volume and partitioned the disks.
All works well.
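For reference, step 1 boils down to commands of this shape (server, volume, and image names here are placeholders, not the ones used in the test):

qemu-img create -f raw gluster://server/gv0/verify.raw 10G
qemu-img create -f qcow2 gluster://server/gv0/verify.qcow2 10G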
(In reply to SATHEESARAN from comment #6)
> Tested with RHEL 7.3 and the RHGS 3.1.3 async build (glusterfs-3.7.9-12.el7rhgs
> on the server and glusterfs-3.7.9-12.el7 on the hypervisor)
>
> With qemu-kvm-rhev-2.6.0-22.el7.x86_64 and qemu-kvm-img-2.6.0-22.el7.x86_64
>
> 1. Created an image using qemu-img (with both raw and qcow2 formats)
> 2. Installed RHEL 7 on a VM backed by that image
> 3. Also attached additional disks from the gluster volume and partitioned
> the disks.
>
> All works well.
Missed that information: formatted the disks with XFS as well and observed that all worked well.
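Inside the guest, that extra check amounts to something like the following (a sketch; the device name /dev/sdb for the attached gluster-backed disk is an assumption):

parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%   # partition the attached disk
mkfs.xfs -f /dev/sdb1                                   # format the partition with XFS
mount /dev/sdb1 /mnt                                    # confirm it mounts cleanly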
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2016-1450.html
Description of problem:
Boot up a guest with a gluster disk and try to format it; the format fails.

Version-Release number of selected component (if applicable):
host kernel & qemu:
qemu-kvm-rhev-2.6.0-12.el7.x86_64
kernel-3.10.0-461.el7.x86_64

server gluster:
glusterfs-api-3.7.9-11.el7rhgs.x86_64
glusterfs-api-devel-3.7.9-11.el7rhgs.x86_64
glusterfs-libs-3.7.9-11.el7rhgs.x86_64
glusterfs-3.7.9-11.el7rhgs.x86_64
glusterfs-fuse-3.7.9-11.el7rhgs.x86_64
glusterfs-server-3.7.9-11.el7rhgs.x86_64
glusterfs-rdma-3.7.9-11.el7rhgs.x86_64
glusterfs-client-xlators-3.7.9-11.el7rhgs.x86_64
glusterfs-cli-3.7.9-11.el7rhgs.x86_64
glusterfs-devel-3.7.9-11.el7rhgs.x86_64
glusterfs-debuginfo-3.7.9-11.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-11.el7rhgs.x86_64

client gluster:
glusterfs-fuse-3.7.9-11.el7.x86_64
glusterfs-3.7.9-11.el7.x86_64
glusterfs-client-xlators-3.7.9-11.el7.x86_64
glusterfs-api-3.7.9-11.el7.x86_64
glusterfs-libs-3.7.9-11.el7.x86_64

How reproducible:

Steps to Reproduce:
1. Create a gluster disk:
qemu-img create -f raw gluster://intel-e52650-16-1.englab.nay.redhat.com:0/distdata01/gluster_disk.raw 10G

Got some debug info:
Formatting 'gluster://intel-e52650-16-1.englab.nay.redhat.com:0/distdata01/gluster_disk.raw', fmt=raw size=10737418240
[2016-08-15 07:27:49.279372] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 696e7465-6c2d-6535-3632-302d31322d35 (0) coming up
[2016-08-15 07:27:49.279407] I [MSGID: 114020] [client.c:2113:notify] 0-distdata01-client-0: parent translators are ready, attempting connect on transport
[2016-08-15 07:27:49.283244] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-distdata01-client-0: changing port to 49152 (from 0)
[2016-08-15 07:27:49.286885] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-distdata01-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-08-15 07:27:49.287384] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-distdata01-client-0: Connected to distdata01-client-0, attached to remote volume '/home/gluster/gv0'.
[2016-08-15 07:27:49.287408] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-distdata01-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-08-15 07:27:49.298296] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-distdata01-client-0: Server lk version = 1
[2016-08-15 07:27:49.299289] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-distdata01: switched to graph 696e7465-6c2d-6535-3632-302d31322d35 (0)
[2016-08-15 07:27:49.402349] I [MSGID: 114021] [client.c:2122:notify] 0-distdata01-client-0: current graph is no longer active, destroying rpc_client
[2016-08-15 07:27:49.402539] I [MSGID: 114018] [client.c:2037:client_rpc_notify] 0-distdata01-client-0: disconnected from distdata01-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-08-15 07:27:49.402822] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
[2016-08-15 07:27:49.403085] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=156 max=2 total=2
[2016-08-15 07:27:49.403310] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-gfapi: size=108 max=1 total=1
[2016-08-15 07:27:49.403326] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-client-0: size=1300 max=2 total=6
[2016-08-15 07:27:49.403337] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-dht: size=1148 max=0 total=0
[2016-08-15 07:27:49.403406] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-dht: size=2316 max=2 total=7
[2016-08-15 07:27:49.403530] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-read-ahead: size=188 max=0 total=0
[2016-08-15 07:27:49.403542] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-readdir-ahead: size=60 max=0 total=0
[2016-08-15 07:27:49.403551] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-io-cache: size=68 max=0 total=0
[2016-08-15 07:27:49.403611] I [MSGID: 101053] [mem-pool.c:616:mem_pool_destroy] 0-distdata01-io-cache: size=252 max=1 total=4
[2016-08-15 07:27:49.403624] I [io-stats.c:2951:fini] 0-distdata01: io-stats translator unloaded
[2016-08-15 07:27:49.403798] I [MSGID: 101191] [event-epoll.c:663:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2.

2. Boot up the guest with the disk:
MALLOC_PERTURB_=1 /usr/libexec/qemu-kvm \
    -S \
    -name 'avocado-vt-vm1' \
    -sandbox off \
    -machine pc \
    -nodefaults \
    -vga cirrus \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/monitor-qmpmonitor1-20160815-024154-NsYYKL2y,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/monitor-catch_monitor-20160815-024154-NsYYKL2y,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idzuYnSK \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/serial-serial0-20160815-024154-NsYYKL2y,server,nowait \
    -device isa-serial,chardev=serial_id_serial0 \
    -chardev socket,id=seabioslog_id_20160815-024154-NsYYKL2y,path=/var/tmp/seabios-20160815-024154-NsYYKL2y,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20160815-024154-NsYYKL2y,iobase=0x402 \
    -device ich9-usb-ehci1,id=usb1,addr=1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=1d.4,firstport=4,bus=pci.0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=native,cache=none,format=qcow2,file=/usr/share/avocado/data/avocado-vt/images/RHEL-Server-7.3-64-virtio.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pci.0,addr=03,disable-legacy=off,disable-modern=on \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=04,disable-legacy=off,disable-modern=on \
    -drive id=drive_gluster_disk,if=none,snapshot=off,aio=native,cache=none,format=raw,file=gluster://intel-e52650-16-1.englab.nay.redhat.com:0/distdata01/gluster_disk.raw,discard=on \
    -device scsi-hd,id=gluster_disk,drive=drive_gluster_disk,bootindex=1 \
    -device virtio-net-pci,mac=9a:ac:ad:ae:af:b0,id=idiIqjQW,vectors=4,netdev=idMGl1L3,bus=pci.0,addr=05,disable-legacy=off,disable-modern=on \
    -netdev tap,id=idMGl1L3,vhost=on \
    -m 8192 \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2 \
    -cpu 'Westmere',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -vnc :0 \
    -rtc base=utc,clock=host,driftfix=slew \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio

Got some debug info:
[2016-08-15 07:27:59.937326] I [MSGID: 104045] [glfs-master.c:95:notify] 0-gfapi: New graph 696e7465-6c2d-6535-3632-302d31322d35 (0) coming up
[2016-08-15 07:27:59.937363] I [MSGID: 114020] [client.c:2113:notify] 0-distdata01-client-0: parent translators are ready, attempting connect on transport
[2016-08-15 07:27:59.941124] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-distdata01-client-0: changing port to 49152 (from 0)
[2016-08-15 07:27:59.944970] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-distdata01-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-08-15 07:27:59.945357] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-distdata01-client-0: Connected to distdata01-client-0, attached to remote volume '/home/gluster/gv0'.
[2016-08-15 07:27:59.945374] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-distdata01-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-08-15 07:27:59.957499] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-distdata01-client-0: Server lk version = 1
[2016-08-15 07:27:59.958505] I [MSGID: 104041] [glfs-resolve.c:870:__glfs_active_subvol] 0-distdata01: switched to graph 696e7465-6c2d-6535-3632-302d31322d35 (0)

The guest boots up successfully; checking with "fdisk -l" inside the guest shows the disk, and everything seems fine up to this point.

3. Run fdisk in the guest, then format the gluster disk to ext4:
mkfs.ext4 -F /dev/sda
It hangs, and qemu produces some output:
(qemu) [2016-08-15 07:29:48.919866] E [glfs-fops.c:746:glfs_io_async_cbk] (-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_discard_cbk+0x13b) [0x7f670c21220b] -->/lib64/libgfapi.so.0(+0xb8ad) [0x7f672880d8ad] -->/lib64/libgfapi.so.0(+0xb736) [0x7f672880d736] ) 0-gfapi: invalid argument: iovec [Invalid argument]

Actual results:
The format fails.

Expected results:
The format succeeds.

Additional info:
I also tried formatting it to XFS; that failed as well.

host cpuinfo:
processor       : 7
vendor_id       : GenuineIntel
cpu family      : 6
model           : 44
model name      : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping        : 2
microcode       : 0x10
cpu MHz         : 2394.117
cache size      : 12288 KB
physical id     : 0
siblings        : 8
core id         : 10
cpu cores       : 4
apicid          : 21
initial apicid  : 21
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat epb dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips        : 4788.23
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

gluster volume info:
[root@intel-e52650-16-1 ~]# gluster volume info

Volume Name: distdata01
Type: Distribute
Volume ID: 4f8a5fbf-b478-4971-aea9-828e0cf29685
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.66.144.31:/home/gluster/gv0
Options Reconfigured:
performance.readdir-ahead: on
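Distilled, the reproduction comes down to the following (hostname and volume as in the report; /dev/sda is the device name the gluster disk received inside this particular guest):

qemu-img create -f raw gluster://intel-e52650-16-1.englab.nay.redhat.com:0/distdata01/gluster_disk.raw 10G
# boot a guest with the image attached via the gluster:// drive file, then inside the guest:
mkfs.ext4 -F /dev/sda    # hangs; qemu logs "invalid argument: iovec"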