Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1458846

Summary: [gluster] Problem moving vm from one storage domain to another within the same datacenter
Product: [Community] GlusterFS
Component: posix
Version: 3.8
Reporter: Bryan Sockel <bryan.sockel>
Assignee: Krutika Dhananjay <kdhananj>
Status: CLOSED EOL
Severity: high
Priority: unspecified
CC: amureini, bryan.sockel, bugs, bzlotnik, johan, kdhananj, kwolf, ndevos, sabose, tnisan
Hardware: x86_64
OS: Linux
oVirt Team: Gluster
Type: Bug
Last Closed: 2017-11-07 10:41:56 UTC
Bug Depends On: 1472758, 1479692, 1479717, 1480193
Attachments: VDSM and Engine Log files

Description Bryan Sockel 2017-06-05 15:53:15 UTC
Created attachment 1285081 [details]
VDSM and Engine Log files

Description of problem:
Unable to migrate virtual machine disks from one specific storage domain to another. I am able to migrate other VMs that do not reside on this storage domain. The issue is with moving from two specific storage domains to any other storage domain.

The storage domains I am having trouble with are single-brick gluster file system storage domains.


Version-Release number of selected component (if applicable):
Currently running CentOS 7.3.1611 with oVirt 4.1.2.2.


How reproducible:
Reproducible locally for VMs running on the affected storage domains.


Steps to Reproduce:
1. Click on the Virtual Machine tab
2. Select a virtual machine on the affected domain
3. Select the disk
4. Click Move
5. Select the target domain and disk profile
6. Click OK

Actual results:
1. VDSM vm-host-colo-1 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed.
2. User admin@internal-authz have failed to move disk Trint-Services2 to domain server-vs1-storage-1.

Expected results:


Additional info:

Comment 1 Allon Mureinik 2017-06-06 14:15:48 UTC
Thanks for the report Bryan!

Can you please share your vdsm* and qemu* rpm versions?

Comment 2 Bryan Sockel 2017-06-06 15:28:34 UTC
# rpm -qa | grep qemu
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.9.x86_64
qemu-kvm-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-tools-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-common-ev-2.6.0-28.el7_3.9.1.x86_64
ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch
qemu-img-ev-2.6.0-28.el7_3.9.1.x86_64

# rpm -qa | grep vdsm
vdsm-yajsonrpc-4.19.15-1.el7.centos.noarch
vdsm-gluster-4.19.15-1.el7.centos.noarch
vdsm-xmlrpc-4.19.15-1.el7.centos.noarch
vdsm-client-4.19.15-1.el7.centos.noarch
vdsm-cli-4.19.15-1.el7.centos.noarch
vdsm-api-4.19.15-1.el7.centos.noarch
vdsm-4.19.15-1.el7.centos.x86_64
vdsm-jsonrpc-4.19.15-1.el7.centos.noarch
vdsm-hook-vmfex-dev-4.19.15-1.el7.centos.noarch
vdsm-python-4.19.15-1.el7.centos.noarch


I am currently testing exporting the disks via the command line and re-importing them into oVirt via the GUI, then detaching the old disk and attaching the new one.

Using this command to do so:

qemu-img convert -p -f 'raw' -O qcow2 /rhev/data-center/d776b537-16f2-4543-bd96-9b4cba69e247/e371d380-7194-4950-b901-5f2aed5dfb35/images/b9d99311-4b73-4fd5-b9f9-91b0d95c876e/fda2b437-625e-49f8-8cea-dabdb955d5a7 /tmp/10-VDI-Std_Disk1
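The same export step can be exercised end-to-end on a scratch file (a guarded sketch with throwaway paths, not the real /rhev/data-center image):

```shell
# Exercise the raw -> qcow2 export step on a scratch file, then sanity-check
# the result with 'qemu-img check'. Paths are throwaway; the real command
# above targets the image under /rhev/data-center.
if command -v qemu-img >/dev/null 2>&1; then
  src=$(mktemp)
  dd if=/dev/zero of="$src" bs=1M count=8 status=none   # dummy raw image
  qemu-img convert -p -f raw -O qcow2 "$src" "$src.qcow2"
  qemu-img check "$src.qcow2"                           # consistency check
  rm -f "$src" "$src.qcow2"
  status=converted
else
  status=skipped   # qemu-img not installed here
fi
```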

Comment 3 Allon Mureinik 2017-06-12 15:13:36 UTC
I moved the milestone to 4.1.3 when fixing the component by mistake, mea culpa.

Comment 4 Allon Mureinik 2017-07-02 20:38:23 UTC
4.1.4 is planned as a minimal, fast, z-stream version to fix any open issues we may have in supporting the upcoming EL 7.4.

Pushing out anything unrelated, although if there's a minimal/trivial, SAFE fix that's ready on time, we can consider introducing it in 4.1.4.

Comment 5 Benny Zlotnik 2017-07-25 15:53:30 UTC
Hi Kevin, 

It looks like qemu-img convert fails with 'No data available'

cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3', '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/vs-host-colo-1-gluster.altn.int:_vdi2/e371d380-7194-4950-b901-5f2aed5dfb35/images/c2563715-81b5-4086-bef3-97f998722161/d863fe86-ce75-46cf-a3c2-0480193d0cb7', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/vs-host-colo-1-gluster.altn.int:_server-vs1-storage-1/58a7c5dd-0b31-4066-ae05-8f541614dfde/images/c2563715-81b5-4086-bef3-97f998722161/d863fe86-ce75-46cf-a3c2-0480193d0cb7'], ecode=1, stdout=, stderr=qemu-img: error while reading sector 22020091: No data available, message=None

What could cause this?

Comment 6 Kevin Wolf 2017-07-25 16:06:06 UTC
This error doesn't seem to come from qemu-img, but from the kernel. You can try to verify this under strace; from the error message I expect a pread() there that returns -1/ENODATA.

I don't know why reading your source image would return ENODATA, though. This probably depends on the exact driver involved in accessing the image. What kind of storage is this? Once you know that, you can ask someone working on the respective driver what ENODATA could mean here.
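Kevin's suggestion can be sketched like this (a guarded sketch; SRC and DST are hypothetical placeholders for the real image paths, and qemu-img may issue preadv rather than pread64 depending on the build):

```shell
# Run the failing convert under strace and look for a read syscall that
# returns -1 ENODATA. SRC/DST stand in for the real paths under
# /rhev/data-center.
SRC=/path/to/source-image
DST=/path/to/dest-image
if command -v strace >/dev/null 2>&1 && command -v qemu-img >/dev/null 2>&1; then
  strace -f -e trace=pread64,preadv -o /tmp/convert.trace \
    qemu-img convert -p -t none -T none -f raw -O raw "$SRC" "$DST" || true
  grep ENODATA /tmp/convert.trace || echo "no ENODATA in this trace"
  result=traced
else
  result=skipped   # strace/qemu-img not installed here
fi
```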

Comment 7 Sahina Bose 2017-07-31 06:00:19 UTC
This seems related to https://bugzilla.redhat.com/show_bug.cgi?id=1472758.
The issue is with single-brick gluster volumes. For volumes used as a VM store, we advise using a replica 3 gluster volume.
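For reference, a replica 3 VM-store volume might be created along these lines (a sketch; host and brick names are hypothetical, and `group virt` applies gluster's shipped virt option profile, which includes sharding):

```shell
gluster volume create vmstore replica 3 \
    host1:/gluster/vmstore/brick host2:/gluster/vmstore/brick host3:/gluster/vmstore/brick
gluster volume set vmstore group virt   # apply the virt profile (sharding etc.)
gluster volume start vmstore
```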

Comment 8 Sahina Bose 2017-07-31 06:02:10 UTC
Could you provide the gluster mount log and brick log of the single brick gluster volume?

Comment 9 Johan Bernhardsson 2017-07-31 08:19:28 UTC
I have the same problem. And yes it seems to be gluster related. 

We use disperse volumes for two storage domains. Most operations work as they should, but copying or moving volumes sometimes breaks.

We can, however, first scan the disk with dd if=<virtual disk image> of=/dev/null, and then the move/copy works.
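Johan's read-scan workaround, sketched on a scratch file (the real invocation targets the disk image path under /rhev/data-center, so the path here is a stand-in):

```shell
# The read-scan workaround on a scratch file; in practice "$img" would be the
# virtual disk image under /rhev/data-center. A full sequential read touches
# every shard before the move/copy is retried.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none   # dummy 8 MiB "image"
dd if="$img" of=/dev/null bs=1M status=none           # the workaround read
rm -f "$img"
```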

Gluster volume fs02:
Volume Name: fs02
Type: Disperse
Volume ID: 7f3d96e7-8d1e-48b8-bad0-dc5b3de13b38
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: vbgsan01:/gluster/fs02/fs02
Brick2: vbgsan02:/gluster/fs02/fs02
Brick3: vbgsan03:/gluster/fs02/fs02



From vdsm.log:
2017-07-30 15:36:57,258+0200 ERROR (tasks/8) [storage.Image] Copy image error: image=cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf, src domain=0924ff77-ef51-435b-b90d-50bfbf2e8de7, dst domain=5d47a297-a21f-4587-bb7c-dd00d52010d5 (image:535)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 526, in _interImagesCopy
    self._wait_for_qemuimg_operation(operation)
  File "/usr/share/vdsm/storage/image.py", line 141, in _wait_for_qemuimg_operation
    operation.wait_for_completion()
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 329, in wait_for_completion
    self.poll(timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 324, in poll
    self.error)
QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15', '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3', '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/797faedf-b7e2-4b3c-bff6-82264efa11f5', '-O', 'raw', u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-4587-bb7c-dd00d52010d5/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/797faedf-b7e2-4b3c-bff6-82264efa11f5'], ecode=1, stdout=, stderr=qemu-img: error while reading sector 10657790: No data available, message=None



From gluster mount log for fs02:
[2017-07-30 13:36:57.236163] W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-fs02-client-0: remote operation failed. Path: /.shard/3997817a-4678-4e75-8131-438db9faca9a.1300 (00000000-0000-0000-0000-000000000000) [No data available]
[2017-07-30 13:36:57.237191] W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-fs02-client-2: remote operation failed. Path: /.shard/3997817a-4678-4e75-8131-438db9faca9a.1300 (00000000-0000-0000-0000-000000000000) [No data available]
[2017-07-30 13:36:57.237220] W [MSGID: 122053] [ec-common.c:121:ec_check_status] 0-fs02-disperse-0: Operation failed on 1 of 3 subvolumes.(up=111, mask=111, remaining=000, good=101, bad=010)
[2017-07-30 13:36:57.237229] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-fs02-disperse-0: Heal failed [Invalid argument]
[2017-07-30 13:36:57.248807] W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-fs02-client-0: remote operation failed. Path: /.shard/3997817a-4678-4e75-8131-438db9faca9a.1301 (00000000-0000-0000-0000-000000000000) [No data available]
[2017-07-30 13:36:57.248973] W [MSGID: 114031] [client-rpc-fops.c:2933:client3_3_lookup_cbk] 0-fs02-client-1: remote operation failed. Path: /.shard/3997817a-4678-4e75-8131-438db9faca9a.1301 (00000000-0000-0000-0000-000000000000) [No data available]
[2017-07-30 13:36:57.249502] W [MSGID: 122053] [ec-common.c:121:ec_check_status] 0-fs02-disperse-0: Operation failed on 1 of 3 subvolumes.(up=111, mask=111, remaining=000, good=011, bad=100)
[2017-07-30 13:36:57.249524] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-fs02-disperse-0: Heal failed [Invalid argument]
[2017-07-30 13:36:57.249535] E [MSGID: 133010] [shard.c:1725:shard_common_lookup_shards_cbk] 0-fs02-shard: Lookup on shard 1301 failed. Base file gfid = 3997817a-4678-4e75-8131-438db9faca9a [No data available]
[2017-07-30 13:36:57.249585] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 84787: READ => -1 gfid=3997817a-4678-4e75-8131-438db9faca9a fd=0x7f0cfa192210 (No data available)
[2017-07-30 13:38:06.922042] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/4c3cc823-e977-4fa1-b233-718c445d632e.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/4c3cc823-e977-4fa1-b233-718c445d632e.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)
[2017-07-30 13:38:06.978945] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/797faedf-b7e2-4b3c-bff6-82264efa11f5.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/797faedf-b7e2-4b3c-bff6-82264efa11f5.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)
[2017-07-30 13:38:06.946285] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/4c3cc823-e977-4fa1-b233-718c445d632e.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/cab4e8d0-a82a-4048-b0e8-a9c1bd2e38bf/4c3cc823-e977-4fa1-b233-718c445d632e.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)
[2017-07-30 13:49:08.634767] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/ae6dad99-e7f0-4a7c-b707-f01d782b0a7d/947e0c2b-bccb-4b54-8861-f8139e6e42e2.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/ae6dad99-e7f0-4a7c-b707-f01d782b0a7d/947e0c2b-bccb-4b54-8861-f8139e6e42e2.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)
[2017-07-30 13:49:10.235438] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/e51c567f-5b4b-43ce-ae02-e211ef848c70/755d5745-f44a-4f7f-ae00-6e03f073a497.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/e51c567f-5b4b-43ce-ae02-e211ef848c70/755d5745-f44a-4f7f-ae00-6e03f073a497.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)
[2017-07-30 13:49:10.121184] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/ae6dad99-e7f0-4a7c-b707-f01d782b0a7d/947e0c2b-bccb-4b54-8861-f8139e6e42e2.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/ae6dad99-e7f0-4a7c-b707-f01d782b0a7d/947e0c2b-bccb-4b54-8861-f8139e6e42e2.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)
[2017-07-30 13:49:10.852038] I [MSGID: 109066] [dht-rename.c:1569:dht_rename] 0-fs02-dht: renaming /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/e51c567f-5b4b-43ce-ae02-e211ef848c70/755d5745-f44a-4f7f-ae00-6e03f073a497.meta.new (hash=fs02-disperse-0/cache=fs02-disperse-0) => /0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/e51c567f-5b4b-43ce-ae02-e211ef848c70/755d5745-f44a-4f7f-ae00-6e03f073a497.meta (hash=fs02-disperse-0/cache=fs02-disperse-0)


brick log brick1:
[2017-07-30 13:35:59.927113] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 555f32d7-f95c-4389-a246-0c23c81ae28a
[2017-07-30 13:35:59.927156] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-fs02-server: accepted client from vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-0-0-0 (version: 3.8.14)
[2017-07-30 13:35:59.948638] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-fs02-server: disconnecting connection from vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-0-0-0
[2017-07-30 13:35:59.948686] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-fs02-server: Shutting down connection vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-0-0-0
[2017-07-30 13:36:57.235875] E [MSGID: 113002] [posix.c:266:posix_lookup] 0-fs02-posix: buf->ia_gfid is null for /gluster/fs02/fs02/.shard/3997817a-4678-4e75-8131-438db9faca9a.1300 [No data available]
[2017-07-30 13:36:57.235950] E [MSGID: 115050] [server-rpc-fops.c:156:server_lookup_cbk] 0-fs02-server: 83764: LOOKUP /.shard/3997817a-4678-4e75-8131-438db9faca9a.1300 (be318638-e8a0-4c6d-977d-7a937aa84806/3997817a-4678-4e75-8131-438db9faca9a.1300) ==> (No data available) [No data available]
[2017-07-30 13:36:57.248671] E [MSGID: 113002] [posix.c:266:posix_lookup] 0-fs02-posix: buf->ia_gfid is null for /gluster/fs02/fs02/.shard/3997817a-4678-4e75-8131-438db9faca9a.1301 [No data available]
[2017-07-30 13:36:57.248725] E [MSGID: 115050] [server-rpc-fops.c:156:server_lookup_cbk] 0-fs02-server: 83817: LOOKUP /.shard/3997817a-4678-4e75-8131-438db9faca9a.1301 (be318638-e8a0-4c6d-977d-7a937aa84806/3997817a-4678-4e75-8131-438db9faca9a.1301) ==> (No data available) [No data available]
[2017-07-30 13:40:33.157302] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 555f32d7-f95c-4389-a246-0c23c81ae28a


Brick log brick2:
[2017-07-30 13:35:52.637008] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 555f32d7-f95c-4389-a246-0c23c81ae28a
[2017-07-30 13:35:52.637065] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-fs02-server: accepted client from vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-1-0-0 (version: 3.8.14)
[2017-07-30 13:35:52.652812] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-fs02-server: disconnecting connection from vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-1-0-0
[2017-07-30 13:35:52.652861] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-fs02-server: Shutting down connection vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-1-0-0
[2017-07-30 13:36:49.952097] E [MSGID: 113002] [posix.c:266:posix_lookup] 0-fs02-posix: buf->ia_gfid is null for /gluster/fs02/fs02/.shard/3997817a-4678-4e75-8131-438db9faca9a.1301 [No data available]
[2017-07-30 13:36:49.952171] E [MSGID: 115050] [server-rpc-fops.c:156:server_lookup_cbk] 0-fs02-server: 83890: LOOKUP /.shard/3997817a-4678-4e75-8131-438db9faca9a.1301 (be318638-e8a0-4c6d-977d-7a937aa84806/3997817a-4678-4e75-8131-438db9faca9a.1301) ==> (No data available) [No data available]
[2017-07-30 13:40:25.860720] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 555f32d7-f95c-4389-a246-0c23c81ae28a


brick log brick3:
[2017-07-30 13:35:59.936342] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 555f32d7-f95c-4389-a246-0c23c81ae28a
[2017-07-30 13:35:59.936381] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-fs02-server: accepted client from vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-2-0-0 (version: 3.8.14)
[2017-07-30 13:35:59.949660] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-fs02-server: disconnecting connection from vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-2-0-0
[2017-07-30 13:35:59.949724] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-fs02-server: Shutting down connection vbgkvm03-6489-2017/07/30-13:35:55:901248-fs02-client-2-0-0
[2017-07-30 13:36:57.237101] E [MSGID: 113002] [posix.c:266:posix_lookup] 0-fs02-posix: buf->ia_gfid is null for /gluster/fs02/fs02/.shard/3997817a-4678-4e75-8131-438db9faca9a.1300 [No data available]
[2017-07-30 13:36:57.237162] E [MSGID: 115050] [server-rpc-fops.c:156:server_lookup_cbk] 0-fs02-server: 84044: LOOKUP /.shard/3997817a-4678-4e75-8131-438db9faca9a.1300 (be318638-e8a0-4c6d-977d-7a937aa84806/3997817a-4678-4e75-8131-438db9faca9a.1300) ==> (No data available) [No data available]
[2017-07-30 13:40:33.164197] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 555f32d7-f95c-4389-a246-0c23c81ae28a
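The qemu-img failure and the shard lookup failures in these logs line up. A minimal sketch, assuming the volume uses gluster's default 4 MiB features.shard-block-size (the actual setting is not shown in this report): mapping the failing sector from the QImgError above onto shard indices.

```python
# Map the failing sector from the qemu-img error to gluster shard indices.
# Assumption: the volume uses the default 4 MiB features.shard-block-size
# (the actual setting is not shown in this report).
SECTOR_SIZE = 512
SHARD_BLOCK_SIZE = 4 * 1024 * 1024

def shards_for_read(sector, length):
    """Return the shard indices touched by a read of `length` bytes
    starting at the given 512-byte sector."""
    start = sector * SECTOR_SIZE
    end = start + length - 1
    return list(range(start // SHARD_BLOCK_SIZE, end // SHARD_BLOCK_SIZE + 1))

# qemu-img reported "error while reading sector 10657790"; an illustrative
# 2 MiB read starting there touches shards 1300 and 1301 -- exactly the two
# shards whose LOOKUP fails with "No data available" in the mount and brick
# logs above.
print(shards_for_read(10657790, 2 * 1024 * 1024))  # → [1300, 1301]
```

Under that assumption, the ENODATA surfacing at qemu-img is consistent with the failed .shard/…1300 and …1301 lookups on the bricks.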

Comment 10 Johan Bernhardsson 2017-07-31 08:27:56 UTC
Happens after upgrading to ovirt 4.1.4 as well. Here is the software versions used. 

OS Version:
RHEL - 7 - 3.1611.el7.centos
OS Description:
CentOS Linux 7 (Core)
Kernel Version:
3.10.0 - 514.16.1.el7.x86_64
KVM Version:
2.6.0 - 28.el7.10.1
LIBVIRT Version:
libvirt-2.0.0-10.el7_3.9
VDSM Version:
vdsm-4.19.24-1.el7.centos
SPICE Version:
0.12.4 - 20.el7_3
GlusterFS Version:
glusterfs-3.8.14-1.el7

qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7.10.1)

/Johan

Comment 11 Sahina Bose 2017-08-09 07:07:06 UTC
Krutika, could you provide the gluster version where this is fixed?

Comment 12 Krutika Dhananjay 2017-08-09 09:32:55 UTC
I have posted the patches into release-3.12 and release-3.11 branches in upstream:

https://review.gluster.org/18009
https://review.gluster.org/18010

The fix should hopefully be available in the next .x release of 3.11 and 3.12

-Krutika

Comment 13 Johan Bernhardsson 2017-08-09 09:54:57 UTC
This also affects 3.8 fyi.  so a patch would be good for that as well (or should ovirt use another version of gluster?)

Comment 14 Sahina Bose 2017-08-09 09:58:23 UTC
(In reply to Johan Bernhardsson from comment #13)
> This also affects 3.8 fyi.  so a patch would be good for that as well (or
> should ovirt use another version of gluster?)

Are you using a single brick gluster volume as well?
AFAIK, this is not supported with oVirt.

Comment 15 Johan Bernhardsson 2017-08-09 10:01:11 UTC
We are using disperse on two volumes. 

Gluster volume fs02:
Volume Name: fs02
Type: Disperse
Volume ID: 7f3d96e7-8d1e-48b8-bad0-dc5b3de13b38
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: vbgsan01:/gluster/fs02/fs02
Brick2: vbgsan02:/gluster/fs02/fs02
Brick3: vbgsan03:/gluster/fs02/fs02


We get this problem on that one.

Comment 16 Johan Bernhardsson 2017-08-09 10:02:34 UTC
I know that configuration is not supported, but I want to point out that it is not only on single bricks.

Comment 17 Krutika Dhananjay 2017-08-10 04:13:25 UTC
(In reply to Johan Bernhardsson from comment #13)
> This also affects 3.8 fyi.  so a patch would be good for that as well (or
> should ovirt use another version of gluster?)

I'm not sure if 3.8 has reached end-of-life. Niels, could you confirm?

Comment 18 Niels de Vos 2017-08-10 10:25:21 UTC
(In reply to Krutika Dhananjay from comment #17)
> I'm not sure if 3.8 has reached end-of-life. Niels, could you confirm?

3.8 becomes EOL when 3.12 is released, currently planned for later this month.

There will be one more update for 3.8 over the next few days. If a backport gets sent soon, it may still get included.

Comment 19 Krutika Dhananjay 2017-08-10 10:44:05 UTC
Thanks Niels. Backport on its way. ;)

-Krutika

Comment 20 Allon Mureinik 2017-08-16 13:14:54 UTC
Benny, are there any action items (AI) on us, or is it all on the gluster **server** side?

Comment 21 Benny Zlotnik 2017-08-17 09:40:59 UTC
Doesn't look like there are AI on us

Comment 22 Sahina Bose 2017-08-17 10:51:23 UTC
Moving from oVirt to gluster

Comment 23 Krutika Dhananjay 2017-08-17 10:55:55 UTC
Patch at https://review.gluster.org/#/c/18015/ fixes this issue and has been merged.

Comment 24 Niels de Vos 2017-11-07 10:41:56 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.