Description of problem:

When VDSM reads SD metadata from a Gluster storage domain, it performs direct I/O, which results in a full block being read. The block contains extra binary zeroes as padding, so VDSM then reports:

2020-02-08 16:03:36,304-0500 WARN (jsonrpc/0) [storage.PersistentDict] Could not parse line `^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ .... etc., etc.

Version-Release number of selected component (if applicable):
RHV 4.3.7

How reproducible:
100% in a RHV DC with a Gluster SD.

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
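Not part of the original report, just an illustration of the failure mode and the obvious mitigation: a direct (block-aligned) read returns a whole block whose tail is NUL padding, and a parser that treats every line as KEY=VALUE then warns on the padding. A minimal sketch (the `parse_metadata` helper is hypothetical, not VDSM's actual API) that strips the trailing zeroes before parsing:

```python
def parse_metadata(raw: bytes) -> dict:
    """Parse KEY=VALUE metadata, ignoring the binary-zero padding
    that a direct, block-aligned read appends after the real content.
    Hypothetical helper for illustration only."""
    text = raw.rstrip(b"\x00").decode("utf-8")
    md = {}
    for line in text.splitlines():
        if not line:
            continue
        key, sep, value = line.partition("=")
        if not sep:
            # Without the rstrip above, the padding would end up here
            # and produce the "Could not parse line" warning.
            raise ValueError("Could not parse line: %r" % line)
        md[key] = value
    return md

# Simulate a 4096-byte block as returned by a direct read:
# 21 bytes of real content followed by zero padding.
block = b"CLASS=Data\nVERSION=5\n".ljust(4096, b"\x00")
print(parse_metadata(block))  # → {'CLASS': 'Data', 'VERSION': '5'}
```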
Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1800803#c22 for a proposed patch that would circumvent the problem. This was originally reported in Gluster BZ 1737141.
The issue should be fixed in all gluster versions now:
- mainline: bug 1738419
- gluster 6: bug 1737141
- gluster 7: bug 1740316

Krutika, is this correct? Do we have an expected release date for this fix?
Sahina, can you please have a look?
(In reply to Nir Soffer from comment #2)
> The issue should be fixed in all gluster versions now:
> - mainline: bug 1738419
> - gluster 6: bug 1737141
> - gluster 7: bug 1740316
>
> Krutika, is this correct? Do we have an expected release date for this fix?

See this - https://bugzilla.redhat.com/show_bug.cgi?id=1802013
It's being targeted for RHGS-3.5.2.

-Krutika
This bug is being fixed in RHGS 3.5.2, and is targeted for RHV 4.4.
The fix is now available in the RHGS 3.5.2 interim build.
Verified with RHVH 4.4.1 and RHGS 3.5.2 (glusterfs-6.0-37.el8rhgs) with the following steps:

[root@ ~]# ls /rhev/data-center/mnt/glusterSD/rhsqa-grafton7.lab.eng.blr.redhat.com\:_vmstore/977e8d86-afd8-46c1-bf15-ed19d3cb6ed1/dom_md/
ids  inbox  leases  metadata  outbox  xleases

[root@ ~]# stat metadata
  File: metadata
  Size: 391        Blocks: 1          IO Block: 131072 regular file
Device: 34h/52d    Inode: 10208956554895298979  Links: 1
Access: (0644/-rw-r--r--)  Uid: (   36/    vdsm)   Gid: (   36/     kvm)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-06-03 18:59:17.547192000 +0000
Modify: 2020-06-03 18:59:17.548192011 +0000
Change: 2020-06-03 18:59:17.600192582 +0000
 Birth: -

[root@ ~]# cat metadata
ALIGNMENT=1048576
BLOCK_SIZE=4096
CLASS=Data
DESCRIPTION=vmstore
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
POOL_UUID=0f3fc724-a5ca-11ea-a7a6-004755204901
REMOTE_PATH=rhsqa-grafton7.lab.eng.blr.redhat.com:/vmstore
ROLE=Regular
SDUUID=977e8d86-afd8-46c1-bf15-ed19d3cb6ed1
TYPE=GLUSTERFS
VERSION=5
_SHA_CKSUM=771d06cb29cd1ee6a7e5b4c72be119cd5078a87e

[root@ ~]# dd if=metadata of=/dev/null bs=4096 count=1
0+1 records in
0+1 records out
391 bytes copied, 0.000101469 s, 3.9 MB/s

[root@ ~]# dd if=metadata of=/dev/null bs=4096 count=1 iflag=direct
0+1 records in
0+1 records out
391 bytes copied, 0.00143502 s, 272 kB/s

So the direct read returns only the 391 bytes of actual content, with no zero padding.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246