Bug 1801892 - Direct read from a Gluster storage domain results in a full block containing zero padding
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.3.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: 4.4.0
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1802013 1802016
Blocks:
 
Reported: 2020-02-11 20:43 UTC by Gordon Watson
Modified: 2020-08-04 13:28 UTC
CC: 15 users

Fixed In Version: gluster-6.0-34
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-04 13:27:53 UTC
oVirt Team: Gluster
Target Upstream Version:
lsvaty: testing_plan_complete-




Links
System ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4819861 None None None 2020-02-11 21:46:05 UTC
Red Hat Product Errata RHEA-2020:3246 None None None 2020-08-04 13:28:24 UTC

Description Gordon Watson 2020-02-11 20:43:38 UTC
Description of problem:

When VDSM reads storage domain (SD) metadata from a Gluster storage domain, it uses direct I/O, so a full block is read back. The block is padded out to the block size with binary zeroes.

VDSM then logs:

2020-02-08 16:03:36,304-0500 WARN  (jsonrpc/0) [storage.PersistentDict] Could not parse line `^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ .... etc., etc.
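The failure mode can be illustrated with a short sketch (this is not vdsm's actual code; the parser and payload below are hypothetical, chosen only to mirror the key=value metadata format and the 391-byte payload size shown later in this bug):

```python
# Sketch: a direct read of a 4096-byte block returns the metadata payload
# followed by NUL padding. Naive line-based key=value parsing then hits an
# unparsable run of b"\x00" bytes (the "Could not parse line" warning);
# stripping trailing NULs before parsing avoids it.
BLOCK_SIZE = 4096

payload = b"SDUUID=977e8d86-afd8-46c1-bf15-ed19d3cb6ed1\nTYPE=GLUSTERFS\n"
# What a direct read returns from the buggy gluster: a full, padded block.
block = payload + b"\0" * (BLOCK_SIZE - len(payload))

def parse_naive(data):
    """Parse key=value lines; the NUL padding becomes an unparsable line."""
    pairs, bad = {}, []
    for line in data.split(b"\n"):
        if not line:
            continue
        if b"=" not in line:
            bad.append(line)  # e.g. the run of b"\x00" padding bytes
            continue
        key, value = line.split(b"=", 1)
        pairs[key] = value
    return pairs, bad

def parse_stripped(data):
    """Drop trailing NUL padding before parsing (the circumvention)."""
    return parse_naive(data.rstrip(b"\0"))

pairs, bad = parse_naive(block)
print(len(bad))  # 1: the padding run fails to parse
pairs, bad = parse_stripped(block)
print(len(bad))  # 0
```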



Version-Release number of selected component (if applicable):

RHV 4.3.7


How reproducible:

100% in an RHV data center with a Gluster storage domain.


Steps to Reproduce:
1. 
2. 
3.

Actual results:


Expected results:


Additional info:

Comment 1 Gordon Watson 2020-02-11 20:46:20 UTC
Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1800803#c22 for a proposed patch that would circumvent the problem.

This was originally reported in Gluster BZ 1737141.

Comment 2 Nir Soffer 2020-02-17 11:48:09 UTC
The issue should be fixed in all gluster versions now:
- mainline: bug 1738419
- gluster 6: bug 1737141
- gluster 7: bug 1740316

Krutika, is this correct? Do we have an expected release date for this fix?

Comment 3 Tal Nisan 2020-02-17 15:27:19 UTC
Sahina, can you please have a look?

Comment 4 Krutika Dhananjay 2020-02-18 06:21:49 UTC
(In reply to Nir Soffer from comment #2)
> The issue should be fixed in all gluster versions now:
> - mainline: bug 1738419
> - gluster 6: bug 1737141
> - gluster 7: bug 1740316
> 
> Krutika, is this correct? Do we have an expected release date for this fix?

See this - https://bugzilla.redhat.com/show_bug.cgi?id=1802013
It's being targeted for RHGS-3.5.2.

-Krutika

Comment 5 SATHEESARAN 2020-05-14 13:05:54 UTC
This bug is fixed in RHGS 3.5.2 and is targeted for RHV 4.4.

Comment 9 SATHEESARAN 2020-05-19 05:08:32 UTC
The fix is now available in an RHGS 3.5.2 interim build.

Comment 10 SATHEESARAN 2020-06-06 11:59:15 UTC
Verified with RHVH 4.4.1 and RHGS 3.5.2 - glusterfs-6.0-37.el8rhgs with the following steps:

[root@ ~]# ls /rhev/data-center/mnt/glusterSD/rhsqa-grafton7.lab.eng.blr.redhat.com\:_vmstore/977e8d86-afd8-46c1-bf15-ed19d3cb6ed1/dom_md/
ids  inbox  leases  metadata  outbox  xleases

[root@ ~ ]# stat metadata 
  File: metadata
  Size: 391       	Blocks: 1          IO Block: 131072 regular file
Device: 34h/52d	Inode: 10208956554895298979  Links: 1
Access: (0644/-rw-r--r--)  Uid: (   36/    vdsm)   Gid: (   36/     kvm)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-06-03 18:59:17.547192000 +0000
Modify: 2020-06-03 18:59:17.548192011 +0000
Change: 2020-06-03 18:59:17.600192582 +0000
 Birth: -

[root@ ~ ]# cat metadata 
ALIGNMENT=1048576
BLOCK_SIZE=4096
CLASS=Data
DESCRIPTION=vmstore
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
POOL_UUID=0f3fc724-a5ca-11ea-a7a6-004755204901
REMOTE_PATH=rhsqa-grafton7.lab.eng.blr.redhat.com:/vmstore
ROLE=Regular
SDUUID=977e8d86-afd8-46c1-bf15-ed19d3cb6ed1
TYPE=GLUSTERFS
VERSION=5
_SHA_CKSUM=771d06cb29cd1ee6a7e5b4c72be119cd5078a87e

[root@ ~]# dd if=metadata of=/dev/null bs=4096 count=1
0+1 records in
0+1 records out
391 bytes copied, 0.000101469 s, 3.9 MB/s
[root@ ~]# dd if=metadata of=/dev/null bs=4096 count=1 iflag=direct
0+1 records in
0+1 records out
391 bytes copied, 0.00143502 s, 272 kB/s


So the direct read returns only the 391 bytes of metadata, with no zero padding.
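The dd comparison above boils down to one condition: the bytes a direct read returns, after trailing NULs are stripped, must match the file size reported by stat. A minimal sketch of that check (hypothetical helper, simulated in memory; a real direct read would open the file with os.O_DIRECT and an aligned buffer):

```python
# Sketch of the condition behind the dd verification: a read is padding-free
# when its raw length equals both its NUL-stripped length and the expected
# file size. The two inputs below simulate fixed vs buggy gluster behavior.
def padding_free(data, expected_size):
    return len(data.rstrip(b"\0")) == len(data) == expected_size

good = b"ROLE=Regular\n"                    # fixed gluster: only real bytes
padded = good + b"\0" * (4096 - len(good))  # buggy gluster: full padded block

print(padding_free(good, len(good)))    # True
print(padding_free(padded, len(good)))  # False
```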

Comment 16 errata-xmlrpc 2020-08-04 13:27:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHV RHEL Host (ovirt-host) 4.4), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3246

