| Summary: | Updates not happening with replication | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Jacob Shucart <jacob> |
| Component: | core | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | urgent | ||
| Version: | 3.1.0 | CC: | allen, anush, arohter, divya, gluster-bugs, jacob, shehjart, vijay, vikas, vraman |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | Type: | --- | |
| Regression: | --- | Mount Type: | --- |
| Documentation: | DA | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
|
Description (Jacob Shucart, 2010-10-27 12:46:31 UTC)
Some more findings:

- echo 3 > /proc/sys/vm/drop_caches seemed to fix it, which points to a quick-read issue.
- It appears that the issue does not happen for files larger than 64kb, which again points to quick-read.
- It happens intermittently on Jacob's setup.
- It happens on both a Distributed-Replicate volume and a plain Distribute volume.
- I haven't been able to reproduce it on Jacob's setup if I take quick-read out of the volume file and mount using that.
- The issue was reported by a customer, so we know it has been reproduced in at least one other setup.

I created a volume called mirror with 2x2 replication and started it. Then I mounted it at /mirror on the localhost of each of the storage nodes, which are CentOS 5.5 64-bit, using glusterfs:
    [root@jacobgfs31-s1 mirror]# df -h /mirror
    Filesystem            Size  Used Avail Use% Mounted on
    glusterfs#localhost:/mirror
                          2.9G  900M  1.9G  33% /mirror
I can create a file called /mirror/file.txt and it will show up in /mirror on all 4 nodes. If I then edit /mirror/file.txt using nano on node 1 and look at /mirror/file.txt on the other nodes, I don't see the change I just made; I can only see it on the node where I made the change.
I looked at the file on the bricks of the mirror pair, outside of the Gluster mount point, and the file has the correct contents on both of the servers.
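The reproduction described above reduces to a short command sequence. This is a hypothetical reconstruction (the volume name and mount path are the reporter's; the exact commands run were not recorded in the report):

```shell
# On node 1: append through the FUSE mount.
echo "something" >> /mirror/test.txt

# On any other node: read the same file through its own mount.
# With quick-read enabled, the stale cached copy is returned even
# though the backend bricks hold the correct contents.
cat /mirror/test.txt
```

This is a cluster configuration/CLI fragment and needs a live GlusterFS deployment to run.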
> - It happens intermittently on Jacob's setup.

I am able to reproduce it 100% of the time on my systems with small files (<1k).

> - The issue was reported by a customer, so we know it has been reproduced in
> at least one other setup.

The customer reports this occurs intermittently on files greater than 100k as well. I have not found any of the caching parameters that can be set using gluster volume set that help this issue. Is quick-read something that can be turned off without having to edit a volfile? Also, how can I disable the quick-read cache without modifying a vol file?

(In reply to comment #1)
> NFS has the same issue...

With NFS, you need to close the file for the changes to be visible on the other NFS clients. This behavior is more prominent for small files. In nano, if you edit and then save the file, the contents will be visible on other clients as well (assuming nano closes the file after saving it).

Shehjar,

The files were closed... Did you try to reproduce? Even running echo "something" >> test.txt reproduced the issue... Only turning quick-read off makes the issue go away. I will email you connection info to my test systems. Vikas took a look at my systems and saw the same issue I did.

volume set mirror performance.quick-read off
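The workaround the thread converges on can be applied entirely from the CLI, with no volfile editing. A minimal sketch, assuming the volume is named mirror as above:

```shell
# Disable the quick-read translator on the volume.
gluster volume set mirror performance.quick-read off

# Confirm the change: reconfigured options appear in the volume info output.
gluster volume info mirror
```

These are cluster configuration commands and require a running glusterd to execute.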
Figured it out by looking at:

    gluster src/xlators/mgmt/glusterd/src/glusterd-volgen.c

in:

    static struct volopt_map_entry glusterd_volopt_map[] = {
PATCH: http://patches.gluster.com/patch/5595 in master (performance/quick-read: set right validation checks)

PATCH: http://patches.gluster.com/patch/5596 in master (performance/quick-read: white space cleanup)

Please use the target field when closing bugs. We'd like to know when this will make it into the GA releases.

(In reply to comment #11)

Sure. Unless otherwise specified, all bug fixes will be part of the next maintenance release.

Added the details (http://gluster.qotd.co/q/what-is-the-recommended-cache-size-parameter-for-quick-read-volume-set-option/) in the Gluster QOTD site.