Bug 1388837
| Summary: | enable features.shard: glusterfs replication or arbiter volume performance bad | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | humaorong <maorong.hu> |
| Component: | sharding | Assignee: | bugs <bugs> |
| Status: | CLOSED DUPLICATE | QA Contact: | bugs <bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.8 | CC: | bugs, kdhananj, lambert.olivier, ravishankar |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-14 15:24:30 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

humaorong 2016-10-26 09:38:00 UTC

---

Hi,

So https://bugzilla.redhat.com/show_bug.cgi?id=1384906 is a similar bug, and Ravi, who works on arbiter, has fixed it; the patch has made it into 3.7, 3.8 and 3.9. I am therefore closing this bug as a duplicate of the other bug. Ravi should be able to tell you the exact .x releases the patch has made it into.

-Krutika

*** This bug has been marked as a duplicate of bug 1384906 ***

---

Krutika, this is not a duplicate. humaorong did try the arbiter fix (https://bugzilla.redhat.com/show_bug.cgi?id=1375125#c11), but the problem was seen in replicate volumes too. The performance impact he observed is due to the shard xattr being updated on appending writes, amplified by replication. You might want to confirm that this is expected behaviour and not a bug per se in sharding (unless there is some form of delayed size update we can do in the happy path for the shard size xattr).

---

(Olivier LAMBERT, comment #3:)

I confirm I can reproduce the issue under the exact same conditions:

* Replica 2, with or without sharding: OK
* Replica 3 + 1 arbiter, without sharding: OK
* Replica 3 + 1 arbiter, **with** sharding: NOT OK (1 MB/s against ~100 MB/s for 1 and 2)

---

(In reply to Olivier LAMBERT from comment #3)

> I confirm I can reproduce the issue under the exact same conditions:
>
> * Replica 2, with or without sharding: OK
> * Replica 3 + 1 arbiter, without sharding: OK
> * Replica 3 + 1 arbiter, **with** sharding: NOT OK (1 MB/s against ~100 MB/s for 1 and 2)

How do

* replica 3 without arbiter and without sharding
* replica 3 without arbiter and with sharding

compare with the data in comment #3? Could you share that information as well?

-Krutika

---

Sadly, I only have a 2-node setup at the moment, so I can't create a "real" replica 3.
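The reproduction in comment #3 can be sketched with the gluster CLI. This is a minimal, hypothetical configuration fragment, not the reporter's exact setup: the hostnames (`node1`..`node3`), brick paths, and volume name are placeholders, and the `dd` invocation is just one way to measure sequential write throughput.

```shell
# Hypothetical 3-node setup: create a replica 3 arbiter 1 volume
# (the third brick acts as the arbiter) and enable sharding.
gluster volume create testvol replica 3 arbiter 1 \
    node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/arb
gluster volume set testvol features.shard on
gluster volume start testvol

# Mount the volume and measure sequential (appending) write throughput.
mount -t glusterfs node1:/testvol /mnt/testvol
dd if=/dev/zero of=/mnt/testvol/testfile bs=1M count=1024 oflag=direct

# Inspect the file's xattrs on a brick; the shard translator keeps a
# size xattr that is updated on appending writes, which is the overhead
# being discussed above.
getfattr -d -m . -e hex /bricks/b1/testfile
```

Repeating the `dd` run with `features.shard off` (or on a plain replica 2/3 volume) would give the comparison Krutika asks for in her reply to comment #3.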