Bug 970068
Summary: | Switching between multiple profiles wrongly updates the barrier mount option for brick. | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Neependra Khare <nkhare>
Component: | storage-server-tools | Assignee: | Bug Updates Notification Mailing List <rhs-bugs>
Status: | CLOSED EOL | QA Contact: | storage-qa-internal <storage-qa-internal>
Severity: | high | Docs Contact: |
Priority: | medium | |
Version: | 2.1 | CC: | bengland, dshaks, perfbz, rhs-bugs, vbellur
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2015-12-03 17:23:14 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Neependra Khare 2013-06-03 12:07:59 UTC
Does /proc/mounts show the same behavior?

I think we should try to eliminate write-barrier disabling from the tuned profiles for RHS if we can establish that it doesn't hurt performance, at least for MegaRAID. For small-file tests I could not detect any change in performance with barriers enabled, but I haven't tested the large-file case yet. If we can do this, then we don't have to care about this bug anymore. Let's shoot for this in the Corbett release.

Setting NEEDINFO till I get a fix from Ben/Neependra.

This should be fixed now, because we no longer set the barrier mount option in tuned profiles. It should be easy for QE to verify the fix on RHS 3.0.

Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
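For anyone verifying this on their own bricks: the active barrier setting can be read from /proc/mounts, whose format is one line per mount with the mount point in field 2 and the comma-separated options in field 4. The helper below is a minimal sketch (the function name `barrier_opts` and the `/rhs/brick1` mount point are illustrative, not from the bug report) that prints any barrier-related option for a given mount point:

```shell
# Hypothetical helper: print barrier-related mount options for a given
# mount point, reading /proc/mounts-format lines from stdin.
barrier_opts() {
    # $1 = mount point; field 2 is the mount point, field 4 the options
    awk -v mp="$1" '$2 == mp {
        n = split($4, opts, ",")
        for (i = 1; i <= n; i++)
            if (opts[i] ~ /barrier/)  # matches barrier, nobarrier, barrier=0
                print opts[i]
    }'
}

# Demo against a sample line; on a live system you would run:
#   barrier_opts /rhs/brick1 < /proc/mounts
echo "/dev/sdb1 /rhs/brick1 xfs rw,noatime,nobarrier 0 0" | barrier_opts /rhs/brick1
# prints: nobarrier
```

After switching tuned profiles, an empty result (or `barrier`/`barrier=1` on older ext4 setups) means barriers are enabled; `nobarrier` or `barrier=0` means the profile disabled them.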