| Summary: | Wrong log level on dht_rename when using the cluster.extra-hash-regex option | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Wojtek <wojtek.ostapowicz> |
| Component: | logging | Assignee: | Nithya Balachandran <nbalacha> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.7.5 | CC: | bugs, joe, nbalacha, sarumuga |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-03-08 10:51:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Wojtek
2016-03-15 18:01:30 UTC
This was from http://review.gluster.org/8582. This should be gf_msg_trace, IMHO.

This was added to help with debugging: renames (of both files and directories) sometimes caused some rather odd issues, and there was no other way to figure out whether a rename operation had occurred. This little log message has helped us figure out what had happened while debugging several issues. To be clear, the issues this helped us debug were those caused by interactions between renames and layout changes. It does not refer to rename failures per se, and it is also unrelated to the cluster.extra-hash-regex setting. I would like to keep this around a tad longer.

In production this adds a huge amount of logging. If you need this debug info, can't you just set it to debug level and enable debug logging? Why is this in production logs?

Setting it to debug does not help, because we usually have to figure out what happened after the problem has already occurred. Most users do not keep track of which operations were performed on a file, and this message has helped us out in quite a few cases.

This bug is getting closed because GlusterFS-3.7 has reached its end of life. Note: this bug is being closed using a script. No verification has been performed to check whether it still exists on newer releases of GlusterFS. If this bug still exists in a newer GlusterFS release, please reopen it against that release.