Bug 1857697
| Summary: | get-state output left in /var/run/gluster/ after collecting the sosreport [rhel-7.9.z] | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Sanju <srakonde> |
| Component: | sos | Assignee: | Jan Jansky <jjansky> |
| Status: | CLOSED ERRATA | QA Contact: | Maros Kopec <makopec> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.9 | CC: | agk, bmr, fkrska, jreznik, mhradile, plambri, pmoravec, sbradley |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | sos-3.9-5.el7_9.2 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-02-02 11:59:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Sanju
2020-07-16 11:48:22 UTC
What generates these files, and under what circumstances? These files do not exist on the test system we have been using:

    # rm -f /var/run/gluster/*.dump.* /var/run/gluster/*state*
    # killall -USR1 glusterfs glusterfsd glusterd
    # ls /var/run/gluster/*state*
    ls: cannot access /var/run/gluster/*state*: No such file or directory
    # ls /var/run/gluster/*.dump.*
    /var/run/gluster/glusterdump.1452.dump.1594903714
    /var/run/gluster/glusterdump.1454.dump.1594903714
    /var/run/gluster/glusterdump.8205.dump.1594903714
    /var/run/gluster/glusterdump.8245.dump.1594903714
    /var/run/gluster/glusterdump.24959.dump.1594903714
    /var/run/gluster/glusterdump.24961.dump.1594903714
    /var/run/gluster/mnt-data1-1.24697.dump.1594903714
    /var/run/gluster/mnt-data2-2.24719.dump.1594903714
    /var/run/gluster/var-lib-glusterd-ss_brick.14293.dump.1594903714

This is why it is important for us to have a clear specification of the files to operate on. Gluster is unique in that it expects sos to clean up after this operation, so it is essential that we know what to remove and what to leave behind.

    # rpm -qa | grep gluster
    glusterfs-libs-6.0-29.el7rhgs.x86_64
    glusterfs-cli-6.0-29.el7rhgs.x86_64
    glusterfs-6.0-29.el7rhgs.x86_64
    glusterfs-api-6.0-29.el7rhgs.x86_64
    glusterfs-server-6.0-29.el7rhgs.x86_64
    glusterfs-fuse-6.0-29.el7rhgs.x86_64
    glusterfs-geo-replication-6.0-29.el7rhgs.x86_64
    glusterfs-client-xlators-6.0-29.el7rhgs.x86_64
    python2-gluster-6.0-29.el7rhgs.x86_64
    glusterfs-rdma-6.0-29.el7rhgs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (sos bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0333