Bug 1857697 - get-state output left in /var/run/gluster/ after collecting the sosreport [rhel-7.9.z]
Summary: get-state output left in /var/run/gluster/ after collecting the sosreport [rhel-7.9.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: sos
Version: 7.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jan Jansky
QA Contact: Maros Kopec
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-16 11:48 UTC by Sanju
Modified: 2021-02-02 11:59 UTC
CC List: 8 users

Fixed In Version: sos-3.9-5.el7_9.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-02 11:59:00 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
GitHub sosreport/sos pull 2155 - closed - [gluster] remove generated state files - last updated 2021-01-18 12:38:23 UTC

Description Sanju 2020-07-16 11:48:22 UTC
Description of problem:
With the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1856417, the get-state output files generated during sos report collection are left behind in /var/run/gluster/ and are not removed.

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. collect sos report
2. check /var/run/gluster

Actual results:
get-state output file is present in /var/run/gluster/

Expected results:
All files generated as part of sos report collection should be cleaned up, including the get-state output file (see the cleanup sketch after the listings below).

Additional info:
--> before taking the sos-report:

[root@dhcp35-73 ~]# ll /var/run/gluster
total 0
srwxr-xr-x. 1 root root  0 Jul 16 02:45 13ced41ca255988b.socket
srwxr-xr-x. 1 root root  0 Jul 16 02:45 38faa4216970ba88.socket
srwxr-xr-x. 1 root root  0 Jul 16 02:45 6fd06042f3dac66e.socket
drwxr-xr-x. 2 root root 40 Jul 16 02:25 bitd
srwxr-xr-x. 1 root root  0 Jul 16 02:45 changelog-4d9e0b5248cc1bbc.sock
srwxr-xr-x. 1 root root  0 Jul 16 02:45 changelog-70e50b0d0bc9a093.sock
srwxr-xr-x. 1 root root  0 Jul 16 02:45 changelog-a018987aa8af39ae.sock
drwxr-xr-x. 2 root root 40 Jul 16 02:25 glustershd
drwxr-xr-x. 2 root root 40 Jul 16 02:25 nfs
drwxr-xr-x. 2 root root 40 Jul 16 02:25 quotad
drwxr-xr-x. 2 root root 40 Jul 16 02:25 scrub
drwxr-xr-x. 2 root root 40 Jul 16 02:25 snaps
drwxr-xr-x. 3 root root 60 Jul 16 02:35 vols
[root@dhcp35-73 ~]#

--> sos-report is collected and statedump is present:
[root@dhcp35-73 ~]# ll /var/tmp/sosreport-dhcp35-73-2020-07-16-rgjessx/run/gluster/glusterd_state_20200716_024731 
-rw-r--r--. 1 root root 2186 Jul 16 02:47 /var/tmp/sosreport-dhcp35-73-2020-07-16-rgjessx/run/gluster/glusterd_state_20200716_024731
[root@dhcp35-73 ~]# 

--> after taking the sos-report:
[root@dhcp35-73 ~]# ll /var/run/gluster
total 8
srwxr-xr-x. 1 root root    0 Jul 16 02:45 13ced41ca255988b.socket
srwxr-xr-x. 1 root root    0 Jul 16 02:45 38faa4216970ba88.socket
srwxr-xr-x. 1 root root    0 Jul 16 02:45 6fd06042f3dac66e.socket
drwxr-xr-x. 2 root root   40 Jul 16 02:25 bitd
srwxr-xr-x. 1 root root    0 Jul 16 02:45 changelog-4d9e0b5248cc1bbc.sock
srwxr-xr-x. 1 root root    0 Jul 16 02:45 changelog-70e50b0d0bc9a093.sock
srwxr-xr-x. 1 root root    0 Jul 16 02:45 changelog-a018987aa8af39ae.sock
-rw-r--r--. 1 root root 2186 Jul 16 02:47 glusterd_state_20200716_024731  <-- file should be cleaned up
drwxr-xr-x. 2 root root   40 Jul 16 02:25 glustershd
drwxr-xr-x. 2 root root   40 Jul 16 02:25 nfs
drwxr-xr-x. 2 root root   40 Jul 16 02:25 quotad
drwxr-xr-x. 2 root root   40 Jul 16 02:25 scrub
drwxr-xr-x. 2 root root   40 Jul 16 02:25 snaps
drwxr-xr-x. 3 root root   60 Jul 16 02:35 vols
[root@dhcp35-73 ~]#
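
For illustration only, a minimal sketch of the kind of cleanup that could run after collection, removing the timestamped get-state files shown above. The glob pattern, the function name, and the idea of calling it from a post-collection hook are assumptions based on this report, not the actual change made in the linked pull request:

import glob
import os

def cleanup_generated_state_files(run_dir="/var/run/gluster"):
    # Remove only the timestamped glusterd get-state files seen in this
    # report; sockets and per-daemon directories are left untouched.
    for path in glob.glob(os.path.join(run_dir, "glusterd_state_*")):
        try:
            os.unlink(path)
        except OSError:
            # Ignore files that disappeared or cannot be removed.
            pass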

Comment 2 Bryn M. Reeves 2020-07-16 12:49:59 UTC
What generates these and under what circumstances? These files do not exist on the test system we have been using:

  # rm -f /var/run/gluster/*.dump.* /var/run/gluster/*state*
  # killall -USR1 glusterfs glusterfsd glusterd
  # ls /var/run/gluster/*state*
  ls: cannot access /var/run/gluster/*state*: No such file or directory
  # ls /var/run/gluster/*.dump.*
  /var/run/gluster/glusterdump.1452.dump.1594903714   /var/run/gluster/glusterdump.24961.dump.1594903714  /var/run/gluster/mnt-data1-1.24697.dump.1594903714
  /var/run/gluster/glusterdump.1454.dump.1594903714   /var/run/gluster/glusterdump.8205.dump.1594903714   /var/run/gluster/mnt-data2-2.24719.dump.1594903714
  /var/run/gluster/glusterdump.24959.dump.1594903714  /var/run/gluster/glusterdump.8245.dump.1594903714   /var/run/gluster/var-lib-glusterd-ss_brick.14293.dump.1594903714

This is why it's important for us to have a clear specification of the files to operate on. Gluster is unique in that it expects sos to clean up after this operation, so it's essential that we know what to remove and what to leave behind.
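
As a hedged sketch only (the class and method names below are hypothetical, not part of the sos plugin API), one way to make the cleanup unambiguous is to record the paths that the collection itself creates and remove exactly those, instead of matching patterns after the fact:

import os

class StatedumpTracker:
    # Records the state files a collection run generates so that cleanup
    # removes exactly those and leaves pre-existing dumps and sockets alone.
    def __init__(self):
        self._generated = []

    def note_generated(self, path):
        # Call this right after a get-state / statedump file has been
        # created and its path is known.
        self._generated.append(path)

    def cleanup(self):
        for path in self._generated:
            try:
                os.unlink(path)
            except OSError:
                pass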

Comment 3 Bryn M. Reeves 2020-07-16 12:50:29 UTC
# rpm -qa|grep gluster
glusterfs-libs-6.0-29.el7rhgs.x86_64
glusterfs-cli-6.0-29.el7rhgs.x86_64
glusterfs-6.0-29.el7rhgs.x86_64
glusterfs-api-6.0-29.el7rhgs.x86_64
glusterfs-server-6.0-29.el7rhgs.x86_64
glusterfs-fuse-6.0-29.el7rhgs.x86_64
glusterfs-geo-replication-6.0-29.el7rhgs.x86_64
glusterfs-client-xlators-6.0-29.el7rhgs.x86_64
python2-gluster-6.0-29.el7rhgs.x86_64
glusterfs-rdma-6.0-29.el7rhgs.x86_64

Comment 17 errata-xmlrpc 2021-02-02 11:59:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (sos bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0333

