Bug 1570958 - [RFE] GlusterFS client mount should not fail if /var/log/glusterfs is not present.
Summary: [RFE] GlusterFS client mount should not fail if /var/log/glusterfs is not present.
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: cns-3.9
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 4
Assignee: Amar Tumballi
QA Contact: Bala Konda Reddy M
Depends On:
Reported: 2018-04-23 20:30 UTC by Ryan Howe
Modified: 2019-03-27 03:44 UTC
CC: 11 users

Fixed In Version: glusterfs-3.12.2-41
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2019-03-27 03:43:39 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0658 0 None None None 2019-03-27 03:44:55 UTC

Description Ryan Howe 2018-04-23 20:30:27 UTC
Description of problem:

  If /var/log/glusterfs has been removed, mounting with mount.glusterfs should still succeed rather than fail.

Version-Release number of selected component (if applicable):
glusterfs   3.8.4-53.el7   

How reproducible:

Steps to Reproduce:
1. Remove /var/log/* 
2. mount.glusterfs host:/test /mnt/

Actual results:

Error mounting /tmp/openshift-glusterfs-registry-niCewy: 
       ERROR: failed to create logfile "/var/log/glusterfs/mnt.log" (No such file or directory)
       ERROR: failed to open logfile /var/log/glusterfs/mnt.log
        Mount failed. Please check the log file for more details.

Expected results:
The mount should succeed, creating the log file (and its parent directory) if needed.

Comment 9 Amar Tumballi 2018-11-02 04:49:07 UTC
Considering we already do similar directory creation (mkdir_p()) for other files (monitor, statedump, etc.), we should do the same for logging too, IMO.
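The point above can be sketched in shell: the fix amounts to an idempotent mkdir_p() of the log file's parent directory before the log is opened. The helper name and the scratch path below are illustrative, not code from the actual patch:

```shell
# Sketch of the fixed behavior: make sure the log file's parent
# directory exists (idempotently, like mkdir_p()) before opening it.
ensure_log_dir() {
    mkdir -p "$(dirname "$1")"    # succeeds whether or not the dir exists
}

# Demonstrate on a scratch path so the demo needs no root:
demo_dir=$(mktemp -d)
rm -rf "$demo_dir"                           # simulate the removed log dir
ensure_log_dir "$demo_dir/glusterfs/mnt.log"
test -d "$demo_dir/glusterfs" && echo "log directory recreated"
rm -rf "$demo_dir"
```

On affected versions, the same idea works as a manual workaround: run mkdir -p /var/log/glusterfs before mounting.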

Comment 10 Amar Tumballi 2018-11-02 05:19:49 UTC
Upstream Fix: https://review.gluster.org/#/c/glusterfs/+/21536

Comment 11 Atin Mukherjee 2018-11-19 05:12:44 UTC
Upstream fix is already in. Proposing this for 3.4 BU4.

Comment 18 Amar Tumballi 2019-02-12 13:12:36 UTC
> Scenario 1:

A valid test case, but there is no need for step 6.

Before the fix:

* as the mount itself would fail, which can be checked with df (as in step 7)

After the fix:

* df -h would show a successful mount.

> Scenario 2:

Not a valid test case; that is, there would be no change in process behavior, but the user wouldn't see log files, because the files were removed after they were 'opened'. This means the fd remains valid, but the directory entries are deleted. The expected behavior is that /var/log/glusterfs would not exist at the end of it.

Please test, but the expected result is no crashes in the process, and no logs.
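The "fd stays valid" behavior described for scenario 2 is plain POSIX unlink semantics and can be demonstrated on a scratch file (the paths below are stand-ins, not gluster's own):

```shell
# Open a stand-in "log file", delete it, and keep writing to the fd.
tmpdir=$(mktemp -d)
exec 3>"$tmpdir/mnt.log"          # open the file on fd 3
rm "$tmpdir/mnt.log"              # delete the entry while fd 3 stays open
echo "written after unlink" >&3   # write still succeeds: fd remains valid
exec 3>&-                         # close the fd; the data is now unreachable
ls -A "$tmpdir" | wc -l           # 0 entries: no log file is left behind
rmdir "$tmpdir"
```

This matches the prediction above: the process keeps running and writing without errors, but no log files remain on disk afterwards.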

> Scenario 3:

I am not sure of the outcome. Technically, even with permission 000, a process started by root would be able to use the files, so it may work without issues, IMO. Please try it and see what happens.


Regardless of all this, a user can easily hit this bug by installing glusterfs from RPM on a machine that is meant to be a client, without installing the glusterfs-server part of the RPM (because earlier, the RPM used to create /var/log/glusterfs before installing glusterd etc.).

Now, if you try a client mount (regardless of volume type; it can be tested with just 1 brick in the volume), it would not start because there is no /var/log/glusterfs (ie, before this version). With the fix, the mount works fine.
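A quick way to check whether a client-only install is exposed to this, before attempting a mount (a test sketch, not part of glusterfs itself):

```shell
# Before the fix, a client mount fails whenever this directory is absent;
# with the fix (glusterfs-3.12.2-41), mount.glusterfs recreates it itself.
if [ -d /var/log/glusterfs ]; then
    echo "log directory present: mount should succeed on any version"
else
    echo "log directory missing: mount would fail before the fix"
fi
```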

Hope this gives an idea of what testing this fix involves.

Comment 21 Amar Tumballi 2019-02-13 14:28:56 UTC
Ack! But interestingly, IMO, we most likely need a separate blocker bug for this, as without the fix you wouldn't even be able to mount.

I am looking at the issue and will post an update soon.

Comment 25 errata-xmlrpc 2019-03-27 03:43:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

