Bug 1768719

Summary: Unnecessary messages when glusterd is not running
Product: [Community] GlusterFS
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WORKSFORME
Severity: unspecified
Priority: unspecified
Reporter: Nithya Balachandran <nbalacha>
Assignee: Sanju <srakonde>
CC: bugs
Type: Bug
Last Closed: 2019-11-07 06:59:40 UTC

Description Nithya Balachandran 2019-11-05 05:56:31 UTC
Description of problem:
Executing a gluster CLI command when glusterd is not running prints a Python traceback in addition to the expected connection error:


Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 340, in <module>
    main()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 41, in main
    argsupgrade.upgrade()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 85, in upgrade
    init_gsyncd_template_conf()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 50, in init_gsyncd_template_conf
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/glusterd/geo-replication/gsyncd_template.conf'
Connection failed. Please check if gluster daemon is operational.
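
For reference, os.open() with O_CREAT creates only the final file, never any missing parent directories, so the call in init_gsyncd_template_conf fails whenever /var/lib/glusterd/geo-replication itself is absent. A minimal standalone sketch of the same failure (the temporary path is illustrative only):

import os
import tempfile

# os.open(path, os.O_CREAT | os.O_RDWR) creates the file but not its
# parents; a missing parent directory raises FileNotFoundError, the
# same error seen in init_gsyncd_template_conf above.
base = tempfile.mkdtemp()
path = os.path.join(base, "geo-replication", "gsyncd_template.conf")
try:
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.close(fd)
except FileNotFoundError as e:
    print(e)  # [Errno 2] No such file or directory: '.../gsyncd_template.conf'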



Version-Release number of selected component (if applicable):


How reproducible:

Consistently, with a source install of the latest master (commit 3ab23415804502b1ba89360c55ac3e8143822a0b)

Steps to Reproduce:
1. Build and install glusterfs from the latest master
2. Make sure glusterd is not running (pkill glusterd)
3. Run any gluster CLI command, for example:

gluster v start vol1


Actual results:

[root@server glusterfs]# gluster v start vol1
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 340, in <module>
    main()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 41, in main
    argsupgrade.upgrade()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 85, in upgrade
    init_gsyncd_template_conf()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 50, in init_gsyncd_template_conf
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/glusterd/geo-replication/gsyncd_template.conf'
Connection failed. Please check if gluster daemon is operational.


Expected results:

Only the following message should be printed:
Connection failed. Please check if gluster daemon is operational.
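
One way to get that behavior would be for the argument-upgrade path to tolerate a missing glusterd working directory instead of letting the exception escape. The sketch below is an assumption about a possible fix, not the actual gsyncd code; only the function name and the os.open call are taken from the traceback:

import os

GSYNCD_TEMPLATE = "/var/lib/glusterd/geo-replication/gsyncd_template.conf"

def init_gsyncd_template_conf(path=GSYNCD_TEMPLATE):
    try:
        fd = os.open(path, os.O_CREAT | os.O_RDWR)
        os.close(fd)
    except OSError:
        # Hypothetical handling: if the glusterd working directory is
        # missing, skip template creation so the CLI prints only its
        # own "Connection failed" message.
        pass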


Additional info:

Comment 1 Sanju 2019-11-05 06:19:24 UTC
Nithya, I don't see this happening. Do I need to issue any commands before killing glusterd?

[root@localhost glusterfs]# pkill glusterd
[root@localhost glusterfs]# ps -ax | grep gluster
16869 pts/0    S+     0:00 grep --color=auto gluster
[root@localhost glusterfs]# gluster v status
Connection failed. Please check if gluster daemon is operational.
[root@localhost glusterfs]# gluster v status vol1
Connection failed. Please check if gluster daemon is operational.
[root@localhost glusterfs]# glusterd
[root@localhost glusterfs]# gluster v create vol1 10.215.99.127:/tmp/b{1..3} force
volume create: vol1: success: please start the volume to access data
[root@localhost glusterfs]# gluster v start vol1
volume start: vol1: success
[root@localhost glusterfs]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.215.99.127:/tmp/b1                 49152     0          Y       17096
Brick 10.215.99.127:/tmp/b2                 49153     0          Y       17116
Brick 10.215.99.127:/tmp/b3                 49154     0          Y       17136
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@localhost glusterfs]# pkill glusterd
[root@localhost glusterfs]# gluster v status vol1
Connection failed. Please check if gluster daemon is operational.
[root@localhost glusterfs]# git pull
Already up-to-date.
[root@localhost glusterfs]#

Comment 2 Nithya Balachandran 2019-11-05 06:24:09 UTC
That is strange. I am running on RHEL, but this system has been through a lot of builds, deletes, and so on.

If you cannot reproduce this, feel free to close it. I will reopen if I can figure out why this shows up on my system.
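
A quick way to test that theory when it recurs is to check whether the build-and-delete churn removed the glusterd working directory (a hypothetical diagnostic, not part of the original report):

import os

# If repeated source installs/uninstalls removed the geo-replication
# directory, the next gluster CLI invocation will hit the
# FileNotFoundError shown in the description.
for p in ("/var/lib/glusterd", "/var/lib/glusterd/geo-replication"):
    print(p, "exists" if os.path.isdir(p) else "MISSING")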

Comment 3 Sanju 2019-11-07 06:59:40 UTC
Based on the above comments, closing this bug.