Description of problem:
Executing a gluster cli command when glusterd is not running causes the following messages to be displayed:

Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 340, in <module>
    main()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 41, in main
    argsupgrade.upgrade()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 85, in upgrade
    init_gsyncd_template_conf()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 50, in init_gsyncd_template_conf
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/glusterd/geo-replication/gsyncd_template.conf'
Connection failed. Please check if gluster daemon is operational.

Version-Release number of selected component (if applicable):

How reproducible:
Consistently with the latest master source install (commit 3ab23415804502b1ba89360c55ac3e8143822a0b)

Steps to Reproduce:
1. Build and install glusterfs from the latest master
2. Make sure glusterd is not running (pkill glusterd)
3. Run a gluster command: gluster v start vol1

Actual results:
[root@server glusterfs]# gluster v start vol1
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 340, in <module>
    main()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 41, in main
    argsupgrade.upgrade()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 85, in upgrade
    init_gsyncd_template_conf()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/argsupgrade.py", line 50, in init_gsyncd_template_conf
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/glusterd/geo-replication/gsyncd_template.conf'
Connection failed. Please check if gluster daemon is operational.

Expected results:
Only the following message should be printed:
Connection failed. Please check if gluster daemon is operational.

Additional info:
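For context, the failure mode in the traceback can be reproduced in isolation: `os.open()` with `O_CREAT` raises `FileNotFoundError` when the parent directory of the target file does not exist, which is what happens here when `/var/lib/glusterd/geo-replication/` is absent. The sketch below uses a hypothetical temp-directory path as a stand-in for the real glusterd state directory, and one possible defensive fix (creating the parent directory first) — an assumption, not necessarily how the actual fix should be done:

```python
import os
import tempfile

# Hypothetical stand-in for /var/lib/glusterd/geo-replication/gsyncd_template.conf;
# the parent directory "geo-replication" is deliberately not created.
base = tempfile.mkdtemp()
path = os.path.join(base, "geo-replication", "gsyncd_template.conf")

# Same call as init_gsyncd_template_conf(): O_CREAT creates the file,
# but not any missing parent directories, so this raises FileNotFoundError.
try:
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.close(fd)
except FileNotFoundError as e:
    print("open failed as in the bug:", e)

# One defensive option (assumption: the directory is safe to create on demand):
# ensure the parent directory exists before opening the template file.
os.makedirs(os.path.dirname(path), exist_ok=True)
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.close(fd)
print("template file created:", os.path.exists(path))
```

Whether the right fix is to create the directory, or simply to suppress the traceback and print only the connection-failure message (as the Expected results section asks), is a design decision for the maintainers.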
Nithya, I don't see this happening. Do I need to issue any commands before killing glusterd?

[root@localhost glusterfs]# pkill glusterd
[root@localhost glusterfs]# ps -ax | grep gluster
16869 pts/0    S+     0:00 grep --color=auto gluster
[root@localhost glusterfs]# gluster v status
Connection failed. Please check if gluster daemon is operational.
[root@localhost glusterfs]# gluster v status vol1
Connection failed. Please check if gluster daemon is operational.
[root@localhost glusterfs]# glusterd
[root@localhost glusterfs]# gluster v create vol1 10.215.99.127:/tmp/b{1..3} force
volume create: vol1: success: please start the volume to access data
[root@localhost glusterfs]# gluster v start vol1
volume start: vol1: success
[root@localhost glusterfs]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.215.99.127:/tmp/b1                 49152     0          Y       17096
Brick 10.215.99.127:/tmp/b2                 49153     0          Y       17116
Brick 10.215.99.127:/tmp/b3                 49154     0          Y       17136

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

[root@localhost glusterfs]# pkill glusterd
[root@localhost glusterfs]# gluster v status vol1
Connection failed. Please check if gluster daemon is operational.
[root@localhost glusterfs]# git pull
Already up-to-date.
[root@localhost glusterfs]#
That is strange. I am running on RHEL, but this system has been through many builds, installs, and deletes. If you cannot reproduce this, feel free to close it. I will reopen if I can figure out why this shows up on my system.
Based on the above comments, closing this bug.