Description of problem:
Other than for the brick-multiplexing process, there is no reason for glusterfs to require any other mandatory command-line arguments to start, especially when a local volfile is available. Today, if I start a server process (one where a protocol/server volume is present) with just a local volfile, the process crashes.

Version-Release number of selected component (if applicable): master

How reproducible: 100%

Steps to Reproduce (expanded in the sketch below):
1. glusterd; volume create (1 brick); killall glusterd
2. glusterfsd -f /var/lib/glusterd/vols/<volname>/<volname>.<hostname>-brick-path.vol

Actual results: the process crashes

Expected results: it shouldn't crash
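For reference, the steps above expand into a shell session like the one below. The volume name "testvol" and the brick path are placeholders chosen for illustration; the exact volfile name under /var/lib/glusterd/vols/ is derived from the volume name, hostname, and brick path, so the placeholder path from the steps is kept as-is.

    # 1. Start the management daemon and create a single-brick volume
    glusterd
    gluster volume create testvol $(hostname):/bricks/testvol force

    # 2. Stop glusterd so the brick process is started by hand,
    #    not by the management layer
    killall glusterd

    # 3. Start the brick process from its local volfile alone, with no
    #    --volfile-id or other arguments; before the fix, this crashed
    glusterfsd -f /var/lib/glusterd/vols/<volname>/<volname>.<hostname>-brick-path.vol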
REVIEW: https://review.gluster.org/19893 (protocol/server: don't assume there would be a volfile id) posted (#5) for review on master by Amar Tumballi
COMMIT: https://review.gluster.org/19893 committed in master by "Jeff Darcy" <jeff.us> with a commit message:

protocol/server: don't assume there would be a volfile id

Earlier, glusterfs never assumed that someone would start it with the right arguments or that brick processes would be spawned by a management layer; it just assumed its role based on the volfile. Other than the volfile, no other arguments should technically be mandatory for glusterfs to work. With this patch, that assumption holds true again.

Updates: github issue # 352

A note on why this particular issue matters for this basic sanity fix: as per the design of thin-arbiter/tie-breaker, it can be started independently on any machine, without needing glusterd. So, similar to 'glusterd', we should be able to spawn a process with any translator, without options, volume id, etc.

fixes: bz#1569399
Change-Id: I5c0650fe0bfde35ad94ccba60e63f6cdcd1ae5ff
Signed-off-by: Amar Tumballi <amarts>
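To illustrate the behaviour the commit describes (spawning a process with any translator, without options or a volume id), a minimal sketch follows. The volfile below is hypothetical and written by hand rather than generated by glusterd; the translator choice, file names, and directory are illustrative assumptions, not taken from the patch.

    # Write a hand-made, single-translator volfile (hypothetical example)
    mkdir -p /tmp/demo-brick
    printf '%s\n' \
        'volume demo-posix' \
        '    type storage/posix' \
        '    option directory /tmp/demo-brick' \
        'end-volume' > /tmp/minimal.vol

    # With this patch, glusterfsd should come up from the volfile alone,
    # with no --volfile-id and no running glusterd; -N keeps it in the
    # foreground so any failure is visible immediately
    glusterfsd -f /tmp/minimal.vol -N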
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/