Description of problem:
OpenShift's GlusterFS FUSE client logs are not readily available for inspection. In fact, they seem to be available only in case of failures, and in non-obvious directories.

Version-Release number of selected component (if applicable):
Included in OpenShift 3.2.1.15

How reproducible:
Always

Steps to Reproduce:
1. Create a GlusterFS volume.
2. Try to mount it to a pod in a project that does not have endpoints pointing to the GlusterFS cluster.
3. The pod cannot be created because the mount fails.
4. View the events for the pod.

Actual results:
A sample event for the pod is:

48s 14s 3 {kubelet gprfc063.o.internal} Warning FailedSync Error syncing pod, skipping: glusterfs: mount failed: Mount failed: exit status 1
Mounting arguments: 172.18.10.64:vol_43b6957343b6815f6842a1c9ef80c1d5 /var/lib/origin/openshift.local.volumes/pods/4a35e5bf-7eb4-11e6-87ee-d4bed9f4c83b/volumes/kubernetes.io~glusterfs/glusterfs-43b69573 glusterfs [log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/glusterfs-43b69573/glusterfs.log]
Output: Mount failed. Please check the log file for more details.

For the actual log I need to go to /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/glusterfs-43b69573/glusterfs.log on the node `gprfc063.o.internal`.

Expected results:
FUSE client logs are readily available in /var/log/ or in the journal of the node where the mount was attempted. Otherwise it is very hard to find the log and troubleshoot the problem.

Additional info:
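For context on step 2 of the reproduction: the glusterfs volume type requires an Endpoints object in the pod's project that points at the Gluster servers. A minimal sketch of such an object (the name `glusterfs-cluster` is an assumption; the IP is taken from the mount arguments in this report; the port value is arbitrary, as it is not used for the mount itself):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  # The name here must match the "endpoints" field of the
  # pod's glusterfs volume definition.
  name: glusterfs-cluster
subsets:
  - addresses:
      # One entry per Gluster server node.
      - ip: 172.18.10.64
    ports:
      # Required by the Endpoints schema; the actual value is unused.
      - port: 1
```

Without this object in the project, the kubelet cannot resolve the cluster and the mount fails as shown in the event above.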
(In reply to Anton Sherkhonov from comment #0)
> Sample event for the pod is:
> 48s 14s 3 {kubelet gprfc063.o.internal} Warning FailedSync Error syncing pod, skipping: glusterfs: mount failed: Mount failed: exit status 1
> Mounting arguments: 172.18.10.64:vol_43b6957343b6815f6842a1c9ef80c1d5 /var/lib/origin/openshift.local.volumes/pods/4a35e5bf-7eb4-11e6-87ee-d4bed9f4c83b/volumes/kubernetes.io~glusterfs/glusterfs-43b69573 glusterfs [log-file=/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/glusterfs-43b69573/glusterfs.log]
> Output: Mount failed. Please check the log file for more details.
>
> For the actual log I need to go to /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/glusterfs-43b69573/glusterfs.log on the node `gprfc063.o.internal`

Yes, because you have specified this path in the "log-file=" mount argument above.

> Expected results:
> FUSE client logs are readily available in /var/log/ or in the journal of the node where the mount was attempted. Otherwise it is very hard to find the log and troubleshoot the problem.

This is how it currently works. If the log-file path is not explicitly specified while mounting, the log does get created in /var/log/glusterfs/. Unless I am missing something, shall I close the BZ?
I didn't specify any mount arguments. I believe OpenShift does all the mounting.
(In reply to Anton Sherkhonov from comment #3)
> I didn't specify any mount arguments. I believe OpenShift does all the mounting.

Sure, my point was that the log file seems to be located in the specified location.
Any idea whose bug this is then?
Not sure. If you can tweak the arguments provided by OpenShift, remove the log-file option, and confirm that this fixes the issue for you (i.e. the log then gets created in /var/log/glusterfs/), you could probably assign it to them.
Anton, this is where OpenShift chooses to place the logs, as it is abstracting over the Gluster mounts. It is a bug in OpenShift; we should talk to Luis Pabon in that case. I suggest we close this bug.
Reassigning for investigation by the OpenShift team.
I think this has to do with JBoss Fuse not Fabric8 code. I'm going to assign it back.
(In reply to Kurt T Stam from comment #9)
> I think this has to do with JBoss Fuse not Fabric8 code. I'm going to assign it back.

Who is the dev for JBoss Fuse? Could you please assign the BZ to the concerned person?
Ravishankar, sorry I have no idea. Maybe Kevin knows. --Kurt
If this is related to Fuse then Hiram would be the likely person to involve.
This is not a JBoss FUSE issue. It's a Gluster FUSE issue.
(In reply to Hiram Chirino from comment #14)
> This is not a JBoss FUSE issue. It's a Gluster FUSE issue.

(In reply to Ravishankar N from comment #10)
> (In reply to Kurt T Stam from comment #9)
> > I think this has to do with JBoss Fuse not Fabric8 code. I'm going to assign it back.
>
> Who is the dev for JBoss Fuse? Could you please assign the BZ to the concerned person?

The log file location is set in the GlusterFS Kubernetes plugin. I can work on this bug. I am taking it.
The GlusterFS log file path is composed from the following parts: the glusterfs plugin directory in OSE + the GlusterFS plugin name + the GlusterFS volume name + glusterfs.log. For example:

/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/glusterfs-43b69573/glusterfs.log

It was designed this way to give each pod its own, more specific log path based on its PV. The log level has also been set to ERROR to reduce the number of log entries. However, I will revisit this log path and update the Bugzilla soon.
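The path composition described above can be sketched in Go (Kubernetes' implementation language). This is a minimal illustration of the scheme, not the actual plugin source; the helper name `glusterLogPath` and the use of `path/filepath` are my own:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// glusterLogPath is a hypothetical helper illustrating how the per-volume
// log path is composed: plugin directory + volume name + "glusterfs.log".
// It is a sketch of the scheme described in this bug, not OpenShift code.
func glusterLogPath(pluginDir, volumeName string) string {
	return filepath.Join(pluginDir, volumeName, "glusterfs.log")
}

func main() {
	// Values taken from the mount failure event in this bug report.
	dir := "/var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs"
	fmt.Println(glusterLogPath(dir, "glusterfs-43b69573"))
	// prints: /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/glusterfs/glusterfs-43b69573/glusterfs.log
}
```

Because the volume name is part of the path, every PV mount attempt gets its own log file under the plugin directory rather than a shared file in /var/log/glusterfs/, which is exactly why the reporter had trouble locating it.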