Description of problem:

(via forum discussion at https://www.openshift.com/forums/openshift/rhc-tail-myapp-first-logs-well-then-silently-stops-logging )

---------------------------
$ rhc --version
rhc 1.11.4
---------------------------

In my observation, the rhc tail command silently stops providing log messages even though new log messages are generated on the remote application.

---------------------------
# on my client shell
$ rhc tail myapp
.... (steady logging from remote application appears)
.... (some time later)
.... (logging still fine)
.... (some more time later)
(no more logging, but process still running)
_
---------------------------

When logging silently stops, no error messages are generated and the rhc tail process is still running. To an observer it looks as if no new log messages are currently being generated, even though in fact they are. This behavior complicates remote debugging and remote activity observation: it requires manually reconnecting from time to time by terminating the logging process and running "rhc tail myapp" again.

A default usage of "rhc tail anyapp" expects logging to go on indefinitely until it is explicitly terminated. Therefore, "rhc tail anyapp" should log indefinitely.

A current workaround is:

---------------------------
$ rhc ssh myapp
# sshing to remote application, providing credentials

# on remote application
$ unset TMOUT    # no more logout after idle time
$ tail_all
---------------------------

========================================
Version-Release number of selected component (if applicable):
rhc 1.11.4
========================================
Steps to Reproduce:
1. On local machine, run
   $ rhc tail myapp

Actual results:
After some time, no new log messages appear, even though they are generated on the remote application.

Expected results:
Logging runs indefinitely, until the rhc tail process is terminated.
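The workaround above keeps the remote SSH session from idling out. A purely client-side alternative is a small wrapper that forces a reconnect at a fixed interval using coreutils `timeout`; this is only a sketch, not part of rhc, and the `keep_tailing` name and its parameters are hypothetical:

```shell
# Hypothetical client-side wrapper (not part of rhc): restart the tail
# command whenever it exits or after `limit` seconds, so a silently
# stalled connection is re-established automatically.
keep_tailing() {
    cmd=$1            # command to run, e.g. "rhc tail myapp"
    limit=${2:-300}   # seconds before forcing a reconnect
    tries=${3:-0}     # 0 = reconnect forever; a positive count helps testing
    n=0
    while [ "$tries" -eq 0 ] || [ "$n" -lt "$tries" ]; do
        timeout "$limit" $cmd   # word-splitting of $cmd is intentional here
        n=$((n + 1))
        sleep 1                 # brief pause before reconnecting
    done
}

# Usage:
#   keep_tailing "rhc tail myapp"
```

Note that restarting on a fixed interval may drop or repeat a few lines around each reconnect, so it only papers over the stall described above rather than fixing it.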
Please test and try to reproduce. This is probably already fixed in prod.
Tried on devenv_4154 and PROD with rhc-1.19.2; cannot reproduce this issue, so marking it verified.