Bug 999437 - It shows log message on command prompt (same log message are present in brick log also)
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
2.1
x86_64 Linux
unspecified Severity low
Assigned To: Bug Updates Notification Mailing List
QA Contact: amainkar
Depends On:
Blocks:
Reported: 2013-08-21 06:26 EDT by Rachana Patel
Modified: 2015-04-20 07:56 EDT (History)
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-07 04:30:21 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Rachana Patel 2013-08-21 06:26:30 EDT
Description of problem:
Log messages are shown on the command prompt (the same log messages are also present in the brick log).

Version-Release number of selected component (if applicable):
3.4.0.20rhs-2.el6rhs.x86_64

How reproducible:
not always

Steps to Reproduce:
1. Had a DHT volume with 3 bricks (2 bricks on one RHSS node and one brick on another RHSS node).
2. Accidentally deleted the brick directory from the second RHSS node.

Log messages are shown on the command prompt:


[root@DVM4 nufa]# rm -rf /rhs/brick3/n1
[root@DVM4 nufa]# 
Message from syslogd@DVM4 at Aug 21 07:23:09 ...
 GlusterFS[22768]: [2013-08-21 01:53:09.033036] M [posix-helpers.c:1192:posix_health_check_thread_proc] 0-nufa-posix: health-check failed, going down

Message from syslogd@DVM4 at Aug 21 07:23:39 ...
[2013-08-21 01:53:39.033485] M [posix-helpers.c:1197:posix_health_check_thread_proc] 0-nufa-posix: still alive! -> SIGTERM

[root@DVM4 nufa]# 


Actual results:
Log messages from syslogd are shown on the command prompt.

Expected results:
Messages should not be shown on the command prompt.

Additional info:

brick log :-

[2013-08-21 01:53:09.032927] W [posix-helpers.c:1172:posix_health_check_thread_proc] 0-nufa-posix: stat() on /rhs/brick3/n1 returned: No such file or directory
[2013-08-21 01:53:09.033036] M [posix-helpers.c:1192:posix_health_check_thread_proc] 0-nufa-posix: health-check failed, going down
[2013-08-21 01:53:38.274903] I [server.c:773:server_rpc_notify] 0-nufa-server: disconnecting connection from DVM1.lab.eng.blr.redhat.com-25631-2013/08/21-01:50:07:149908-nufa-client-2-0, Number of pending operations: 1
[2013-08-21 01:53:38.274941] I [server-helpers.c:752:server_connection_put] 0-nufa-server: Shutting down connection DVM1.lab.eng.blr.redhat.com-25631-2013/08/21-01:50:07:149908-nufa-client-2-0
[2013-08-21 01:53:38.274964] I [server-helpers.c:585:server_log_conn_destroy] 0-nufa-server: destroyed connection of DVM1.lab.eng.blr.redhat.com-25631-2013/08/21-01:50:07:149908-nufa-client-2-0  
[2013-08-21 01:53:39.033485] M [posix-helpers.c:1197:posix_health_check_thread_proc] 0-nufa-posix: still alive! -> SIGTERM
[2013-08-21 01:53:39.042505] W [glusterfsd.c:1062:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x3c2a6e890d] (-->/lib64/libpthread.so.0() [0x3c2ae07851] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xcd) [0x4052dd]))) 0-: received signum (15), shutting down
Comment 1 Rachana Patel 2013-08-21 06:30:56 EDT
volume info :-

[root@DVM1 ~]# gluster v info nufa
 
Volume Name: nufa
Type: Distribute
Volume ID: ec850869-99e9-497c-b650-b5c1443acb3d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.128:/rhs/brick3/n1
Brick2: 10.70.37.128:/rhs/brick3/n2
Brick3: 10.70.37.192:/rhs/brick3/n1
Options Reconfigured:
cluster.nufa: on
Comment 4 Vijaikumar Mallikarjuna 2013-11-07 04:30:21 EST
syslogd broadcasts emergency messages to everyone who is logged in.

In this case the brick is unusable and needs immediate action, so these messages are logged at level 'LOG_EMERG'.

Closing the bug as not a bug.
