Bug 999437

Summary: Log messages are shown on the command prompt (the same messages are also present in the brick log)
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rachana Patel <racpatel>
Component: glusterd
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED NOTABUG
QA Contact: amainkar
Severity: low
Docs Contact:
Priority: unspecified
Version: 2.1
CC: rhs-bugs, vbellur, vmallika
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-07 09:30:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Rachana Patel 2013-08-21 10:26:30 UTC
Description of problem:
Log messages are shown on the command prompt (the same messages are also present in the brick log).

Version-Release number of selected component (if applicable):
3.4.0.20rhs-2.el6rhs.x86_64

How reproducible:
Not always

Steps to Reproduce:
1. Had a DHT volume with 3 bricks (2 bricks on one RHSS node and one brick on another RHSS node).
2. Deleted the brick directory from the second RHSS node by mistake.

The following log messages then appeared on the command prompt:


[root@DVM4 nufa]# rm -rf /rhs/brick3/n1
[root@DVM4 nufa]# 
Message from syslogd@DVM4 at Aug 21 07:23:09 ...
 GlusterFS[22768]: [2013-08-21 01:53:09.033036] M [posix-helpers.c:1192:posix_health_check_thread_proc] 0-nufa-posix: health-check failed, going down

Message from syslogd@DVM4 at Aug 21 07:23:39 ...
[2013-08-21 01:53:39.033485] M [posix-helpers.c:1197:posix_health_check_thread_proc] 0-nufa-posix: still alive! -> SIGTERM

[root@DVM4 nufa]# 


Actual results:
syslogd broadcasts the health-check failure messages to the command prompt.

Expected results:
No messages should be shown on the command prompt; they should only appear in the brick log.


Additional info:

brick log :-

[2013-08-21 01:53:09.032927] W [posix-helpers.c:1172:posix_health_check_thread_proc] 0-nufa-posix: stat() on /rhs/brick3/n1 returned: No such file or directory
[2013-08-21 01:53:09.033036] M [posix-helpers.c:1192:posix_health_check_thread_proc] 0-nufa-posix: health-check failed, going down
[2013-08-21 01:53:38.274903] I [server.c:773:server_rpc_notify] 0-nufa-server: disconnecting connection from DVM1.lab.eng.blr.redhat.com-25631-2013/08/21-01:50:07:149908-nufa-client-2-0, Number of pending operations: 1
[2013-08-21 01:53:38.274941] I [server-helpers.c:752:server_connection_put] 0-nufa-server: Shutting down connection DVM1.lab.eng.blr.redhat.com-25631-2013/08/21-01:50:07:149908-nufa-client-2-0
[2013-08-21 01:53:38.274964] I [server-helpers.c:585:server_log_conn_destroy] 0-nufa-server: destroyed connection of DVM1.lab.eng.blr.redhat.com-25631-2013/08/21-01:50:07:149908-nufa-client-2-0  
[2013-08-21 01:53:39.033485] M [posix-helpers.c:1197:posix_health_check_thread_proc] 0-nufa-posix: still alive! -> SIGTERM
[2013-08-21 01:53:39.042505] W [glusterfsd.c:1062:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x3c2a6e890d] (-->/lib64/libpthread.so.0() [0x3c2ae07851] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xcd) [0x4052dd]))) 0-: received signum (15), shutting down

Comment 1 Rachana Patel 2013-08-21 10:30:56 UTC
volume info :-

[root@DVM1 ~]# gluster v info nufa
 
Volume Name: nufa
Type: Distribute
Volume ID: ec850869-99e9-497c-b650-b5c1443acb3d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.128:/rhs/brick3/n1
Brick2: 10.70.37.128:/rhs/brick3/n2
Brick3: 10.70.37.192:/rhs/brick3/n1
Options Reconfigured:
cluster.nufa: on

Comment 4 Vijaikumar Mallikarjuna 2013-11-07 09:30:21 UTC
syslogd sends emergency messages to everyone who is logged in.

In this case the brick is unusable and needs immediate action, so these messages are logged at level 'LOG_EMERG'.

Closing the bug as not a bug.